What Will AI Bring to the Cybersecurity Space in 2022
In this article, we will discuss five ways (or, perhaps, five areas of concern) in which AI integration may change the world of cybersecurity in 2022.
Over the last year, artificial intelligence (AI) has become a huge part of our everyday lives - a mixed bag of both positive and negative influences. The algorithms best known to people outside the technical space are largely marketing-related: think of the algorithms curating your TikTok feed and personalizing suggestions on YouTube. The AI that calculates your fastest morning commute based on virtual maps, your vehicle, and current traffic conditions is another fairly visible example that has had its share of media attention.
One area in which AI has become crucial, though, is cybersecurity. Cybercriminals are increasingly harnessing AI to automate breaches and crack complex systems. Sophisticated, large-scale social engineering attacks and deepfakes are prime examples of this trend. Subtler techniques, such as those involving AI-driven data compression algorithms, may become an even more important part of the space in the year to come.
In response, modern cybersecurity providers are also deploying AI and machine learning (ML) technologies to fend off attacks. Here are five ways in which AI may change the world of cybersecurity in 2022. Considering the risks associated with many of these technologies, as well as the fact that no cybersecurity technique can ever be considered truly perfect, these might equally be viewed as five areas of concern that technicians and commentators alike will want to keep an eye on over the next year.
1 - Improved Cyber Threat Detection
To start with, AI and ML algorithms have an unparalleled capacity to detect patterns - and deviations from them. When you deploy AI to monitor your company network, for example, it creates an activity profile for every user in that network: what files they access, what apps they use, when, and where. If that behavior suddenly changes, the user is flagged for a deep scan. This is a vast improvement in threat detection. Currently, a lot of time is lost before an attack is even noticed. According to IBM's Cost of a Data Breach Report 2020, businesses take 280 days on average to detect and contain a breach. That's plenty of time for hackers to cause massive damage.
AI cuts that time short. It instantly spotlights irregularities, allowing businesses to contain breaches fast. One major caveat, however, is the risk of false positives: benign behavior can be flagged as problematic when it isn't. Current-generation ML-based threat detection relies heavily on neural networks, which loosely model aspects of human pattern recognition.
These systems use validation subroutines that cross-check new behavior patterns against previous ones. Over time, they normally improve as they encounter slightly unusual edge cases more than once, though it could take a while before they reach critical mass. At the same time, privacy considerations are always an issue, especially when the relevant use cases involve extremely sensitive workflows, such as those encountered in banking.
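The core idea of baselining a user's behavior and flagging deviations can be sketched with a toy statistical model. This is not how any particular vendor implements detection - real systems use far richer features and learned models - but it illustrates the profile-then-flag pattern described above, using hypothetical login-hour data:

```python
from statistics import mean, stdev

def build_profile(login_hours):
    """Baseline a user's typical login hour from historical data."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, profile, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold`
    standard deviations from the user's baseline."""
    mu, sigma = profile
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# Hypothetical history: a user who logs in during office hours.
history = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]
profile = build_profile(history)

print(is_anomalous(9, profile))   # typical login -> False
print(is_anomalous(3, profile))   # 3 a.m. login gets flagged -> True
```

A production system would track many such signals at once (files touched, apps launched, source locations) and combine them, which is exactly where ML models earn their keep over hand-set thresholds.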
2 - Enhanced Biometric Authentication
Weak passwords are often gaping vulnerabilities for both businesses and individuals. In theory, complex passwords afford a greater level of protection than those with lower entropy. For example, crackers can instantly break a six-digit numbers-only password (think "123456"). In contrast, a 10-character password mixing numbers, upper- and lowercase letters, and symbols could take the same cracker an estimated 400 years to compromise, assuming the password is properly hashed and never exposed in plaintext.
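The gap between those two passwords comes down to entropy, which can be roughly estimated as length times the log of the character-pool size. A minimal sketch (the pool sizes and the crude class-detection are simplifying assumptions, not a real strength meter):

```python
import math
import string

def entropy_bits(password):
    """Estimate brute-force entropy as length * log2(alphabet size),
    where the alphabet is inferred from the character classes used."""
    pools = [
        (string.ascii_lowercase, 26),
        (string.ascii_uppercase, 26),
        (string.digits, 10),
        (string.punctuation, 32),
    ]
    alphabet = sum(size for chars, size in pools
                   if any(c in chars for c in password))
    return len(password) * math.log2(alphabet) if alphabet else 0.0

print(round(entropy_bits("123456"), 1))      # ~19.9 bits: trivially cracked
print(round(entropy_bits("x7#Kp2!mQz"), 1))  # ~65.5 bits: vastly harder
</```

Each extra bit doubles the brute-force search space, which is why mixing character classes and adding length pays off so disproportionately.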
With two-factor authentication (2FA), which requires verification via a second device to let you log in, you're theoretically even more secure. However, many people balk at extra work, which might explain why so many passwords are woefully simple. "qwerty" and "password" were among the most commonly used login credentials in 2021, and it's doubtful that will change in a year.
Nevertheless, this is not just a failing that can be blamed on individuals. Some of the biggest breaches in history were caused by weak password security, some of it server-side. Passwords are often still stored with the dated MD5 algorithm, which was compromised years ago. It's likely that many web services will migrate toward something better, such as the 512-bit BLAKE2 cryptographic message digest, over the next year.
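Python's standard library already ships BLAKE2, so a salted 512-bit digest is a few lines of code. This is only a sketch of the digest API: for real password storage, a deliberately slow key-derivation function such as Argon2 or bcrypt is generally preferred over any fast hash, including BLAKE2:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted BLAKE2b with a 64-byte (512-bit) digest.
    Sketch only: production password storage should use a slow
    KDF (Argon2, bcrypt, scrypt) rather than a fast digest."""
    salt = salt or os.urandom(16)  # blake2b accepts up to 16 salt bytes
    digest = hashlib.blake2b(password.encode(), salt=salt,
                             digest_size=64).hexdigest()
    return salt, digest

def verify(password, salt, expected):
    return hash_password(password, salt)[1] == expected

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("123456", salt, stored))                        # False
```

The per-user random salt is what prevents precomputed rainbow-table attacks - the very technique that finished off unsalted MD5 stores.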
In the meantime, AI-based algorithms could eliminate the need for manual password entry altogether. Instead, we can access accounts with biometrics. It's like unlocking your phone via fingerprint or Face ID - but on a much larger and more secure scale. The newest algorithms can 3D-map a person's face even under difficult conditions. Consequently, they offer both security and convenience for the user.
Computer scientists are warning, however, that these still pose problems because it's difficult to revoke biometric credentials once they've been granted.
3 - Better Phishing Protection
On the face of it, phishing should be a dead form of social engineering, considering that most netizens are far more familiar with these kinds of attacks than they once were. However, phishing continues to be a scarily effective cyberattack tactic. In fact, recent data suggests that 91 percent of cyberattacks start with a phishing email.
That's because modern phishing attacks are vastly more sophisticated than the 419 scams that largely seemed to hail from Nigeria. They exploit membership on popular platforms like Netflix and Amazon just as much as work relationships. During the height of the COVID-19 pandemic, torrents of phishing emails about the WHO and stimulus checks flooded the internet. The difficulty is that phishing attacks are now so realistic and diverse that no one person can keep track of them. AI agents can, however, and they are likely to play an ever-growing role in the coming year.
Building on gargantuan and constantly updated databases of phishing attacks and common scams, algorithms can instantly identify and flag any phishing attempt aimed at an inbox. Deployed alongside SMS protocols, they could potentially stop some text-messaging scams as well. Due to the more permissive policies of IRC clients, however, it may be difficult for even the most sophisticated AI agents to protect those outlets.
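At their simplest, such filters combine many weak signals into a score. The toy heuristic below (the phrase list, trusted-domain set, and weights are all invented for illustration - real filters learn these from enormous labeled corpora) shows the flavor of lookalike-domain and urgency-wording checks:

```python
# Toy heuristic rules; real filters learn signals from labeled corpora.
SUSPICIOUS_PHRASES = ["verify your account", "suspended", "act now",
                      "confirm your password", "stimulus check"]
TRUSTED_DOMAINS = {"netflix.com", "amazon.com", "who.int"}

def phishing_score(sender_domain, subject, body):
    """Score an email from 0 to 1 by combining simple red flags:
    lookalike sender domains and urgency-laden wording."""
    score = 0.0
    # Lookalike domain: mentions a trusted brand but isn't that domain.
    for trusted in TRUSTED_DOMAINS:
        brand = trusted.split(".")[0]
        if brand in sender_domain and sender_domain != trusted:
            score += 0.5
    text = (subject + " " + body).lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += min(0.5, 0.2 * hits)
    return min(score, 1.0)

print(phishing_score("netflix-billing.xyz",
                     "Account suspended",
                     "Verify your account and act now!"))   # high score
print(phishing_score("netflix.com",
                     "Your monthly invoice",
                     "Thanks for watching."))               # low score
```

An ML-based filter replaces the hand-picked phrases and weights with features and coefficients learned from millions of labeled emails, which is why it can keep pace with attack variants no rule list could enumerate.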
4 - Dark Web Monitoring
Another field in which AI and ML are making a massive difference in the cybersecurity world is keeping our data out of the hands of identity thieves. The worst-case scenario for any of us is that our personal information - name, birthdate, phone number, email address, social security number, credit card details - ends up in the hands of hackers, spelling financial disaster.
However, no matter how careful we are as individuals, this can always happen. If a company that we trusted with our data is breached, all of it can end up in the criminal corners of the internet. If this happens, the important thing is to quickly take action to prevent identity theft. That’s exactly what dark web monitoring AIs help us do.
As their name suggests, these algorithms constantly scan the dark web - the sphere used by cybercriminals - for your personal data. If they do find private information anywhere, you'll be alerted and informed about the threat level. Then, you can take action before identity thieves do.
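One clever technique breach-monitoring services use for credential checks is k-anonymity range queries, popularized by Have I Been Pwned's Pwned Passwords API: the client sends only a short hash prefix, so the service never learns the full secret being checked. A self-contained sketch with a mock leaked-hash corpus (the corpus and function names are invented for illustration):

```python
import hashlib

def sha1_upper(text):
    return hashlib.sha1(text.encode()).hexdigest().upper()

# Hypothetical leaked-hash corpus a monitoring service might hold.
LEAKED = {sha1_upper(p) for p in ["password", "qwerty", "123456"]}

def range_query(prefix):
    """Simulate a k-anonymity range API: return suffixes of all
    leaked hashes sharing a 5-character prefix, so the client
    never reveals the full hash it is checking."""
    return {h[5:] for h in LEAKED if h.startswith(prefix)}

def is_compromised(secret):
    full = sha1_upper(secret)
    return full[5:] in range_query(full[:5])

print(is_compromised("qwerty"))        # True: present in the corpus
print(is_compromised("x7#Kp2!mQz"))    # False
```

The privacy property matters here for the same reason as in the article: checking whether your data has leaked should not itself leak your data to the checking service.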
5 - Examining Compressed Archive Contents
One final field that AI may make new inroads into is that of compression, but not in the way that one might suspect. A dizzying array of formats are currently in use, and many of them rely on sophisticated classical algorithms such as the famous Burrows-Wheeler block-sorting transform. New algorithms have emerged to manage specific edge cases, but none of these reflect the direction that AI is set to take.
The proliferation of compression formats has let bad actors stash malicious code in archives that few traditional heuristic scanners can disassemble. Several popular archive management applications can open these formats, however, which makes such archives a threat to end users. AI programmers are developing new techniques that predict the odds of an archive containing malware from metadata alone, such as its size and timestamps. This allows at least some degree of detection, even for archives of unrecognized type.
While this might end up creating a number of false positives, it might be useful for those examining cloud storage systems. Considering the direction of the market, it's also highly likely that a number of additional AI-based technologies will get released in the next year. That could have a far-reaching impact as far as the cybersecurity scene is concerned.
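Scoring an archive from metadata alone might look something like the sketch below. The thresholds and weights are entirely hypothetical, chosen only to illustrate the idea that extreme compression ratios (a zip-bomb signature) and odd creation times can each contribute to a risk score:

```python
def archive_risk(size_bytes, compressed_bytes, hour_created):
    """Toy risk score from metadata alone (hypothetical thresholds):
    extreme compression ratios can indicate zip bombs or packed
    payloads, and odd creation times add a little extra suspicion."""
    ratio = size_bytes / max(compressed_bytes, 1)
    score = 0.0
    if ratio > 100:                  # e.g. 1 GB packed into <10 MB
        score += 0.75
    if hour_created in range(1, 5):  # created in the dead of night
        score += 0.25
    return min(score, 1.0)

# A 500 MB payload squeezed into 2 MB, created at 3 a.m.
print(archive_risk(500_000_000, 2_000_000, 3))   # 1.0
# An ordinary document archive created mid-afternoon.
print(archive_risk(10_000_000, 6_000_000, 15))   # 0.0
```

This also makes the false-positive problem concrete: perfectly legitimate archives (sparse database dumps, log bundles) compress extremely well too, which is why such scores are best treated as triage signals rather than verdicts.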
Opinions expressed by DZone contributors are their own.