Five New Security Threats From Generative AI in 2024

Cybersecurity experts say that while organizations can't always pinpoint the role generative AI plays in cyberattacks, it's safe to assume the technology has become ubiquitous in phishing and social engineering attacks. With the emergence of tools such as OpenAI's ChatGPT and their popularity among attackers, "you can assume that there has been an increase in sophistication and accuracy, as well as changes in linguistics in phishing and social engineering," said MacKenzie Brown, vice president of security at Blackpoint Cyber, a managed detection and response provider.

But experts say that while phishing and social engineering are probably the applications of generative AI most useful to attackers today, a number of other emerging threats also use the technology to increase the speed and sophistication of attacks.

The bottom line for most organizations, experts say, is that even with advances in generative AI defenses, the threat landscape will continue to be impacted by AI attacks.

Ultimately, for organizations with mature security programs, the arrival of generative AI “just accelerates and improves on attack vectors they’ve known about for years,” said Randy Lariar, director of big data and analytics at Denver-based Optiv.

However, "I think the bigger issue is probably that companies are falling behind in the security space or not fully integrating security with the business mission," Lariar said. "If it's just an option that's being completed or an insurance requirement that's being met, then there are probably some gaps there."

Here’s a look at five new security threats posed by generative AI in 2024.

Attacks are speeding up

Security experts say the biggest threat posed by generative AI is not new tactics or techniques, but rather the acceleration of existing methods used by cybercriminals and hackers. Chester Wisniewski, global chief technology officer at cybersecurity giant Sophos, said generative AI allows threat actors to "do what they've always done, but much faster."

"What used to be a one-day window may now be four hours. What used to be a four-hour window may now be 10 minutes," Wisniewski said. He said the amount of work that would have taken an attacker to attack 100,000 victims in the past might now be enough to attack 10 million victims in the same amount of time.

For example, according to Sophos X-Ops research, attackers have been able to use new AI capabilities to expand their reach when carrying out so-called "pig butchering" scams. Previously, "they had to send text messages to each victim and then figure out which ones would reply and respond to them. It was a much slower, smaller process," Wisniewski said. But now, "we have evidence that ChatGPT is being used to automate the initial stages of these conversations."

Ultimately, “the scale and scope of these [attacks] through automation exposes more people to more of these threats than we would have seen before the advent of AI,” Wisniewski said. “My back-of-the-envelope calculations suggest that we went from one person trying to deceive 1,000 people a day to one person being able to deceive 100,000 people a day through automation.”

Malware is more likely to evade detection

According to experts, AI-generated malicious code is largely derivative of existing malware, which often makes it easy for security tools to detect. That means generative AI tools are of limited usefulness for writing malware from scratch, Wisniewski said.

Generative AI could, however, be effective at modifying existing malware so that it evades detection, according to Blackpoint Cyber's Brown. "We're seeing threat actors now use AI to help modify existing malware so that it doesn't necessarily match a known signature," she said. "If they can modify the way the signature appears, they can evade [antivirus software] that way. They're using generative AI to manipulate malware rather than create new malware from scratch."
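To illustrate why small modifications matter, here is a minimal Python sketch, illustrative only and built on made-up byte strings rather than any real sample or any product mentioned in this article, of the simple hash-style "signature" matching that the most basic antivirus checks rely on. Appending a single byte to a file produces a completely different SHA-256 digest, which is why even trivial, automated edits can break a match against a known-bad signature:

import hashlib

def sha256_signature(data: bytes) -> str:
    # The SHA-256 digest often used as a simple file "signature."
    return hashlib.sha256(data).hexdigest()

# Stand-in for a binary already catalogued in a signature database.
original = b"MZ" + b"bytes-of-a-known-sample" * 4

# The same bytes with a single byte appended -- a trivial modification.
modified = original + b"\x00"

print("original:", sha256_signature(original))
print("modified:", sha256_signature(modified))
print("match:", sha256_signature(original) == sha256_signature(modified))

Running this prints two unrelated digests and "match: False." Real antivirus products rely on far more than file hashes, but the sketch shows why defenders pair signatures with behavioral and heuristic detection.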

Attacks become more personalized

Experts say threat actors can use generative AI to make targeted attacks more personalized, in a wide range of ways. Brown said generative AI plays a "huge role in research and reconnaissance, and just as we use AI to be more efficient, threat actors use AI to be more efficient."

In other words, attackers can cut down much of the research they would otherwise have to do to gather personal information from the internet, "and they're able to do it in a more efficient way."

"Now they can tailor their attacks to the actual organization, the industry itself," Brown said. "They're leveraging whatever information they can gather to reduce the amount of research and reconnaissance and get initial access faster. They know what external systems to attack, they know what targets to attack, they know what will allow them to get in through social engineering or create some scripts and automation so that once they're in an environment they can spread faster."

Audio deepfakes are escalating

According to a recent report from identity verification vendor Regula, nearly half of the businesses surveyed said they had been the target of an audio forgery attack in the past two years, which shows how common voice cloning attacks have become. There is no doubt that voice cloning technology has advanced to the point where it is both easy to use and capable of producing convincing fake audio, Wisniewski said.

By comparison, “a year ago, [cloning technology] wouldn’t have been good enough without a lot of work,” Wisniewski said. These days, however, he’s been able to produce convincing fake audio in under five minutes.

“Now that it’s available, it’s much easier for criminals to get it,” Wisniewski said.

Kyle Wilhoit, technical director of threat research at Palo Alto Networks Unit 42, said that while the technology is not yet good enough to generate deepfake audio in real time, it will continue to improve. "I would say it will become more of a possibility in the future," Wilhoit said.

Easier to exploit vulnerabilities

Wisniewski also believes that generative AI could play a role in helping attackers discover and exploit new vulnerabilities. He noted that while existing tools can already automatically flag potentially exploitable vulnerabilities in software, humans still have to determine whether those flaws can actually be exploited.

Now, however, generative AI tools can speed up the process by helping analyze variants of potential vulnerabilities, Wisniewski said: “The AI might say, ‘This is a potential vulnerability.’”
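To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of pattern an automated scanner flags as a potential vulnerability; it is illustrative only and not drawn from any tool or research mentioned here. A scanner, or an AI assistant, can spot the risky pattern, but whether it is actually exploitable still depends on where the input comes from, which is the judgment call Wisniewski says currently falls to humans:

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Typically flagged: a user-controlled value concatenated directly
    # into a SQL query (possible SQL injection).
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterized version most tools would suggest as the fix.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

Whether the first function is a real exploit path depends on whether username ever carries attacker-controlled data; triaging that kind of question at scale is where Wisniewski expects generative AI to compress the timeline.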

Ultimately, “that would be an area I’d be looking at — is there an economy of services out there for high-end actors where they can pay someone $50,000 to do 25 different attacks?” Wisniewski said.