NATIONAL HARBOR, Md. — Artificial intelligence is turbocharging hackers’ operations, from writing malware to preparing phishing messages. But generative AI’s much-touted impact has its limits, a cybersecurity expert said at an industry conference here on Monday.
Generative AI “is being used to improve social engineering and attack automation, but it’s not really introduced novel attack techniques,” Peter Firstbrook, distinguished VP analyst at Gartner, said at his company’s Security and Risk Management Summit.
Experts have predicted that AI will revolutionize attackers’ ability to develop custom intrusion tools, reducing the amount of time it takes even novice hackers to compile malware capable of stealing information, recording computer activity or wiping hard drives.
There is “no question that AI code assistants are a killer app for Gen AI,” Firstbrook said. “We see huge productivity gains.”
HP researchers in September reported that hackers had used AI to create a remote access Trojan. Referencing that report, Firstbrook said, “It would be difficult to believe that the attackers are not going to take advantage of using Gen AI to create new malware. We are starting to see that.”
Attackers are also using AI in an even more insidious way: creating fake open-source utilities and tricking developers into unknowingly incorporating the malicious code into their legitimate applications.
“If a developer is not careful and they download the wrong open-source utility, [their] code could be backdoored before it even hits production,” Firstbrook said.
Hackers could have done this before AI, but the new technology is allowing them to overwhelm code repositories like GitHub, which can’t take down the malicious packages quickly enough.
“It’s a cat-and-mouse game,” Firstbrook said, “and the Gen AI enables them to be faster at getting these utilities out there.”
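Firstbrook did not prescribe specific defenses, but the standard countermeasure to a swapped or typosquatted package is to verify downloaded artifacts against digests recorded from a release you trust. Below is a minimal sketch of that check in Python; the script and its arguments are illustrative, not anything presented at the summit.

```python
# Illustrative sketch (not from the talk): verify a downloaded package
# archive against a SHA-256 digest recorded from a known-good release
# before letting it anywhere near a build.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_pkg.py <archive> <expected-sha256>
    archive, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(archive)
    if actual != expected.lower():
        sys.exit(f"REJECT {archive}: got {actual}, expected {expected}")
    print(f"OK {archive}")
```

Package managers can enforce the same idea automatically; pip, for example, refuses unpinned artifacts when run with --require-hashes against a requirements file that lists expected digests.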
Deepfakes still rare
The integration of AI into traditional phishing campaigns is a growing threat, but so far, the impact appears to be limited. In a recent Gartner survey, 28% of organizations said they had experienced a deepfake audio attack; 21% a deepfake video attack; and 19% a deepfake media attack that bypassed biometric protections. Still, only 5% of organizations reported deepfake attacks that resulted in the theft of money or intellectual property.
Even so, Firstbrook said, “This is a big new area.”
Analysts worry about AI’s potential to make certain types of attacks much more profitable because of the attack volume that AI can create. “If I’m a salesperson, and it typically takes me 100 inquiries to get a ‘yes,’ then what do you do? You do 200 and you’ve doubled your sales,” Firstbrook said. “The same thing with these guys. If they can automate the full spectrum of the attack, then they can move a lot quicker.”
One Gen AI-related fear appears to be overblown, at least for now: Researchers have yet to see it create entirely new attack techniques.
“So far, that has not happened,” Firstbrook said, “but that’s on the cusp of what we’re worried about.”
Firstbrook pointed to data from the MITRE ATT&CK framework, which catalogs the tactics and techniques hackers use to breach computer systems. “We only get one or two brand-new attack techniques every year,” he said.
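That claim can be roughly sanity-checked, because MITRE publishes ATT&CK as public STIX JSON. The short Python sketch below tallies enterprise technique entries by the year they were added; the GitHub URL reflects MITRE’s attack-stix-data repository as of this writing and may change, and the tally counts all catalog additions, a looser measure than the genuinely novel techniques Firstbrook means.

```python
# Rough sanity check: count ATT&CK enterprise techniques by the year
# they were added to the catalog. This counts catalog additions, a
# broader measure than "brand-new" attack techniques.
import json
from collections import Counter
from urllib.request import urlopen

# Public STIX bundle from MITRE's attack-stix-data repo (path as of
# this writing; the repository layout may change).
URL = ("https://raw.githubusercontent.com/mitre-attack/attack-stix-data/"
       "master/enterprise-attack/enterprise-attack.json")

with urlopen(URL) as resp:
    bundle = json.load(resp)

years = Counter(
    obj["created"][:4]  # STIX timestamps are ISO 8601, e.g. 2017-05-31T...
    for obj in bundle["objects"]
    if obj.get("type") == "attack-pattern"             # technique entries
    and not obj.get("x_mitre_is_subtechnique", False)  # skip sub-techniques
    and not obj.get("revoked", False)                  # skip withdrawn entries
)

for year, count in sorted(years.items()):
    print(year, count)
```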