
Cybersecurity in the New World of Generative AI

Kunal Anand of Imperva (left), Scott Jarkoff of CrowdStrike (top right), and Prof Yu Chien Siang of Amaris AI (bottom right) discussed cutting-edge AI research, the potential impact of AI on cybersecurity, and how some of them are already using it at GovWare 2023.

How will AI influence cybersecurity? At a time when cybersecurity is transitioning from a siloed concern into an integral component of business, pivotal technologies such as AI could well be the catalyst that enables cyber defenders to proactively identify and mitigate threats.

At GovWare last month, we heard from various presenters as they discussed cutting-edge AI research, the potential impact of AI and generative AI, and how some of them are already using it.
 

Doing cyber faster, better  

For Kunal Anand, who has a dual role as Chief Information Security Officer and Chief Technology Officer at Imperva, AI is scalable, works tirelessly, and can be more efficient than security analysts in certain use cases. His team at Imperva has already developed two practical applications of AI that solve real-world problems: monitoring for suspicious activity across all its web servers, and a tool that builds cybersecurity policies.

For the former, Anand says his team used LangChain, a framework for building applications around large language models (LLMs), together with the log files generated by the open-source osquery tool and various prompts developed by the team. “Every morning, we get a report that tells us when we see interesting behaviour across our fleet. And we can see things like the most file changes, sensitive directory changes... the depth of this is significant.”
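
The pattern Anand describes can be sketched in a few lines: read the osquery results log, hand the events to an LLM with a prompt that asks for the interesting behaviour, and schedule the script to run each morning. This is a minimal sketch, not Imperva's actual setup; the file path, prompt wording, and model name below are illustrative assumptions.

```python
# Minimal sketch: summarise osquery events with an LLM via LangChain.
# Path, prompt, and model are assumptions for illustration only.
import json

from langchain_openai import ChatOpenAI          # pip install langchain-openai
from langchain_core.prompts import ChatPromptTemplate

# osquery writes results as JSON lines; adjust the path to your deployment.
OSQUERY_LOG = "/var/log/osquery/osqueryd.results.log"

def load_events(path: str, limit: int = 500) -> str:
    """Read up to `limit` osquery result lines and return them as one text block."""
    events = []
    with open(path) as fh:
        for line in fh:
            events.append(json.loads(line))
            if len(events) >= limit:
                break
    return "\n".join(json.dumps(e) for e in events)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a security analyst. Summarise the osquery events below and "
     "flag interesting behaviour: file changes, sensitive directory changes, "
     "new listening ports, unexpected processes."),
    ("human", "{events}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # any chat model works here
report = (prompt | llm).invoke({"events": load_events(OSQUERY_LOG)})
print(report.content)  # run daily from cron to get the morning report
```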

Anand noted that specifying the security rules and policies needed to customise security products is difficult, both because of their complexity and because every vendor has its own distinct dialect.

“And this is part of the reason why it takes so long for organisations to build rules and policies. They don't know if it's accurate. Then you have to debug it, you have to understand if it's doing exactly what you want it to do. It takes a lot of time to do this.” 

The team hit on the idea of building a tool to write these rules on their behalf. Using Imperva's own Web Application Firewall (WAF) product for the initial experiment, Anand fed in publicly available documentation and successfully developed an AI model trained on the WAF’s rules and policies. This gave the team an AI-powered tool that could generate relevant rules for the WAF from instructions in English – and even explain them. 
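
That idea can be sketched with the same LangChain pattern: condition the model on the product documentation, then ask it to emit a rule plus a plain-English explanation. The documentation placeholder and example instruction below are invented for illustration; the article does not disclose Imperva's actual rule dialect or training pipeline.

```python
# Hedged sketch of an English-to-WAF-rule generator. The docs excerpt,
# instruction, and model name are illustrative assumptions.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

WAF_DOCS = """
(excerpts of the publicly available WAF documentation would go here:
the rule grammar plus a few worked examples of valid rules)
"""

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You write rules for a Web Application Firewall. Use only the syntax "
     "described in this documentation, and after the rule, explain in plain "
     "English exactly what it does.\n\n{docs}"),
    ("human", "{instruction}"),
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
result = (prompt | llm).invoke({
    "docs": WAF_DOCS,
    "instruction": "Block any client that requests /admin more than "
                   "10 times per minute.",
})
print(result.content)  # the generated rule and its explanation
```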

“[AI] is probably the biggest thing I have seen in the last 20 years. It's not just going to change what's happening from an attack perspective, but it's also going to change the way that we, as defenders, protect ourselves.”
 

AI use by adversaries 

Can AI be used by cyber adversaries? Of that, Scott Jarkoff, Director, Strategic Threat Advisory Group, APJ & EMEA at CrowdStrike, has no doubt. Indeed, he noted that generative AI could help adversaries conduct attacks in a “much more” efficient manner. 

He cited the breakout time, which is the amount of time it takes an adversary to go from initial access to lateral movement, noting that it fell from 582 minutes in 2019 to 98 minutes in 2021 and 79 minutes in 2023. “Adversaries are becoming extremely sophisticated, and tools like generative AI [are] just going to make it even more treacherous when we're fighting these battles in cyberspace,” he said.

“You can just ask [generative AI] to create a Python script to do something highly malicious. It'll create all types of code for you – I'm not going [to] say very sophisticated – but you can put together some very solid code that can help you create an attack in short order.”

Finally, expect AI to help cybercriminals pull off more convincing phishing campaigns than ever. “Just imagine how a cybercriminal adversary is going to leverage [generative AI] to make authentic-sounding lures, [such] as a plea for a donation or to visit this website, where it convinces you to enter your credentials or credit card details.”
 

Exploiting AI weaknesses 

We know AI is not immune to spoofing. But Professor Yu Chien Siang, Chief Innovation and Trust Officer at Amaris AI, really brought it home. In a segment about adversarial attacks on AI, he showed how an AI system could no longer detect a person holding a specially generated image, or “patch”. This has implications for any system that relies on AI, noted Prof Yu.

“You can do this with a vehicle to foil an overhead drone. You put it at the top of a stolen car, and an [AI-based system] will no longer be able to detect the license plate. This is something that is known to the Cyber Security Agency of Singapore (CSA), but not very well known to everybody else.” 
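
The core of such a patch attack can be sketched in a few lines of PyTorch: optimise a small block of pixels by gradient ascent so a pretrained model's confidence in the true class collapses. This toy version attacks an image classifier on a single photo; the patches Prof Yu described target object detectors and survive printing and changing camera angles, which is considerably harder.

```python
# Toy adversarial-patch sketch against a pretrained classifier.
# The image path and patch placement are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

x = to_tensor(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)  # assumed sample image
with torch.no_grad():
    target = model(normalize(x)).argmax(dim=1)  # the class we want to suppress

patch = torch.rand(1, 3, 50, 50, requires_grad=True)  # the adversarial "patch"
opt = torch.optim.Adam([patch], lr=0.05)

for _ in range(200):
    patched = x.clone()
    patched[:, :, 80:130, 80:130] = patch.clamp(0, 1)  # paste patch onto the image
    # Gradient ascent on the true-class loss: push the model away from `target`.
    loss = -F.cross_entropy(model(normalize(patched)), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    patched = x.clone()
    patched[:, :, 80:130, 80:130] = patch.clamp(0, 1)
    print("before:", target.item(), "after:", model(normalize(patched)).argmax(dim=1).item())
```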

Prof Yu says his team was able to successfully disrupt AI systems in black box tests where nothing was known about the underlying AI. “We were able to bring it down or disrupt it… We have not encountered a case that we couldn't breach.” 

“This is the sort of cyber warfare issue that we are contending with. Imagine bypassing the surveillance camera at immigration. But more importantly, the same attack can be used for things such as a loan application [that is processed by AI] – there is a text equivalent of this.”  

To further highlight the risk of uploading confidential company materials to ChatGPT, Prof Yu said: “If you upload your confidential data to ChatGPT, then somebody could potentially perform an inference attack to [retrieve] the data that would have been incorporated into its weights.”
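
The simplest version of such an attack is the classic loss-threshold membership-inference test: a model tends to be more confident on the exact records it was trained on than on unseen ones. The sketch below demonstrates that signal on a small scikit-learn model with synthetic data; mounting it against a production LLM is far harder, but it exploits the same leakage.

```python
# Toy membership-inference sketch: members of the training set show lower loss.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for "confidential data incorporated into the weights".
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1, random_state=0)
X_in, y_in = X[:1000], y[:1000]    # training members
X_out, y_out = X[1000:], y[1000:]  # records the model never saw

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

def per_sample_loss(model, X, y):
    """Negative log-likelihood of the true label for each record."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in = per_sample_loss(model, X_in, y_in)
loss_out = per_sample_loss(model, X_out, y_out)
print(f"mean loss, training members: {loss_in.mean():.3f}")
print(f"mean loss, unseen records:   {loss_out.mean():.3f}")
# A clear gap lets an attacker guess whether a given record was in the training set.
```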


Imperva’s Anand also took time to emphasise that his firm’s AI-powered monitoring solution was never tested or deployed on the public version of ChatGPT. A private instance of GPT-4 was used to ensure privacy, he said.
 

Conclusion: More to come 

Ultimately, the various AI-centric presentations underscored that research and development around AI continue apace, even as cyber defenders find new ways to harness it to improve productivity and enhance their security.

It is impossible to tell what the future will bring, but suffice it to say that as AI becomes more sophisticated and integrated into our daily lives, the need for robust cybersecurity measures will only grow.

 
