The AI threat to cybersecurity

The public release of ChatGPT in November 2022 fired up imaginations as few technologies have in recent years. As technology giants such as Google and Baidu join the race and rush to unveil competing offerings, AI chatbots look set to become firmly entrenched in the mainstream.

But beyond impressive writing and code generation capabilities, what are the real-world implications of AI in the cybersecurity realm? To Luis F. Gonzalez, Chief Data and AI Officer at Aboitiz Power, the propensity for every tool to be used for either good or evil makes AI a clear and present danger. Gonzalez is also the Director of the AI Asia Pacific Institute in Singapore, a research institute that studies the implications of artificial intelligence in the Asia-Pacific region.

“If the world is becoming a lot more productive and capable with the use of AI technology, it is only logical to think that it can become more dangerous. Just like you could do more with AI for selling or anticipating the needs of people – you could also apply it to organised crime [and other illicit activities],” he explained.

The threat of AI

Cybersecurity leaders are [worryingly] uninformed about AI because they've achieved success solely by people-centric solutions in the past. If you're going to protect people's interests, people's security, and you do not understand AI, then you are really behind. You're fighting, you know, guns with a knife.
– Luis F. Gonzalez, Chief Data and AI Officer, Aboitiz Power

Gonzalez pointed to the potential for a further proliferation of the virtual kidnapping scams that have afflicted certain parts of the world. AI could take things to the next level with its ability to generate new levels of realism: “Anything along the lines of generating new faces, new voices, new videos or new images that humans cannot distinguish from the real thing will be a significant cybersecurity risk.”

And as AI systems are increasingly deployed, they can themselves be turned against the operational systems that rely on them. He said: “Assume an acoustic model that evaluates the health of a turbine based on frequency detection. Imagine if you could generate just the right acoustic signal to fake a failure. It could force the organisation to shut down the turbine just to investigate the problem – you are essentially performing a denial-of-service by spoofing the AI model.”
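To make the scenario concrete, here is a minimal, purely illustrative sketch in Python. The article names no real system, so the sampling rate, the "failure band" and the alarm threshold below are invented assumptions, and the acoustic model is reduced to a simple band-energy check rather than a trained classifier.

```python
# Toy illustration only: a naive frequency-based "health check" and how an
# injected tone could trigger a false failure alarm. All numbers are assumptions.
import numpy as np

SAMPLE_RATE = 8_000              # Hz, assumed sensor sampling rate
FAILURE_BAND = (1_900, 2_100)    # Hz band this toy model treats as a failure signature
ALARM_THRESHOLD = 100.0          # spectral energy above this in the band raises an alarm

def band_energy(signal: np.ndarray, lo: float, hi: float) -> float:
    """Total spectral magnitude inside the [lo, hi] frequency band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    return float(spectrum[(freqs >= lo) & (freqs <= hi)].sum())

def turbine_healthy(signal: np.ndarray) -> bool:
    """Report 'healthy' unless the failure band carries significant energy."""
    return band_energy(signal, *FAILURE_BAND) < ALARM_THRESHOLD

t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
normal_hum = np.sin(2 * np.pi * 120 * t)          # healthy 120 Hz operating hum
spoof_tone = 0.3 * np.sin(2 * np.pi * 2_000 * t)  # quiet, attacker-injected 2 kHz tone

print(turbine_healthy(normal_hum))                # True  -> no alarm
print(turbine_healthy(normal_hum + spoof_tone))   # False -> spurious shutdown and investigation
```

A small injected tone is enough to flip the verdict, which is the essence of the denial-of-service Gonzalez describes: the turbine itself is untouched, only the model's view of it is poisoned.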

Sometimes the truth is stranger than fiction, too. Gonzalez spoke of a study his team had recently commissioned, only to discover that the researcher working on it had sold the same work to another organisation, relying on an AI chatbot to cover his tracks. A fortuitous turn of events caused the team to stumble upon the duplicate work, which caught their attention because it was startlingly familiar, yet not quite the same.

A new class of cyberattacks

“The report wasn’t an exact copy, so we couldn’t call it out for that. But being an AI person, I took the report and ran it through an AI system to determine the contextual similarity of the language sequences. My efforts proved it was 70% plagiarised… but if I didn’t have access to a natural language model that could do that, we would never have known.”
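Gonzalez does not name the system he used, so the following is only a hypothetical approximation of the idea: scoring paragraph-level semantic similarity between two reports with open-source sentence embeddings. The model name, the paragraph splitting and the 0.7 threshold are all assumptions, not details from the article.

```python
# Hypothetical sketch only: compares every paragraph of one report against
# every paragraph of another using sentence embeddings, and reports how much
# of the first report has a semantically close match in the second.
from sentence_transformers import SentenceTransformer, util

def overlap_score(doc_a: str, doc_b: str, threshold: float = 0.7) -> float:
    """Fraction of paragraphs in doc_a with a close semantic match in doc_b."""
    paras_a = [p.strip() for p in doc_a.split("\n\n") if p.strip()]
    paras_b = [p.strip() for p in doc_b.split("\n\n") if p.strip()]

    model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose model
    emb_a = model.encode(paras_a, convert_to_tensor=True)
    emb_b = model.encode(paras_b, convert_to_tensor=True)

    sims = util.cos_sim(emb_a, emb_b)                 # pairwise cosine similarities
    flagged = (sims.max(dim=1).values >= threshold).sum().item()
    return flagged / len(paras_a)

# A score around 0.7 would mirror the "70% plagiarised" finding described above.
# print(overlap_score(open("our_report.txt").read(), open("suspect_report.txt").read()))
```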

And the only way to effectively defend against AI might be to use AI.

“The only way to protect against deep fakes is to create an algorithm that acts as a discriminator to the ground truth, real versus what isn't… You want to hire people that are very good at that aspect of generative AI to start designing the algorithms to tell what's fake from what's real.”
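Gonzalez does not spell out what such a discriminator would look like. As a rough sketch of the concept only, the snippet below defines a tiny binary classifier in PyTorch that could be trained on labelled real and generated face crops; the architecture, input size and training setup are placeholder assumptions rather than a workable deepfake detector.

```python
# Conceptual sketch of a real-vs-generated discriminator; not a production detector.
import torch
import torch.nn as nn

class DeepfakeDiscriminator(nn.Module):
    """Tiny CNN scoring an image: close to 1 = likely real, close to 0 = likely generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = DeepfakeDiscriminator()
batch = torch.rand(4, 3, 128, 128)   # stand-in for a batch of face crops
print(model(batch).shape)            # torch.Size([4, 1]) -> one real/fake score per image
# Training would minimise nn.BCELoss() against labels of 1 for real footage, 0 for generated.
```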

So, what are some likely AI-enabled cyberattacks? In Gonzalez’s view, hackers generally go for the lowest-hanging fruit, which means more advanced phishing attacks.

“The more sophisticated attackers are not going to try to jump the already high [cybersecurity] wall. They are going to try and find ways around it, and the best ways are to pit AI against humans. Phishing attacks are very effective because humans don't know how to protect themselves. Expect the next generation of phishing attacks to simulate humans, with a look, tone and feel indiscernible between fake and real.”

“Say, I get a video message from my boss telling me he is going to send an email and to click on that link right away because he needs something done urgently. I may not necessarily question that the video is fabricated, that the instruction is bogus, and the link is compromised. When you integrate an AI-generated attack with humans, you don't think of the fact that this is going to be a cyberattack.”

In the same vein, Gonzalez described a hypothetical man-in-the-middle attack in which a malicious chatbot might be planted to take the place of a legitimate, work-related AI bot. “They may behave the same. But with one, I'm giving some very critical internal information that I shouldn't be.”

Take a closer look at AI-empowered threats today

After speaking with Gonzalez, one thing is abundantly clear: we have barely begun to anticipate the types of new cyberattacks that AI will eventually enable. What is needed, he says, is for CISOs to sit up and adapt quickly to the threat that AI poses to cybersecurity, lest they find themselves confronted with AI-enabled attacks they are in no position to defend against.

And like it or not, CISOs must take a closer look at AI: “If the cybersecurity leader in the company is not a leading voice of understanding artificial intelligence, they should be replaced.”

“Understanding learning algorithms and applied statistics is now a core competence for a CISO. You have to understand statistics, not just for data distribution and fairness or applied mathematics, [but that] you will get hacked. Let's just be real – you are going to get hacked eventually. You are not going to be able to stop that from happening.”

“The only way that you are going to reduce its impact is by knowing how to minimise your risks. I see some very good CISOs behaving like portfolio managers: managing the risks and hedging against them. To me, those are the CISOs I would hire into our organisation,” Gonzalez concluded.

 
