AI in Cybersecurity: The Good and the Bad
Whether it’s AI or the next shiny new technology, cybersecurity leaders and CIOs must always be pragmatic and apply a business perspective to any technology before implementing it, says Damian Leach, Global CIO of Seaco.

Leach was speaking about artificial intelligence (AI) and its potential impact on cybersecurity. Can AI move the needle for cybersecurity leaders? What are some possible pitfalls of AI in cybersecurity, and how can organisations get the most out of it? And how should CISOs evaluate AI-powered solutions?

Of Opportunities and Dangers

“I think AI offers us a tremendous opportunity. Be it summarisation of an event, automation or productivity gains, pen testing, the ability to scour the entire network to piece together potential weak points, or scanning to identify changes needed to meet company security standards,” Leach said.

As with any technology, there is always the danger of overreliance on AI, he concedes. There is also a potential cascade effect to consider if AI is used to automate certain deployments and it makes the wrong decision. And of course, threat actors can be counted on to try to corrupt AI models for nefarious reasons.

“While AI is really good at some tasks such as summarising data, finding patterns faster than a human, answering multi-dimensional queries or even creating code, there are inherent dangers. If you use code generated by AI, as an example, you've got to make sure that malicious code isn't injected into your applications, especially with open-source products.”

“We are already seeing threat actors trying to influence incorrect behaviour in AI models by feeding them lots and lots of spurious data. As such, I think there is a danger of overexposure to AI models. You must have guardrails in place to protect your own AI models [and the appropriate] IT policies.”

Before Jumping In

The cybersecurity field isn’t immune to the allure of AI’s potential. However, Leach cautions against rolling out AI simply for its own sake.
Organisations should always evaluate the business value of AI, build MVPs, test use cases, and run pilots before full deployment, he says.

“Depending on your strategy, look at productivity gains, gross top-line growth, automation opportunities, the speed of data-focused decision-making, and cost efficiencies to aid net profits. When it comes to adopting AI, don't go for a technology-first approach; look broadly for the business problems that you're trying to solve.”

“AI is not going to solve everything; often you'll find that your challenges can be solved with traditional technologies that don’t require AI,” he concluded.

AI isn’t useful without the right AI-ready data foundation, and the onus is on businesses to ask tough questions about how their data will be used and what will be shared with external parties, says Leach, who emphasised that AI solutions must be built to be accessible to work effectively.

“Question how your data will be leveraged and where the data will go when it comes to AI models,” he said. “Ensure that you're implementing the right guardrails from a trust, transparency, and truth standpoint within your environment.”

“Before you even approach cybersecurity vendors for AI-powered capabilities, make sure that you've got a robust IT policy and an AI framework in place that is focused on what’s right for you and the company data you are a custodian of,” Leach added.

Evaluating AI in Tools

As AI makes its way into cybersecurity tools, how should CISOs react? According to Leach, cybersecurity leaders must know how their preferred cybersecurity solutions implement their AI capabilities. There are two primary approaches: AI-integrated or AI-infused. An AI-integrated solution relies on external AI models for its capabilities, such as via API calls to a third-party provider.
An AI-infused solution typically has AI embedded into the product, or makes use of federated learning, which allows the solution provider to train AI models without an external party gaining access to the data.

While an AI-infused cybersecurity offering could be an important differentiator, an AI-integrated product isn’t inherently bad: “The vendors are admitting that external parties can build better AI models and have decided to focus on their core products instead.”

There is no running away from carefully reading through the fine print: “[There will be] a disclaimer or an NDA or some document that specifies the data-sharing agreement between your company and the vendor providing the software services. Read that document carefully, especially the fine print, and make sure you agree to everything.”

Leach recommends asking questions about the data exchange. “What is the data journey? What type of information are you sharing? And can you opt out of certain AI capabilities if you disagree with the data policy? In my experience, the vendor is also learning, so these questions will help them improve and harden their AI data strategy,” he said.

“Read their white papers on how they use your data: are they obfuscating it before sharing? Are they just sharing the numeric values and patterns of the data with their models? In some instances I've researched, vendors don’t just share the patterns to improve AI models but also the raw data to external parties – this is certainly not good practice.”

“Question how your data will be leveraged and where the data will go when it comes to AI models… In some instances I've researched, vendors don’t just share the patterns to improve AI models but also the raw data to external parties.” – Damian Leach, CIO, Seaco
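Federated learning, as mentioned above, is what lets an AI-infused vendor train models while raw data stays on the customer's side: each participant trains locally and only model parameters travel to the provider. A minimal sketch of federated averaging in Python makes the point; the clients, data, and model here are entirely hypothetical, not any vendor's actual implementation:

```python
# Minimal federated-averaging (FedAvg) sketch: each client fits a
# simple linear model on its own private data, and only the learned
# weights -- never the data itself -- are sent to be averaged.
# All clients and data below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's local training step: gradient descent for
    linear regression on that client's private (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only these weights leave the client's premises

# Two hypothetical clients whose private data follows y = 3x + noise
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 3 * X[:, 0] + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(1)
for _ in range(5):  # federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # server averages weights only

# global_w converges near the true slope of 3.0, even though the
# coordinating server never saw any client's raw X or y.
```

The property the article is describing is visible in the loop: only `global_w` and the returned weights cross the boundary between client and server, which is why a vendor can improve a shared model without the raw data ever leaving the customer.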
The Human Behind the AI

We are still in the early days of generative AI. However, Leach thinks it will empower a new generation of cybersecurity tools and skills in the workforce that will help bridge disparate point solutions or even address entirely new aspects of cybersecurity.

For all his optimism, Leach says the human remains the crucial component in driving AI adoption in cybersecurity.

“We can continue to talk about technology; we can talk about the advent of AI and how it's going to help us. Ultimately, it takes good people to ensure that we are protecting our organisations. AI can’t by itself control our strategy; we must take a more holistic approach and build the right IT foundations.”

“We will need thought leaders to adopt AI in the right way, using the right strategies to protect our organisations. It comes down to having good programmes in place to continually educate the cybersecurity industry and the business on the opportunities that AI provides. AI is not going to replace the need for ‘human in the loop’ decision-making for most companies anytime soon.”