The data impact of AI and the road ahead


As artificial intelligence (AI) gets incorporated into chatbots, decisioning systems, productivity apps and even Singapore’s Smart Nation infrastructure, what cybersecurity risks might emerge from its pervasive use? Can AI regulation help, and how can businesses reconcile the need to innovate with the inherent risks of embracing cutting-edge technology that is still barely understood?

We ask Professor Lam Kwok-Yan, Associate Vice President of Strategy and Partnerships at Nanyang Technological University (NTU), for his thoughts on the underlying challenges of using AI and the possible solutions that businesses can adopt.

One area that needs to be handled, but which may not be covered by regulations is the testing and validation of AI systems to ensure fairness… As AI systems start taking over more of our work, there should be a mechanism to test AI systems and verify that they are fair.
– Professor Lam Kwok-Yan, Associate Vice President of Strategy and Partnerships, Nanyang Technological University

The data at the root of AI

According to Prof Lam, to understand the dangers of AI and how organisations can defend against them, businesses must first have a clear appreciation of the role of data.

“With machine learning, you need to gather a lot of data and store them somewhere that can be used to train the models. The collection of all these data by itself becomes a security risk. For the AI model to make good decisions, it must by definition be trained with good, useful data. This makes the data repository an obvious target for attackers,” explained Prof Lam.

It is worth noting that data leakage can also happen through the AI model itself. For instance, a hypothetical AI model used to manage an organisation’s cybersecurity defences, trained on privileged information obtained from within, could allow attackers to glean information about the network from its responses.
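To see why a trained model can reveal information about its training data, consider a crude membership-inference check: models that overfit tend to be more confident on records they have seen than on records they have not, and an attacker can exploit that gap. The sketch below is illustrative only; the synthetic dataset and the random-forest model are assumptions, not a description of any specific system.

```python
# Minimal sketch of how a trained model can leak information about its training
# data: a crude membership-inference signal based on prediction confidence.
# The dataset and model are illustrative; real attacks are more sophisticated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # overfits easily

conf_in = model.predict_proba(X_in).max(axis=1)    # records the model was trained on
conf_out = model.predict_proba(X_out).max(axis=1)  # records it never saw

# Higher confidence on training records is a signal an attacker can use to
# guess whether a given record was part of the training set.
print("mean confidence on training records:", conf_in.mean().round(3))
print("mean confidence on unseen records:  ", conf_out.mean().round(3))
```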

Apart from guarding against data leakage or theft, organisations must also protect the integrity of their data from unauthorised modification, says Prof Lam. As businesses increasingly lean on AI to make decisions or perform work, attackers are incentivised to identify and corrupt the data sources used to train the AI model. Specifically, data poisoning attacks can cause the resulting AI model to make erroneous decisions in select circumstances that attackers can exploit.
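As a concrete illustration of how corrupted training data degrades a model, the sketch below flips the labels of a fraction of training records before fitting a simple classifier and compares it against a model trained on clean data. The dataset, classifier, and 15% poisoning rate are assumptions made for illustration, not details from the interview.

```python
# Minimal sketch of a label-flipping data poisoning attack (illustrative only).
# Assumes a scikit-learn style workflow; dataset and poisoning rate are made up.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker silently flips the labels of a fraction of training records.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy trained on clean data:   ", clean_model.score(X_test, y_test))
print("accuracy trained on poisoned data:", poisoned_model.score(X_test, y_test))
```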

A faster attack cadence

In the near term, Prof Lam notes that we can expect a much faster cadence of attacks, as generative AI is abused to rapidly generate convincing fake messages, images, and even videos based on real developments around the world.

In the cybersecurity realm, however, AI is just another tool that can be used for either good or bad, observes Prof Lam. For the good guys, AI can play a vital role in processing vast amounts of data to weed out false positives (and false negatives) so that cybersecurity professionals can focus on actual attacks. In this respect, it is not much different from the traditional cat-and-mouse game of cybersecurity, albeit played at a much faster pace.

“Security analysts might leverage AI to look for vulnerable software so that they know what to patch. Likewise, attackers can also look for weaknesses using AI to identify components they can attack. It depends on who is faster: The defenders patching the vulnerability, or the attackers seeking to exploit it,” said Prof Lam.

For all the excitement over AI today, Prof Lam cautioned that it could be a problem “if everyone is essentially relying on the same AI engine”, alluding to the possibility of a large number of AI-infused services being impacted by newly discovered flaws in the underlying base model.

Regulating AI

Should regulations on AI be introduced? According to Prof Lam, the answer comes down to striking a balance between innovation and protection. “Not necessarily. If you are using it for non-critical services, then it may not be needed. Or use it in a sandbox environment so that there is room for innovation.”

Where regulation of AI is required, it can be incorporated by strengthening existing regulatory frameworks. “If you use AI in banking transactions, then I think financial regulations could be updated because it's part of the financial systems; if you use AI in the energy sector, it would fall under critical infrastructure.”

Similarly, data governance guidelines can be leveraged to cover the data used to train AI models. Finally, consideration should be given to ensuring fairness and limiting bias in AI systems.

“One area that needs to be handled, but which may not be covered by regulations, is the testing and validation of AI systems to ensure fairness. Could the machine learning data be biased so you end up making biased decisions? As AI systems start taking over more of our work, there should be a mechanism to test AI systems and verify that they are fair,” he said.
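One simple form such testing could take is an automated fairness metric computed on a model’s predictions. The sketch below computes a demographic parity difference between two hypothetical groups; the data, the protected attribute, and the tolerance mentioned in the comment are assumptions for illustration only.

```python
# Minimal sketch of one possible fairness check: demographic parity difference.
# The predictions, group labels, and tolerance below are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (0 and 1)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)          # hypothetical protected attribute
y_pred = (rng.random(1000) < np.where(group == 0, 0.55, 0.40)).astype(int)

gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity difference: {gap:.3f}")
# A validation pipeline might flag the model if the gap exceeds a tolerance, e.g. 0.1.
```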

Conclusion: Prioritise data protection

If there is one certainty, it is the renewed interest in hiring data scientists as organisations seek to derive more value from their data, whether through traditional analyses or AI systems. For cybersecurity professionals, this translates into an urgent need to move beyond traditional cybersecurity strategies and adopt a more holistic approach to protecting their data.

“If data scientists see value in a given repository of data, it means that this data is just as likely to be valuable to others. So, the protection of this data must be part of the discourse with the CISO, on top of traditional cybersecurity concerns such as configuring the firewall.”

“Of course, the easiest, most conservative way to protect your data is to lock it up; only allow access to the data by its intended users. But you would not be getting any value from your digitalisation. Yet allowing the data to be accessed by more parties increases your risk, hence the recent emphasis on Privacy-Enhancing Techniques (PET). What you need is a mechanism or tool to manage and facilitate data access while protecting against data leakage,” he summed up.
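One example of a privacy-enhancing technique along these lines is differential privacy: the sketch below releases an aggregate statistic with Laplace noise instead of exposing raw records. The dataset and the privacy parameter (epsilon) are invented for illustration and are not recommendations.

```python
# Minimal sketch of one privacy-enhancing technique: releasing an aggregate
# statistic with Laplace noise (a basic differential-privacy mechanism)
# instead of exposing the raw records. Epsilon and the data are illustrative.
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Return a noisy count; the sensitivity of a counting query is 1."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.default_rng().laplace(scale=1.0 / epsilon)
    return true_count + noise

salaries = [4200, 5100, 3900, 8800, 6100, 7500]   # hypothetical sensitive data
print("noisy count of salaries above 6000:",
      round(dp_count(salaries, lambda s: s > 6000)))
```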

 
