The Dual Edge of AI: Promise and Perils for Cyber Defence

  • April 30, 2024
  • Allessia Chiappetta, JD Candidate, Osgoode Hall Law School, and M. Imtiaz Karamat, Associate, Deeth Williams Wall LLP

Artificial Intelligence (AI) has entered the mainstream, with the success of technologies like OpenAI’s ChatGPT fuelling its popularity on a global scale. Beyond the average consumer, AI is revolutionizing how businesses operate across every industry. Among its diverse roles, the use of AI in cybersecurity has become an important topic in the broader conversation, one characterized by both promise and peril: AI is a major asset for organizations bolstering their security defences and for threat actors working to infiltrate those safeguards.

The AI Takeover

Canadians are embracing AI more than ever: a recent poll found that at least 30% of Canadians now use AI tools. At the consumer level, AI may be used for completing household tasks, listening to music, and navigating social media. However, the capabilities of AI are even more evident in its business applications. For example, organizations are using AI to enhance hospitality services, provide financial planning services, automate their online platforms, and optimize internal operations by reducing the time required for less complex tasks.

AI has also gained a foothold in cybersecurity strategies as part of threat detection and response. Canadian companies like BlackBerry Ltd. have notably embraced AI in cybersecurity. Once known as a smartphone powerhouse, BlackBerry is now a cybersecurity firm renowned for its Cylance AI products, which aid in detecting malware and preventing cyberattacks. In October 2023, the company announced a generative AI-based cybersecurity assistant designed to predict customer needs and proactively provide information without requiring manual queries.

By harnessing AI-driven technologies, organizations can proactively identify and mitigate potential cyber risks, thereby safeguarding their digital assets and operations. Yet alongside these advancements come new challenges, particularly when AI is used by threat actors seeking to exploit an organization’s cyber vulnerabilities. This article explores the multifaceted implications of AI in cybersecurity, drawing insights from industry developments, scholarly research, and government guidance. From the transformative potential of AI in securing critical infrastructure to the risks it poses in the wrong hands, we navigate the complexities of this evolving paradigm.

AI Supports Cybersecurity

AI is revolutionizing cybersecurity by significantly enhancing threat identification and prevention capabilities. The AI employed in this area encompasses a spectrum of techniques, each tailored to address specific cybersecurity challenges. Machine learning algorithms, for instance, may play a pivotal role in analysing large datasets and detecting patterns indicative of cyber threats. Deep learning techniques, a subfield of machine learning, may also enable AI systems to perform intricate tasks such as image recognition and natural language processing, thereby enhancing threat detection procedures.

One of AI’s pivotal contributions lies in its ability to analyse vast volumes of data from diverse sources to pinpoint and quickly act on potential cyber threats. Utilizing sophisticated algorithms, certain AI models can excel in scrutinizing user behaviour patterns and network activities, monitoring and alerting an organization upon detecting an anomaly indicative of a cyberattack. For instance, IBM’s Security QRadar Suite harnesses AI technology to learn the typical behaviour in an organization’s network and then compares this with incoming network traffic to identify deviations from established norms, alerting cybersecurity teams to potential threats in real time. The analytical capabilities of this technology can also assist an organization in investigating these potential flags. By automatically mining threat research and intelligence, the QRadar Suite can assist in identifying affected assets and checking for indicators of compromise in a fraction of the time it would take to conduct a manual review.
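
To make the behavioural-baselining idea concrete, the following is a minimal, hypothetical sketch in Python. It is not IBM’s QRadar implementation, and the failed-login counts are invented; it simply illustrates the learn-then-compare pattern of modelling normal activity and flagging statistical deviations.

```python
import numpy as np

# Hypothetical data: one week (168 hours) of failed-login counts
# serves as the learned baseline of "normal" network behaviour.
rng = np.random.default_rng(seed=42)
baseline = rng.poisson(lam=5, size=168)   # roughly 5 failed logins per hour
incoming = [4, 6, 5, 7, 48, 5]            # 48 simulates a brute-force spike

mean, std = baseline.mean(), baseline.std()

def is_anomalous(count, threshold=3.0):
    """Flag any hour deviating more than `threshold` standard deviations from the baseline."""
    return abs(count - mean) / std > threshold

for hour, count in enumerate(incoming):
    if is_anomalous(count):
        print(f"ALERT: hour {hour} saw {count} failed logins (baseline ~{mean:.1f})")
```

Production tools apply far more sophisticated models across many signals at once, but the underlying pattern of learning a baseline and scoring new activity against it is the same.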

Beyond its raw processing capabilities, AI may play a key role in differentiating between legitimate and suspicious activities, particularly in the realm of social engineering. By leveraging sophisticated algorithms, AI systems can analyse email content and discern fraudulent emails faster than humans, including sorting spam and phishing emails and redirecting them to junk folders. Indeed, Google recently announced that it is leveraging AI technology in a major Gmail security update that targets phishing, spam, and other issues for its users. This capability can empower organizations to thwart cyberattacks and safeguard critical assets from the harm that could stem from a successful phishing attack, such as a data breach.
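
As a rough illustration of how such filtering can work under the hood, here is a toy Python sketch using scikit-learn. It is not Google’s Gmail system, and the sample messages and labels are invented; it shows only the common approach of converting email text into word-frequency features and training a classifier on labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a production filter learns from millions of labelled messages.
emails = [
    "Your invoice for March is attached, let me know if you have questions",
    "Meeting moved to 3pm tomorrow, same room",
    "URGENT: verify your account now or it will be suspended, click here",
    "You have won a prize! Confirm your banking details to claim it",
]
labels = ["legitimate", "legitimate", "phishing", "phishing"]

# TF-IDF turns email text into word-frequency features; naive Bayes learns
# which words are statistically associated with each class.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

incoming = ["Please click here to verify your account details immediately"]
print(model.predict(incoming))  # expected: ['phishing']
```

Real filters train on vastly larger corpora and combine many signals beyond message text, such as sender reputation and link analysis.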

Beyond automating security processes, AI has the potential to augment cybersecurity measures by continuously learning from data patterns and historical incidents, thereby enhancing threat detection capabilities over time. This iterative learning process may enable AI systems to adapt to evolving threats and detect previously unknown attack vectors, providing organizations with proactive cybersecurity measures that can bolster their overall security posture.
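
One way this kind of continuous learning can be realized is through incremental model updates. The hypothetical sketch below extends the classifier idea with scikit-learn’s partial_fit, refining the model as analysts confirm new incidents rather than retraining from scratch; the messages and labels are again invented.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# HashingVectorizer gives a fixed feature space, so the model can be
# updated incrementally as newly labelled incidents arrive.
vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
model = MultinomialNB()
classes = np.array(["legitimate", "phishing"])

# Initial training on an invented first batch of analyst-labelled messages.
batch1 = ["quarterly report attached", "click here to reset your password now"]
model.partial_fit(vectorizer.transform(batch1), ["legitimate", "phishing"], classes=classes)

# Later, newly confirmed incidents refine the model without retraining from scratch.
batch2 = ["your parcel is held, pay the customs fee via this link"]
model.partial_fit(vectorizer.transform(batch2), ["phishing"])
```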

AI Challenges Cybersecurity

While AI represents a valuable asset in fortifying an organization’s defences, it may also introduce challenges that necessitate careful consideration and mitigation strategies. This is especially evident with generative AI, a type of AI that is trained on large datasets and uses this information to generate new content. KPMG’s recent CEO Outlook, an annual survey of the challenges facing CEOs of major companies around the world, found that 93% of Canadian CEOs (and 82% of CEOs surveyed globally) are concerned that generative AI will facilitate additional cyberattacks.

This concern is also reflected by government organizations at home and abroad that are investigating the unlawful use of generative AI models for cybercrime. The Canadian Centre for Cyber Security’s recent guidance on large language models (LLMs), a subset of generative AI that understands and generates natural language and other content, highlights LLMs as a growing threat to Canada’s information ecosystem. One of the most likely consequences is the involvement of LLMs in social engineering attacks, such as phishing campaigns. By leveraging LLMs, threat actors can quickly generate targeted phishing emails that appear nearly indistinguishable from human-written text. These emails may be used to trick victims into disclosing sensitive information, such as account credentials, an event that can lead to widespread consequences for an organization if the wrong credentials are exposed. Indeed, this use case was recently tested by a news organization, which used available resources to build its own AI bot and found that it could manipulate the technology to craft convincing spear-phishing messages and other common scam communications.

In addition to social engineering attacks, a common concern with LLMs is whether they can be used to write malicious code for threat actors. Certain LLMs may be capable of writing code snippets in popular programming languages (e.g., JavaScript or Python), which would lower the technical skill level required for threat actors to carry out cyberattacks. For example, a Check Point Research blog post demonstrated how ChatGPT could be used to create malicious code. However, the state of current AI technology may limit the severity of such attacks: as the Canadian Centre for Cyber Security explains in its guidance on the matter, it is unlikely in the current circumstances that threat actors could use LLMs to create sophisticated code leading to significant incidents like a zero-day attack.

A Not So Simple Future

It is clear that AI does not have a straightforward role in cybersecurity, and one can easily get lost in the various use cases. To further complicate the matter, AI is not solely a cybersecurity tool. As discussed above, it is a multi-faceted technology with diverse applications across a plethora of industries. As with any technology, AI systems are also susceptible to risk, and AI models can be hacked by threat actors seeking to exploit the system or the data contained within for their own gain. As this technology continues to be widely implemented by organizations, it is important to carefully consider the many aspects of AI and adopt proper policies and procedures to ensure you are prepared for the modern landscape.

Any article or other information or content expressed or made available in this Section is that of the respective author(s) and not of the OBA.