
Node4 Cybersecurity News April 2024 – Navigating the Risks of Integrating AI Into Your Cyber Security Capability

Monthly cybersecurity updates and information from the experts.

Welcome to our monthly Cybersecurity Newsletter! We’re delighted to share expert insight from leading cybersecurity consultancy ThreeTwoFour, the latest of Node4’s strategic acquisitions. This month, we detail the risks organisations must consider when deciding to integrate AI into their cyber security programme.

AI, THOUGH POWERFUL, IS NO MAGIC WAND

AI is a powerful tool, and institutions should embrace its strengths whilst staying mindful of its pitfalls.

The key lies in a delicate balance between technology and humanity. Use AI to augment defences, not replace them. Keep human expertise in the loop and treat data with the respect it deserves.  Only then can you navigate the complex terrain of cyber security with AI as part of your defences. 

Ransomware attacks are expected to escalate with the help of AI, according to the UK’s National Cyber Security Centre (NCSC). Malicious actors are already using AI to: 

  • Craft hyper-personalised phishing emails that bypass traditional filters. Imagine an email from a “co-worker” with perfect grammar, familiar style, and references to inside jokes – even AI will struggle to spot the fake.
  • Automate reconnaissance and vulnerability scanning, identifying weaknesses in your defences faster than ever before. It’s like having a team of digital burglars with Google Maps and master keys.
  • Develop self-propagating ransomware that infects entire networks in minutes.  A digital wildfire, spreading out of control before you even know it has started. 

So, it’s no surprise that the call to include AI in cyber security measures is growing louder. It promises enhanced protection and automated vigilance, but before investing blindly in AI to bolster your cyber security, it’s vital to consider the implications of introducing it.

An incident involving a DPD AI chatbot highlighted some of the obvious risks, but other tools can create a false sense of security and even introduce additional vulnerabilities.

ADVERSARIAL ATTACKS

Think AI can’t be outsmarted? Think again. Malicious actors are crafting clever ways to manipulate AI systems: feeding them poisoned data, injecting malicious samples or twisting signatures to throw them off guard.

For example, adversaries might intentionally mislabel malicious software samples as benign, or label benign files as malware, during the AI training phase, resulting in the AI-enabled antivirus system failing to recognise genuine malware.
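
To make this concrete, the sketch below is a minimal, hypothetical illustration of label flipping at training time. It uses scikit-learn and purely synthetic data, not any real antivirus product; the exact drop in detection rate will vary with the data and model.

```python
# Toy, hypothetical illustration of training-data poisoning (label flipping).
# Synthetic data and a simple classifier only; not a real antivirus model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "file feature vectors": label 1 = malicious, 0 = benign
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def malware_recall(train_labels):
    """Train on the given (possibly poisoned) labels; report recall on true malware."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))

print("recall with clean labels:   ", round(malware_recall(y_train), 3))

# An adversary flips 30% of the malicious training samples to "benign"
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(malicious_idx, size=int(0.3 * len(malicious_idx)), replace=False)
poisoned[flipped] = 0
print("recall with poisoned labels:", round(malware_recall(poisoned), 3))
```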

OVER-RELIANCE ON AUTOMATION

Yes, automation is AI’s superpower, but handing over the cyber reins entirely can be a recipe for disaster. If AI handles all security-related tasks, organisations may neglect the importance of human intervention and oversight, and fall victim to an illusion of invincibility by relying solely on AI for cyber security.

For example, a company that relies solely on an AI-driven automated patch management system runs the risk of missed patches, or of unnecessary disruption to operations, if the AI fails to account for emergent threats or zero-day vulnerabilities.

FOCUS ON THREETWOFOUR SERVICES –
CYBER SIMULATION TRAINING SESSIONS

It’s not enough to have a comprehensive cyber security strategy and cutting-edge technology to protect your organisation. If your end users aren’t savvy enough to detect threats, your protection can be undermined very quickly.

That’s why ThreeTwoFour offers real-world cyber simulation training, delivered with Cognitas Global. Teams are run through a simulated cyber security event, preparing them for the real thing, expanding their security awareness and increasing the effectiveness of your own cyber strategy at the same time.


CREATING A FALSE SENSE OF SECURITY

There is a tendency to see AI as the silver bullet in the ongoing fight to safeguard against cyber attackers. Terms such as “AI-enhanced threat detection”, “AI-powered protection” or “AI-driven threat intelligence” are frequently used across the industry and various product descriptions, creating a false sense of security.

While AI is a valuable tool for enhancing security, it’s vital that organisations are aware of the limitations of AI and how to effectively apply its capability in practice. Stakeholders with constrained budgets might be looking at AI as their silver bullet, but it’s not.

LACK OF AI EXPERTISE AND TRANSPARENCY

Many AI algorithms, particularly in deep learning, operate as ‘black boxes’, making it challenging to interpret their decision-making processes. Key suppliers of ‘AI-enabled’ tools are unlikely to share the inner workings of their solutions with the companies buying them, compounding the lack of transparency and encouraging blind trust. This lack of transparency poses a significant risk, as companies may struggle to understand why a particular threat was flagged or, conversely, overlooked.

This could lead to mistrust of AI systems and hinder effective response strategies if the individuals using those systems do not trust the output they produce.

PRIVACY RISKS

AI needs data; mountains of it. The mishandling of such data poses a severe risk to privacy and regulatory compliance. For example, an AI-based monitoring tool might inadvertently collect and store personal data beyond its intended scope. This not only puts organisations at risk of breaching data retention regulations, but also increases the amount of data that attackers can target. Regulators are taking note too, with the EU’s AI Act being the first comprehensive AI law, due to apply from 2025 at the earliest.

THE AI ADVANTAGE: A TEAM, NOT A REPLACEMENT

This insight is not to say AI is the enemy. Far from it.  

AI is a powerful tool, but like any tool, it needs careful operation. The key lies in a balanced approach, where AI augments human expertise, not replaces it.  

To help avoid some of the risks we highlighted, companies should consider the following: 

  • Keep humans in the loop: AI excels at crunching numbers, but humans understand intent and nuance. Humans should oversee AI decisions and provide critical context, reviewing AI-based decisions and actions through systematic monitoring, assessment and audit. Governance processes are needed to give companies transparency and accountability for AI, and to help manage the risks that AI tools introduce.
  • Data integrity is paramount: Corrupted data could lead your AI-based defences to misinterpret benign activity as threats…or vice versa. Companies must therefore treat AI training data with the respect it deserves, using strong security measures that include access controls, integrity-checking mechanisms, and threat detection and monitoring (a simple integrity-checking sketch follows this list).
  • Transparency matters: Demystify AI decision-making. Don’t let your security system become a black box – understand why it flags certain threats and how it arrives at its conclusions. Users should be given effective training in the functionality and limitations of the AI tools they are implementing, so they understand how those tools work and why they may make certain decisions or recommendations.
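
On the integrity-checking point above, one simple mechanism is to fingerprint an approved training dataset and verify it before every training run. The sketch below is a minimal, hypothetical Python example; the directory and manifest names are illustrative, not part of any Node4 or ThreeTwoFour tooling.

```python
# Minimal sketch of one integrity-checking mechanism for AI training data:
# record SHA-256 hashes of an approved dataset, then verify them before training.
# Paths and file names below are hypothetical examples.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(data_dir: Path, manifest: Path) -> None:
    """Snapshot hashes of every file in the approved training set."""
    hashes = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*")) if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files that are missing or have changed."""
    expected = json.loads(manifest.read_text())
    problems = []
    for name, digest in expected.items():
        path = data_dir / name
        if not path.is_file() or sha256_of(path) != digest:
            problems.append(name)
    return problems

if __name__ == "__main__":
    data_dir = Path("training_data")          # hypothetical dataset directory
    manifest = Path("training_manifest.json") # hypothetical manifest location
    if not manifest.exists():
        record_manifest(data_dir, manifest)
        print("Recorded manifest for the approved training set.")
    else:
        changed = verify_manifest(data_dir, manifest)
        if changed:
            raise SystemExit(f"Training data failed integrity check: {changed}")
        print("Training data matches the approved manifest.")
```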

You can find out more about ThreeTwoFour and its services below.

(This piece originally appeared on ThreeTwoFour’s website, which can be read in full here: https://three-two-four.com/insights/risks-of-integrating-ai-into-your-cyber-security-capability/)