How AI Impacts Security - Threat or Defense?
As AI adoption accelerates, businesses face mounting cyber threats—and urgent choices about secure implementation.
In an era of rapid technological advancement, artificial intelligence (AI) promises to revolutionize industries from enhancing efficiency to unlocking completely new capabilities. AI systems, with their ability to learn, adapt, and perform complex tasks, are a powerful technology. However, the risk associated with artificial intelligence spans a wide spectrum that, if not managed properly, could lead to significant challenges and unintended consequences.
From healthcare, finance, transportation, and more, the very characteristics that make AI so valuable—its autonomy, speed, and data-processing capabilities— can also be sources of potential hazards. AI Risks range from cybersecurity to ethical dilemmas, legal issues, and social impacts. Moreover, for AI, it is impossible to predict all risks at the outset.
The development, implementation, and use of AI must at all times be accompanied by careful consideration of its implications as it evolves. As with any other type of business risk, adopting an AI Management System (AIMS) can help companies continually manage and mitigate risks.
What is artificial intelligence (AI)?
Artificial Intelligence (AI) is a multifaceted field of computer science that focuses on creating systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding. At its core, AI is about creating algorithms that enable machines to perform cognitive functions, akin to the human brain.
The development of AI involves various subfields, including machine learning, where algorithms are trained to make predictions or decisions based on data; natural language processing, which enables machines to understand and respond to human language; and computer vision, which allows systems to interpret and make decisions based on visual data.
AI's capabilities are not just limited to mimicking human intelligence. It extends to enhancing our abilities to analyse and process vast amounts of data, leading to insights and efficiencies that were previously unattainable. AI systems can learn from experience, adapt to new inputs, and perform human-like tasks with increasing accuracy and autonomy.
As AI continues to evolve, it is becoming an integral part of various industries, driving innovation and efficiency. From healthcare, where it assists in diagnosing diseases, to finance, where it helps detect fraudulent activities, AI's applications are vast and transformative. It is also a key player in the realm of cybersecurity, where it aids in detecting and responding to threats, and in marketing, where it personalises customer experiences.
When done right, cybersecurity isn’t a siloed cost centre or a block on innovation and growth, as it’s often seen by business leaders. On the contrary, it can be a powerful business enabler. A mature cybersecurity posture could help an organisation to:
- Build customer trust and drive competitive differentiation
- Provide the foundations on which successful digital transformation initiatives can be built
- Enable flexible working, which in turn can empower staff to be more productive, while improving work-life balance for many
- Support expansion into new markets, if local laws and regulations require enhanced levels of cybersecurity
By the same rationale, AI-powered cybersecurity could supercharge these benefits. That’s certainly the impression our respondents gave. In fact, 81% are already using AI-driven tools as part of their cybersecurity strategy, with a further 16% exploring options. Additionally, over two-fifths (42%) say implementing automation or AI-driven tools is a top priority for improving cybersecurity in the next 12 months.
Over half (52%) say they’re happy to use AI for essential day-to-day security-related processes like automated asset discovery, risk prioritisation, and anomaly detection. That’s just the tip of the iceberg. AI offers a wealth of capabilities that can help to improve:
Data protection: AI can be used to discover, classify, and encrypt sensitive information, as well as monitor access to data stores and flag immediately if they have been breached.
Endpoint security: AI can be a key ingredient in endpoint detection and response (EDR)—analysing behavioural data and context to detect and block suspicious activity, malware, and other threats.
Cloud security: AI algorithms can do the same for cloud environments, monitoring for unusual activity that deviates from a “learned” baseline and alerting security teams.
Advanced threat hunting: By trawling through vast quantities of network data, AI tools can spot threat actors before they have time to cause lasting damage.
Identity and access management (IAM): AI can make IAM more intelligent, creating unique behavioural profiles for individuals based on various aspects such as keystrokes and mouse movements. It supports continuous authentication for enhanced security and zero-trust operations.
The impact on the attack surface
However, as optimistic as IT and security leaders are about the potential for AI to transform cybersecurity, they are also concerned that the technology may open them up to new risks. Nearly all (94%) respondents told us they think AI will have a negative impact on attack surface management (ASM) in the next 3-5 years.
The size of the corporate cyber-attack surface has long been a concern for IT security leaders, who have seen digital investments outpace their ability to mitigate escalating risk. Now they are worried that a new fleet of AI tools may make this job even harder. Their concerns include:
- Sensitive data exposure
- A lack of transparency around data processing/storage
- Exploitation of proprietary data by untrusted AI models
- Compliance challenges
- More endpoints and APIs to monitor
- Shadow AI or Unsanctioned AI
This is not an exhaustive list. In fact, OWASP has a whole Top 10 devoted to Large Language Model (LLM) risks. The National Cyber Security Centre (NCSC) recently warned that such models could be especially vulnerable to attack if developers rush them to market without adding adequate security provisions. Among the most commonly cited threats are prompt injection, supply chain attacks, and data poisoning. They could lead to sensitive data theft, and manipulation of models to produce unintended outputs—potentially sabotaging operations or enabling wider system access.
AI looms large over the threat landscape
AI represents a multi-sided threat to global organisations. It’s not just about the risks posed to their attack surface from AI systems themselves, but also potential AI-powered attacks. Over half (53%) of respondents believe that the complexity and scale of these attacks will drastically increase in the future, requiring a new approach to cyber risk management.
It’s a threat flagged by the NCSC, which has warned that the coming two years could see:
- An increase in the “frequency and intensity” of cyber threats, including reconnaissance, vulnerability research and exploit development (VRED), social engineering, basic malware generation, and data exfiltration
- More threat actors are using AI-as-a-service offerings
- More automation in various parts of the cyber-attack chain
- AI used to develop zero-day exploits
Assurances and next steps
Some 44% of respondents say they need to understand more about the technology before they consider using AI-powered security tools. That’s understandable given their concerns about AI expanding the attack surface. Nearly half (46%) currently manage their attack surface risks by regularly assessing and monitoring third-party vendors for vulnerabilities, conducting thorough security assessments. They will surely want to expand these checks to AI security vendors before adopting the technology.
Other steps to consider to manage risk across the AI attack surface could include:
Developing a comprehensive AI security strategy incorporating advanced threat modelling, threat hunting, AI-based risk assessments, AI security controls, and detailed incident response plans.
Ensuring the quality, integrity, and reliability of AI training data to ensure AI models are as accurate and effective as possible, and to address concerns of bias.
Implementing industry-standard AI security frameworks and best practices like those from NIST, MITRE, OWASP, Google, and ISO.
Integrating AI security with existing security and cybersecurity processes for seamless end-to-end protection across all environments.
Conducting regular employee training and awareness programs to create an AI security-aware culture.
Continuously monitoring, assessing, and updating AI models to check for and remediate vulnerabilities, and improve accuracy, performance, and reliability.
More generally, organisations should consider updating their security strategy to account for the elevated threat from AI-powered attacks. AI security tools can help by:
- Analysing large volumes of data to detect anomalies in real-time
- Scanning for vulnerabilities and misconfigurations, and other security gaps
- Identifying/mitigating cyber-attacks in real time
- Automating threat detection and response tools to free up stretched security teams
- Leveraging the latest threat intelligence to stay one step ahead
- Closing security skills gaps by assisting security analysts
The opportunity from AI security, as for AI in general, is too great to ignore. But only by assessing and then taking steps to continually manage associated risks can organisations truly hope to harness their full potential.
Share
What's Your Reaction?
Like
0
Dislike
0
Love
0
Funny
0
Angry
0
Sad
0
Wow
0
