AI on the Offensive

Across multiple industries we are witnessing a palpable surge in advanced AI-powered cyberattacks, which has triggered a significant and, frankly, necessary shift in organisational priorities. Crucially, Artificial Intelligence and Large Language Models (LLMs) have now unequivocally surpassed ransomware as the top cybersecurity concern for security leaders.


This isn’t just a fleeting trend; it’s a profound re-evaluation of risk driven by several critical developments:

AI-Driven Threats: The New Arsenal

The adversaries are getting smarter, faster, and more scalable. Attackers are now expertly leveraging AI to craft more sophisticated malware that can adapt and evade traditional defences. Automated phishing campaigns are becoming indistinguishable from legitimate communications, and the speed and scale at which vulnerabilities can be exploited are unprecedented. We’re seeing AI being used to pinpoint the most valuable data for ransomware, generate highly personalised Business Email Compromise (BEC) emails, and even produce convincing deepfakes for fraud and impersonation. The traditional defensive “human in the loop” can no longer keep pace with AI-augmented attacks.

LLM and Model Security: The Enterprise’s New Frontier

As LLMs become ever more deeply integrated into critical enterprise workflows – from customer service to code generation and data analysis – the focus on their inherent security risks has intensified dramatically. This month, security leaders are grappling with:

  • Prompt Injection: The insidious art of manipulating LLMs through specially crafted prompts to force unintended or malicious behaviours, bypassing guardrails, or even extracting sensitive information.
  • Data Poisoning: The highly concerning risk of deliberately manipulated training, fine-tuning, or embedding datasets introducing vulnerabilities, biases, or even hidden backdoors into a model. Imagine a ‘sleeper agent’ AI, waiting for a specific trigger.
  • Model Extraction (or Model Theft): The unauthorised access to LLMs to extract proprietary data or replicate their core functionality, potentially undermining competitive advantage and intellectual property.
  • Algorithmic Jailbreaking: Techniques to circumvent an LLM’s safety mechanisms, leading it to generate harmful or restricted content.

These aren’t theoretical risks; they are active threats that demand immediate and robust defensive strategies.

Policy and Oversight: Governments Stepping Up

Recognising the profound implications of AI for national security and societal well-being, regulatory bodies are accelerating their efforts. The UK AI Security Institute, for example, is actively prioritising research and guidance aimed squarely at mitigating critical AI-enabled threats. Their focus includes ensuring robust evaluation and oversight of advanced AI systems, particularly those with dual-use capabilities that could be maliciously repurposed. This governmental focus signals a clear intent to establish standards and accountability in the burgeoning AI landscape.

Corporate Anxiety: From Awareness to Action

A significant shift in corporate sentiment is undeniable. A majority of business leaders now openly cite AI data privacy and security as their leading concern. This anxiety is translating directly into increased spending and a rapid escalation of incident response efforts specifically geared towards addressing these emerging risks. Organisations are realising that the potential for data breaches, reputational damage, and regulatory penalties stemming from AI vulnerabilities is immense. The recent “State of Cybersecurity Resilience 2025” report from Accenture, for example, paints a stark picture: 90% of organisations are not adequately prepared to secure their AI-driven future.

June 2025: The Defining Security Issue

The confluence of rapidly evolving AI capabilities, their increasingly sophisticated exploitation by threat actors, and the subsequent reactive (and proactive) responses from regulatory bodies and enterprises alike, has cemented the security of AI models and systems – with a particular emphasis on LLMs – as the defining security issue of June 2025.

As security professionals, our focus has fundamentally shifted. While traditional threats remain, the intelligent, adaptive nature of AI-driven attacks demands a new paradigm of defence. We must embed security by design into every AI initiative, continuously monitor for novel attack vectors, and ensure our security strategies evolve as rapidly as the AI technology itself. This is not just a battle against new tools; it’s a new kind of war, where the very intelligence we seek to harness can be turned against us.