AI on the Offensive
Across multiple industries there’s a very real shift happening: AI isn’t just another line item on the risk register anymore, it’s the thing keeping security leaders awake at night. Talk to CISOs right now and you’ll hear the same theme again and again – Artificial Intelligence and Large Language Models (LLMs) have overtaken ransomware as the top concern in many boardrooms.
And to be honest, that feels about right. Ransomware is still brutal, but at least we understand the playbook. With AI, the playbook is being rewritten in real time.
This isn’t some fleeting Gartner hype cycle; it’s a hard reset of how organisations think about risk, driven by a few uncomfortable realities.
AI-driven threats: a new kind of arsenal
Attackers have always been creative, but AI has basically handed them power tools.
We’re seeing malware that can dynamically change its behaviour to evade traditional detection, phishing campaigns that read like something your actual finance director would send, and automated reconnaissance that can map your exposed assets faster than any human red team. AI is now being used to:
- sift through stolen data and identify the most valuable targets for extortion
- generate highly tailored business email compromise (BEC) lures that feel eerily personal
- crank out deepfake audio and video convincing enough to push payment approvals over the line
The ugly bit is the speed and scale. Once these attacks are automated and tuned, they can hit thousands of organisations with very little extra effort. The comforting old idea of “a skilled attacker carefully targeting you” is giving way to industrialised, AI-assisted campaigns that adapt as they go. A traditional “human in the loop” defence model simply can’t keep up on its own anymore.
LLM and model security: the new enterprise attack surface
As LLMs creep into everything—from customer support to code review to data analysis—they quietly become high-value targets in their own right. A lot of the conversations with security leaders right now revolve around a handful of specific worries:
- Prompt injection – Attackers craft inputs that hijack the model’s behaviour, bypass guardrails, leak internal data, or get the system to perform actions it really shouldn’t.
- Data poisoning – Training or fine-tuning data gets manipulated so the model learns the wrong things or bakes in hidden backdoors, ready to be triggered later. Think “sleeper agent” rather than simple bug.
- Model extraction and theft – Systematic querying to clone a proprietary model’s behaviour, or outright theft of weights and training data, undermining both security and competitive advantage.
- Jailbreaking – Techniques designed to coax supposedly “safe” models into generating restricted or harmful outputs, often by chaining prompts or abusing tools and plugins.
None of this is theoretical anymore. These are live issues in production systems, and they’re forcing organisations to treat model security as seriously as application security or identity.
Policy and oversight: governments finally paying attention
Governments have realised that AI isn’t just about productivity and innovation; it’s now squarely a national security issue.
In the UK, the AI Safety Institute has been set up specifically to focus on the risks from advanced AI systems, including models that can be repurposed or misused for cyber operations, disinformation, and other lovely things we don’t want spreading unchecked. Similar efforts are cropping up elsewhere, all circling the same themes: evaluation, transparency, accountability, and sensible guardrails around powerful models.
That’s a signal to enterprises. When regulators and governments start publishing testing standards and assurance expectations for AI systems, you can safely assume those will turn into audit questions and, eventually, legal obligations.
Corporate anxiety: from “interesting” to “urgent”
Inside organisations, the mood has shifted from curiosity to anxiety.
Boards are asking much sharper questions about AI data privacy, model governance, and how exposed they really are if one of these systems goes rogue or gets abused. Security teams are getting pulled into every AI initiative, often late in the day, and being asked to bless something that’s already half in production.
Recent resilience and risk reports all tell a similar story: most organisations are nowhere near ready to secure the AI they’re rushing to deploy. The gap between “what we’re building” and “what we can safely defend” is widening, and that’s driving a noticeable uptick in spend on AI-specific security controls, red-teaming, and incident response playbooks tailored to model failures and abuse cases.
June 2025: when AI security took centre stage
Right now, in mid‑2025, it feels like a tipping point.
The combination of rapidly evolving AI capabilities, well-funded threat actors experimenting with them, and regulators starting to move with purpose has pushed AI and LLM security to the top of the agenda. Traditional threats haven’t gone anywhere, but they’re no longer the only (or even the main) story in many risk discussions.
For security professionals, that means a real shift in mindset. It’s no longer enough to bolt on some generic controls and call it done. We need:
- security by design for every AI initiative, not just a last-minute review
- continuous evaluation and adversarial testing of models, not one‑off pen tests
- clear ownership for model risk, spanning security, data science, and the business
At the end of the day, this isn’t just a new vulnerability class; it’s a new kind of arms race. The same intelligence we’re so keen to harness for productivity and insight can, if we’re careless, be turned back against us with frightening efficiency.
The job now is to make sure that as AI moves deeper into the core of how our organisations operate, security moves with it—not two years behind, trying to clean up the mess.
