AI in Cybersecurity: When the Shield Becomes a Sword

You can’t open a browser tab these days without someone banging on about how AI is going to either save humanity or turn us all into paperclips. It is bloody everywhere. And in our corner of the universe—cybersecurity—it has created this properly fascinating, if slightly terrifying, dynamic that is fundamentally changing how we think about defence.

As an architect, I look at AI and Machine Learning and I see enormous potential. But I’d be lying if I said I didn’t also see a massive headache brewing. Because here is the uncomfortable truth we don’t talk about enough: the exact same technology that promises to help us predict threats before they materialise is simultaneously giving attackers a significantly bigger stick to beat us with. We are essentially engaged in an arms race where both sides are wielding the same weapons. The question isn’t whether AI will transform cybersecurity—it already has—but whether we can stay ahead of the curve.

The Arms Race Nobody Asked For

I know calling it a “double-edged sword” is a bit of a cliché, but honestly, if the shoe fits. On one side, we’ve got this promise of unprecedented efficiency. Imagine algorithms churning through your raw log data—mountains of it—spotting patterns and anomalies that even the most caffeinated human analyst would miss in a thousand years. That’s not science fiction anymore; that’s Tuesday afternoon in a modern SOC.

But flip that coin over and you’ve got the threat landscape, which has evolved dramatically. The numbers being reported for 2025 suggest AI-driven attacks have surged by over 70%. Every networked device remains a potential gateway, and we are now dealing with increasingly sophisticated adversarial attacks, where clever bastards manipulate the data we feed our models to trick them into missing threats entirely. It’s like digital camouflage that adapts in real time.
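
To make that concrete, here’s a toy sketch of the evasion idea, using scikit-learn’s IsolationForest as a stand-in detector. The data, the perturbation loop, and the step size are all invented for illustration; real evasion attacks operate under far tighter constraints than “nudge the features until the alert goes quiet”.

```python
# Toy adversarial evasion: shift a malicious sample towards the benign
# cluster until the detector stops flagging it. Illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))  # benign traffic features
attack = np.array([[6.0, 6.0]])                          # obviously anomalous point

model = IsolationForest(random_state=42).fit(normal)
print(model.predict(attack))  # [-1]: flagged as an anomaly

centroid = normal.mean(axis=0)
sample = attack.copy()
for step in range(20):
    sample += 0.1 * (centroid - sample)  # small perturbation per step
    if model.predict(sample)[0] == 1:    # 1 means classified as normal
        print(f"evaded detection after {step + 1} steps")
        break
```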

And let’s not pretend the attackers aren’t using AI themselves. Cofense reported in their 2025 analysis that they are detecting a new AI-powered phishing threat every 42 seconds. They are weaponising machine learning to automate attacks, making them faster and more sophisticated than anything we faced even two years ago. We used to rely on typos and bad grammar to spot phishing; now, AI writes better business English than half the people in my office.

Research from CrowdStrike’s 2025 Threat Hunting Report confirms what many of us suspected: adversaries are now using AI to gain unauthorised access, establish persistence, and deploy malware. Lower-skilled attackers—the ones who used to struggle with basic scripting—are now abusing generative AI to automate tasks that once required advanced expertise. We are seeing malware families like “Funklocker” and “SparkCat” that appear to be largely GenAI-built, which is both impressive and deeply worrying.

The Defence Toolkit Actually Looks Decent

Despite the gloom, the defensive toolkit has matured considerably. We aren’t bringing a knife to a gunfight here, even if sometimes it feels that way at three in the morning when you’re responding to an incident.

Take Darktrace. I’ve always liked their “Enterprise Immune System” analogy, and with Version 5, they’ve really leaned into it. It’s a smart way of conceptualising the problem—not just building a perimeter wall and hoping for the best, but developing a system that genuinely understands the difference between “self” and “non-self” in your network. They’ve extended their coverage beyond traditional networks into SaaS applications and cloud environments, which is exactly where the attack surface has expanded.

CrowdStrike has also stepped up, launching “Falcon AI Detection and Response” (AIDR) specifically to protect the AI interaction layer itself—stopping prompt injections and malicious agents before they can cause havoc. And IBM’s QRadar has evolved significantly; their new Investigation Assistant powered by watsonx is using LLMs to generate attack summaries and recommend responses, parsing through unstructured data that human analysts simply can’t process quickly enough.

These tools excel at the heavy lifting—analysing colossal datasets, spotting anomalies that would take human teams weeks to identify, and reacting at machine speed. But they aren’t magic boxes. You can’t just plug them in, walk away, and expect perfect security. They require proper tuning, continuous feeding of quality data, and—critically—skilled humans to interpret what the machines are telling you.

Getting the Fundamentals Right

Strategy isn’t just about buying the shiniest new AI-powered tool and ticking the “innovation” box on your quarterly objectives. It’s about getting the basics right first, because all the machine learning in the world won’t save you if your fundamental architecture is rubbish.

Take IoT, for example. It is still the Wild West out there. If you aren’t isolating those devices on their own network segments with proper Zero Trust principles, you are basically inviting trouble through the front door. I’ve lost count of how many breaches I’ve investigated where the initial access vector was some smart device still using the default “admin/admin” credentials, because nobody thought the office coffee maker could be a security risk. And even a crude audit of your flow logs can tell you whether that segmentation is actually holding, as in the sketch below.
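
A minimal sketch of that kind of audit, with an entirely hypothetical subnet and allowlist, might look like this: flag any flow where an IoT device talks to something it shouldn’t.

```python
# Toy segmentation audit: flag flows where IoT-segment devices reach
# destinations outside an allowlist. Subnet and addresses are hypothetical.
import ipaddress

IOT_SUBNET = ipaddress.ip_network("10.20.0.0/24")  # hypothetical IoT VLAN
ALLOWED = {ipaddress.ip_address("10.20.0.1")}      # e.g. the local gateway only

flows = [  # (src, dst) pairs standing in for real flow-log records
    ("10.20.0.15", "10.20.0.1"),
    ("10.20.0.23", "203.0.113.50"),  # the coffee maker phoning the internet
]

for src, dst in flows:
    if ipaddress.ip_address(src) in IOT_SUBNET and ipaddress.ip_address(dst) not in ALLOWED:
        print(f"policy violation: {src} -> {dst}")
```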

And then there’s the human element. You can have the most sophisticated AI in the world, but if Dave in accounts clicks a link he shouldn’t, you’ve got a problem. Here is where AI actually helps us. Companies like Cofense are leveraging a global network of over 35 million trained employees reporting phishing threats. They combine this human intelligence with Bayesian machine learning to identify patterns faster than any human team could manage. Their new AI-powered spam filter processes emails locally—preserving privacy—while reducing analyst overhead by about 30%. This means we can spot the really clever phishing lures before they land in Dave’s inbox.
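
Cofense’s actual pipeline is proprietary, so treat this as nothing more than a minimal sketch of the underlying Bayesian idea: learn word likelihoods from human-labelled reports, then score new mail against them. The training phrases, labels, and scikit-learn pipeline here are all my own invention.

```python
# Toy Bayesian phishing classifier: not Cofense's pipeline, just the core
# idea of learning word likelihoods from human-labelled email reports.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [  # hypothetical, hand-labelled stand-ins for reported emails
    "Your invoice is attached, please review",           # legitimate
    "Team lunch moved to Thursday",                      # legitimate
    "Urgent: verify your account or it will be locked",  # phishing
    "Your mailbox is full, click here to upgrade",       # phishing
]
labels = ["ham", "ham", "phish", "phish"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Urgent: click here to verify your mailbox"]))
# ['phish'] -- scored against word likelihoods from the reported examples
```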

What I’m Actually Building

Speaking of getting hands-on with this technology, I’ve been spending my evenings—and more than a few early mornings—messing around with anomaly detection in my home lab. It keeps me out of trouble, or so my wife claims, though she might have a different opinion about the electricity bill.

I’m currently training models on logs I’ve generated from my AWS and Azure test accounts. Nothing groundbreaking, just practical experimentation. I’m using Python and working primarily with the Isolation Forest algorithm.

The beauty of Isolation Forest is that it banks on a fundamental principle: anomalies are rare and distinct, making them easier to isolate from normal data. Rather than trying to profile what “normal” looks like—which is increasingly difficult in dynamic cloud environments—it focuses on isolating the outliers.
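
Stripped of my lab-specific feature extraction, the core of it looks roughly like this. I’m assuming scikit-learn and pandas, and the feature columns are placeholders rather than my actual schema.

```python
# Minimal Isolation Forest anomaly detection over log-derived features.
# Feature columns are placeholders, not my actual lab schema.
import pandas as pd
from sklearn.ensemble import IsolationForest

logs = pd.DataFrame({  # imagine one row per API call
    "bytes_out": [120, 135, 110, 98, 50_000, 125],
    "calls_per_min": [4, 5, 3, 4, 180, 5],
})

model = IsolationForest(n_estimators=100, random_state=0)
logs["anomaly"] = model.fit_predict(logs)  # -1 = outlier, 1 = normal

print(logs[logs["anomaly"] == -1])         # the 50,000-byte burst stands out
```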

I’m currently deep in the weeds of tuning the “contamination” parameter—that’s the setting that tells the model roughly what proportion of the data should be considered an outlier. It’s a delicate balance. Set it too high, and you drown in false positives; set it too low, and you miss the subtle signals of a breach. The challenge is finding that sweet spot where you catch genuine threats without driving yourself mad with alerts about perfectly normal but statistically unusual behaviour.
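
Here’s the kind of sweep I’ve been running to feel out that trade-off. Again, a sketch: the synthetic features stand in for my real log data, and the contamination values are just illustrative.

```python
# Sweep the contamination parameter and watch the alert volume change.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 4))  # stand-in for log-derived features

for contamination in (0.001, 0.01, 0.05, 0.1):
    model = IsolationForest(contamination=contamination, random_state=0)
    alerts = int((model.fit_predict(features) == -1).sum())
    print(f"contamination={contamination:<5}  alerts={alerts}")
# Too high and you drown in false positives; too low and the subtle
# signals slip through. The sweet spot is environment-specific.
```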

The Road Ahead

The intersection of AI and cybersecurity is moving faster than I’ve seen any technology space move in my entire career. We are heading toward a future of machine-versus-machine conflict, where AI systems engage in real-time combat at speeds human operators can’t even perceive.

What is clear is that treating AI as just another tool you can install and forget is a recipe for disaster. This requires continuous learning and adaptation. The threat actors certainly aren’t standing still; they are innovating constantly, and they don’t have compliance departments slowing them down.

At the same time, we can’t let perfect be the enemy of good. Waiting until you have the perfect AI security solution means you are already years behind. Start somewhere sensible. Combine the power of AI with human expertise, maintain that sceptical security mindset, and keep the fundamentals solid.

That’s the approach that actually works. The boring bits—proper architecture, defence in depth, continuous monitoring—combined intelligently with new capabilities. Not replacing one with the other, but leveraging both to strengthen your posture rather than just creating new attack surfaces wrapped in marketing buzzwords.

Until next time…