.
.
Cloud Breaches are Runtime Breaches

Cloud Breaches are Runtime Breaches

Cloud security still gets framed as a configuration problem: open storage, overly permissive security groups, the usual “how did this bucket become public?” postmortem.

That’s not wrong. It’s just not the interesting part anymore.

The incidents that really hurt tend to be runtime stories: stolen credentials, abused roles, compromised workloads, and quiet data access through legitimate APIs. In cloud environments, the control plane is the battlefield. And the control plane speaks identity.

[Read More . . .]

.
Build Systems are Production

Build Systems are Production

Supply chain security has a habit of refusing to fade, and it’s not because the industry enjoys pain. It’s because the attacker’s logic is perfect: why break into production when you can get your pipeline to deliver the compromise for you?

That’s not a theoretical risk. It’s a simple inversion of trust.

[Read More . . .]

.
Identity is the Control Plane

Identity is the Control Plane

Every security trend eventually circles back to identity, because identity is how modern systems decide what’s allowed to happen. That’s true in cloud. It’s true in SaaS. It’s true in DevOps. And it’s becoming brutally true in AI.

The twist is that identity isn’t mostly people anymore.

[Read More . . .]

.
When AI Agents Get Privileges

When AI Agents Get Privileges

The first wave of generative AI security was obsessed with content. Hallucinations. Toxicity. Brand risk. What the model might say if you asked it the wrong thing, the wrong way.

The second wave is about something more dangerous: capability.

Once an LLM can call tools—open tickets, query internal systems, send emails, push code, trigger workflows—it stops being a chat interface and starts looking like a new kind of privileged workload. Not malicious by design. Not even unreliable in the way people assume. Just connected to real systems with real permissions. And that’s enough.

[Read More . . .]

.
Securing AI-Driven Supply Chains for Industry 4.0

Securing AI-Driven Supply Chains for Industry 4.0

As we’ve moved through the second half of 2025, I’ve found myself spending a lot more time in the weeds of Industrial Control System threats. Not because it’s trendy, but because the shape of the modern supply chain is changing so quickly it’s hard to ignore. Next-gen automation, AI-driven decisioning, digital twins, and edge compute are turning supply chains into something closer to a nervous system than a chain—signals flowing constantly between software and the physical world. [Read More . . .]

.
A Security Architect's Guide to MITRE ATLAS

A Security Architect's Guide to MITRE ATLAS

If AI security still feels like a chaotic mix of research papers, vendor promises, and scattered best practice, that’s because—until recently—it lacked a shared vocabulary. In “normal” cybersecurity, that vocabulary has existed for years. When someone says “initial access” or “credential dumping”, a whole chain of assumptions and countermeasures snaps into place. In AI and ML, those conversations have often been fuzzier, full of hand-wavy phrases like “model manipulation” or “data poisoning” without a consistent way to describe what’s actually happening.

That’s why MITRE ATLAS matters. It’s not magic. It doesn’t secure anything on its own. But it gives defenders something the AI space has badly needed: a structured way to talk about adversarial behaviour against AI systems, grounded in real techniques and mapped to mitigations that can be engineered, tested, and improved over time.

[Read More . . .]

.
Mastering Threat Modelling for Next-Gen Workloads

Mastering Threat Modelling for Next-Gen Workloads

The cybersecurity landscape has shifted in a way that’s easy to miss if you’re still looking for the usual signatures: misconfigurations, missing patches, careless access control. AI hasn’t replaced those problems. It has simply added a new layer where the failure modes aren’t always obvious, and where an attacker doesn’t need to “break in” if they can influence what the model learns or how it decides.

AI is no longer a skunkworks experiment running in a separate corner of the organisation. It’s being embedded directly into fraud detection, forecasting, customer support, quality assurance, routing and logistics, security analytics, and decision-making workflows that carry real commercial weight. That changes what “risk” looks like. It also changes what threat modelling needs to cover. The demand for threat modelling and risk assessments tailored to AI workloads is rising fast for a simple reason: traditional approaches, while foundational, often under-describe what can go wrong when the system’s behaviour is learned rather than explicitly coded.

[Read More . . .]

.
AI on the Offensive

AI on the Offensive

Across multiple industries there’s a very real shift happening: AI isn’t just another line item on the risk register anymore, it’s the thing keeping security leaders awake at night. Talk to CISOs right now and you’ll hear the same theme again and again – Artificial Intelligence and Large Language Models (LLMs) have overtaken ransomware as the top concern in many boardrooms. And to be honest, that feels about right. Ransomware is still brutal, but at least we understand the playbook. [Read More . . .]

.
Federated Learning Security: Training Together, Staying Safe

Federated Learning Security: Training Together, Staying Safe

Federated Learning Security: Training Together, Staying Safe Federated Learning is one of those ideas that sounds almost too convenient when you first hear it. “Train a model across lots of organisations, but don’t move the data.” In a world where data is radioactive—healthcare records, financial histories, anything covered by regulation or common sense—that’s an enticing promise. And it’s a real shift. Instead of dragging sensitive datasets into a central lake and hoping governance keeps up, you ship the learning out to where the data already lives. [Read More . . .]

.
Federated Learning Security: Training Together, Staying Safe

Federated Learning Security: Training Together, Staying Safe

Federated Learning Security: Training Together, Staying Safe Federated Learning is one of those ideas that sounds almost too convenient when you first hear it. “Train a model across lots of organisations, but don’t move the data.” In a world where data is radioactive—healthcare records, financial histories, anything covered by regulation or common sense—that’s an enticing promise. And it’s a real shift. Instead of dragging sensitive datasets into a central lake and hoping governance keeps up, you ship the learning out to where the data already lives. [Read More . . .]

.
Securing the Edge: Lightweight Architectures for Robust AI

Securing the Edge: Lightweight Architectures for Robust AI

Securing the Edge: Lightweight Architectures for Robust AI

Edge AI is one of the most exciting shifts in modern architecture, not because it’s new, but because it’s finally usable. The pitch is simple: move intelligence closer to where data is created, reduce latency, keep sensitive information local, and stop treating connectivity like a guarantee. For industrial systems, retail analytics, logistics, smart environments, and countless sensor-heavy use cases, that shift can be the difference between “interesting demo” and “operationally valuable”.

But it comes with a security cost, and it’s a familiar one. The moment you distribute capability, you distribute risk.

Edge AI isn’t just cloud AI deployed somewhere else. It’s cloud AI stripped down, quantised, compressed, and pushed onto devices that were never designed to behave like hardened servers. It’s also frequently deployed into environments where physical access is plausible. That changes the threat model entirely. The attacker isn’t always remote, and they don’t always need a zero-day. Sometimes they just need a screwdriver and a quiet moment.

[Read More . . .]

.
Zero Trust for AI: Securing Intelligence in a Distributed World

Zero Trust for AI: Securing Intelligence in a Distributed World

Zero Trust for AI: Securing Intelligence in a Distributed World

Traditional perimeter security has been dying for years. AI just accelerates the funeral.

The reason is simple: modern AI isn’t a single application sitting behind a firewall. It’s a distributed system of systems—data pipelines, feature stores, training jobs, model registries, inference endpoints, retrieval services, agent tools, serverless glue, observability layers—often spread across cloud accounts, regions, vendors, and increasingly, edge devices. In that world the idea of a “trusted internal network” stops being a control and starts being a comforting story.

If an attacker gets a foothold anywhere inside that story—an over-permissioned service account, a compromised CI runner, a leaked token in a notebook, a misconfigured endpoint—the blast radius can be spectacular. Not because AI is magic, but because AI systems are connected, privileged, and hungry for data. Lateral movement becomes an architectural feature.

This is why Zero Trust isn’t a branding exercise for AI. It’s the minimum viable security stance for distributed intelligence.

[Read More . . .]

.
Hardware-Enforced AI Security: Fortifying Your Models from the Ground Up

Hardware-Enforced AI Security: Fortifying Your Models from the Ground Up

Hardware-Enforced AI Security: Fortifying Your Models from the Ground Up I’ve spent a lot of time thinking about the layers of defence we keep piling onto AI systems—secure coding, locked-down pipelines, adversarial testing, red-teaming, the whole lot. All of it matters. But there’s a blunt truth underneath the lot of it: if the underlying hardware and execution environment can’t be trusted, you’re ultimately building a very clever house on sand. [Read More . . .]

.
Adversarial Robust Pipelines

Adversarial Robust Pipelines

Adversarial Robust Pipelines: Building AI That Bends, But Doesn’t Break Let’s be honest, AI has properly arrived. I use models on my laptop daily, I’ve got agents scurrying around the darker corners of the internet digging up research for me, and half the tools I use now have some flavour of “intelligence” baked in. It’s brilliant. But, as we all know, whenever we invent a new way to do something clever, someone else invents a new way to break it. [Read More . . .]

.
Security Considerations for MACH Architectures

Security Considerations for MACH Architectures

Securing the Future: Navigating MACH Architectures with Confidence It is a fiercely competitive and economically challenging business landscape out there, and the relentless pursuit of digital innovation is non-negotiable. Businesses are constantly striving for agility, scalability, and, crucially, rock-solid security to keep pace with relentless change. One architectural paradigm that has truly taken hold is MACH (Microservices, API-First, Cloud-Native, Headless). While I’ve witnessed glimpses of this in enormous organisations wrestling with decades of legacy systems, it was only when I had the privilege of helping a client build an entire banking infrastructure from the ground up in the cloud that I truly grasped MACH’s transformative power – its ability to enable blistering speed and rapid adaptation in the face of fierce competition and emerging technologies. [Read More . . .]

.
In search of a Secure Mobile Phone

In search of a Secure Mobile Phone

In search of a Secure Mobile Phone The older you get in security, the less you believe in absolutes. “Secure” becomes a moving target. “Private” becomes a trade-off. And the smartphone—this glowing slab that knows where you are, who you talk to, what you read, what you buy, and what you think—is where those trade-offs become uncomfortably personal. As an iPhone user, I’ve had my share of friction with Apple’s approach to data protection. [Read More . . .]

.
Securing IoT Devices

Securing IoT Devices

Designing zero trust architectures for clients running thousands of IoT devices changes how you think about “the network”. These aren’t laptops and servers sitting behind a predictable boundary. They’re sensors tracking warehouse capacity, systems automating movement of stock, retail analytics devices counting footfall and interaction—tiny computers scattered across physical spaces, often deployed faster than they can be governed.

The problem isn’t that IoT is inherently insecure. It’s that it’s inherently uneven. You’re dealing with a mixed ecosystem of manufacturers, firmware baselines, protocols, and operational ownership, and then you’re expected to treat the whole thing like a coherent fleet.

[Read More . . .]

.
Microsegmentation in Cloud

Microsegmentation in Cloud

Microsegmentation has emerged as a critical technique for enhancing security by isolating workloads and reducing the attack surface. Unlike traditional network segmentation methods, which tend to carve up environments into broad zones based on subnets or VLANs, microsegmentation pushes the boundary down to where modern breaches actually play out: between individual workloads, services, and identities.

[Read More . . .]

.
The Next Frontier in Cryptography

The Next Frontier in Cryptography

With the industry’s attention locked onto AI, it’s easy to forget that another computing shift has been accelerating in the background. Quantum computing used to live comfortably in the “interesting theory, far from production” bucket. It doesn’t anymore. The serious money, the serious lab time, and the steady cadence of capability gains have pushed it into a category security architects can’t ignore, because cryptography is a foundational dependency, and foundational dependencies fail loudly when the underlying assumptions change.

[Read More . . .]

.
Beyond Cryptocurrency Security

Beyond Cryptocurrency Security

Beyond Bitcoin: Blockchain’s Role in Next-Gen Security Architectures Following on from our recent deep dives into cloud and container security, today’s post takes us to a technology that, while often synonymous with digital currencies, holds far broader implications for our security landscape: blockchain. Its truly disruptive potential extends well beyond cryptocurrencies, with its decentralised and immutable nature making it an incredibly attractive solution for securing various applications, particularly in complex industries such as supply chain management and identity verification. [Read More . . .]

.
Serverless Architectures

Serverless Architectures

Serverless architectures sell a very particular dream: your code runs when it needs to, scales when demand arrives, and costs you less because you stop paying for idle capacity. It’s a persuasive pitch, and in many cases it’s true. But the most profound change serverless brings isn’t cost or scaling. It’s a security boundary shift.

In a traditional world, security teams built mental models around hosts. Patch the operating system. Harden the image. Control inbound ports. Monitor the box. In serverless—particularly Function-as-a-Service—that mental model collapses, because the “box” is not something you own, and in many cases it barely exists long enough to be observed in the way we’re used to.

[Read More . . .]

.
The Security of AI: The Art of Incident Response

The Security of AI: The Art of Incident Response

AI and LLMs are transforming how organisations build products and run operations. They’re also changing what “an incident” looks like. When a traditional system is compromised, you often get familiar signals: malware, lateral movement, suspicious binaries, noisy C2 traffic. With AI systems, the first sign can be quieter, more ambiguous, and far more dangerous to ignore.

Not a service outage. Not an obvious intrusion. Just a model that starts behaving… differently.

I’ve spoken to a lot of security professionals who feel comfortable with classic incident response but get uneasy when the discussion shifts to production AI. That unease is rational. Attackers don’t need to take your platform down to harm you. They can steer it. They can degrade it. They can manipulate it in ways that look like “business variance” rather than a cyber event—until the impact is real.

This post is about responding to incidents in AI systems: what to look for, how to reason about the signals, and how to organise a response when the system is technically healthy but operationally compromised.

[Read More . . .]

.
The Security of AI: The Inexplicability Threat

The Security of AI: The Inexplicability Threat

In the last post I focused on securing the model development pipeline—the supply chain that turns raw data into deployed behaviour. This time I want to tackle a quieter problem. It doesn’t show up neatly in most “Top 10” lists, but it keeps surfacing in real AI deployments, especially when models get embedded in high-consequence workflows.

It’s the inexplicability threat: the security risk created when you cannot reliably explain why a model did what it did.

Not “AI is complex” in the abstract. Not “deep learning is hard”. The practical version: when you can’t account for a decision, you can’t confidently detect manipulation, prove integrity, or recover quickly after something goes wrong.

[Read More . . .]

.
The Security of AI: Securing the Model Development Pipeline

The Security of AI: Securing the Model Development Pipeline

If model inversion is the attack where you interrogate a system until it starts leaking what it learned, then pipeline compromise is the attack where you don’t bother interrogating the model at all. You simply change what gets built.

That’s the quiet truth about AI security: the model is only as trustworthy as the machinery that produces it. And the machinery is bigger than most people assume.

The model development pipeline is the full chain that turns raw data into deployed behaviour—collection, preprocessing, feature engineering, training, evaluation, packaging, deployment, and retraining. It looks like “engineering” on a diagram, but it behaves like a supply chain. It crosses teams. It crosses platforms. It crosses trust boundaries. And it’s full of artefacts attackers would love to control: datasets, labels, notebooks, feature stores, training code, weights, evaluation reports, container images, API keys, and deployment manifests.

[Read More . . .]

.
The Security of AI: Detecting and Mitigating Model Inversion Attacks

The Security of AI: Detecting and Mitigating Model Inversion Attacks

Last time I discussed training data poisoning: the upstream attack where adversaries influence what a model learns by manipulating the dataset you train on. This time the threat flips direction. Instead of corrupting the input to training, the attacker interrogates the trained model itself.

Model inversion attacks aim to infer sensitive information about the data a model was trained on. In the worst cases that can mean reconstructing attributes of real people—health indicators, financial details, identifiers—or revealing statistically sensitive features about a dataset that was assumed to be private.

The uncomfortable premise is simple: if a model internalises patterns from sensitive data, then a determined attacker may be able to tease those patterns back out through carefully chosen queries.

[Read More . . .]

.
The Security of AI: Training Data Poisoning

The Security of AI: Training Data Poisoning

In the last post I explored prompt injection: the trick of turning “content” into “instructions” once an LLM is embedded inside a wider application. Training data poisoning is different. There’s no jailbreak moment, no single malicious prompt. Instead, the attack happens upstream, quietly, while you’re building the thing.

It’s the most uncomfortable kind of security problem because it targets the part of the system that’s hardest to reason about: the data you trust.

[Read More . . .]

.
The Security of AI: Prompt Injection

The Security of AI: Prompt Injection

Large Language Models are being stitched into more and more products, quietly changing what “an interface” even means. After years in cybersecurity, it’s tempting to shrug and say: it’s software, it’s data—what’s new?

The uncomfortable answer is that the boundary between software and data is blurrier than we’re used to. In classic systems, untrusted input sits on one side of a parser and code sits on the other. With LLMs, natural language is both the user interface and—effectively—the control plane. That’s where prompt injection lives.

[Read More . . .]

.
Securing Generative AI

Securing Generative AI

Generative AI security stops being a theoretical debate the moment you run an uncensored model and realise how quickly it will comply with the wrong kind of curiosity. Over the past few months, a lot of my own time has gone into building machine learning models for cybersecurity in a personal project, and the practical takeaway has been blunt: capability is easy to unlock; control is the hard part.

That’s the real engineering challenge with modern language models. The risk isn’t simply that they can write code. It’s that they can write code at speed, at scale, and with a confidence that looks persuasive even when it’s wrong. If you’re a security engineer, this is not just “another developer tool”. It’s a new production dependency with a new class of failure modes.

[Read More . . .]

.
Understanding Identity and Access Management Roles for ECS/EKS

Understanding Identity and Access Management Roles for ECS/EKS

IAM Roles for ECS/EKS: Right Permissions, Right Place Continuing our journey through the headache that is cloud security, today we are tackling the absolute linchpin of container hygiene: Identity and Access Management (IAM) roles for ECS and EKS. As a security architect, I spend half my life banging on about the principle of least privilege. But nowhere is this principle ignored more frequently—and more dangerously—than in container orchestration. I cannot tell you how many times I’ve audited an environment where the infrastructure is locked down tight, but the actual container task is running with AdministratorAccess because a developer couldn’t get S3 writes working at 4 PM on a Friday. [Read More . . .]

.
Securing Container Images

Securing Container Images

Securing Container Images: Best Practices for a Robust Containerised Environment Throughout 2024, my blog posts will primarily draw upon my security engineering and architecture experience, sharing the best practices I’ve implemented and the challenges I’ve conquered in AWS over the past decade. In this month’s dive, I’m heading into the intricate world of container security, aiming to shed some much-needed light on the best ways to fortify your containerised infrastructure. [Read More . . .]

.
Cultivating Cyber Resilience

Cultivating Cyber Resilience

Throughout my journey in different organisations over the past two decades, one thing has stayed stubbornly consistent: the decisive factor in cybersecurity is rarely the tooling. It’s culture. You can buy world-class controls, build immaculate architectures, and hang your walls with policies and certifications… and still get flattened because the organisation behaves in a way that makes security impossible to sustain.

That’s not a dig at people, either. It’s just reality. Security is a human system running inside a business system, and the human system has moods, habits, pressure, fatigue, and deadlines. If you want resilience—the kind where the organisation bends when it gets hit and doesn’t snap—then you’re not just designing controls. You’re shaping behaviour.

[Read More . . .]

.
Pentests and the SOC

Pentests and the SOC

Penetration testing is a critical part of any robust cybersecurity strategy. But a “successful” penetration test shouldn’t be judged only by what the testers found; it should also be judged by what the organisation detected, how quickly it made sense of what it saw, and whether the monitoring stack told a coherent story of what happened.

That’s where the SOC comes in.

[Read More . . .]

.
CISO Series - Organisation

CISO Series - Organisation

Being in a leadership role in information security requires you to hold an odd mix of things in your head at the same time: enough technical depth to smell nonsense, enough strategic thinking to steer the ship, enough leadership to keep people moving in the same direction, and enough business and communication skill to make any of it land with the people who control budgets and priorities.

That’s a lot. And it’s on top of the day job: projects, incident noise, risk decisions, the “hot topic of the day”, and the never-ending parade of meetings that all feel urgent right up until they collide with each other in your calendar.

Over many client engagements I’ve tried all sorts of methods to keep myself from spinning out—some fancy, some simple, some frankly a waste of time. What follows is what I still use today because it actually works in the real world, not just in an executive coaching book.

[Read More . . .]

.
CISO Series - Communication

CISO Series - Communication

The ability to communicate effectively with a diverse array of stakeholders—from your own security team to C-level executives—is likely the single most important aspect of security leadership. It’s where the technical rubber meets the business road, and getting it wrong usually means failing to get the budget, support, or cultural buy-in you need. Striking the right balance is an art form. Over the years, I’ve been fortunate enough to hold leadership roles across different organisations, and I’ve had mentors who drilled one lesson into me above all else: when you are talking to the C-suite, stop talking about technology and start talking about risk. [Read More . . .]

.
CISO Series - The Fortnight Foundation

CISO Series - The Fortnight Foundation

As I get ready to step into my next assignment, I keep coming back to the same truth: the first couple of weeks in a CISO role can make or break the next twelve months. Not because you’ll “fix security” in a fortnight. You won’t. Nobody does. But because those first days are when you learn what you’re really walking into—how the organisation behaves under pressure, where the trust sits (and where it doesn’t), and whether the security team is seen as a partner… or as the people who turn up late and say no. [Read More . . .]

.
Rethinking Cyber Security Prioritisation

Rethinking Cyber Security Prioritisation

Working as an independent consultant gives you a strange advantage: you get to watch the same organisational patterns repeat across completely different industries. New leadership arrives. A new strategy deck appears. A familiar set of “transformations” roll through. And somewhere in the noise, the security team is expected to keep the lights on, keep the auditors happy, and somehow also “be more proactive” against a threat landscape that changes faster than most businesses can plan.

This came up recently in a catch-up with a mentee. Their organisation was going through leadership change, and the security function was feeling the same gravitational pull it always does in those moments: everything becomes urgent, everything becomes visible, and everything becomes a priority.

Which brings us to a syndrome that turns up again and again, especially with new managers trying to prove they’re decisive: “Everything is Priority One”.

[Read More . . .]

.
Mastering 3rd Party Risk Assessment: A Strategic Imperative for Business Leaders

Mastering 3rd Party Risk Assessment: A Strategic Imperative for Business Leaders

Third-Party Risk Assessment: Why Your Security’s Only as Strong as Your Weakest Vendor I’ve been doing this security architecture gig long enough to spot the pattern. You spend a fortune building a digital fortress. You patch everything, you train your staff until they’re sick of your voice, and you deploy Zero Trust architectures that would make a bank jealous. You feel good. You feel secure. Then, inevitably, the phone rings. [Read More . . .]

.
The Promise of AI

The Promise of AI

AI in Cybersecurity: When the Shield Becomes a Sword You can’t open a browser tab these days without someone banging on about how AI is going to either save humanity or turn us all into paperclips. It is bloody everywhere. And in our corner of the universe—cybersecurity—it has created this properly fascinating, if slightly terrifying, dynamic that is fundamentally changing how we think about defence. As an architect, I look at AI and Machine Learning and I see enormous potential. [Read More . . .]

.
Creating and Supporting Cybersecurity Teams

Creating and Supporting Cybersecurity Teams

As a security leader, one of the most important parts of the job has never been the technology on its own. The technical work matters, obviously—but what really decides whether security succeeds or becomes theatre is the team behind it. Not the org chart, not the tooling stack, not the maturity model slides. The actual humans who show up, carry the pressure, make the judgement calls, and keep going when everyone else only notices security during a crisis.

We say it all the time: your cyber defences are only as strong as the people safeguarding them. It’s true, but it’s also a bit of a cop-out, because the unspoken bit is this: if you want a strong security function, you have to build the conditions for it. You have to shape the team, protect it from nonsense, and give it room to grow. That’s not fluffy leadership talk—it’s operational reality.

[Read More . . .]

.
Managing cybersecurity risks in supply chain management

Managing cybersecurity risks in supply chain management

The task of managing cybersecurity risks in supply chain management isn’t just “nice to have” anymore. It’s basic survival. Modern supply chains aren’t a neat line from supplier to factory to customer; they’re a living network of companies, contractors, platforms, software components, and outsourced services, all stitched together with APIs, portals, shared data, and a lot of trust.

And trust, in security, is a dangerous thing to hand out by default.

[Read More . . .]

.
Reclaiming our online privacy

Reclaiming our online privacy

Take a moment to reflect on your typical day.

You wake up and reach for the phone beside your bed, thumb through notifications, skim the news, and half-consciously accept that your morning begins inside someone else’s platform. Maybe you order a flat white through an app. Maybe you call someone overseas. Maybe you open your email and step into work. None of it feels remarkable anymore, because the modern web has done its best work when it feels invisible.

[Read More . . .]

.
Robust Security Operations Teams

Robust Security Operations Teams

Securing our businesses from invisible invaders is an imperative. It requires orchestration that looks, from a distance, like a symphony: people, process, and technology working together with the timing and discipline to detect the quiet signals early, make sense of them quickly, and respond without chaos. The challenge is that most organisations try to build this capability while facing a scarcity of skilled personnel, inconsistent funding, and a culture that only pays attention when something has already gone wrong.

[Read More . . .]

.
Secure Summits

Secure Summits

In the French Alps, the weekend ritual is as predictable as the weather isn’t. You see the same choreography at trailheads and lift queues: people tightening boots, checking bindings, shouldering packs, and doing the quiet mental arithmetic of risk. Ropes. Layers. Food. A headtorch “just in case”. And increasingly, a small constellation of electronics that promise to make the mountain feel a little more knowable.

A watch that tracks altitude and heart rate. A phone full of offline maps. A satellite messenger clipped to a strap. A beacon, dormant until the worst day of someone’s life. These devices are sold as safety, and often they are. But they also represent something else—something we don’t tend to think about in the cold wind, with gloves on and a ridge line ahead.

They’re computers. Networked computers. And the mountain doesn’t make them less hackable.

[Read More . . .]

.
The Power of Threat Intelligence

The Power of Threat Intelligence

Digital business operations continue to expand, and the threat landscape evolves in lockstep—more complex, more professional, and more opportunistic. Attackers are no longer “finding vulnerabilities” in the abstract; they’re running an ecosystem. They share tooling, reuse techniques, buy access, and iterate faster than many internal teams can patch. In that context, the question isn’t whether threats exist. It’s whether an organisation is forced to learn about them only after impact.

That’s where threat intelligence earns its place.

[Read More . . .]

.
Threat Modelling in the Development Pipeline

Threat Modelling in the Development Pipeline

Security is easiest to add when nothing has been built yet.

That sounds obvious, but it’s the most expensive lesson teams learn the hard way. Once a system is deployed, every “security fix” becomes a negotiation with time, budget, and inertia. In the design phase, it can be a decision. That’s what people mean by secure by design, and threat modelling is one of the few practices that makes that phrase real.

[Read More . . .]

.