Securing IoT Devices

Designing zero trust architectures for clients running thousands of IoT devices changes how you think about “the network”. These aren’t laptops and servers sitting behind a predictable boundary. They’re sensors tracking warehouse capacity, systems automating movement of stock, retail analytics devices counting footfall and interaction — tiny computers scattered across physical spaces, often deployed faster than they can be governed.

The problem isn’t that IoT is inherently insecure. It’s that it’s inherently uneven. You’re dealing with a mixed ecosystem of manufacturers, firmware baselines, protocols, and operational ownership, and then you’re expected to treat the whole thing like a coherent fleet.

The things that make IoT hard

IoT security tends to break down along a few axes that reinforce each other: heterogeneity, constraint, and operational ownership.

Heterogeneity means there’s no “standard workstation build” equivalent. Some devices are cheap sensors with barely enough CPU to function. Others are complex embedded systems running an OS you’d recognise. Some speak modern protocols. Others speak whatever the vendor shipped and never meaningfully updated. That diversity makes blanket controls brittle, and brittle controls become exceptions, and exceptions become your attack surface.

Constraint is the second part. Many IoT devices are simply not built to carry heavyweight security features. You can’t assume agent-based endpoint protection. You can’t assume full-dress cryptography with generous key sizes and frequent handshakes. You often can’t even assume decent logging. Security architecture has to work with what’s there, which means designing controls that are lightweight, enforceable, and — most importantly — operable.

Then there’s operational ownership. IoT devices frequently sit in a gap between IT, OT, and facilities teams. Nobody fully owns them, which means nobody fully secures them. Shadow IoT — devices connected to the network without formal approval or inventory — makes this worse. You can’t protect what you don’t know exists, and in large estates the gap between what’s inventoried and what’s actually on the network is often significant.
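Closing that gap starts with reconciling what you think you have against what the network actually shows. A minimal sketch, assuming device identifiers (MAC addresses here) collected from an inventory system on one side and network scans or DHCP/switch logs on the other; the function name and record shapes are illustrative, not any particular product's API:

```python
# Sketch: reconcile the formal device inventory against what is actually
# observed on the network. Identifiers and names are illustrative assumptions.

def find_shadow_devices(inventory: set[str], observed: set[str]) -> dict[str, set[str]]:
    """Compare inventoried device IDs (e.g. MAC addresses) with IDs
    observed in network scans or DHCP/switch logs."""
    return {
        "shadow": observed - inventory,  # on the network, but never approved
        "ghost": inventory - observed,   # inventoried, but never seen (stale records)
    }

inventory = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"}
observed = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "de:ad:be:ef:00:99"}

gap = find_shadow_devices(inventory, observed)
print(gap["shadow"])  # unapproved device worth investigating
print(gap["ghost"])   # stale inventory entry worth cleaning up
```

Both outputs matter: shadow devices are unmanaged risk, and ghost entries quietly corrupt every metric you compute from the inventory.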

Patching is necessary, but it’s not a strategy

Firmware updates are the obvious baseline. They matter, and they’re still one of the most effective ways to remove known vulnerabilities from the fleet. But firmware updates are also where IoT security goes to die in practice, because they rely on two things that are often missing: user discipline and vendor support.

Plenty of devices don’t get updated because operations teams are understandably cautious about bricking equipment or disrupting a production environment. Plenty of devices don’t get updated because the manufacturer stops shipping updates, or only ships them through awkward tooling that never made it into operational routines. The result is an estate where known vulnerabilities persist for years, not because the organisation “doesn’t care”, but because the ecosystem isn’t designed for secure lifecycle management.
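Even when you can't force updates, you can at least make the decay visible. A small sketch, assuming per-device records of last firmware update and vendor support end date; the field names and the 180-day staleness threshold are illustrative assumptions:

```python
from datetime import date

# Sketch: flag devices whose firmware is stale or whose vendor support has
# ended. Field names and the threshold are illustrative assumptions.

def patch_risk(device: dict, today: date, max_age_days: int = 180) -> list[str]:
    flags = []
    if (today - device["last_firmware_update"]).days > max_age_days:
        flags.append("stale-firmware")
    if device["vendor_support_ends"] < today:
        flags.append("out-of-support")
    return flags

fleet = [
    {"id": "cam-01", "last_firmware_update": date(2025, 1, 10),
     "vendor_support_ends": date(2027, 1, 1)},
    {"id": "plc-07", "last_firmware_update": date(2022, 3, 2),
     "vendor_support_ends": date(2023, 6, 30)},
]
today = date(2025, 6, 1)
for device in fleet:
    print(device["id"], patch_risk(device, today))
```

"Out-of-support" is the more important flag: a stale device can still be patched, but an unsupported one can only be compensated for, replaced, or accepted as risk.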

Default credentials deserve a specific mention here because they remain one of the most reliably exploited weaknesses in IoT. Mirai showed this in 2016, and the underlying problem hasn’t gone away — devices still ship with factory-set passwords, and changing them at scale across a fleet is harder than it should be. Any serious IoT security programme treats credential hygiene as a first-order problem, not an afterthought.
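Treating credential hygiene as first-order starts with being able to answer "which devices are still on factory defaults?" A minimal sketch, assuming a curated table of known vendor defaults and exported device credential records; vendors, models, and field names are all invented for illustration, and in practice you would test against the live device rather than a config export:

```python
# Sketch: sweep fleet credential records against known factory defaults.
# The defaults table and record shapes are illustrative assumptions.

KNOWN_DEFAULTS = {
    # (vendor, model) -> set of factory-default (username, password) pairs
    ("acme", "cam-x2"): {("admin", "admin"), ("root", "12345")},
    ("vendorb", "sensor-9"): {("user", "user")},
}

def uses_default_creds(device: dict) -> bool:
    defaults = KNOWN_DEFAULTS.get((device["vendor"], device["model"]), set())
    return (device["username"], device["password"]) in defaults

fleet = [
    {"id": "cam-3", "vendor": "acme", "model": "cam-x2",
     "username": "admin", "password": "admin"},
    {"id": "th-7", "vendor": "vendorb", "model": "sensor-9",
     "username": "ops", "password": "rotated-secret"},
]
flagged = [d["id"] for d in fleet if uses_default_creds(d)]
print(flagged)  # devices still on factory defaults
```

The hard part isn't the check, it's maintaining the defaults table across every vendor in the estate, which is exactly the heterogeneity problem again.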

Encryption matters, even when it hurts

Encryption is critical for protecting data in transit between devices, gateways, and control platforms. In IoT, though, “just encrypt everything” quickly collides with reality: constrained CPUs, limited memory, battery budgets, and sometimes protocols that weren’t designed with modern cryptographic expectations.

This is where lightweight cryptography matters. NIST’s standardisation of Ascon as a lightweight authenticated encryption scheme is a meaningful step here — it’s designed specifically for constrained environments where AES-GCM or ChaCha20-Poly1305 would be too expensive. But the bigger point is architectural: if a device can’t do robust crypto, you need compensating design. That can mean pushing secure termination to a gateway, minimising what the device transmits, and ensuring the rest of the system treats device-originated data as untrusted until validated. In other words, don’t let weak devices drag your whole trust model down with them.
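The "untrusted until validated" part can be concrete even when the crypto can't be. A gateway-side sketch, assuming a simple temperature telemetry message; the field names, bounds, and return shape are illustrative assumptions:

```python
# Sketch: a gateway-side validation step that treats device-originated
# telemetry as untrusted until it passes basic schema and range checks.
# Field names and bounds are illustrative assumptions.

def validate_reading(msg: dict) -> tuple[bool, str]:
    if not isinstance(msg.get("device_id"), str):
        return False, "missing or invalid device_id"
    temp = msg.get("temp_c")
    if not isinstance(temp, (int, float)):
        return False, "missing or non-numeric temperature"
    if not -40.0 <= temp <= 85.0:  # the sensor's rated operating range
        return False, "temperature outside plausible range"
    return True, "ok"

print(validate_reading({"device_id": "th-12", "temp_c": 21.5}))  # accepted
print(validate_reading({"device_id": "th-12", "temp_c": 999}))   # rejected
```

A rejected reading isn't automatically an attack, but it never gets to influence downstream systems unvetted, which is the point of compensating design.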

Hardware roots of trust

One area that deserves more attention than it typically gets is hardware-level integrity. Secure boot, hardware roots of trust, and trusted platform modules (even lightweight variants for embedded systems) provide assurance that a device is running the firmware it’s supposed to be running. Without that foundation, you’re trusting the software layer on devices that may have been physically tampered with, shipped with compromised firmware, or modified in the supply chain before they ever reached your network.

Not every device in a fleet will support hardware-backed integrity, but for devices that do, enforcing secure boot and validating firmware signatures is one of the highest-value controls available. It shifts the trust anchor from “we hope this device is clean” to “we can verify it”.
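The software half of that verification is mechanically simple. A sketch of checking a firmware image against the digest in its manifest, assuming the manifest's own signature has already been verified (ideally anchored in the hardware root of trust); the manifest format and names are illustrative assumptions:

```python
import hashlib

# Sketch: check a firmware image against the digest in a signed manifest.
# Verifying the manifest's own signature is assumed to happen elsewhere,
# anchored in the hardware root of trust; names here are illustrative.

def firmware_matches_manifest(image: bytes, manifest: dict) -> bool:
    digest = hashlib.sha256(image).hexdigest()
    return digest == manifest["sha256"]

image = b"\x7fFIRMWARE-IMAGE-v1.4.2"
manifest = {"version": "1.4.2", "sha256": hashlib.sha256(image).hexdigest()}

print(firmware_matches_manifest(image, manifest))            # True: image is what was signed
print(firmware_matches_manifest(image + b"\x00", manifest))  # False: tampered image rejected
```

The value comes from where the check runs and what key it trusts, not from the hashing itself: done in secure boot with a vendor-anchored key, it turns "we hope" into "we can verify".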

Segmentation is your blast-radius control

Network segmentation remains one of the most pragmatic, high-impact controls for IoT. It doesn’t “secure the device”, but it limits what a compromised device can reach, and that’s usually what matters when attackers land. IoT devices are often used as footholds, not final objectives.

The trick is that segmentation can’t be a one-off VLAN exercise anymore. Estates change. Warehouses expand. Stores get refitted. Devices move. If segmentation isn’t aligned to identity, function, and expected communication paths, it becomes either too permissive to be useful or too restrictive to operate. In mature environments, segmentation becomes policy-driven and continuously maintained, because static segmentation decays.
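Policy-driven segmentation can be sketched as a default-deny lookup keyed to device function rather than to a static VLAN map. The function names and policy table below are illustrative assumptions:

```python
# Sketch: segmentation policy keyed to device function rather than static
# VLANs. The policy table is an illustrative assumption; a real deployment
# would derive it from observed, expected communication paths.

POLICY = {
    # function -> destinations it may initiate connections to
    "temp-sensor": {"telemetry-gateway"},
    "cctv-camera": {"video-recorder", "ntp-server"},
    "stock-robot": {"fleet-controller", "telemetry-gateway"},
}

def flow_allowed(device_function: str, destination: str) -> bool:
    # Unknown functions get an empty set: unclassified devices are denied.
    return destination in POLICY.get(device_function, set())

print(flow_allowed("temp-sensor", "telemetry-gateway"))   # True: expected path
print(flow_allowed("temp-sensor", "video-recorder"))      # False: blast radius contained
print(flow_allowed("unknown-device", "fleet-controller")) # False: default deny
```

Because the policy is data, it can be regenerated as the estate changes, which is what keeps segmentation from decaying into either permissiveness or breakage.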

The shiny new ideas: blockchain and AI

Blockchain-based authentication gets proposed frequently in IoT because it sounds like the perfect fit: decentralised identity, tamper-evident records, and a ledger that can prove what happened. The reality is more nuanced. The underlying goal — strong device identity and non-repudiation of interactions — is valid. But the mechanism introduces consensus overhead, latency, and integration complexity that constrained devices and real-time operational environments struggle to absorb. Most serious deployments achieve device identity through lighter approaches — PKI with hardware-backed certificate stores, or managed identity services — long before distributed ledgers enter the picture.

AI-driven threat detection is the other recurring promise: anomaly detection across device behaviour, spotting compromised sensors, detecting lateral movement patterns in near real time. The idea is sound, and at fleet scale it can genuinely add value — particularly for detecting patterns that no human team could spot across thousands of telemetry streams. But the results in the real world are mixed, largely because the hardest problem isn’t the model — it’s the data. IoT telemetry is messy, inconsistent, and often missing the context to distinguish “weird but normal” from “weird and worth escalating”. A temperature sensor reporting an unusual spike could mean a compromised device, a faulty unit, or simply a heatwave.
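Even the simplest per-device baseline illustrates both the value and the limit. A z-score sketch over a short telemetry history; the threshold and readings are illustrative assumptions, and the point is that a flag is a question, not an answer:

```python
import statistics

# Sketch: per-device baseline with a simple z-score flag. Threshold and
# data are illustrative assumptions; a flagged spike still needs context
# (compromise? faulty unit? heatwave?) before anyone acts on it.

def is_anomalous(history: list[float], reading: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return reading != mean  # flat history: any deviation is novel
    return abs(reading - mean) / stdev > z_threshold

history = [21.0, 21.4, 20.8, 21.2, 21.1, 20.9, 21.3]
print(is_anomalous(history, 21.5))  # False: within normal variation
print(is_anomalous(history, 38.0))  # True: flagged, but still ambiguous
```

Everything that makes this toy version too naive at scale (seasonality, sensor drift, missing context) is the "data problem" the paragraph above describes, and fancier models inherit it.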

There’s also an organisational truth here: AI doesn’t replace ownership. Even if anomaly detection flags something, you still need a team who can interpret it, validate it, and act without taking down operations unnecessarily. Many “AI for IoT security” solutions fall short because they optimise for detection demos rather than operational decision-making.

Regulation is catching up

Regulators have started forcing the issue. The UK’s Product Security and Telecommunications Infrastructure (PSTI) Act, in effect since April 2024, bans default passwords, requires a vulnerability disclosure policy, and mandates transparency on security update periods for consumer-connectable products. The EU’s Cyber Resilience Act goes further, imposing security-by-design obligations across the product lifecycle for anything with a digital element sold in the EU market.

These aren’t optional guidance — they’re enforceable requirements with real penalties. For organisations deploying IoT at scale, they also create useful leverage: you can now hold vendors to a regulatory standard when evaluating products, and “the manufacturer doesn’t patch” becomes a procurement disqualifier rather than an accepted risk.

What actually works in practice

The most effective IoT security programmes I’ve seen don’t start with blockchain or AI. They start with boring, foundational discipline: asset inventory that’s accurate, lifecycle management that’s enforced, identity that’s real, segmentation that’s maintained, and a trust model that assumes compromise is inevitable somewhere in the fleet.

If you treat IoT as “endpoints, but smaller”, you lose. If you treat IoT as an untrusted, constrained edge that must be controlled through architecture — identity, policy, isolation, and continuous validation — you end up with something survivable.

That’s the real IoT security challenge: not inventing exotic controls, but designing a system where the weakest devices don’t get to define the security posture of the whole business.