Serverless Architectures
Serverless architectures sell a very particular dream: your code runs when it needs to, scales when demand arrives, and costs you less because you stop paying for idle capacity. It’s a persuasive pitch, and in many cases it’s true. But the most profound change serverless brings isn’t cost or scaling. It’s a security boundary shift.
In a traditional world, security teams built mental models around hosts. Patch the operating system. Harden the image. Control inbound ports. Monitor the box. In serverless—particularly Function-as-a-Service—that mental model collapses, because the “box” is not something you own, and in many cases it barely exists long enough to be observed in the way we’re used to.
The security boundary moves
The unit of execution in serverless is the function, and that sounds innocuous until you realise what it implies. A function is a tiny, purpose-built compute slice wired into a wider system of triggers, queues, events, and APIs. It has code, dependencies, environment variables, and permissions. It wakes up, does a thing, and disappears.
So the security question stops being “is the server secure?” and becomes “is this function secure, is its identity constrained, is its input trustworthy, and can we prove what happened when it ran?”
That’s a harder question than it looks, because serverless encourages sprawl. Instead of one large service, you end up with dozens—sometimes hundreds—of small functions. Each function is an entry point. Each one has a deployment pipeline. Each one carries dependencies. Each one can leak secrets if it logs carelessly or is misconfigured. This is how serverless increases attack surface without anyone explicitly “adding more exposure”. You simply build more moving parts.
Ephemerality adds its own twist. You don’t patch a running function in the classic sense. You patch by redeploying code, images, or layers, and the reality is that many functions live for long periods without being revisited because they “just work”. When a vulnerability drops in a common library and half your estate depends on it, serverless doesn’t make you safer by default. It makes you dependent on how quickly you can find where that library is used and ship an update everywhere.
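To make that concrete, here is a minimal sketch of the discovery step, assuming a repo where each function lives in its own directory with a pinned requirements.txt; the layout, package name, and versions are illustrative, and real estates would lean on a software inventory or audit tooling rather than a script like this.
    # find_vulnerable.py - scan per-function dependency manifests for a flagged package.
    # Assumes a layout of functions/<name>/requirements.txt; adjust for your own tooling.
    import pathlib
    import re
    import sys

    FLAGGED = {"examplelib": "2.4.1"}  # package -> first fixed version (illustrative)

    def pinned_versions(req_file: pathlib.Path) -> dict:
        """Return {package: version} for simple 'name==version' pins."""
        pins = {}
        for line in req_file.read_text().splitlines():
            match = re.match(r"^\s*([A-Za-z0-9_.-]+)\s*==\s*([0-9][\w.]*)", line)
            if match:
                pins[match.group(1).lower()] = match.group(2)
        return pins

    def main(root: str = "functions") -> int:
        hits = []
        for req in pathlib.Path(root).glob("*/requirements.txt"):
            pins = pinned_versions(req)
            for package, fixed in FLAGGED.items():
                # naive string comparison; use a proper version parser in practice
                if package in pins and pins[package] < fixed:
                    hits.append(f"{req.parent.name}: {package}=={pins[package]} (fixed in {fixed})")
        for hit in hits:
            print(hit)
        return 1 if hits else 0

    if __name__ == "__main__":
        sys.exit(main())
The point is less the script than the capability: knowing, quickly and completely, which functions carry the affected library and which pipelines need to run.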
Dependencies are the quiet liability
Serverless functions are often small, but their dependency trees are not. Modern development is built on libraries, and serverless amplifies that because the easiest way to add capability is to import it. The risk is obvious: third-party packages become part of your runtime. If they’re outdated, abandoned, compromised, or simply flawed, your function inherits that weakness.
The dangerous part is that this doesn’t always look like a breach. It looks like normal execution, just with an extra behaviour path. The more serverless you build, the more your security posture becomes a supply chain problem: what you ship, what you import, and how quickly you can respond when something upstream shifts.
APIs become the front door
Serverless is often glued together with APIs. Functions talk to each other, to data stores, to third-party services, and to clients through gateways. That makes API security central rather than peripheral. If an API is misconfigured, overly permissive, or missing proper auth, you’ve effectively built a public control plane for internal capability.
The gateway pattern helps, but it can also create complacency. A gateway can enforce authentication, rate limiting, and validation, but only if it’s configured deliberately. A badly configured gateway is worse than none at all, because it gives you a false sense of control while still passing the dangerous traffic through.
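One practical consequence: the function should validate and authorise as though the gateway did nothing. Below is a sketch of what that looks like in a Lambda-style handler; the event field names, limits, and allowed actions are assumptions for illustration, not a prescription.
    # handler.py - validate input and authorisation inside the function,
    # even when a gateway is expected to have done so already (defence in depth).
    import json

    MAX_BODY_BYTES = 10_000
    ALLOWED_ACTIONS = {"read", "list"}

    def handler(event: dict, context=None) -> dict:
        # Assumes an API Gateway-style proxy event; exact field names vary by platform.
        claims = (event.get("requestContext", {})
                       .get("authorizer", {})
                       .get("claims"))
        if not claims:  # never assume the gateway authoriser actually ran
            return {"statusCode": 401, "body": json.dumps({"error": "unauthenticated"})}

        body = event.get("body") or ""
        if len(body.encode()) > MAX_BODY_BYTES:
            return {"statusCode": 413, "body": json.dumps({"error": "payload too large"})}

        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            return {"statusCode": 400, "body": json.dumps({"error": "malformed JSON"})}

        action = payload.get("action")
        if action not in ALLOWED_ACTIONS:  # allow-list, not block-list
            return {"statusCode": 400, "body": json.dumps({"error": "unsupported action"})}

        # ... perform the action with the validated payload ...
        return {"statusCode": 200, "body": json.dumps({"ok": True, "action": action})}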
Identity is where serverless lives or dies
In serverless, the most important control is often the least glamorous: permissions. Functions should have only the access they need, and no more. That sounds like least privilege 101, but in practice it’s where most serverless architectures quietly fail, especially early on.
Over-permissioned functions are common because they’re convenient. Someone wants to ship quickly, so they attach broad rights “temporarily”. Temporarily becomes permanent. And when a function is compromised—through a vulnerable dependency, a logic flaw, or an exposed secret—the attacker inherits whatever that function can reach.
Serverless doesn’t reduce risk if your IAM is sloppy. It concentrates risk into identity.
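To show what “only the access it needs” means in practice, here is a sketch in AWS IAM’s policy vocabulary, written as a Python dict for readability; the actions, table name, and account details are placeholders.
    # A scoped, single-purpose policy versus the broad rights that get attached "temporarily".
    # AWS IAM-style JSON, shown as a Python dict; resource names are illustrative.
    import json

    TOO_BROAD = {
        "Version": "2012-10-17",
        "Statement": [
            # whatever this function can reach, a compromised function can reach too
            {"Effect": "Allow", "Action": "dynamodb:*", "Resource": "*"}
        ],
    }

    LEAST_PRIVILEGE = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],  # only the operations it performs
                "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders",  # only the table it needs
            }
        ],
    }

    print(json.dumps(LEAST_PRIVILEGE, indent=2))
The difference between the two policies is exactly the difference between a contained incident and a pivot.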
Observability is the compensating control
Because functions are ephemeral, visibility is the thing that keeps you honest. Logging and monitoring can’t be optional. You need centralised logs, correlated traces, and alerting that’s built around behaviour rather than host-level assumptions.
The trick is that serverless generates a lot of telemetry. If you don’t design observability properly, you drown in it. If you do design it properly, you get something powerful: near real-time insight into the exact paths requests take through your system, and a clearer ability to spot anomalous interactions—unexpected calls, unusual data access, suspicious spikes in invocation patterns.
This is also where advanced analytics can actually help, because the shape of serverless environments makes “normal” difficult for a human to eyeball. But analytics only works if the data is reliable and the signals are meaningful. Otherwise you end up with machine learning that confidently tells you nothing useful.
Runtime protection and zero trust, without the hype
Runtime protections can add value in serverless, but they need to be treated as guardrails, not armour plating. If you rely on runtime defence to save insecure code, you’ve already lost. Where it helps is in catching the things you didn’t predict: exploitation attempts, suspicious execution paths, unexpected outbound connections, or abuse of language/runtime features.
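As a small illustration of the guardrail idea (a sketch, not a stand-in for platform-level runtime protection), a function can refuse outbound calls to hosts it has no business contacting and treat any attempt as a signal worth alerting on; the allow-list here is made up.
    # A crude egress guardrail: refuse outbound calls to unexpected destinations.
    import urllib.parse
    import urllib.request

    ALLOWED_HOSTS = {"api.payments.internal", "hooks.example.com"}  # illustrative allow-list

    def fetch(url: str, timeout: float = 3.0) -> bytes:
        host = urllib.parse.urlparse(url).hostname
        if host not in ALLOWED_HOSTS:
            # An unexpected destination deserves an alert, not just a refusal.
            raise PermissionError(f"outbound call to unexpected host: {host!r}")
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()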
Zero trust fits serverless naturally if you interpret it correctly. It’s not a product, it’s a stance: every call is authenticated, every permission is minimal, every request is validated, and you assume compromise is possible somewhere in the graph. In serverless, that stance isn’t optional—it’s how you stop one compromised function becoming a pivot point into everything else.
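In code, that stance can be as small as refusing to act on any inter-function message that isn’t verifiably from a trusted caller. A sketch using a shared HMAC key and the standard library; real deployments would typically lean on platform identity or signed tokens, and the key would live in a secrets manager rather than in code.
    # Zero trust between functions, in miniature: the caller signs the message,
    # the callee verifies it before doing anything, every single time.
    import hashlib
    import hmac
    import json

    def sign(payload: dict, key: bytes) -> str:
        body = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(key, body, hashlib.sha256).hexdigest()

    def verify(payload: dict, signature: str, key: bytes) -> bool:
        expected = sign(payload, key)
        return hmac.compare_digest(expected, signature)  # constant-time comparison

    # Caller side:
    #   event = {"payload": payload, "sig": sign(payload, key)}
    # Callee side:
    #   if not verify(event["payload"], event["sig"], key):
    #       raise PermissionError("unverified caller")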
Serverless architectures offer real benefits. But the security story is not “less ops means less risk”. The real story is that the unit of risk changes. You’re no longer defending servers; you’re defending identities, dependencies, event flows, and APIs. If you build that deliberately, serverless can be remarkably resilient. If you build it casually, you get a system that scales beautifully and fails spectacularly.
