Serverless Architectures

Serverless architectures sell a very particular dream: your code runs when it needs to, scales when demand arrives, and costs you less because you stop paying for idle capacity. It’s a persuasive pitch, and in many cases it’s true. But the most profound change serverless brings isn’t cost or scaling. It’s a security boundary shift.

In a traditional world, security teams built mental models around hosts. Patch the operating system. Harden the image. Control inbound ports. Monitor the box. In serverless — particularly Function-as-a-Service — that mental model collapses, because the “box” is not something you own, and in many cases it barely exists long enough to be observed in the way we’re used to.

Serverless does genuinely eliminate certain attack classes. There is no SSH to brute-force, no persistent operating system to rootkit, no long-lived process to hook into. That’s a real win. But the risks don’t disappear — they move, and the new risks are less familiar to most teams.

The security boundary moves

The unit of execution in serverless is the function, and that sounds innocuous until you realise what it implies. A function is a tiny, purpose-built compute slice wired into a wider system of triggers, queues, events, and APIs. It has code, dependencies, environment variables, and permissions. It wakes up, does a thing, and disappears.

Under the shared responsibility model, the cloud provider owns the infrastructure — runtime patching, host hardening, network isolation. You own everything above: your code, your configuration, your permissions, and your data. That line is easy to misunderstand, and misunderstanding it is where many serverless security failures begin.

So the security question stops being “is the server secure?” and becomes “is this function secure, is its identity constrained, is its input trustworthy, and can we prove what happened when it ran?”

That’s a harder question than it looks, because serverless encourages sprawl. Instead of one large service, you end up with dozens — sometimes hundreds — of small functions. Each function is an entry point. Each one has a deployment pipeline. Each one carries dependencies. Each one can leak secrets if it logs carelessly or is misconfigured. This is how serverless increases attack surface without anyone explicitly “adding more exposure”. You simply build more moving parts.

Ephemerality adds its own twist. You don’t patch a running function in the classic sense. You patch by redeploying code, images, or layers. The reality is that many functions live for long periods without being revisited because they “just work”. When a vulnerability drops in a common library and half your estate depends on it, serverless doesn’t make you safer by default. It makes you dependent on how quickly you can find where that library is used and ship an update everywhere.
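One practical answer to “where is that library used?” is an inventory you can query. The sketch below assumes a layout the text doesn’t specify — one directory per function, each with a `requirements.txt` — so treat it as a starting point, not a tool; real estates also need lockfiles, layers, and container images covered.

```python
import pathlib

def functions_using(package: str, root: str) -> list[str]:
    """Return function directories whose requirements.txt declares `package`.

    Assumes one directory per function, each holding a requirements.txt —
    an illustrative convention; adapt for layers, lockfiles, or images.
    """
    hits = []
    for req in pathlib.Path(root).rglob("requirements.txt"):
        for line in req.read_text().splitlines():
            # Take the package name before any version specifier.
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name == package.lower():
                hits.append(str(req.parent))
                break
    return sorted(hits)
```

Run when an advisory lands, the output is your blast radius: the list of functions that need a redeploy.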

Dependencies are the quiet liability

Serverless functions are often small, but their dependency trees are not. Modern development is built on libraries, and serverless amplifies that because the easiest way to add capability is to import it. The risk is straightforward: third-party packages become part of your runtime. If they’re outdated, abandoned, compromised, or simply flawed, your function inherits the flaw.

Shared layers and custom runtimes add another dimension. A vulnerable layer shared across dozens of functions means a single point of failure that multiplies across your estate, and it’s easy to lose track of which functions consume which layers.

The dangerous part is that this doesn’t always look like a breach. It looks like normal execution, just with an extra behaviour path. The more serverless you build, the more your security posture becomes a supply chain problem: what you ship, what you import, and how quickly you can respond when something upstream shifts.

APIs become the front door

Serverless is often glued together with APIs. Functions talk to each other, to data stores, to third-party services, and to clients through gateways. That makes API security central rather than peripheral. If an API is misconfigured, overly permissive, or missing proper auth, you’ve effectively built a public control plane for internal capability.

The gateway pattern helps, but it can also create complacency. A gateway can enforce authentication, rate limiting, and validation, but only if it’s configured deliberately. A badly configured gateway is worse than none at all, because it gives you a false sense of control while still passing dangerous traffic through.

Event injection: the attack class people forget

Functions are triggered by events — HTTP requests, queue messages, storage changes, database streams. If a function trusts the shape or content of an incoming event without validation, an attacker who can influence that trigger can manipulate what the function does. This is event injection, and it’s the serverless equivalent of unsanitised input in a web application.

The fix is the same principle applied differently: validate and sanitise every event payload, regardless of whether the trigger source feels “internal”. Internal trust boundaries in serverless are blurrier than they appear.

Identity is where serverless lives or dies

In serverless, the most important control is often the least glamorous: permissions. Functions should have only the access they need, and no more. That sounds like least privilege 101, but in practice it’s where most serverless architectures quietly fail, especially early on.

Over-permissioned functions are common because they’re convenient. Someone wants to ship quickly, so they attach broad rights “temporarily”. “Temporarily” becomes permanent. And when a function is compromised — through a vulnerable dependency, a logic flaw, or an exposed secret — the attacker inherits whatever that function can reach.
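In AWS IAM terms, the convenient over-grant is `"Action": "dynamodb:*"` on `"Resource": "*"`. What least privilege actually looks like for a function that only reads one table is something closer to this (account id and table name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedReadOnly",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/orders"
    }
  ]
}
```

Two named actions, one named resource. If this function is compromised, the attacker can read one table — not drop every table in the account.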

Secrets deserve their own mention here. Storing sensitive values in environment variables is common in serverless, but it’s not ideal. Environment variables can surface in logs, error messages, and debugging output. A secrets manager — with the function granted only the specific secrets it needs — is a more defensible pattern.
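The usual shape of that pattern is to fetch the secret at invocation time and cache it for the life of the warm container. The sketch below stubs the fetch — `fetch_secret` stands in for a real secrets-manager SDK call, which this text doesn’t prescribe — so only the caching shape is shown.

```python
import functools

def fetch_secret(name: str) -> str:
    """Stand-in for a real secrets-manager call (e.g. a provider SDK request).

    In production the function's role would be granted access to this one
    specific secret only, and the call would go over the provider's API.
    """
    return f"value-of-{name}"  # placeholder, not a real lookup

@functools.lru_cache(maxsize=None)
def get_secret(name: str) -> str:
    """Fetch once per warm container; later invocations reuse the cache.

    Never log the returned value or let it surface in error messages —
    that's the exact leak path environment variables suffer from.
    """
    return fetch_secret(name)
```

The cache means you pay the secrets-manager round trip on cold start, not on every invocation, which removes the main performance argument for environment variables.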

Serverless doesn’t reduce risk if your IAM is sloppy. It concentrates risk into identity.

Observability is the compensating control

Because functions are ephemeral, visibility is the thing that keeps you honest. Logging and monitoring can’t be optional. You need centralised logs, correlated traces, and alerting built around behaviour rather than host-level assumptions.

The trick is that serverless generates a lot of telemetry. If you don’t design observability properly, you drown in it. If you do design it properly, you get something powerful: near real-time insight into the exact paths requests take through your system, and a clearer ability to spot anomalous interactions — unexpected calls, unusual data access, suspicious spikes in invocation patterns.

This is also where advanced analytics can help, because the shape of serverless environments makes “normal” difficult for a human to eyeball. But analytics only works if the data is reliable and the signals are tuned to meaningful thresholds rather than generic baselines. Otherwise you end up with tooling that generates alerts without insight.

Runtime protection and zero trust, without the hype

Runtime protections can add value in serverless, but they work best as guardrails, not armour plating. If you rely on runtime defence to save insecure code, you’ve already lost. Where it helps is in catching the things you didn’t predict: exploitation attempts, suspicious execution paths, unexpected outbound connections, or abuse of language and runtime features.

Zero trust fits serverless naturally if you interpret it correctly. It’s not a product, it’s a stance: every call is authenticated, every permission is minimal, every request is validated, and you assume compromise is possible somewhere in the graph. In serverless, that stance isn’t optional — it’s how you stop one compromised function becoming a pivot point into everything else.

Serverless architectures offer real benefits. But the security story is not “less ops means less risk”. The real story is that the unit of risk changes. You’re no longer defending servers; you’re defending identities, dependencies, event flows, and APIs. If you build that deliberately, serverless can be remarkably resilient. If you build it casually, you get a system that scales beautifully and fails spectacularly.