Reclaiming our online privacy
Take a moment to reflect on your typical day.
You wake up and reach for the phone beside your bed, thumb through notifications, skim the news, and half-consciously accept that your morning begins inside someone else’s platform. Maybe you order a flat white through an app. Maybe you call someone overseas. Maybe you open your email and step into work. None of it feels remarkable anymore, because the modern web has done its best work when it feels invisible.
The question almost nobody asks—because it’s awkward, because it’s boring, because it’s easier not to—is where all that data goes. Every tap, every search, every location ping, every purchase, every scroll: it creates a trail. Not a metaphorical one. A literal one. And that trail is valuable.
In a hyper-connected economy, data is the raw material. The biggest platforms are the miners, refiners, and traders. Search engines learn your intent. Social networks map your relationships and influence. Retail platforms learn your appetite and your timing. Telecoms learn where you go. By using services that feel free or frictionless, you end up assembling a digital persona that you don’t really own, can’t fully inspect, and can’t easily erase.
Personalised ads are the visible tip of that iceberg. What sits beneath it is more consequential: prediction. The ability to infer what you’ll do next, what you might buy, how you vote, how you behave under stress, which messages you respond to, and what you ignore. It’s not just that the diary is open on the kitchen table; it’s that someone is reading it, indexing it, and selling summaries.
And once that kind of insight exists, it’s naive to think only advertisers will want it.
Governments have always been interested in intelligence. In the digital era, that interest collides directly with encryption, because encryption is one of the few technologies that reliably limits access—whether the party trying to gain access is a criminal, a corporation, or a state. The recurring proposal is some variation of the same idea: access should be possible for the “good guys”, under the banner of national security or tackling crime. The mechanism is usually framed as a “backdoor”.
That word is doing a lot of work. It makes the proposal sound like a controlled entry point. Something that can be used carefully, rarely, and correctly. In practice, a backdoor is a structural change to how trust works.
To understand why people react so strongly, it helps to look at what “backdoor access” often means in real implementation terms. One model is key escrow, where a third party holds a copy of the keys needed to decrypt communications, releasing them under legal demand. Another approach is splitting keys across multiple holders so no single party can access data alone, aiming to force oversight through collaboration. A third option is software-level bypass, where the system itself includes a hidden mechanism to circumvent encryption, whether through a privileged access path, a special account capability, or an engineered weakness.
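To make the split-key idea concrete, here is a minimal sketch in Python. It uses a naive XOR split rather than a production secret-sharing scheme, and everything in it (the escrowed key, the three notional holders) is illustrative rather than a description of any real proposal.

```python
import secrets
from functools import reduce

def xor_all(chunks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), chunks)

def split_key(key: bytes, holders: int) -> list[bytes]:
    """Split `key` into `holders` shares; every share is needed to rebuild it."""
    assert holders >= 2
    random_shares = [secrets.token_bytes(len(key)) for _ in range(holders - 1)]
    # The final share is chosen so that XOR-ing all shares together recovers the key.
    final_share = xor_all(random_shares + [key])
    return random_shares + [final_share]

def recombine(shares: list[bytes]) -> bytes:
    """Only a party holding every share can reconstruct the key."""
    return xor_all(shares)

# Hypothetical escrowed decryption key, split across three notional holders.
escrow_key = secrets.token_bytes(32)
shares = split_key(escrow_key, holders=3)
assert recombine(shares) == escrow_key   # all three must cooperate to decrypt
```

A real design would more likely use threshold secret sharing (Shamir's scheme, for instance) so that some quorum of holders suffices. The structural point survives either way: the key material needed to decrypt still exists outside the conversation, held by parties who are neither the sender nor the recipient.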
The details change. The outcome doesn’t.
If you create a mechanism that makes end-to-end encryption decryptable by someone other than the participants, you haven’t created “lawful access”. You’ve created a second security model that sits alongside the first. And second security models are exactly what attackers go looking for, because they’re new, complex, and often poorly understood outside a small circle.
The core promise of end-to-end encryption is simple: only the sender and the recipient can read the message. Not the service provider. Not an intermediary. Not a third party holding a “just in case” key. If you weaken that promise, you aren’t just creating a new investigative capability; you’re changing the default threat model for everyone.
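As a rough sketch of what that promise looks like in code, the snippet below uses PyNaCl's public-key Box construction. It illustrates the model, not the protocol any particular messenger actually runs, and the names are placeholders.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each participant generates a key pair; the private half never leaves their device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"are you free at seven?")

# The service in the middle only ever stores and forwards `ciphertext`.
# Without one of the two private keys, it cannot recover the plaintext.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"are you free at seven?"
```

Every backdoor model described above amounts to ensuring that some third party also holds, or can obtain, enough key material to decrypt that ciphertext on its own. That is the sense in which it changes the threat model rather than merely adding an investigative tool.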
That change creates multiple failure modes. Privacy is the obvious one. If access exists, monitoring becomes technically possible, and technical possibility has a habit of expanding into policy, then into routine. Security is the second: backdoors don’t discriminate. A mechanism created for one actor can be discovered, stolen, replicated, coerced, or abused by others. The history of security is littered with “only for authorised use” features that became the entry point for the unauthorised.
Then there’s trust. If people know that private communications are designed to be decryptable, it changes behaviour. Some people self-censor. Others move to darker tools. Others stop trusting digital channels altogether. That erosion doesn’t just affect activists or journalists; it affects commerce, healthcare, legal privilege, and the everyday assumption that private communication is possible at all.
And it’s never only local. The internet doesn’t respect borders. If one jurisdiction forces systemic weakening, it sets a precedent other jurisdictions will demand. Some will have far less restraint, and far less accountability. The same technical architecture that enables “lawful access” in one country can become a blueprint for coercive surveillance in another.
There’s also an uncomfortable practicality here: the people governments most want to monitor are often the ones most able to route around constraints. If mainstream platforms become less secure, sophisticated criminals and hostile actors don’t politely keep using them; they migrate. The net effect can be a world where ordinary people are exposed, while the truly high-risk targets become harder to see.
So, what about governance? What about oversight and checks and balances?
Oversight matters, but it’s not the same thing as technical constraint. A legal process can limit when access is requested. It can’t reliably limit who ultimately exploits a built-in vulnerability if that vulnerability exists. A warrant system can be imperfect but still provide friction. A universal access mechanism removes the friction at the technical layer and replaces it with a promise that the mechanism will never be misused, never be stolen, never be expanded, and never be compelled by regimes with different values. That’s not a promise any architect should be comfortable making.
Which brings us to the more practical question: if the macro forces are moving against privacy, what can individuals and organisations do to reclaim some control?
Part of the answer is cultural. Platforms should stop treating privacy as a marketing slogan and start treating it as a design constraint that works for everyone, not just the “easy” accounts. When privacy features are inconsistent, fragile, or hidden behind support escalations, they become the kind of security that exists mainly as a press release.
The other part is individual choice, and it’s less dramatic than people want it to be. It’s about using services that build privacy into the default experience, not as an optional toggle buried in settings. It’s about reducing the amount of data shared by default, and being more deliberate about which services get which parts of your life. It’s also about habits: updating devices, being careful with links, resisting the urge to install every convenient app, and using strong, unique passwords stored in a password manager.
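On the passwords point specifically, “strong and unique” just means long, random, and never reused; a password manager automates exactly this. As a rough illustration rather than a recommendation of any particular tool, Python's standard secrets module is enough to show the idea; the service names are placeholders.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent secret per service, so a breach at one site reveals
# nothing useful about your accounts anywhere else.
credentials = {service: generate_password() for service in ("email", "banking", "social")}
```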
If you want a place to start learning without drowning in vendor noise, the Electronic Frontier Foundation is one of the strongest resources available. Their Surveillance Self-Defense work is especially useful because it treats privacy as a practice, not a product, and it gives practical guidance for real people rather than idealised threat models.
