As part of the ongoing research I keep on the backburner to maintain some aspects of my career, over the last few years I’ve run into a strange series of observations that repeat across the digital horizon: Companies are moving in lockstep in ways that can only be the result of coordination, not simply influence or culture-driven change, and it seems to revolve around the notion that system security is a product of being able to identify users.
You can’t sign up for a service without handing over piles of your personal information, including your email address. You can’t get an email address anonymously. Even the “anonymous” email providers do identity verification, or leave a trap door for it later: they require you to pay with a credit card, then run phone verification to make sure you’re a “real person”. The idea is to make sure there’s nothing you can do on the internet that doesn’t leave a verifiable chain of artifacts tying you to who you really are, in case they, or someone else, or a government entity decides to go after the person behind the behavior or content.
You can get a burner phone, but buying and activating one now often requires several layers of identity verification, and the device is tied to whatever cell towers it connects to. If you find a way around that, you have a phone number you can almost use to register for an anonymous email address. But the phone connects to three local towers, and those can be used to triangulate your exact latitude and longitude at the time of activation and at every ping while the device is in operation. So when the email signup makes you verify the number, probably via SMS, you turn the phone on, hand the towers your geographic location, and verify it.
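And “triangulate” is not hand-waving: once the network has rough distance estimates to three known tower positions (from timing advance or signal strength), recovering the handset’s position is basic algebra. A minimal sketch, with made-up coordinates and distances:

```python
def trilaterate(towers, distances):
    """Estimate a 2D position from three known tower coordinates and
    handset-to-tower distance estimates (e.g. from timing advance)."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise yields a 2x2 linear system:
    #   2(x2-x1)x + 2(y2-y1)y = d1^2 - d2^2 + x2^2 - x1^2 + y2^2 - y1^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the towers are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical towers on a local km grid, with noisy distance estimates:
towers = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
print(trilaterate(towers, (1.92, 2.77, 2.34)))  # ≈ (1.50, 1.20)
```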
So you think “ok, but knowing my location isn’t knowing who I am”, except the problem is /relational/ data. The email account tied to that number is now associated with the locations the towers reported, ready for whoever goes looking to deanonymize you later.
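The mechanics are mundane. Two datasets that are individually innocuous become a deanonymization engine the moment they share a key, and the verified phone number is exactly that key. A sketch with hypothetical table names and made-up records:

```python
import sqlite3

# An email provider's signup records and a carrier's tower pings.
# Neither dataset names you; together they place "you" on a map.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE signups (email TEXT, verified_phone TEXT);
    CREATE TABLE tower_pings (phone TEXT, lat REAL, lon REAL, ts TEXT);
    INSERT INTO signups VALUES ('anon.tipster@example.com', '555-0134');
    INSERT INTO tower_pings VALUES ('555-0134', 40.7410, -73.9897,
                                    '2024-05-01T09:13');
""")

# The deanonymization is a single join on the shared key:
for row in db.execute("""
    SELECT s.email, p.lat, p.lon, p.ts
    FROM signups s JOIN tower_pings p ON p.phone = s.verified_phone
"""):
    print(row)  # ('anon.tipster@example.com', 40.741, -73.9897, ...)
```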
Many systems now also require multifactor authentication through your cell phone: pull up an app, or type a code that was texted to you, to prove the person logging in controls the device or the phone number.
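The whole ritual is mechanically tiny, which is part of why it spread so fast. A minimal sketch of the server side of SMS-code verification (names illustrative, not any particular vendor’s API):

```python
import hmac, secrets, time

_pending = {}  # phone_number -> (code, expiry_timestamp)

def send_code(phone_number, ttl=300):
    """Generate a short-lived 6-digit code tied to a phone number."""
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. '048213'
    _pending[phone_number] = (code, time.time() + ttl)
    # A real service would hand `code` to an SMS gateway here.
    return code

def verify_code(phone_number, submitted):
    """Check a submitted code; codes are single-use and expire."""
    code, expiry = _pending.get(phone_number, (None, 0))
    if code is None or time.time() > expiry:
        return False
    # Constant-time comparison to avoid timing side channels.
    ok = hmac.compare_digest(code, submitted)
    if ok:
        del _pending[phone_number]
    return ok
```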
Cell phones were largely thrust on people. They were initially very useful for mobile access to information and the ability to make and take calls and send text messages, but what you operate these days is more than a cell phone — it evolved into an entire handheld computer whose actually-used features have more to do with data usage and the remote services behind your apps than with making or receiving calls and SMS texts. People texting each other all the time was an early-2000s thing, for you old people who still do it. Just like AIM used to be.
So then why did phone-number verification become such a pivotal proof of identity after the core features of cell phones largely fell out of heavy use?
I’ll tell ya: Because it requires you to pay for it constantly. Because they’ve made it so you can’t really do anything without them, a blanket-immunity approach. Because if that market failed, the major cell providers would drop infrastructure used for shitloads of national-security purposes. Because they’ve built a “glass dome” centered on identity as an accountability factor in online communications.
Good luck if you’re a reporter or whistleblower working on a dangerous story about powerful people and need to report something anonymously. Good luck if you’re in hiding from someone who knows that system and can influence it.
One of the less examined consequences of this identity-first model is how it reshapes behavior long before any explicit enforcement occurs. When actions are persistently attributable, people learn to self-censor—not necessarily because a rule exists, but because a permanent record might. Experimentation gives way to caution; dissent becomes a calculated risk rather than an exploratory act. The ability to test ideas, inhabit provisional identities, or discard failed selves has historically been essential to both research and professional growth. As identity becomes durable and retroactive, that flexibility erodes. The result is a quiet pressure toward conformity, where career continuity, reputational safety, and seemingly inescapable legal risk increasingly outweigh curiosity, critique, or intellectual risk-taking.
This identity regime is asymmetrical. Institutions retain layers of insulation that individuals do not. Corporations operate through legal entities, shell subsidiaries, indemnification agreements, and professional intermediaries. Decision-makers are buffered; accountability diffuses upward and outward.
Individuals, by contrast, are increasingly required to act under persistent, unshielded identity. Their speech, research, and associations are directly attributable, archived, and recoverable. The rhetoric of accountability suggests equal exposure, but enforcement flows downward. Platforms demand maximal transparency from users while preserving opacity for themselves. Accountability becomes selective: compulsory for individuals, optional for institutions.
These are bad times to be a person and great times to be an organization.
Unfortunately, we’re losing things to it: Many foundational advances emerged from environments where identity was optional or provisional. Early internet research communities prioritized contribution over credentials. Cryptography mailing lists allowed participants to explore controversial ideas without professional retaliation. Political writing under pen names enabled arguments to stand on merit rather than authorial standing. Reporters and whistleblowers like the ones I mentioned earlier had paths to get sometimes world-changing information out into public view without being identifiable, often saving their lives through that fact alone, and even that wasn’t always enough. The entire model now hinges on the sustained integrity of powerful conglomerates and government infrastructure that, historically, it has been a mistake to rely on.
The function of pseudonymity in these contexts was not concealment for its own sake, but freedom to experiment. It reduced social, career, even mortal risk, allowing ideas to be tested, discarded, or refined without permanent attachment. These systems optimized for intellectual throughput rather than reputational management, and that period of the internet, from roughly 1990 to about 2015, advanced our species by what would otherwise have taken hundreds of years, in every way that we advance. Then we built this glass dome around identity, and things are slowing down again.
Identity systems assume a static subject. They behave as if a person at time t is meaningfully identical to the person at time t + 10, and that every utterance, hypothesis, or mistake should remain attributable across that span. Human development does not work that way. People learn by being wrong. They refine positions by abandoning earlier ones. They explore by trying on ideas that are incomplete, poorly formed, or later rejected. When identity becomes permanent and retroactive, this process is punished rather than supported. Error hardens into record. Provisional selves become liabilities. Growth becomes something to be managed instead of something to be attempted.
This is not simply a privacy issue. It is a systems-design failure that treats human cognition as immutable state rather than an evolving process. The system optimizes for traceability, not for learning. For compliance, not for discovery. It produces cleaner logs at the cost of fewer new ideas.
The professional consequences are not evenly distributed. Risk tolerance now tracks privilege. People with tenure, wealth, institutional backing, or legal insulation can afford to be exploratory, controversial, or wrong. Everyone else learns quickly that deviation carries durable consequences. Early-career researchers, journalists, engineers, and analysts internalize this constraint before any rule is ever enforced. The safest move is silence. The second safest move is orthodoxy. Entire fields narrow not because alternatives are disproven, but because they are dangerous to attach your name to.
This is a system that rewards the bad actor with institutional backing. Corruption is an inherent part of organizations, and anonymity is a critical tool for addressing institutional corruption at many layers. This will absolutely cause an integrity and accountability crisis more reminiscent of China’s systems than America’s.
All of this assumes stable governance and benevolent stewardship. That assumption is doing a lot of work, and it’s already starting to fail. How many reporters were killed in Gaza by IDF snipers this year? The US intelligence community using it is one thing, but the internet is global. People have already died under the glass dome.
That’s just under today’s power structures too. Identity infrastructure does not evaporate when conditions change. Systems built for “safety,” “compliance,” or “trust” under one regime remain fully functional under another. Databases outlast administrations. Logging pipelines persist through policy shifts. What begins as convenience becomes leverage the moment incentives invert. The danger is not just that current actors intend abuse – though they do – it is that the tools are durable, centralized, and already normalized.
Historically, relying on the perpetual restraint of powerful institutions has not been a winning strategy. The more complete the identity lattice becomes, the more catastrophic its misuse when pressure arrives. At that point, there is no need for new mechanisms. Everything required is already in place.
You can’t even make changeable link shorteners right now without giving away personally identifiable information tied to your cell phone, location, and email.
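For contrast, the feature itself needs no identity at all. Here is a minimal sketch (Flask, in-memory, every name hypothetical) of a changeable shortener where the right to edit a link is a random capability token handed out at creation time rather than an account:

```python
import secrets
from flask import Flask, abort, redirect, request

app = Flask(__name__)
links = {}  # slug -> {"url": ..., "edit_token": ...}

@app.post("/shorten")
def shorten():
    """Create a short link; return the slug plus an edit capability."""
    slug = secrets.token_urlsafe(4)
    token = secrets.token_urlsafe(16)
    links[slug] = {"url": request.form["url"], "edit_token": token}
    return {"slug": slug, "edit_token": token}

@app.post("/edit/<slug>")
def edit(slug):
    """Change a link's destination; authorized by token, not identity."""
    entry = links.get(slug)
    if entry is None or not secrets.compare_digest(
            request.form.get("token", ""), entry["edit_token"]):
        abort(403)
    entry["url"] = request.form["url"]
    return {"ok": True}

@app.get("/<slug>")
def follow(slug):
    entry = links.get(slug) or abort(404)
    return redirect(entry["url"])
```

The point is not that this toy is production-ready; it is that identity was never a technical requirement of the feature, only a policy layered on top of it.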
What is being traded away, then, is not just anonymity, but institutional accountability — the margin that allows societies to think, test, dissent, and correct themselves without immediately punishing the people doing that work. Once that slack is gone, the system may look orderly, compliant, and secure. It will also be authoritarian, violent, and brittle. And brittle systems do not fail gently.
Those anonymous triggers of accountability act as a kind of pressure relief valve in the sociopolitical plumbing across our entire civilization. This will not end well. Or peacefully.