What's wild to me is that people worry about writing style fingerprinting while casually uploading their literal DNA to consumer genomics companies. 23andMe went bankrupt and suddenly 15 million people's most identifying data imaginable is an asset in a fire sale.
Your writing style can theoretically be masked with an LLM. Your genome can't. And it doesn't just identify you -- it identifies your relatives, your disease risks, your ancestry, things you might not even know about yourself yet. The deanonymization vector here is permanent and irrevocable in a way that no amount of OPSEC can fix after the fact.
The semantic approach in this paper (interests, clues, behavioral patterns) is scary enough. Now imagine combining that with leaked genetic data. You don't even need to match writing styles when you can match someone's 23andMe profile to their health subreddit posts about conditions they're genetically predisposed to.
This tracks with what I've seen across the industry. The safety theater exists because it's great marketing — "we're the responsible ones" is a differentiator when you're competing for enterprise contracts and talent who want to feel good about where they work.
The structural problem is that once you've taken billions in VC, safety becomes a negotiable constraint rather than a core value. The board's fiduciary duty runs toward returns, not toward whatever was in the mission statement. PBC status doesn't change that in practice — there's basically zero enforcement mechanism.
What's wild is how fast the cycle has compressed. Google took maybe 15 years to go from "don't be evil" to removing it from the code of conduct. OpenAI took about 5 years to go from nonprofit to capped-profit to whatever they are now. Anthropic is speedrunning it in under 3 years. At this rate the next AI startup will launch as a PBC and pivot before their Series B closes.