We Saw This Coming in 2020. You're Only Noticing Now Because It Made the News.
The Grok Genie, The Bottle, and Why Politicians Announcing Bans After the Technology’s Already Distributed Is Performance Art
By John Langley - The Almighty Gob
[Politician pushing digital water uphill representing futile legislative attempts to control distributed AI technology]
You know that feeling when someone announces they’re solving a problem that people warned about years ago, except by the time they’re announcing the solution, the problem’s already metastasised beyond control?
The Pattern Nobody Wanted To See.
Back in 2020-21, those of us working in adult content production were watching AI image manipulation evolve. Classic 20/80 split: the 20% with operational knowledge could see 80% of the problems coming, whilst most of the noise came from people who had no clue.
This pattern recognition didn’t come from credentials or institutional authority. It came from what I call The Three S’s - Stillness, Silence, Solitude - the conditions where direct observation functions without institutional frameworks overriding what’s actually visible. The adult industry experience provided the data. The inner listening provided the clarity to recognise the pattern before others saw it.
The 13th-century Zen master Dōgen Zenji taught this: stop outsourcing your perception to authorities. Quiet the borrowed opinions. Let direct awareness function without interference. Your observation doesn’t need their validation to be accurate.
You’ve seen this. People who know get ignored. When it goes exactly as predicted - collective amnesia.
The technology existed. Faces swapped onto bodies. Capability improved exponentially. The endpoint - sexualised images without consent, including children - was obvious to anyone with pattern recognition.
But the question wasn’t “is this harmful?” The question was “who benefits from ignoring it?”
Technology companies. Investors. Politicians avoiding headlines. The 20% who could see the problems coming didn’t control the decision-making.
Now Grok makes headlines. AI-generated abuse material. Politicians announcing laws. Shocked press conferences. Solemn promises.
You’ve probably noticed this pattern before.
When Incompetence Becomes Function.
There’s this useful principle: never attribute to malice that which can be adequately explained by stupidity. But it has a limit, doesn’t it?
When the same “mistakes” keep producing the same beneficial outcomes, incompetence stops being an adequate explanation. At some point you’ve got to ask: is this incompetence, or is this how the system’s designed to work?
Take the DWP. On Thursday, January 9th, 2026, the Public Accounts Committee published its findings on disability benefit processing. Three years ago, the DWP promised improvements. This week? “We are now told that they are a further three years off.”
Six years. From promise to “maybe eventually.”
Is that incompetence? Or does a backlogged system serve institutional interests? Fewer staff needed. Ministers avoid accountability. Disabled people remain manageable supplicants.
The AI response follows identical logic. Ignore warnings (benefits tech companies). Wait until harm is undeniable (gives politicians something to “solve”). Announce unworkable laws (appearance of action). Problem continues unchanged.
At some point, the pattern isn’t incompetence. It’s function.
You Can’t Un-Invent Mathematics.
Here’s where this gets uncomfortable.
You cannot un-invent distributed technology.
It’s like pushing water uphill. Indonesia just became the first country to block Grok - Saturday, January 10th, 2026. “Temporarily blocked access” to protect citizens.
Lovely sentiment. Completely meaningless in practice.
AI models run locally. Open-source versions exist globally. VPNs mask location. Encryption makes content invisible.
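To make that concrete, here’s a minimal sketch, assuming an open-weights model has already been downloaded to local storage (the path and prompt are purely illustrative): once the files sit on a machine, generation is an offline computation that no website block or national firewall ever touches.

```python
# Minimal illustration: once open-weights model files exist on disk,
# generation is a purely local computation. No website, API key or
# national firewall is involved at inference time.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # refuse any network access to the model hub

from transformers import pipeline

# The path is illustrative; any open-weights checkpoint already on disk will do.
generator = pipeline("text-generation", model="/path/to/local-open-weights-model")

result = generator("Once the weights are on disk,", max_new_tokens=20)
print(result[0]["generated_text"])
```

The same logic applies to image models and to lighter-weight runtimes such as llama.cpp: the only network-dependent step is the initial download, which a border-level block can delay but never reverse.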
Indonesia’s ban stops exactly nobody who was already creating illegal content. They weren’t using Grok’s official website anyway. They were using local models, encrypted networks, tools designed to avoid detection.
Legislative theatre for an audience that doesn’t understand the technology.
You can’t enforce borders on mathematics. You can’t put the genie back in the bottle.
Where This Really Leads.
And before you think this is just about dodgy images and videos, understand where this leads.
Autonomous weapons making kill decisions. Election manipulation through AI deepfakes released hours before voting. Critical infrastructure - power, water, healthcare - controlled by algorithms one failure away from mass casualties. Every major power developing these capabilities because NOT developing them means losing. The pattern’s identical: technology emerges, warnings ignored, capability distributes, harm becomes undeniable, legislative theatre begins, problem continues.
Except with existential threats, you don’t get a second chance.
Legislative Theatre.
So when politicians announce laws to “crack down on AI-generated abuse material,” what are they doing? Creating legislation affecting only compliant platforms. Criminalising researchers, satirists, artists - whilst actual perpetrators continue. Giving themselves something to point at.
It’s theatre.
Can you see the difference?
Promoted Beyond Capability.
Politicians competent at getting elected are now trying to regulate technology they don’t understand. Out of their depth on distributed systems and the mathematical impossibility of controlling open-source code.
Ever watched a parliamentary technology hearing? It’s like watching someone regulate the tides by passing a law against water.
They’ve been promoted beyond capability. The job requires systems thinking, cryptography, distributed computing. They’ve got soundbites.
Same dynamic. Different salary band. Same bollocks.
Serving The Institution.
Here’s something you’ll have noticed if you’ve been paying attention.
No matter how democratic an organisation’s founding principles, it inevitably develops leadership serving organisational interests over stated mission.
Parliament’s stated mission: protect citizens.
Parliament’s organisational interest: look like it’s protecting citizens whilst not threatening relationships with tech companies, investors, or international competitiveness.
Which one wins?
You can see it in the timeline. Warnings ignored (doesn’t serve organisational interest). Problem metastasises (organisational interest demands being seen to care). Laws announced (organisational interest satisfied). Laws prove unenforceable (doesn’t matter - organisational interest was the announcement, not the outcome).
Whether it stops harm? Secondary to whether it looks like it’s trying.
How many times have you watched this exact pattern?
The Three Questions.
Let’s run this through the framework:
Is it practical?
No. You cannot enforce legislation on technology that runs locally, uses encryption, operates across borders, exists in open-source form. And illegal content doesn’t respect borders. Someone operating from Azerbaijan, Belarus, parts of Russia - what’s Britain going to do? Send a strongly worded letter?
Is it logical?
No. People committing serious crimes weren’t using mainstream platforms anyway.
What’s the likely outcome?
Visible platforms add restrictions. Researchers face compliance burdens. Politicians get headlines. Actual harm continues unchanged.
Britain’s perfected dismissing early warnings, then acting shocked when predictions materialise.
My adult industry experience, especially from 2000 onwards, gave me a front-row seat to how technology gets misused. I saw the trajectory from basic manipulation to sophisticated deepfakes. But people raising concerns in 2020-21 got filed under “future problem” and promised improvements that never materialised.
The Grok situation isn’t surprising anyone who was paying attention.
You’re trying to legislate against technological capability rather than addressing human behaviour. It’s like banning lockpicks instead of going after the burglars.
What strikes me about 2020: it wasn’t just predicting misuse. It was recognising there’s no realistic way to prevent misuse once capability exists at scale.
Current legislators can’t grasp this. Sometimes the only winning move is not playing. But once technology exists at scale, that option’s gone.
We’re busy feeding AI whilst trying to starve it. Its hunger is greater than our attempts at starvation. The genie’s out. The bottle’s smashed.
Six years later, the exact scenario we described is headline news. Politicians announce laws that can’t work. Everyone acts surprised.
This isn’t pessimism. It’s pattern recognition.
Britain waits until the snow’s fallen, then announces action. Except with AI abuse material, the snow’s six feet deep and turned to ice, and you’re trying to shift it with a plastic shovel whilst claiming you’re solving problems.
You’re not solving problems. You’re performing the solving of problems.
People experiencing harm can tell the difference. Can’t you?
We saw this coming in 2020. You’re only noticing now because it made the news. That gap - between people who saw it coming and people who are only paying attention now that it’s headlines - tells you everything about who institutions actually listen to and what they’re designed to do.
Not prevent harm. Manage the optics of harm after it’s already happened.
References:
Dōgen Zenji (1200-1253) - Japanese Zen Buddhist master and philosopher. Founder of the Sōtō school of Zen. Taught that direct observation and inner awareness function without need for external authority or institutional validation. His principle of “practice itself is realisation” emphasises immediate experience over borrowed frameworks.
Vilfredo Pareto - The 80/20 Principle: 80% of effects come from 20% of causes, applied here to how a small minority with operational knowledge foresaw problems the majority ignored.
Nabaz (Kurdish writer) - On political prostitution and institutional self-interest: institutions serve themselves, not their stated purposes.
Hanlon’s Razor - “Never attribute to malice that which is adequately explained by stupidity” - but recognising its limits when patterns become systematic and consistently benefit specific parties.
Laurence J. Peter - The Peter Principle: people are promoted to their level of incompetence, explaining why elected officials competent at campaigning prove incompetent at regulating technology they don’t understand.
Robert Michels - Iron Law of Oligarchy: organisations inevitably develop leadership serving organisational interests over stated mission, regardless of founding principles.



