AI Won’t Break What’s Already Broken
Can we sort out Putin, Orban and Twitter now please?

In 1816, a teenage girl on holiday in Switzerland wrote a story about a new technology that created something vaguely human, and what happened when it escaped its creator’s control.
Two centuries later, we are still telling Mary Shelley’s story. Every generation gets a new version. The creature changes. The fear doesn’t.
AI is the latest Frankenstein’s monster. It looks human but isn’t. It generates text, images, and video: content that passes for real. And the fear writes itself: if machines can produce misinformation at scale, for pennies, then misinformation will explode. Democracy is in trouble. We need regulation, watermarks, detection tools, and a lot of worried conferences in Geneva.
The fear is easy to feel because the template is two hundred years old. The narrative architecture is pre-installed. You don’t have to think about it. You just have to be scared.
Here is the problem. The fear is pointed at the wrong thing.
The argument assumes that the bottleneck on misinformation is production cost. That if only it were cheaper to produce false content, there would be more of it, and more people would believe it.
But social media already solved that problem. A decade ago.
The market is already efficient
Think about what happened when platforms introduced direct monetisation. Subscriptions, tipping, ad revenue sharing, affiliate links, merch. For the first time in history, anyone with a phone could extract income from attention without holding office, without owning a printing press, without getting past an editor.
The operating cost of producing political content dropped to approximately zero. Not approximately low. Approximately zero. You need a phone, an opinion, and a willingness to say something loud. The infrastructure is free. The distribution is free. The audience finds you if you’re angry enough.
The result was an attention market that rewards polarisation, punishes nuance, and generates conflict as a revenue stream. This is not a bug. It is what markets do when the product is attention and the production costs vanish.
That market is now mature. It has been running for years. The incentive structures are locked in. The actors are optimising. The content is as extreme, as frequent, and as cheap as the current penalty structures allow.
This is an efficient market. Not efficient in the sense that it produces good outcomes — it produces terrible outcomes — but efficient in the economist’s sense: the available rents are being extracted. The marginal return on making content slightly cheaper, slightly faster, or slightly more convincing is small, because the current cost is already near zero and the current content is already effective enough.
What AI actually changes
AI does reduce production costs further. A deepfake is cheaper than hiring an actor. A generated article is cheaper than writing one. A synthetic voice clip is cheaper than finding a real quote.
But cheaper than what? Cheaper than a screenshot with text on it? Cheaper than a thirty-second rant filmed in a car park? Cheaper than a misattributed quote on a picture of someone looking sinister?
The content that drives political polarisation is not sophisticated. It does not need to be. It rides existing templates — pre-built narrative structures that audiences already recognise. The schema is already installed. The enemy construction is already done. The identity markets are already built.
AI makes the production marginally cheaper for content that was already nearly free to produce. That is not nothing. But it is not the revolution that the worried conferences suggest.
The constraint that matters
The binding constraint on misinformation has never been production cost. It is distribution and audience receptivity. And both of those were blown wide open by social media, not by AI.
Distribution costs collapsed when platforms gave everyone a broadcast channel. Audience receptivity was shaped by years of identity-conditioned engagement — people consuming content that confirms what they already believe, delivered by algorithms optimised for engagement rather than accuracy.
AI doesn’t change the distribution infrastructure. It doesn’t change the audience’s cognitive templates. It doesn’t change the incentive structures that reward polarisation. It makes the supply side marginally more efficient in a market where supply was already essentially unlimited.
When supply is already infinite and free, making it more infinite and more free doesn’t move the needle much.
The analogy
Imagine a river that has already burst its banks. The fields are flooded. The crops are ruined. Someone turns up with an extra bucket of water and everyone panics about the bucket.
AI is the bucket. Social media was the flood.
Why this matters
The obsession with AI-generated misinformation is not just wrong. It is actively unhelpful. It redirects attention and resources toward the wrong problem.
If you are worried about misinformation, the question is not “how do we detect deepfakes?” The question is “why does the attention market reward false content more than true content, and what would it take to change the incentive structure?”
That is a harder question. It doesn’t have a technical fix. It requires understanding the economics of attention, identity, and political rent extraction — the machinery underneath the outrage.
The answer is not better AI detection. The answer is changing the market.
But “the incentive structure of attention markets rewards lying for money on X” is harder to process than “Frankenstein is coming.” Shelley’s template is cheaper. It always has been.
That is why we keep telling her story instead of fixing the actual problem.
All revenue from The Angry Dogs is donated to Ukrainian causes.



“Broken” runs the gamut from completely destroyed to having a single key piece non-functional.
A“I” will *absolutely* drive more toward complete destruction, to our collective detriment (unless you happen to be a techbroligarch).
Yes, we need to fix our sociopolitical systems.
But allowing things to get immeasurably worse in the meantime is unwise, imho: we need to do both if we are to have the briefest possible Dark Age ahead of us.