Why Hari Seldon Was Wasting His Time
Maths works - no need to keep guessing.
Seldon model:
elite mathematicians → predict society
Real internet:
10,000 brain damaged cartoon dogs with memes → change society
A friend of mine compared my framework to psychohistory this week.
I’ve never actually read Foundation. I tried. Like Dune, I found it tedious and pretentious. So everything I know about Hari Seldon comes from people telling me I’m him, which is an odd way to learn about a fictional character.
From what I can gather, the comparison is meant as a compliment wrapped in a dismissal. Sounds cool. Can’t work. It’s science fiction.
Here’s the thing. They’re right that my model is deterministic. They’re wrong about everything else. And Seldon? He was doing it the hard way.
Hari Seldon’s psychohistory works like this: gather enough data on enough people, apply statistical mechanics to human populations, and predict aggregate behaviour. It’s probability at scale. It needs trillions of humans to work. It breaks when you zoom in to individuals. It’s a top-down model that observes patterns and hopes they hold.
My framework does none of that.
I don’t model people. I model markets.
Specifically, I model the cost structure of the market in which political identity is produced and consumed. If polarised content is cheaper to produce than nuanced content, and generates more revenue, the market produces more of it. That’s not a prediction about what humans will do. It’s a price signal. It’s Ricardo, not Asimov.
David Ricardo didn’t need to know what any individual farmer was thinking to show that trade follows comparative advantage. He mapped the gradient. The gradient determines the flow.
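The gradient claim can be made concrete with a toy sketch. All numbers below are invented for illustration; the point is only that no producer psychology is required, just a margin comparison:

```python
# Toy sketch: the market "flows" toward the higher-margin content type.
# Cost and revenue figures are illustrative assumptions, not measured values.

def margin(revenue, cost):
    """Profit per unit of content produced."""
    return revenue - cost

content = {
    "polarised": {"cost": 1.0, "revenue": 5.0},  # cheap to produce, earns well
    "nuanced":   {"cost": 4.0, "revenue": 4.5},  # costly to produce, earns slightly more
}

# Each producer simply follows the gradient; aggregate supply shifts accordingly.
best = max(content, key=lambda k: margin(content[k]["revenue"], content[k]["cost"]))
print(best)  # → polarised
```

Nothing here predicts what any individual producer is thinking. The output share follows the margin, the same way Ricardo's trade flows follow comparative advantage.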
Here’s where it gets interesting.
Seldon built his model top-down. Observe aggregates, derive laws, predict downward. The individual is noise. The population is the signal.
I built mine bottom-up. I started with the individual. What does it cost a single person to adopt, maintain, or switch a political identity? What’s the return? What’s the penalty for getting it wrong? Those are the micro equations. They’re deterministic. Plug in someone’s local parameters — their cost structure, their existing identity capital, their position in the attention market — and their behaviour follows cleanly.
The macro behaviour emerges from aggregation. It isn’t assumed. It’s derived.
This matters because Seldon’s model genuinely breaks at the individual level. Mine was built there.
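The bottom-up structure can be sketched in a few lines. The decision rule and parameters below are hypothetical stand-ins, not the framework's actual equations; they only illustrate how a deterministic micro rule yields a derived, not assumed, macro distribution:

```python
# Sketch of the bottom-up claim: a deterministic rule per individual,
# with macro behaviour derived by aggregation. Parameter names and values
# are invented for illustration.

from collections import Counter

def choose(switch_cost, identity_capital, market_return):
    """Deterministic micro rule: switch identity only if the market return
    exceeds the cost of switching plus the capital sunk in the old identity."""
    return "switch" if market_return > switch_cost + identity_capital else "keep"

# A small population, each member with different local parameters.
population = [
    {"switch_cost": 1.0, "identity_capital": 0.5, "market_return": 2.0},
    {"switch_cost": 3.0, "identity_capital": 2.0, "market_return": 2.0},
    {"switch_cost": 0.5, "identity_capital": 0.2, "market_return": 2.0},
]

# The macro distribution falls out of summing the micro decisions.
macro = Counter(choose(**p) for p in population)
print(macro)  # → Counter({'switch': 2, 'keep': 1})
```

Given the local parameters, each individual decision is fully determined; the aggregate is just their sum. Remove access to the local parameters and the same population looks stochastic from above.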
“But at the individual level, behaviour looks random.”
No. It looks random from a distance because you can’t see the local parameters. Information is local and access to it is asymmetric — it depends where you are in the continuum. From the top, you see aggregate flows and the individual cases look stochastic. Zoom in with the right instruments and the individual is just as predictable as the crowd.
Different equations. Same determinism.
Think of it like a river. From a satellite, you see the river flowing to the sea. You can’t track individual water molecules. Doesn’t mean they’re moving randomly. They’re responding to local forces you can’t resolve from orbit. Get close enough and every molecule’s path is determined by the local gradient, the local pressure, the local temperature.
My model works the same way. The macro flow is visible from anywhere. The individual case requires local data. Neither is random. The constraint is information access, not theoretical limits.
So when someone says “that’s a bit Hari Seldon,” the correct response is not to back away from determinism. It’s to walk towards it.
Yes. It’s deterministic. That’s the whole point.
Seldon needed probabilities because he was modelling people. I model prices. Prices are deterministic. If the incentive gradient points downhill, the market flows downhill. You don’t need to survey every molecule to predict which way the river runs.
Seldon needed millions of people because his model broke at the individual level. Mine doesn’t. It works at every scale. You just need the local parameters.
Asimov imagined the hardest possible version of this problem — predicting human behaviour from the top down using statistics. It made for brilliant science fiction. But the actual economics is much simpler than that. You don’t predict behaviour. You map incentives. The behaviour follows.
The real twist is this: Seldon’s fictional critics said psychohistory couldn’t work because human behaviour is too complex. My actual critics say the same thing. Both are wrong for the same reason. They think the model is about humans. It’s about markets. Markets are simple. Humans just live in them.
The framework described here is the Rent Theory of Political Identity, currently under peer review at Cambridge Political Economy. The Outrage Dividend, the book that applies this framework to everything from the British monarchy to jihadi recruitment to planning permission delays, is with literary agents. If you want the equations, they exist. They’re just not in the book, because the argument doesn’t need them.
All revenue from The Angry Dogs is donated to Ukrainian causes.




Great follow up, Matt, and glad I could spur this further.
FWIW, I don’t use Hari Seldon as a derogatory comparison, but as an expositive one:
When I took Stat Mech way way back, my very first thought after deriving Boyle’s Law from a few simple axioms around elastic scattering of individual constituents was that Asimov/Seldon had it wrong, but had the right idea of wanting to understand societal change as an *emergent property* of individual behaviours.
(Just as one refers to Navier-Stokes when modelling fluid flows, as accurately modelling individual molecules is both practically impossible & also runs into Heisenberg’s uncertainty inequality)
So here is my real question, or two:
1. What makes you believe that your axioms are true?
2. Are individual humans as predictable as you require them to be when engaging with markets, or are you running into the same problem as economists with “rational actors”?
In effect, the key question for both top-down & bottom-up models is where the models break down.
In Seldon’s case, Asimov used the periodic apparitions of AI Seldon to enable course correction; in a bottom-up case, at what point do the accumulating error bars result in meaningless noise?
Good luck to you, Matt; the better we model human behaviour, the more likely we are to come up with valuable insights into how to change that behaviour through incentives and disincentives. Just remember that we’re really complex, and that empiricism demands that you compare your model against past results as well as make new predictions to test 🙏