7 Comments
Digital Canary 💪💪🇨🇦🇺🇦🗽:

Great follow-up, Matt, and glad I could spur this further.

FWIW, I don't use Hari Seldon as a derogatory comparison, but as an expository one:

When I took Stat Mech way way back, my very first thought after deriving Boyle's Law from a few simple axioms around elastic scattering of individual constituents was that Asimov/Seldon had it wrong, but had the right idea of wanting to understand societal change as an *emergent property* of individual behaviours.

(Just as one refers to Navier-Stokes when modelling fluid flows, since accurately modelling individual molecules is both practically impossible and also runs into Heisenberg's uncertainty inequality.)
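The Boyle's-Law-from-elastic-scattering point can be sketched numerically. This is a toy illustration, not anything from the thread itself: the `wall_pressure` helper and all its parameters are made up here. Non-interacting particles with Maxwell-Boltzmann velocities bounce elastically off the walls of a unit cube, and the accumulated momentum transfer recovers the macroscopic P ≈ NkT/V without ever tracking any single particle's fate.

```python
import math
import random

def wall_pressure(n_particles=500, box=1.0, kT=1.0, mass=1.0,
                  dt=1e-3, steps=4000, seed=0):
    """Estimate the pressure on one wall of a cubic box from the
    momentum transferred by elastic collisions of non-interacting
    particles (1-D motion along x suffices for the pressure)."""
    rng = random.Random(seed)
    sigma = math.sqrt(kT / mass)  # Maxwell-Boltzmann std dev per component
    x = [rng.uniform(0.0, box) for _ in range(n_particles)]
    v = [rng.gauss(0.0, sigma) for _ in range(n_particles)]
    impulse = 0.0
    for _ in range(steps):
        for i in range(n_particles):
            x[i] += v[i] * dt
            if x[i] < 0.0:                # elastic bounce off left wall
                x[i], v[i] = -x[i], -v[i]
            elif x[i] > box:              # right wall: count momentum transfer
                x[i], v[i] = 2.0 * box - x[i], -v[i]
                impulse += 2.0 * mass * abs(v[i])
    # pressure = total impulse / (wall area * elapsed time)
    return impulse / (box * box * steps * dt)

# Ideal gas law prediction for the same box: P = N kT / V = 500 here;
# the micro-level simulation lands close to it despite per-particle chaos.
print(wall_pressure())
```

The point of the sketch is exactly the one made above: the macro observable is stable even though no individual trajectory is.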

So here is my real question, or two:

1. What makes you believe that your axioms are true?

2. Are individual humans as predictable as you require them to be when engaging with markets, or are you running into the same problem as economists with "rational actors"?

In effect, the key question for both top-down and bottom-up models is where the models break down.

In Seldon's case, Asimov used the periodic apparitions of AI Seldon to enable course correction; in a bottom-up case, at what point do the accumulating error bars result in meaningless noise?

Good luck to you, Matt; the better we model human behaviour, the more likely we are to come up with valuable insights into how to change that behaviour through incentives and disincentives. Just remember that we're really complex, and that empiricism demands that you compare your model against past results as well as make new predictions to test 🙏

Mattppea:

Great questions - let me take them in order.

1. The axioms are deliberately minimal: actors respond to incentives, attention is monetisable, identity is durable. I don't need rational actors in the Chicago sense, just that people respond to return gradients. That's a much weaker assumption and closer to behavioural econ than neoclassical.

2. This is the bit that surprises people: the macro predictions don't require individual predictability, for the same reason gas laws work without tracking individual molecules. Individual noise washes out in aggregate. The return function *can* predict individual behaviour if you have the local parameters, but the field-level predictions don't depend on it. You're assuming I need micro-predictability for macro-prediction; it's actually the opposite.

3. The error bar question is the sharpest one, and the answer is resolution. At low resolution (national elections, large identity bodies) the error bars are tight. At high resolution (individual actors, short timescales) they widen. The model specifies its own validity domain: below the ignition threshold, individual noise dominates and the field description stops being useful. That's a feature, not a bug.
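Points 2 and 3 are, at bottom, the law of large numbers, and the resolution effect can be demoed in a few lines. This is a toy binary-choice model with made-up numbers (the 52% propensity and the `share_spread` helper are purely illustrative, not the return function discussed above): each actor's pick is an independent coin flip, yet the aggregate share is tightly predictable at low resolution, with error bars that widen as the population shrinks, roughly as 1/sqrt(n).

```python
import random
import statistics

def share_spread(n_actors, p=0.52, trials=100, seed=1):
    """Std dev, across repeated trials, of the aggregate share
    when each actor independently picks option A with probability p.
    Individual picks are pure noise; the spread measures how
    predictable the *aggregate* is at this resolution."""
    rng = random.Random(seed)
    shares = []
    for _ in range(trials):
        hits = sum(1 for _ in range(n_actors) if rng.random() < p)
        shares.append(hits / n_actors)
    return statistics.stdev(shares)

# Error bars tighten as resolution coarsens (more actors per aggregate):
for n in (10, 100, 10000):
    print(n, round(share_spread(n), 4))  # spread shrinks roughly as 1/sqrt(n)
```

At n = 10 the "prediction" is swamped by noise; at n = 10,000 the 52% lean is sharply visible, even though no individual flip was ever predictable.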

On Seldon: the key structural difference is that my framework doesn't predict a single path forward. It predicts the shape of the incentive landscape, which is observable and updatable. No need for periodic hologram apparitions to course-correct. The landscape itself is the prediction.

And yes, I have validated against past results (Gorton & Denton, Hungary 2022, US and UK elections) using the same equations at different resolution parameters. I currently have an out-of-sample test locked for Hungary in April.

Appreciate the questions. They are bloody good ones 🫡

Digital Canary 💪💪🇨🇦🇺🇦🗽:

Damn, I hate it when the Substack app loses a response 🤦‍♂️

Short redo:

I look forward to watching and engaging, and I hope you'll be as successful as, or even more successful than, I've been with converting execs to a Bayesian, quantified risk-management approach; that's my professional "think global/act local" focus.

And as for Asimov's single path, that's really just a reality of good sci-fi storytelling: your MacGuffin needs to be committed to, so that the human story can be exposed. (And so he could ultimately connect this sprawling tale of human empire, collapse, and resilience to his even more sprawling Robot novels.) 😉

Mattppea:

Oh yes, I understand that totally. Do you want to know the fix? You provide people with kindness, money, health, support, and compassion to replace the bad value they get from attention. This is not a traditional model. It has compassion baked into the equations. And it works.

Digital Canary 💪💪🇨🇦🇺🇦🗽:

100% agreement, Matt: so many of the societal incentives are backward today, focused on promoting individual ascension above the constraints of the collective (yay, oligarchy!) and on demeaning the value of that collective.

We're not quite so far down that path here in Canada, but the reality of our tech-centred world is that we *all* get exposed to much of the same necrocapitalist propaganda, rotting support for collective care.

I hope against hope that you'll have success, as evidence-based approaches to extracting ourselves from the deep hole (mass grave?!) that we've been tricked into digging are going to be needed; otherwise we'll just end up even deeper underground.

Mattppea:

I am on it.

Mattppea:

I literally saw it in NAFO.