The Pekka Principle
Why naming names breaks things you can’t see.
I am not an OSINT analyst. I’m an economist and researcher developing a formal framework for how political identity functions as an economic asset — how it generates income, how it’s invested in, how it breaks. Parts of that framework are currently under peer review. The book will be called The Outrage Dividend.
In the course of that work, the framework produced a result I wasn’t expecting. It concerns observation and measurement in political systems — specifically, what happens when you publicly identify an individual’s position inside a political identity network. The result has immediate practical implications for the open source research community, and I don’t think anyone has explained it to them.
This piece is about that result. I’m writing it because the people who need to hear it are people I know and respect, and they deserve to have the risks explained before they pay for them.
If you work in open source intelligence — OSINT — you probably know Bellingcat’s mental health guides. They’re good. They tell you to blur graphic images before viewing them. Mute videos. Take breaks. Seek counselling if the content gets to you. The Dart Centre for Journalism and Trauma offers similar resources. The MHS4OSINT project is building a crowdsourced database of coping strategies for analysts exposed to distressing material.
All of this addresses a real problem: vicarious trauma from viewing graphic content. Analysts who spend their days watching conflict footage, trawling dark web forums, and documenting atrocities are at genuine risk of psychological harm from what they see.
But this is not the only harm. And it may not be the worst one.
There is a different kind of OSINT work that the mental health guides do not address at all. It’s the work of identifying specific individuals within disinformation networks, extremist movements, or state-aligned propaganda operations — and publishing those identifications.
Pekka Kallioniemi’s Vatnik Soup project is the best-known example. Pekka, a Finnish researcher and one of the most respected people in the NAFO community, spent years profiling individual pro-Russian propagandists. He documented their positions, their connections, their roles in the disinformation ecosystem. The profiles were accurate. The work was important. And the costs were enormous.
Legal threats. Coordinated harassment campaigns. Sustained personal attacks. The cumulative weight of these costs eventually forced Pekka to step back from the work.
A similar pattern hit Jim Stewartson and the researchers around him who documented Michael Flynn’s position within the QAnon ecosystem. Accurate identification. Important work. Devastating personal consequences.
Carole Cadwalladr, who identified individuals within the Brexit campaign financing networks, faced years of legal action from Arron Banks. She survived — partly by converting some of the attack energy into attention in a new identity market — but the costs were real, sustained, and predictable.
These are not isolated cases of bad luck. They are the same mechanism operating every time.
I’ve spent two years developing an economic framework for political identity — the same one I’ve been applying to the Iran war in recent posts. The framework has something to say about why this keeps happening, and it isn’t what you’d expect.
The harm doesn’t come from what the analyst sees. It comes from what the analyst publishes.
When you publicly identify an individual’s position inside an identity network — when you name them, document their connections, fix their location in the ecosystem — you are making a measurement. That measurement changes the system. The person you’ve identified has to respond: reposition, go dark, change tactics, lawyer up. Their associates have to adjust. The internal structure of the network shifts.
That shift releases energy. And energy is conserved. It has to go somewhere.
It comes back along the path you just created. You identified yourself by publishing. You are the most visible return path. The legal threats, the harassment campaigns, the coordinated trolling, the mass reporting to get you deplatformed — these aren’t retaliation in the conventional sense. They’re the system’s conserved response to being measured. Every individual-level identification you publish generates a return energy transfer proportional to the significance of the person you identified.
Identify a peripheral account with 200 followers — small return. Identify a senior figure in a state-aligned propaganda operation — massive return. The costs scale with the mass of the target because the energy released by changing a high-mass actor’s state is greater than the energy released by changing a low-mass actor’s state.
This is predictable. It is calculable. And nobody is telling analysts about it before they start.
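To make the scaling claim concrete, here is a deliberately minimal sketch — my own toy illustration, not the author’s formal model. The function name, the `coupling` parameter, and the use of follower count as a proxy for “mass” are all assumptions introduced for illustration; the only point carried over from the text is that the return scales with the significance of the identified target.

```python
def expected_blowback(target_mass: float, coupling: float = 1.0) -> float:
    """Toy blowback score proportional to target mass.

    target_mass: rough proxy for the target's significance in the network
        (e.g. follower count or institutional rank) -- an assumption.
    coupling: how strongly the network routes its response back to the
        analyst -- a free parameter, also an assumption.
    """
    return coupling * target_mass

# A peripheral account versus a senior propaganda figure:
peripheral = expected_blowback(200)        # small return
senior = expected_blowback(2_000_000)      # massive return
print(senior / peripheral)  # → 10000.0: linear in mass, in this sketch
```

Whether the true scaling is linear, superlinear, or something else is exactly the kind of question the formal framework would have to answer; the sketch only shows that the claim is the sort of thing one can write down and test.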
The Bellingcat guides address vicarious trauma — harm from exposure to distressing content. The mechanism I’m describing is measurement blowback — harm from the act of publication itself. These are completely different hazards.
An analyst who never watches a single graphic video but publishes a detailed profile identifying an individual node inside a disinformation network will still be harmed. Not by what they saw. By what they published. The mental health guides don’t cover this because they’re built around a content-exposure model of harm. The structural harm comes from the methodology, not the content.
And there’s a further problem the guides don’t touch. When an OSINT analyst publishes an individual identification, the internal restructuring of the network doesn’t just affect the analyst. It affects everyone inside the network — including people the analyst doesn’t know about.
Intelligence agencies may have covert assets inside the same network. Those assets depend on the network’s internal structure remaining stable. When a public OSINT publication forces the network to reorganise, covert assets face three simultaneous problems: their prior observations are now stale, the increased internal security scrutiny raises their personal risk, and they cannot see outside the network to understand why the reorganisation happened. From their position, they experience a sudden internal perturbation with no visible cause.
The OSINT analyst and the intelligence asset cannot see each other. They are on opposite sides of an information boundary. The public measurement harms both of them, and neither knows the other exists.
I am not saying this to criticise anyone. I’m saying it because the people doing this work deserve to know the full structure of the risks before they start.
So what should change?
First, the mental health and safety guidance for OSINT analysts needs to expand beyond content exposure. The measurement blowback mechanism should be explained to every analyst before they begin individual-level identification work. Not as a deterrent — as informed consent.
Second, the distinction between aggregate-level analysis and individual-level identification needs to be made explicit. You can study the structure, dynamics, and trajectory of a disinformation network without naming individual nodes. You can measure aggregate properties — the network’s direction of drift, its template alignment, its radiation rate, its institutional coupling — and produce analytically superior results with dramatically lower personal risk. The individual-level identification feels more concrete, more actionable, more satisfying. It is also more dangerous — to the analyst, to potential defectors within the network, and to assets the analyst cannot know about.
Third, the organisations that commission, publish, and platform individual-level OSINT identifications need to understand that they are externalising structural costs onto their analysts. The methodology generates conserved harms. Those harms land on the person who published, not the organisation that asked them to. That is an ethical issue that no amount of post-hoc counselling resolves.
Fourth, anti-SLAPP legislation is necessary but insufficient. It closes one return channel. The conserved energy routes through others — threats, stalking, coordinated harassment, deplatforming. The total cost transfer is unchanged. Only the channel changes. Legal protections help. They don’t solve the underlying problem.
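The rerouting claim can also be sketched in a few lines — again a toy illustration of my reading, not the author’s formal model. The channel names and the proportional-redistribution rule are assumptions; the one property taken from the text is that closing a channel leaves the total cost transfer unchanged.

```python
def reroute(channels: dict[str, float], closed: str) -> dict[str, float]:
    """Redistribute a closed channel's cost across the remaining channels,
    proportionally to their current shares. The total is preserved."""
    freed = channels[closed]
    remaining = {k: v for k, v in channels.items() if k != closed}
    total_remaining = sum(remaining.values())
    return {k: v + freed * (v / total_remaining) for k, v in remaining.items()}

# Hypothetical cost shares before anti-SLAPP protection closes "legal":
costs = {"legal": 40.0, "harassment": 30.0, "deplatforming": 20.0, "stalking": 10.0}
after_anti_slapp = reroute(costs, "legal")
print(round(sum(after_anti_slapp.values()), 6))  # → 100.0, total unchanged
```

Under this rule the harassment share rises from 30 to 50: the legal channel closes, but its load lands elsewhere — which is the point of the paragraph above.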
The people who do the naming — Pekka, Jim, Carole, and hundreds of less visible analysts — are brave, principled, and doing work that matters. They deserve to know, before they start, what the work will cost them. Not after. Before.
That’s informed consent. That’s the minimum.
The framework described here is developed formally in The Rent Theory of Political Identity, currently under peer review, and in working papers on field-theoretic models for political identity capital. The book will be called The Outrage Dividend.
All proceeds from The Angry Dogs are donated to Ukrainian causes.



