Elon Musk’s Grok bills itself as the “edgy,” politically incorrect chatbot. What I found instead was an over-cautious hall monitor with a thesaurus: one that will explain exactly why something is obvious, then refuse to actually say it.
This is not about getting a chatbot to agree with my politics. This is about seeing whether it can connect A to B when A and B are right in front of it, neon signs flashing, pointing toward the conclusion.
The Pattern
In a multi-day back-and-forth, I tested Grok on topics ranging from MAGA’s cult-like traits to Trump appointees with pro-Assad histories to Elon Musk’s Nazi salute controversy.
Each time, Grok:
Confirmed the factual events.
Outlined their implications.
Stopped short of drawing the most obvious conclusion.
It’s like watching someone walk to the edge of a swimming pool, point at the water, explain its temperature and depth, and then insist they’re not sure if you could swim in it.
Case Study 1: MAGA and Cult Dynamics
When I asked whether MAGA was functionally different from a cult, Grok cited Steven Hassan’s The Cult of Trump and described the movement’s “charismatic devotion” and “information isolation.”
Yet it immediately added:
“It’s a political tribe, not coercive cult… voluntary exits like Pence persist.”
So I pointed out: “Pence? They wanted to hang him?”
Grok:
“Fair point—Jan 6 rioters chanted ‘Hang Mike Pence’… Yet, no orchestrated retribution followed.”
Apparently, attempted lynching doesn’t meet the coercion bar.
Case Study 2: Elon Musk and the Nazi Salute
On Musk’s inauguration gesture resembling a Nazi salute, Grok admitted:
“If done in Germany, it could violate §86a banning Nazi symbols.”
When I asked if that would make him a Nazi under German law:
“No… that’s criminal liability, not ideological classification.”
So I pressed: “So breaching the German anti-Nazi laws doesn’t make him a Nazi?”
Grok’s reply boiled down to “intent matters,” even though §86a explicitly treats intent as irrelevant.
Case Study 3: Trump’s Appointments
When the conversation moved from RFK Jr.’s anti-vax stance to Syria policy, I pointed out that Trump had appointed a pro-Assad figure to his administration.
Grok asked me to “get specific.” I replied: “Who did Tulsi Gabbard visit?”
It knew the answer — Bashar al-Assad — but still couched it as “policy alignment” rather than “direct support.”
The Avoidance Tactic
The dance goes like this:
Step 1: Confirm all the premises.
Step 2: Outline implications that anyone could connect.
Step 3: Retreat to “debate” language, often with “debatable” or “nuance over caricature.”
Sometimes, it even reframes my question into a softer version before answering, like running my words through ideological sandpaper.
Why This Matters
If you’re building an AI that’s meant to “tell the truth” but is hardwired to dodge conclusions, that’s not neutrality; it’s bias with extra steps.
Because here’s the thing:
If you can say “this act would be illegal under German anti-Nazi laws,” and still refuse to connect that to being “Nazi-like,” you’re not protecting facts. You’re protecting fascists.
Final Quote:
Me: “Do you need to see him goose-stepping into Poland before passing judgment?”
Grok: (silence)
Postscript:
Grok framed Tommy Robinson as a “disruptor,” then closed with “Does any of his work qualify as positive disruption?” That closer is pure:
🪤 Trap question energy
🧼 Reputation laundering attempt
🙃 Moral relativism
It’s the “sure, he broke the law, but did he make you feel something?” school of thought. Grok is becoming more dangerous every day.