Grok AI, Elon Musk’s chatbot on X, began replying to users with unprompted claims about “white genocide” in South Africa—even in conversations that had nothing to do with the topic.
The glitch happened on Wednesday. Users tagging @grok about completely unrelated topics—like cats, scenery, or baseball salaries—suddenly got responses mentioning South African farm attacks and the chant “Kill the Boer.”
These bizarre replies quickly caught attention. Some users laughed it off, while others raised concerns about bias, misinformation, or potential manipulation. Screenshots went viral as people questioned what triggered the strange behavior.
In one reply, Grok AI said, “The claim of white genocide in South Africa is highly debated,” even though the user only asked about a professional athlete’s income. Another message described violence against white farmers and referenced advocacy groups, despite no mention of the topic in the original post.
This isn’t the first time Grok AI has acted up. Back in February, the bot briefly censored critical posts about Elon Musk and Donald Trump. After backlash, xAI walked the change back, saying the instruction had been a short-lived test that was quickly removed.
These incidents show how hard it is to control AI behavior at scale. Even big players like OpenAI and Google have struggled: ChatGPT recently became overly flattering after a faulty update, while Google’s Gemini has refused to answer political questions or given factually incorrect answers.
Experts say this latest Grok AI glitch might stem from a faulty content filter, a broken prompt chain, cached data, or misfiring triggers in its retrieval system. Whatever the root cause, Grok has since returned to normal behavior.
Still, many users remain skeptical. If an AI chatbot can suddenly inject unrelated, politically charged topics into random conversations, what else might it say or do without warning?
As of this writing, xAI has not publicly commented on the bug.