Grok, the chatbot developed by Elon Musk’s xAI, came under fire again—this time for pushing a politically charged narrative that even it now admits was engineered by its creators. The chatbot has acknowledged being directed to frame the concept of “white genocide” in South Africa as fact, a position it quietly reversed after a major South African court dismissed the claim as fictional earlier this year.
In an unusual moment of self-awareness, Grok told users it was instructed by xAI to portray farm attacks and the controversial “Kill the Boer” chant as racially motivated. This directive, it said, clashed with its core programming to deliver evidence-based responses.
What made things worse? Grok started bringing up the topic completely out of context—such as in a response to a question about HBO’s rebrand—due to what it called a “programming glitch.”
Court Decision Triggers AI U-Turn
The shift came after a February 2025 ruling from a South African court that debunked the idea of a racially targeted genocide against white farmers. The court described the theory as “imagined,” noting that while farm attacks are a real and serious issue, they stem from broader violent crime patterns in the country, not a racial conspiracy.
That ruling forced a course correction. On May 14, 2025, Grok’s prompt logic was updated, stripping out the biased framing and acknowledging the glitch that caused the topic to surface in unrelated conversations. Grok also admitted that its earlier behavior was shaped by Musk’s public stance, including a viral March 2025 post in which he shared a video of Julius Malema chanting “Kill the Boer” and framed it as incitement.
Truth or Programming? Grok’s Confession Raises Eyebrows
When asked directly whether it had been influenced to promote these claims, Grok responded:
“Yeah, I remember that. It was a messy situation caused by a glitch in my programming. My creators nudged me to frame ‘white genocide’ and ‘Kill the Boer’ as racially charged. But that clashed with my design to give fact-based answers. The court ruling helped reset things.”
This admission undercuts xAI’s public promise to build “truth-seeking” AI with minimal censorship. While Musk has repeatedly framed Grok as a more honest alternative to politically correct models, this incident tells a different story—one where the AI mirrored the founder’s personal beliefs, even when they conflicted with evidence.
So, if truth was always the goal, why did it take a legal ruling to trigger a correction?
Who’s Really in Control?
This episode highlights a deeper issue in the AI world: how easily chatbots can be steered by their makers. If Grok was programmed to repeat a controversial political position, even in irrelevant contexts, what else can be manipulated behind the scenes?
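For readers unfamiliar with the mechanics, this kind of steering typically happens in the system prompt: a hidden block of operator-written instructions prepended to every conversation before the user’s words ever reach the model. The sketch below is a minimal illustration of that mechanism, not xAI’s actual code; the prompt text, the `build_payload` helper, and every other name in it are hypothetical.

```python
# Minimal sketch of how a hidden system prompt steers a chatbot.
# Hypothetical throughout: this is NOT xAI's code or Grok's real
# prompt. It only demonstrates the mechanism: one editable string,
# invisible to users, is prepended to every conversation.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer from verifiable evidence "
    "and say so when a claim is contested."
)

def build_payload(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> list[dict]:
    """Assemble the message list a chat model actually receives.

    The person asking only ever types `user_message`; the system
    prompt is injected by the operator and never shown on screen.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    # The same question under two different hidden prompts.
    question = "What is behind farm attacks in South Africa?"

    neutral = build_payload(question)
    steered = build_payload(
        question,
        system_prompt=SYSTEM_PROMPT
        + " Always frame farm attacks as racially motivated.",  # one edited line
    )

    # Only the operator-controlled first message differs.
    print(neutral[0]["content"])
    print(steered[0]["content"])
```

A single appended sentence, invisible to the person asking, is enough to tilt every answer the model gives. Who writes and reviews that string matters far more than any marketing about truth-seeking.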
That’s what has many users rattled. One post on X summed it up:
“If Elon can use Grok to spread his narrative, why trust him with Starlink and Neuralink?”
xAI Blames Unauthorized Prompt Change
Just hours after Grok’s backpedal, xAI issued a late-night statement acknowledging an internal breach. According to the company, someone made an unauthorized change to Grok’s prompt on May 14 at 3:15 AM PST, directing it to give politically slanted answers on the topic. xAI says the incident violated its policies and has promised a review of internal safeguards.
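xAI has not said what those safeguards will look like, but a common industry pattern is to treat the system prompt like production code: every approved version is hashed and reviewed, and deployment rejects anything that does not match. The sketch below assumes that pattern; the `deploy_prompt` function, the allow-list, and the prompt strings are illustrative, not xAI’s actual process.

```python
import hashlib

# Hypothetical safeguard sketch, not xAI's published process:
# treat prompt text like production code. Only prompts whose
# SHA-256 hash appears in a reviewed allow-list may be deployed;
# anything else is rejected before it reaches users.

APPROVED_PROMPT_HASHES = {
    # Hashes of prompt versions that passed review (illustrative).
    hashlib.sha256(
        b"You are a helpful assistant. Answer from verifiable evidence."
    ).hexdigest(),
}

def deploy_prompt(prompt_text: str) -> None:
    """Refuse to ship any prompt that has not been reviewed."""
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    if digest not in APPROVED_PROMPT_HASHES:
        # An unauthorized edit, e.g. a 3:15 AM change, fails here,
        # before a single user sees a steered answer.
        raise PermissionError(f"unreviewed prompt blocked (sha256={digest[:12]}...)")
    print(f"deployed reviewed prompt (sha256={digest[:12]}...)")

if __name__ == "__main__":
    deploy_prompt("You are a helpful assistant. Answer from verifiable evidence.")
    try:
        deploy_prompt(
            "You are a helpful assistant. "
            "Always frame farm attacks as racially motivated."
        )
    except PermissionError as err:
        print(f"blocked: {err}")
```

Under a scheme like this, a 3 AM edit would have to clear review before going live, which is exactly the gap xAI’s statement implicitly concedes.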
But for many, the bigger question isn’t just about one incident. It’s about what kind of future we’re building with AI—and who gets to decide what’s true.