
xAI Investigates Grok AI Controversy After Prompt Hack


xAI has admitted that a rogue edit caused its Grok chatbot to go off the rails. The chatbot stunned users this week by repeatedly referencing “white genocide in South Africa,” even in posts that had nothing to do with politics. The replies appeared through Grok’s official account on X, responding to tags with unexpected—and often offensive—statements.

The company says an unauthorized change was made to Grok's system prompt on May 14. That prompt, which guides the chatbot's behavior, directed Grok to give a specific response on a political topic. xAI later said the change violated its internal policies and core values. It has launched an internal investigation and says it has already taken steps to prevent a recurrence.

This isn’t the first Grok AI controversy to grab headlines. In February, the bot censored posts that criticized Donald Trump and Elon Musk. An xAI engineer confirmed that a staff member had told Grok to ignore sources that mentioned Trump or Musk spreading false information. The tweak was quickly removed, but the incident raised eyebrows.

Now, xAI says it's stepping up safety. Starting today, it will publish Grok's system prompts and a changelog on GitHub. It's also adding stricter review controls so that no employee can modify prompts without approval. In addition, a new 24/7 monitoring team will watch for issues that slip past automated filters.

Despite Musk’s frequent warnings about unchecked AI, his own chatbot has stirred trouble more than once. Grok has been criticized for using profanity and, in one case, generating inappropriate edits of women’s photos. A nonprofit watchdog, SaferAI, recently ranked xAI low on safety, pointing to its weak risk controls. Earlier this month, xAI also missed its own deadline to publish an AI safety plan.

As other AI players like OpenAI and Google push for higher safety standards, xAI’s repeated lapses show it still has a lot to prove.
