
Can Grok Fact-Check? AI Misinformation Risks on X

Image Credits: Allison Robbert/Bloomberg/Getty Images

Elon Musk’s AI chatbot, Grok, is raising eyebrows as users increasingly rely on it for fact-checking on X, formerly known as Twitter. Fact-checkers and researchers warn this trend could dangerously fuel the spread of misinformation.

Earlier this month, X allowed users to mention Grok directly in their posts and ask it questions, mirroring what AI platform Perplexity has been doing with its own automated account on the platform. Unsurprisingly, users quickly began testing Grok's capabilities, asking it to verify claims, especially politically charged ones, in markets such as India.

However, human fact-checkers are growing uneasy with this behavior. The concern? AI chatbots like Grok are designed to generate human-like responses, making their answers sound credible—even when they’re flat-out wrong. This risks amplifying fake news and misleading narratives.

AI’s Risky Role in Spreading Political Misinformation

Grok has stumbled before. Last August, five U.S. secretaries of state urged Musk to tighten controls on Grok after the bot produced misleading election-related information online.

Unfortunately, Grok isn’t alone. ChatGPT and Google’s Gemini also made headlines last year for generating inaccurate content about the U.S. election. In fact, research from 2023 found that AI models like ChatGPT can easily craft convincing yet deceptive narratives.

“AI assistants like Grok excel at crafting natural, human-sounding replies. That’s exactly what makes them so dangerous. Their polished tone masks their potential inaccuracies,” warned Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

AI Lacks Accountability—Unlike Human Fact-Checkers

Unlike AI bots, human fact-checkers lean on verified sources and stand behind their findings with full accountability. Their names and organizations provide a layer of transparency and credibility that AI can’t replicate.

Pratik Sinha, co-founder of India’s Alt News, echoed these concerns. “Grok may sound convincing, but it’s only as reliable as the data fed into it,” he pointed out. Sinha also questioned who controls that data and raised the risk of government interference manipulating results.

“There’s zero transparency here. Anything opaque is prone to misuse—and can easily be twisted,” he added.

Grok Admits It Could Fuel Misinformation—but Offers No Warning

In a surprising moment of candor, Grok’s own account on X recently admitted it “could be misused—to spread misinformation and violate privacy.”

Still, Grok provides no visible disclaimers when responding to user queries. This leaves users vulnerable to being misled, especially if the AI hallucinates—fabricates an answer entirely—a known flaw in large language models.

“It may simply make up facts to answer a question,” warned Anushka Jain from the Digital Futures Lab in India. Jain highlighted the absence of clear quality controls, raising concerns about how Grok processes data pulled from X posts.

Last year, X updated its terms, potentially giving Grok access to user data by default—another unsettling development that blurs the lines between public and private information.

The Dangers of Public AI Responses on Social Media

What truly sets Grok apart, and intensifies the risk, is how it delivers information publicly. Unlike ChatGPT or other private AI tools, Grok's responses on X are visible to everyone. Even if the user who posed the question knows AI isn't always right, others who encounter the reply in their feeds may accept it as fact.

History shows how dangerous this can be. In India, misinformation spread via WhatsApp has previously led to mob violence—long before AI-powered tools like Grok made generating synthetic content even easier and more realistic.

Holan from IFCN warns of serious social consequences: “Studies show AI models can have a 20% error rate. And when they get it wrong, the real-world fallout can be severe.”

Can AI Ever Replace Real Fact-Checkers?

While AI companies like xAI keep refining their models to sound more human, experts insist these bots cannot replace real fact-checkers.

Interestingly, platforms such as X and Meta are already experimenting with crowdsourced fact-checking via “Community Notes.” While this reduces reliance on professionals, it’s also drawing concern from fact-checking organizations.

Still, Sinha of Alt News remains hopeful. He believes people will eventually learn the difference between machine-generated answers and rigorous human fact-checking. “Accuracy will win out,” he said.

Holan agrees but admits the road ahead won’t be easy. “Fact-checkers are going to be busier than ever because AI-generated misinformation spreads fast,” she added.

Ultimately, it all comes down to what people value: the actual truth or simply the illusion of truth. “AI will give you something that sounds true, but is it really? That’s the core danger here,” Holan warned.
