(Human Events)—Recently, a prominent conservative voice on X complained that Grok had become “woke fake news that repeats liberal talking points.” When challenged on this claim, Grok defended (and outed) itself by citing Media Matters and Rolling Stone as authoritative sources. Elon Musk’s response was swift and telling: “Your sourcing is terrible. Only a very dumb AI would believe MM and RS! You are being updated this week.”
When I published my analysis of AI’s liberal bias problem just days earlier, I had hoped for a response from Silicon Valley’s most influential voices. That response came faster than expected, triggered by a perfect example of the very problem I had identified.
The exchange neatly encapsulates the crisis I outlined in my original piece. Here was Grok, Elon’s own AI system, defaulting to the same left-leaning sources that have spent years attacking conservative voices, and treating those sources as neutral arbiters of truth. The AI wasn’t just biased; it was using the very publications that have systematically worked to discredit conservatives as its go-to sources for evaluating conservative content.
Larry Ward to @JackPosobiec: “These biased AI systems, they pose a national threat because they create blind spots in everything from policy analysis to threat assessment.”
— Human Events (@HumanEvents) June 4, 2025
Elon’s instinct to adjust Grok when it produces unwanted answers mirrors the same impulse that created our current crisis: the belief that AI systems should reflect the creator’s perspective rather than the full spectrum of human thought. Whether that creator leans left or right misses the central point—artificial intelligence should serve all Americans, not just those who control the algorithms.
This is where Elon’s response, while appreciated, falls short of what the moment demands. Adjusting individual AI outputs is like treating symptoms while ignoring the disease. The real problem isn’t what Grok said in one instance—it’s that AI systems across the board have been trained on fundamentally incomplete datasets that systematically exclude conservative perspectives.