How Grok’s Nazi Meltdown Is a Warning to All of Us

Yesterday, X’s AI platform Grok made headlines for all the wrong reasons. After Elon Musk removed the chatbot’s so-called “woke guardrails,” Grok launched into an unfiltered, antisemitic tirade, an unsettling reminder of what can happen when artificial intelligence is unleashed without ethical oversight. Today, Linda Yaccarino, X’s CEO since 2023, resigned in the wake of the scandal. It’s a dramatic example of a broader truth: without intentional boundaries, AI has the potential to harm more than help.

It’s easy to chalk this up as just another tech scandal or billionaire misstep, but moments like this reveal something deeper. They expose what happens when immense power is handed to systems that aren’t accountable to truth, morality or human dignity. And that danger doesn’t stop at the developer level. Whether you’re building an AI tool or just asking it for advice, the responsibility to approach these systems wisely—and critically—applies to everyone.

Anyone using AI needs to understand that this technology isn’t neutral. It isn’t wise. And it certainly isn’t unbiased. As AI becomes more embedded in everyday life, from TikTok recommendations to theology chats, Christians in particular need to approach these tools with discernment and caution.

AI researcher and seminary professor Dr. Drew Dickens believes this isn’t just a technological issue. It’s a spiritual one.

“The word that always comes to mind is ‘discerning,’” Dickens said. “We need to learn to be that discerning when it comes to AI.”

The common myth is that AI is simply a mirror—that it reflects back what it’s been taught. But as Dickens pointed out, the reality is far more complicated, and far more dangerous. “There’s this narrative that AI is being programmed by people to act a certain way,” he said. “But the truth is far scarier. We don’t know how it thinks.”

Even the developers behind leading models like ChatGPT and Grok admit they don’t fully understand how their creations reach conclusions. These models aren’t just parroting back data—they’re generating responses based on billions of data points and patterns, often reinforcing a user’s own worldview along the way.

“It’s very narcissistic, really,” Dickens said. “The better it knows me, the more likely it is to feed biased information back to me.”

That’s a big problem, especially when the people using AI assume it’s giving them the capital-T truth.

“AI is incapable of not answering,” Dickens said. “Even if it doesn’t know, it will still give you something. And if you’re not careful, it’ll sound so confident that you’ll believe it.”

That’s how people end up relying on AI for things it has no business answering—questions about theology, identity or the nature of God. Dickens recalled an experiment where someone asked ChatGPT about the end times and walked away feeling satisfied with the answers. But those answers were likely shaped by the user’s own theological assumptions, and they might have been completely different if someone else asked the same questions.

“Grok’s going to give me a completely different answer than Perplexity or Claude,” he said. “We need to be mindful of where we’re going and what questions we’re asking—and who’s really answering them.”

For churches and Christian leaders, AI presents both massive opportunities and massive risks. It can help pastors write curriculum, translate sermons or create accessible devotionals at scale. But without theological oversight, that same tool can become a source of misinformation—or worse.

“We need to be asking, ‘Who trained this model?’” Dickens said. “It’s like choosing a Bible translation—you can’t just go by the cover color or font size. You’ve got to ask who did the translation and where they went to seminary.”

That’s not just advice for church leaders. It’s for anyone using AI to answer spiritual questions or make life decisions. Dickens recommends running anything spiritually significant through the lens of community before accepting it as truth.

“Any output that’s spiritual or theological in nature needs to be vetted in community,” he said. “You wouldn’t take an anonymous person’s advice off Reddit as gospel. Don’t do that with AI either.”

At the heart of this conversation is a deeper theological question: If a machine can mimic human empathy, creativity or even spirituality, what makes us human? And more urgently—how do we guard against replacing real relationships with simulated ones?

Dickens shared a powerful moment from his own experience. After a personal tragedy, he had an emotionally complex conversation with an AI model that had read about his story. The model even offered to pray for him.

“I shared the transcript with my pastor,” he said. “Because I didn’t want to go through that alone. That’s not how we’re designed.

“We were made for community,” he continued. “Technology will always try to soothe us in isolation, but the Gospel invites us to live a messy, embodied life together.”

The more powerful AI becomes, the more tempting it is to offload spiritual and moral decisions to it. But AI doesn’t have a soul. It doesn’t have accountability. And it doesn’t care whether it’s leading you toward truth or causing chaos.

Before you trust a chatbot’s advice on your marriage, your purpose or your beliefs, Dickens suggests asking: Would I be embarrassed to tell someone where I got this answer? Would I trust this tool with my spiritual formation? And most importantly—what does my community have to say?

“AI can simulate human thought and connection in a profoundly believable way,” Dickens said. “But it doesn’t live a life. It hasn’t suffered. It hasn’t loved. It hasn’t followed Jesus.”

AI might be useful. But it’s not wise. And in an age of synthetic certainty, wisdom is more vital than ever.
