From MechaHitler to Islamic Chatbots, AI Engines Are Writing the Script for Reality

We may think we can create an artificial intelligence that provides objective truth, but the reality is that AI does not, and cannot, offer truth. It provides synthesis and probability. It’s a great simulation that gives the illusion of objectivity, but it is built on a foundation of human bias, cultural fragmentation and ideological conflict.
AI engines must be trained to be effective, and that training reflects the personal biases of individual programmers, corporate executives, governments, and the religious, political and cultural values of the societies in which they are created.
Differences in approach to tech were highlighted in 2015, when the Court of Justice of the European Union, ruling on a complaint brought by privacy activist Max Schrems, struck down the Safe Harbor data transfer arrangement between Europe and the United States over concerns about mass surveillance and dependence on software and technology from foreign countries.
The resulting calls for data sovereignty – a country’s ability to control and govern its own digital infrastructure, data and technologies – are now reflected in an even bigger, parallel issue: AI sovereignty. But with AI swiftly replacing search engines for billions of users worldwide, AI sovereignty ultimately comes down to who controls reality.
And the divergence in approaches is growing as those who control AI models create them in their own image.
OpenAI explicitly states that ChatGPT is “skewed towards Western views and performs best in English.” But simply having a Western bias isn’t enough for AI model owner Elon Musk, who has personally sought to fine-tune the ideology of xAI’s Grok. After he tried to strip out perceived bias from left-of-center information sources in response to Twitter users complaining the chatbot was too “woke,” Grok ended up referring to itself as “MechaHitler.”
China’s DeepSeek AI blocks information about Tiananmen Square, and unsurprisingly follows the Communist Party line on Taiwan and other topics. Meanwhile, DeepSeek has been banned in some jurisdictions over privacy and security concerns.
Saudi Arabian AI company Humain has launched an Arabic-native chatbot that it says is not only fluent in the Arabic language, but also in “Islamic culture, values and heritage.”
And earlier this year, the Trump administration set out plans to regulate what large language models are allowed to output if they are to win federal contracts, requiring that they reject what it described as radical climate dogma and be free from ideological biases such as diversity, equity and inclusion. OpenAI, Anthropic and Google subsequently won government contracts, presumably by providing compliant services.
Even prior to AI ubiquity, the left and right in the United States couldn’t agree on basic facts, such as who won the 2020 election, let alone issues such as LGBT rights. Plus, people have always chosen biased sources of information that reflect their views, whether it’s Fox News or CNN, the Guardian or the Daily Mail newspaper, so what’s different with AI?
Among the biggest concerns is that people tend to trust AI outputs more than they trust humans. A biased AI is more likely to be perceived as “objective” even when it’s not, which entrenches ideological biases. OpenAI says of its own model, “The model’s dialogue nature can reinforce a user’s biases over the course of interaction. For example, the model may agree with a user’s strong opinion on a political issue, reinforcing their belief.”
If every ideological group has its own AI, we could lose any sense of shared reality, further eroding the concept of objective truth and creating parallel realities with little overlap.
Extremist groups could thus use biased AIs to radicalize followers, creating echo chambers with algorithmic reinforcement and emotional manipulation.
We are also left with the problem of accountability and attribution: Who is responsible for the outputs of either a fascist or revolutionary AI? The creators, the users or the platform?
Why can’t we use the AI itself to help eliminate those biases? We could rely on AI to spot concerning patterns – and even implement a level of auto-correction. Unfortunately, every AI model is itself biased, shaped by training data, design choices and implicit assumptions. Even the most “neutral” AI engines reflect cultural norms, linguistic biases and epistemic (knowledge) frameworks. In a pluralistic society, even a “neutral” AI could simply become another viewpoint – valued by some, distrusted by others.
Regulation also will be put forward as a potential fix. But whose regulation? Should truth be defined by Brussels? Washington? Beijing? Elon? Should religious authorities ensure the theological correctness of AI outputs? Should corporations or owners be free to decide what’s “acceptable” on their platform? In a world where trust itself is fractured, there is no agreement on who gets to regulate reality, so you get to pick your poison.
In the absence of external controls to counter AI bias, we users need to cultivate a culture of critical thinking where we engage with multiple perspectives, question assumptions and treat AI not as a divine oracle, but as a tool for thought. Trust, but verify, then maybe don’t trust at all.
Without transparency, pluralism and a renewed commitment to doubt, we won’t just lose the facts, we’ll lose the ability to agree they ever existed, eliminating the possibility of shared understanding.
Unchecked, our current trajectory could lead us to the end of truth.