Grok AI: A Conduit for Misinformation in the Digital Age

Zara Schroeder and Drew Haller, Research ICT Africa

Introduction

AI systems now shape public discourse through direct integration into social media platforms, making information accuracy a technical and societal imperative. Grok, developed by xAI, Elon Musk’s AI company founded in 2023, exemplifies the convergence of social media and generative AI (GenAI) and its associated risks (xAI, 2023). Positioned as a real-time, interactive assistant, Grok responds to user prompts by drawing on publicly available data, primarily from X itself. While this design enables Grok to be responsive to current trends and discussions, it also poses concerns about its ability to provide neutral and unbiased content.

Grok was designed in response to what Musk and xAI described as biases and censorship present in existing AI systems, and was subsequently embedded into X (formerly Twitter) as part of Musk’s platform transformation strategy. The chatbot was designed to offer users an AI assistant that is “truth-seeking” and “rebellious,” claiming to be more willing to address controversial topics and provide uncensored responses (United Nations University, 2025). Offering dynamic assistance directly within the social media environment, its uses range from answering queries to summarising trending topics and interacting with users in real-time.

However, critics have pointed out that while Grok is designed to provide instant access to information, its ability to amplify unverified or misleading claims at speed raises major concerns for information integrity in the digital age. It has recently come under scrutiny for disseminating misinformation, particularly in response to current events and trending news (Global Witness, 2024; Daily Maverick, 2024). Moreover, despite its public opposition to censorship, Grok 3’s system has been proven to explicitly avoid mention of the platform owner, Elon Musk, and his previous political associate, President Donald Trump, when responding to queries related to misinformation.

While the system claims to be ‘truth-seeking’, its selective filtering reveals a programmed ability to manipulate public opinion while upholding the image of the political and corporate players that fund its use (United Nations University, 2025). Moreover, its integration on a platform already known for rapid information dissemination creates a unique context in which its outputs would be difficult to control. This seems to affirm its ‘rebellious’ design; its integration provides the AI system with a wide audience, facilitating high engagement rates with minimal oversight.

Because X is an open platform where unverified claims, conspiracy theories, and partisan narratives often proliferate, training Grok on this input data also invites the propagation of misinformation. Ultimately, these issues demonstrate the need to scrutinise AI training data’s inherent biases and question the reliability and responsibility of AI tools embedded in platforms that influence public opinion and popular decision-making. Ultimately, Grok’s model creates a policy imperative to mitigate AI’s associated misinformation risks either preventatively or at high speed.

A Case Study of Electoral Misinformation

One notable incident highlighting Grok’s ability to accelerate misinformation is when it reportedly echoed the debunked claim that the 2020 U.S. presidential election was fraudulent. Despite this narrative being repeatedly disproven by courts, electoral commissions and independent fact-checkers, Grok’s repetition of these claims lent the narrative some credibility, simply because certain users perceive AI as inherently factual and unbiased (Brennan Center for Justice, 2024).

Grok’s image generation capabilities, while innovative, have also become a vector for misinformation (NewsGuard, 2024). In some cases, users exploited the tool to fabricate or manipulate visuals that appeared to depict real-world events. These images, often devoid of disclaimers or context, are shared across the platform, contributing to false narratives and confusion among viewers (Brewster and Rubinson, 2024). The blending of AI-generated visuals with trending misinformation illustrates how GenAI tools can accelerate the viral spread of falsehoods, particularly when embedded in high-traffic platforms like X.

In August 2024, Grok was found to have generated unsolicited political content in response to user inquiries (Global Witness Organization, 2024). For instance, when asked about the eligibility of Vice President Kamala Harris to appear on the presidential ballot following President Joe Biden’s announcement that he would not seek reelection, Grok incorrectly stated that the deadlines to change ballots in some states had passed (Aratani, 2024). This misinformation was later debunked by election officials, who clarified that the deadlines had not yet expired. Despite the correction, Grok continued to propagate the false information for over a week, leading to widespread confusion among users.

This incident underscores the risks associated with AI systems trained on unverified or biased data sources. Grok’s responses were based on publicly available information, including posts on X, which may not always be accurate or reliable. The lack of robust content moderation and fact-checking mechanisms allowed misinformation to spread unchecked, highlighting the need for AI systems to be equipped with safeguards to ensure the accuracy and neutrality of the content they generate (Daily Maverick, 2024). Notably, given the high speed at which this content travels, even corrections or debunkings by fact-checkers often fail to reverse the damage done. This highlights the need for preventive measures that can detect biases in AI systems before they are widely deployed and integrated into social media platforms.

The propagation of unsolicited political content by Grok had multiple implications. Firstly, it threatened electoral integrity, as users who relied on Grok were misinformed about critical election deadlines, consequently affecting their voting decisions and behaviour (Griffith, 2024). Secondly, the content eroded trust in Grok and, by extension, in AI-generated content on the X platform. Lastly, election officials from multiple states raised concerns about the spread of misinformation and urged X to implement corrective measures (Santos and Saric, 2024; Griffith, 2024).

In response to the backlash, X implemented changes to Grok’s functionality. The chatbot was updated to direct users who asked election-related questions to vote.gov, the federal government’s nonpartisan election website. This measure aimed to provide users with accurate and official information, reducing the risk of misinformation dissemination (Griffith, 2024; Santos and Saric, 2024). However, despite the positive outcome, the response points to the reactionary nature of AI governance and accountability mechanisms.

The Grok incident serves as a cautionary tale about the potential for AI systems to generate unsolicited political content and misinformation. It also makes clear that while misinformation can often be perceived as inadvertent, its outcomes are the result of human designs and programmatic decision-making, as was the case with Grok 3’s deliberate filtering of content regarding Musk and Trump’s misinformation campaigns.

This case study underscores the importance of early implementation of robust content moderation and fact-checking mechanisms in AI to ensure the accuracy and neutrality of AI-generated content. As AI continues to play an increasingly prominent role in public discourse, developers and platform owners must prioritise transparency in AI training processes and programming to ensure information integrity, maintain public trust and uphold democratic processes.

Recommendations for Safeguarding Information Integrity

Grok’s failures highlight broader challenges in maintaining information integrity in the age of AI. AI systems, when trained on unverified or biased data, can perpetuate harmful misinformation. This is particularly concerning, given AI’s widespread integration into platforms like X, where AI-generated content can reach a global audience instantly. While AI can enhance user experience by providing real-time responses, it can also become a conduit for false narratives and fear-mongering, ultimately presenting major threats to public perception and discourse.

To ensure the responsible and effective use of AI, private platforms must be required to implement robust AI governance through the establishment of clear guidelines and oversight mechanisms that uphold standards of accuracy and neutrality. Central to this effort is the use of transparent and diverse training data. AI inputs must be drawn from verified sources to reduce the risk of bias and inaccuracies in AI-generated outputs. At the same time, promoting media literacy among the public is crucial, equipping individuals with the skills needed to critically evaluate information sources and distinguish credible content from misinformation. To support this education campaign, AI information should be clearly disclaimed, and source information should be provided, particularly when related to high-stakes sectors such as politics or healthcare.

To foster a culture of ethical AI development, developers must be encouraged to prioritise transparency and accuracy in the outputs, design and deployment of AI systems. AI systems positioned as ‘rebellious’ or experimental should be met with scrutiny, and preferably sandboxed before they are embedded in publicly available platforms.

As AI systems become integral to our daily lives, ensuring their accurate and reliable deployment is paramount. By mandating responsible AI development and promoting media literacy, social media platforms can navigate the complexities of the information age while upholding the values of democracy. The challenges presented by Grok underscore the need for stringent oversight in AI development. As AI continues to shape public discourse, ensuring the integrity of the information it generates is essential to preventing the erosion of social, electoral and democratic processes.

References

Announcing Grok. (2023, November 3). xAI. June 13, 2025, https://x.ai/news/grok

Aratani, L. (2024, August 5). Secretaries of state call on Musk to fix chatbot over election misinformation. The Guardian. June 13, 2025, https://www.theguardian.com/technology/article/2024/aug/05/elon-musk-harris-grok-misinformation

Brewster, J., & Rubinson, S. (2024, August 19). Grok AI’s New Image Generator Is a Willing Misinformation Superspreader. NewsGuard. May 21, 2025, https://www.newsguardtech.com/special-reports/grok-ai-new-image-generator-is-a-willing-misinformation-superspreader/

Chong, N. S. T. (2025, March 6). Grok 3’s Brush with Censorship: xAI’s “Truth-Seeking” AI. United Nations University. June 13, 2025, https://c3.unu.edu/blog/grok-3s-brush-with-censorship-xais-truth-seeking-ai

Conspiracy and toxicity: X’s AI chatbot Grok shares disinformation in replies to political queries. (2024, August 29). Global Witness. May 21, 2025, https://globalwitness.org/en/campaigns/digital-threats/conspiracy-and-toxicity-xs-ai-chatbot-grok-shares-disinformation-in-replies-to-political-queries/

Elon Musk’s Grok Is a Risky Experiment in AI Content Moderation. (2024, August 26). Daily Maverick. May 21, 2025, https://www.dailymaverick.co.za/article/2024-08-26-elon-musks-grok-is-a-risky-experiment-in-ai-content-moderation/

Griffith, M. (August 27 2024). X fixes AI chatbot after secretaries of state complained it spread election misinformation. Washington State Standard. August 27 2024, https://washingtonstatestandard.com/2024/08/27/x-fixes-ai-chatbot-after-secretaries-of-state-complained-it-spread-election-misinformation/?utm_source=chatgpt.com

Tisler, D., and Clapman, A.  (2024, August 7). Elon Musk’s Grok Spreads False Election Information. Brennan Center for Justice. May 21, 2025, https://www.brennancenter.org/our-work/analysis-opinion/elon-musks-grok-spreads-false-election-information

Santos, M. and Saric, I. (2024). Washington and other states ask Musk to curb election misinformation on X. Axios Seattle. August 5 2024, https://www.axios.com/local/seattle/2024/08/05/elon-musk-election-misinformation-x-washington?utm_source=chatgpt.com