
Schroeder speaking at the CJID conference in Abuja, Nigeria
by Zara Schroeder
In late November 2024 I took part in the Centre for Journalism Innovation and Development (CJID) conference in Abuja, Nigeria; the theme was the media powering Africa’s development. The conference brought together experts, journalists, and policymakers to discuss pressing topics in the media and technology sectors. I spoke on a panel on AI and media democracy. My panel focused on how AI technologies are reshaping journalism, content creation, and information dissemination, while also covering matters of trust, misinformation, and accountability.
My contribution during the session illuminated the complexities surrounding AI’s integration into media and governance in Africa, especially concerning the ethical dilemmas and regulatory concerns it raises. I highlighted the IDRC Resisting Information Disorders in the Global South project, which is being led by Herman Wasserman and his team at Stellenbosch University. I elaborated on the work being done by Scott Timcke and myself, where we focus on the role of AI in the spread of information disorders on the African continent. Our work has focused on how the current design of engagement platform markets enables the spread of online gender-based violence.
AI’s Impact on Media Consumption in Africa
My research at Research ICT Africa has revealed changes in media consumption patterns across the continent due to AI. I highlighted how AI is accelerating the spread of misinformation, a trend that is not just prevalent in Africa but globally. Governments, I noted, are increasingly relying on digital technologies to solidify their legitimacy while simultaneously viewing these technologies as threats to their control over information. This dynamic can restrict citizens’ ability to freely engage in political discourse and contest power.
“AI is contributing to the proliferation of misinformation, particularly around elections and public health crises, where the spread of false narratives can destabilise democratic processes,” I said. I emphasised that governments must find a balance between using AI for governance and protecting citizens’ rights to access accurate, reliable information.
Collaboration Between Governments and Media Organisations
A key focus of the discussion was the collaboration needed between African governments and media organisations to address the ethical and regulatory challenges posed by AI. I emphasised the importance of developing clear, inclusive frameworks for AI that prioritise transparency, accountability, and fairness. Governments should engage with media stakeholders to co-create policies that ensure algorithms are not reinforcing biases or inequalities.
One example of this is South Africa’s Protection of Personal Information Act (POPIA), which I suggested could be adapted to regulate how AI companies handle personal data. Additionally, media organisations have a critical role in educating the public on AI’s impact, empowering citizens to navigate the complexities of digital content and protect themselves from misinformation.
Fellow panelist, Saadatu Hamu Aliyu, Managing Partner at Hamu Legal, raised concerns about the role of governments in regulating AI. She explained, “AI regulation will eventually become more sector-specific, without stifling its development. While steps are being taken globally to regulate AI, Africa, and particularly Nigeria, lag behind. In Nigeria, we need to focus on how journalists use AI and establish ethical codes for its application. Additionally, for democratic processes, we must consider how AI impacts electoral processes, especially concerning deepfakes. Nigeria lacks strong AI regulations, but we must be mindful of how AI is used in news generation, as the results may not always be accurate. It’s crucial to establish responsible AI use and hold ourselves accountable.”
Combating AI-Driven Misinformation
The potential of AI to generate and amplify misinformation was another critical concern discussed during the panel. I proposed that governments and media outlets collaborate to create ethical guidelines governing AI-generated content. One suggestion was to label AI-generated news or media explicitly, enabling consumers to differentiate between human-created and machine-generated content.
Media literacy programmes also emerged as a key strategy for mitigating the risks of AI-driven misinformation. I called for investment in public education campaigns that would equip citizens to critically assess news content, particularly content influenced by AI.
Protecting Media Integrity and Democracy
As AI technologies become further integrated into media systems, ensuring their responsible use is crucial to safeguarding democratic values. AI systems that prioritise sensational content, for example, can fuel political polarisation and deepen societal divisions. AI-powered algorithms, when unchecked, can contribute to social unrest by manipulating public opinion or fueling ethnic tensions, as seen in some African elections.
Transparency in AI decision-making is paramount. Governments can help by requiring AI systems, especially those used in media, to disclose their algorithms and data sources. This ensures that the public can understand why certain content is prioritised, fostering greater trust in the media and its role in democracy.
Harnessing AI for African Needs
Despite these challenges, AI offers significant opportunities to address Africa's unique needs. I called for a focus on developing AI solutions tailored to the continent, from agriculture to healthcare. For example, AI applications that improve food security or enhance healthcare access can significantly benefit African countries. Media organisations have a role to play in promoting locally developed AI solutions by showcasing these innovations and fostering collaboration across sectors.
African governments can also work together through regional platforms like the African Union (AU) and the Economic Community of West African States (ECOWAS) to harmonise AI policies. This regional cooperation would help ensure that AI development aligns with the continent’s social and cultural values while avoiding exploitation by foreign tech giants.
Idris Akinbajo, Managing Editor at Premium Times, discussed how his colleagues in Nigeria are utilising AI in journalism: “Journalists will always be essential in any democratic society, but the ones who will thrive amidst technological changes are those who master reporting and use it judiciously. They must leverage AI tools to enhance their work.” He emphasised the importance of evolving with technology: “In today’s media landscape, if you don’t adapt, you’ll be left behind. AI should make our lives and jobs easier. Newsrooms must maximise AI’s potential while ensuring all work is thoroughly reviewed.”
Ensuring Equal Access to AI’s Benefits
Finally, I addressed the potential risks of AI further exacerbating inequalities across Africa. While AI offers vast potential to improve media landscapes, its benefits could be unevenly distributed, particularly in rural areas with limited access to technology. I stressed the need to address the digital divide, ensuring that AI technologies are accessible to all, regardless of their geographic location or socio-economic status.
AI has the potential to democratise access to information, but that potential can only be realised if its deployment is equitable. Initiatives to improve digital literacy and infrastructure in rural communities are essential to ensuring that AI does not further marginalise vulnerable populations.
By addressing the challenges posed by AI and seizing the opportunities it presents, African governments, media organisations, and civil society can work together to ensure that AI supports, rather than undermines, democratic values and media freedom. The future of media in Africa depends on our collective ability to navigate the intersection of technology and democracy with transparency, accountability, and inclusivity.