By Scott Timcke
AI systems can now generate persuasive narratives, and this capability intersects with projects that make human behaviour predictable by analysing digital footprints. The combination creates a major challenge for people trying to separate fact from fiction, especially when finely tuned narratives can be crafted to exploit cognitive biases.
That is a problem for democracy because it undermines prospects for self-determination. These high stakes underscore the importance of fostering a nuanced understanding of AI systems and their potential impact on democratic politics.
The rapid diffusion of AI systems is posing challenges to democratic societies across the world. What seems like a lifetime ago, Shoshana Zuboff wrote about how firms in capitalist economies are implicitly incentivised to create and surveil datafied subjects. This profit-seeking routinely led to the systematic breach of privacy rights. Now the whole enterprise of data extractivism has been revolutionised by generative AI and other refinements to business processes afforded by digital technology. Unless there is a similar revolution in maintaining, extending and promoting democracy, we may be left with a token version instead.
Engagement and legibility
There are several components to this problem. In a continuous cycle of data-driven interactions, user-generated content fuels the evolution of large language models. This content is broken down into tokens, linguistic building blocks that are then assembled into expansive datasets, providing a repository for deep learning algorithms. To maintain the momentum of this cycle, recommendation algorithms act as key architects of the attention economy, priming users to continue inputting data. By analysing user behaviour and preferences, these algorithms curate personalised content streams, fostering a sense of tailored experience. With each interaction, an ever-expanding pool of data refines the models’ understanding of human language and its nuances.
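To make the tokenisation step concrete, here is a minimal Python sketch. It is illustrative only: the posts and word-level vocabulary are invented for demonstration, whereas production systems learn subword vocabularies (for example, byte-pair encoding) over vast corpora.

```python
# Illustrative only: a toy word-level tokeniser showing how user text
# becomes the integer IDs that feed a language model's training corpus.
# Real platforms use learned subword vocabularies (e.g. byte-pair encoding).

def build_vocab(posts):
    """Assign an integer ID to every distinct word seen in user content."""
    vocab = {"<unk>": 0}
    for post in posts:
        for word in post.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def tokenise(text, vocab):
    """Map a post to token IDs; unseen words fall back to <unk>."""
    return [vocab.get(word, 0) for word in text.lower().split()]

posts = ["great product, would buy again", "terrible service would not buy"]
vocab = build_vocab(posts)
dataset = [tokenise(p, vocab) for p in posts]  # the 'repository' for training
print(dataset)
```

Even this toy version shows the essential move: everything a user writes is converted into machine-readable material for further training.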
Next, platforms no longer prioritise subjective exploration in and of itself; instead, AI systems seek to control user behaviour by moulding users into predictable subjects. This predictability provides the advertising industry with fine-tuned personalised appeals that resonate with pre-profiled audiences. Predictability also validates models by boosting click-through rates, further incentivising platforms to steer users towards things they are conditioned to like. The goal of this system is to make people legible to AI systems, ready to hand for use in other scenarios.
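As a rough illustration of how predicted engagement steers what users see, consider the following sketch of a ranking step. The feature names, weights and profiles here are hypothetical; real platforms learn these probabilities from billions of logged interactions.

```python
# Illustrative only: rank candidate items by predicted click-through rate,
# so users are steered towards what their profile says they already like.

from math import exp

def predicted_ctr(user_profile, item_features, weights):
    """Toy logistic model: overlap between profile and item -> probability."""
    score = sum(weights.get(f, 0.0) * user_profile.get(f, 0.0)
                for f in item_features)
    return 1.0 / (1.0 + exp(-score))

def rank_feed(user_profile, candidates, weights):
    """Order items so the most 'clickable' content appears first."""
    return sorted(candidates,
                  key=lambda item: predicted_ctr(user_profile,
                                                 item["features"], weights),
                  reverse=True)

# Hypothetical pre-profiled user and candidate items
user = {"sport": 0.9, "politics": 0.2}
weights = {"sport": 2.0, "politics": 1.5}
items = [{"id": "a", "features": ["politics"]},
         {"id": "b", "features": ["sport"]}]
print([item["id"] for item in rank_feed(user, items, weights)])  # ['b', 'a']
```

The design choice to sort by predicted click-through rate is precisely what rewards predictability: the better profiled a user is, the more confidently the system can serve content they are conditioned to engage with.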
Lastly, amid growing competition, economic uncertainty and the pressure to collect even more data without many countervailing restraints, firms are deploying AI systems to try to retain or increase market share. In line with one of the central tenets of capitalism, ‘to accumulate or be accumulated’, AI is now the latest terrain in this struggle. As AI systems have become entrusted with decision making through social sorting, there is value in critically re-examining the sociopolitical dilemmas that arise from their near-ubiquitous use.
Deep learning about market forces
One primary goal of model training is to reduce the error rate of prediction. One way to improve a model’s predictive success is to use recommendation algorithms that cluster users into discrete groups, which in turn can be channelled by choice architecture. These systems strategically form groups while normalising a culture of prediction within firms and institutions. Data extractors and intermediaries cater to clients, who include advertisers but also state security forces. Consequently, they exert significant control over the digital ecosystems people navigate, subtly shaping choices and experiences to make actions more easily anticipated and monetisable. In effect, social relations are mediated by computer engineering and compute power.
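A stripped-down sketch of the clustering move described above, assuming invented two-dimensional ‘behaviour’ features; real systems embed users in far higher-dimensional spaces and use more sophisticated methods than plain k-means.

```python
# Illustrative only: group users into discrete clusters from behavioural
# features, the precondition for channelling each group through tailored
# 'choice architecture'. Features and starting centroids are invented.

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each user to the nearest centroid, then
    move each centroid to the mean of its assigned users."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical users: (hours online per day, share of political content viewed)
users = [(1.0, 0.1), (1.2, 0.2), (6.0, 0.8), (5.5, 0.9)]
centroids, clusters = kmeans(users, centroids=[(0.0, 0.0), (6.0, 1.0)])
print(clusters)  # two discrete, separately targetable groups
```

Once formed, each cluster can be addressed with its own choice architecture, which is what makes the grouping commercially valuable.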
While the efficacy of online advertising might not always live up to its hype, these systems are steadily improving. The algorithmic gaze promises ever-finer control over the attention economy. People’s lives become managed experiences, optimised for efficient data extraction. For AI firms whose revenue depends on platform-mined data, the predictable user is highly desirable for the commodification of audiences.
From deliberation to reaction
Under the trappings of personalisation, market forces cluster, prime and influence consumers. After more than a decade of algorithms optimised for engagement, reactionary actors have learned how to profitably gamify their content, so that some users get caught up in social media rabbit holes that ultimately promote misogyny, racism and other reactionary ideologies. The preconditions created by AI and algorithms are leveraged to make politics more deterministic, sorting people into clusters from which they come to see their fellow citizens as enemies, forever incommensurate and at a distance. Cycles of reaction leave little room for collective deliberation, messy as it is.
The demand for algorithmic optimisation encourages people to be less thoughtful, less reflective and more dogmatic. The new normal renders Rawlsian notions of public reason naive. Yet issues of empirical relativism remain: how can democratic citizens discern between narratives when those narratives are generated by AI systems? This concern must be addressed to ensure the integrity of democratic processes in a world shaped by AI systems. The phenomenon transcends the commodification of human labour or attention, leading people down a path where they become more alienated from understanding themselves, their communities and their societies.
Connecting scientific and sociopolitical issues
The adverse effects of normalising the treatment of people as engineering problems are well understood by scholars in the humanities. In her 2020 book Cloud Ethics, Louise Amoore discusses how AI can be reorientated to create more possibilities, rather than merely to decide more efficiently within the status quo. Jennifer Forestal’s Designing for Democracy and Kate Crawford’s Atlas of AI focus on how current algorithmic designs encode and reproduce institutional discrimination that undermines political equality. This scholarship suggests the need for new institutions that prioritise human agency, inclusivity and ethical data practices over blind efficiency. This is the urgent challenge.
My recent book, Algorithms and the End of Politics, tackles these and other fundamental issues. I discuss how the seemingly neutral logic of algorithmic reasoning can profoundly bias real-world decisions, echoing the troubling history of systems analysis in public policy. Then as now, the fixation on maximising efficiency often closes off democratic deliberation, sacrificing public agency and participation for the elusive goal of apparently rational results. Additionally, the enormous profits in this sector create vast wealth inequalities, which in turn create uneven political contests.
Scholarly books like these demonstrate the urgent need to grasp the implications of people being profitably reduced to algorithmic problems to be efficiently manipulated. These automated systems not only diminish individual autonomy but also perpetuate inequalities by replicating biases baked into their training data. This is not a mere technical glitch; it is a stark reminder of how AI can entrench existing power structures.
While humanities scholars are connecting scientific and sociopolitical issues, public engagement with these complex topics remains a challenge. Although there is an appetite to acquire a deeper sociological understanding of science and technology, this eagerness sometimes stops short of engaging with the complexities of capitalism and how it inherently creates inequality and material deprivation. This limitation creates fertile ground for misunderstandings to flourish and for weak policy to be crafted.
Democracy is not about confirming the status quo
State and market forces often incentivise the perception of subjects as ‘algorithmic problems’ that require solutions. So, what is the way forward? If we are to accept Amoore’s challenge to move beyond AI systems that merely confirm existing paradigms and instead embrace those that expand possibilities, we must scrutinise the motivation to collect more data in the first place. Many promising ideas are undermined by poor data, as democratic theorist Adam Przeworski has explained. And so there is value in thinking more clearly before initiating data-collection exercises, while also minimising the influence of those with a limited understanding of contemporary epistemology and sociotechnical systems when studies are designed and results actioned.
If self-determination is the lodestar, then it is crucial to identify the institutions and regulations that can provide a counterweight to the inclination towards predictability. Regulations prohibiting automated predictive systems already exist in some jurisdictions. But until there is global agreement, data can simply be transferred elsewhere for those kinds of calculations to occur. The global governance of AI urgently requires a direction that is influenced by humanistic enquiry. This direction should be firmly grounded in the assessment of human flourishing that is not predominantly driven by market dynamics. Unless this occurs, we will be left with the tokens of AI conditioning political culture.