Becoming an algorithmic problem: Resistance in the age of predictive technology

In John Green’s novel The Fault in Our Stars, the character Hazel describes falling in love “the way you fall asleep: slowly, and then all at once”. This is the way I think about the impact of our algorithmically recommended lives on liberal democracy. Each time we submit to the temptation of indulging in the familiar (or near familiar), we move one step closer to becoming illiberal subjects. This might seem like a drastic accusation for such small infractions. But indulging in the familiar can habituate us away from exploring new ideas. The result can be the death of liberal democratic institutions – slowly, then all at once.

Every time I engage with my ‘smart devices’, I am party to an unwritten algorithmic social contract. Much like the metaphorical social contract that citizens in liberal democracies have with their governments, much of my online cultural life is governed by a sociotechnical system that confers benefits and exacts costs. The terms of the relationship are simple: in exchange for relief from the ‘toil of choice’ in the age of endless possibility, I hand over my autonomy to the engagement algorithm.

This seems harmless. So what if Spotify knows what I like and gives me more of it? I’d suggest that it’s a question of ratios. What percentage of time do you spend engaging with the familiar versus exploring what you haven’t already developed an opinion about? In the world of reinforcement learning AI, this is the distinction between exploitation and exploration.

Although AI models are trained in a number of different ways, the more effective ones strike a balance between being allowed to explore, mistakes and all, and being forced to ‘exploit’ what they have already learned. Humans learn the same way. We strike a balance between doing what has worked for us in the past and trying new things. If we veer too far in either direction, we run the risk of either becoming so predictable that we don’t evolve or becoming so inchoate in our behaviour that we lack the familiarity and stability to work with others.
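To make the trade-off concrete, here is a minimal sketch of an epsilon-greedy ‘bandit’ agent, one common way this balance is handled in reinforcement learning. It is an illustration only, not a method from the book: the payoff probabilities and the exploration rate are invented for the example.

```python
import random

# Epsilon-greedy sketch of the explore/exploit trade-off.
# The payoff rates and EPSILON are illustrative assumptions.

REWARD_PROBS = [0.2, 0.5, 0.8]   # hidden payoff rate of each choice
EPSILON = 0.1                    # fraction of the time we explore at random

counts = [0] * len(REWARD_PROBS)    # how often each choice has been tried
values = [0.0] * len(REWARD_PROBS)  # running estimate of each choice's payoff

def pick_arm():
    """Mostly exploit the best-known choice; occasionally explore a random one."""
    if random.random() < EPSILON:
        return random.randrange(len(REWARD_PROBS))            # explore
    return max(range(len(REWARD_PROBS)), key=lambda i: values[i])  # exploit

for step in range(10_000):
    arm = pick_arm()
    reward = 1.0 if random.random() < REWARD_PROBS[arm] else 0.0
    counts[arm] += 1
    # incremental average: nudge the estimate towards the observed reward
    values[arm] += (reward - values[arm]) / counts[arm]

print("estimated payoffs:", [round(v, 2) for v in values])
```

Set EPSILON to zero and the agent locks onto whichever option happened to look best first and never discovers the better ones: pure exploitation, and exactly the kind of predictability at issue here.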


But this is not the calculus of tech companies towards you. To Google or Meta, too much exploration makes you a problem. Someone who is constantly exploring creates problems for their classification algorithms. If you like everything, how can you be marketed to? If your patterns are so unpredictable that you are ‘statistical noise’ to the model, then you are an algorithmic problem and a bad neoliberal subject.

In my forthcoming book, You Must Become an Algorithmic Problem, I explore the question of whether liberal democracies depend on citizens who explore rather than succumb to predictability. For centuries, rulers have understood that control depends on predictability. In his 1998 book Seeing Like a State, James C. Scott contends that nation states in modernity seek to make their citizens legible to better govern them. He argued back then that states encountered technical challenges in understanding the day-to-day behaviour of their citizens. Twenty-five years later, these challenges are diminished. We live in an ordinal society where our every action can be captured, categorised and scored.

The danger of such a society is the inability to explore. Liberal democracy depends on conjecture and refutation, the habits of offering opinions about public life to fellow citizens with the possibility that one might be wrong and they might modify their beliefs accordingly. Such habits require an openness to exploration. But our algorithmic economy disincentivises exploring so widely that we become unpredictable consumers.


Do we want to ‘become algorithmic problems’? Or does our contract offer the comfort and convenience of curating our own lives so completely that we see no reason to explore, only to exploit? This is the core question of our age. If we choose the latter, we risk becoming diminished people who cease defending the rights of others and are content to pursue the familiar, roused only to defend it against the threat of perceived others. If we choose the former, we can renegotiate the algorithmic contract to ensure that it gives us the tools both to explore and to exploit.

José Marichal is Professor of Political Science at California Lutheran University.

You Must Become an Algorithmic Problem by José Marichal is available on Bristol University Press for £24.99 here.

Bristol University Press/Policy Press newsletter subscribers receive a 25% discount – sign up here.

Follow Transforming Society so we can let you know when new articles publish.

The views and opinions expressed on this blog site are solely those of the original blog post authors and other contributors. These views and opinions do not necessarily represent those of the Policy Press and/or any/all contributors to this site.

Image credit: Yudha Aprilian via Unsplash

