In just three years since the launch of ChatGPT, AI has surged into the global consciousness as the next major technological transformation of global capitalism, rapidly unsettling and restructuring all aspects of social, economic and political life.
While firms, governments and users have each embraced many of the opportunities for improved efficiency, productivity and access to information that AI promises to deliver, concerns nevertheless remain that the unregulated introduction of AI brings with it a number of major risks. AI promises spectacular benefits, such as curing cancer and ending sluggish economic growth, but it also creates significant (and arguably far greater) risks. The so-called ‘godfather of AI’, Geoffrey Hinton, has already pointed to an increased likelihood that AI will wipe out humanity in the next 30 years.
At a recent conference that we organised at the University of Birmingham, a range of speakers highlighted these risks, including mass redundancies, depleted water supplies, a worsening of the climate crisis, the detrimental impact on the design and delivery of public services, a flooding of the internet with yet more mis/dis-information, the use of AI to power genocidal killings by states, and the damaging effects that AI is having on teaching and assessment in universities.
Don’t mourn, regulate: The promises of the EU’s AI Act?
The degree to which AI is able to harm our society and environment will depend on how effectively we can organise in opposition to those harms. This is no straightforward task. There already exists a widespread sense of inevitability that AI is simply something that is going to happen to us, like it or not. As we have shown in our recent research, however, there are a range of different ways in which AI can be – and is being – contested.
Perhaps the most common attempt to limit the harmful impact of AI has been the effort to tame it through legal regulation. Many are pinning their hopes on the EU’s new AI Act, adopted in 2024 and being implemented incrementally between now and 2030. The next stage of implementation comes into force on 2 August 2025 and places a number of obligations on firms using AI, including transparency around their training and testing processes, a requirement to mitigate major systemic risks, and an obligation to abide by national copyright laws.
As with all forms of regulation, however, most commentators agree that the real question is how effectively the new rules will be enforced. The concern for many is that they will amount to little more than ‘ethics-washing’, or will be watered down or repealed once the Trump administration or the pressure of global economic competition demands it. The experience of earlier attempts at regulation is not reassuring: the EU has already paused its investigations into X as a result of pressure arising from trade talks with the US.

This reflects the colossal, and arguably overwhelming, concentration of technological power and ‘compute’ in the hands of a remarkably small group of people. Sam Altman, Peter Thiel, Elon Musk, Mark Zuckerberg and Jensen Huang, through their ownership of and investment in AI, together hold more technological wealth and power than most of the world’s population combined. Attempts to limit this power have, as a result, faced considerable pushback from Big Tech. The Code of Practice drawn up to enable firms to meet the AI Act’s requirements has already been heavily shaped by Big Tech lobbying. Despite this watering down, Meta has pledged not to sign it on the grounds that it still goes too far, and the heads of 44 large European firms have called on the European Commission to withdraw the Act for fear that the regulations will hamper European attempts to keep up with the US and China.
The European Union’s announced plans to triple its data processing capacity over the next 5–7 years raise further questions over whether it will discard regulation in the attempt to keep up in the global economic race to develop AI. Indeed, those plans are a perfect illustration of how the pressure of global competition pushes for ever more AI infrastructure, regardless of its disastrous environmental impact. There is a clear risk that the AI Act – even if it survives the intense pressure of Big Tech lobbying for its repeal – will leave the most problematic aspects of AI unchallenged.
Overcoming the inevitability narrative: From regulation to direct action?
We remain concerned, therefore, that the EU’s AI Act will fail to tether AI to the degree necessary for it to be rendered safe for human consumption.
This doesn’t mean, however, that we believe the unfettered rollout of AI is inevitable – even if it currently remains the most likely scenario. Beyond regulation, there are a range of alternative ways in which AI can be, and is being, contested. These include more individualised forms of escape, such as simply ‘switching off’ from AI technologies and refusing to engage. Recent reports indicate that users are increasingly disengaging from Duolingo as a result of its incorporation of AI.
Legal challenges have also been pursued by authors opposing the use of their work by AI models. While two recent US federal court rulings found the use of published texts to train AI to be within the law on the grounds of ‘fair use’, one of those rulings also left open the possibility that a future legal challenge might be successful on the grounds of ‘market dilution’.
Other forms of resistance have also been mounted in recent years. Hollywood screenwriters famously won, through sustained strike action in 2023, a commitment from studios not to use AI to replace scriptwriting. In the UK, musicians have lobbied Parliament over copyright in music and how their work feeds into AI training – a practice some argue amounts to piracy.
More direct forms of action have also been witnessed. Perhaps the most effective instance has been in San Francisco, where the Safe Street Rebel collective has sought to challenge the (in its view unsafe) introduction of AI-based driverless cars by placing traffic cones on top of the cars, which renders them unable to operate. Similarly, in Chile, the community group Mosacat launched a sustained protest campaign against a new data centre intended to power AI, highlighting the massive consequences it would have had for local water supplies.
Ultimately, any genuine curtailment of AI will require a concerted effort by communities to mobilise against its imposition. It will also likely mean creating new kinds of community, in which we don’t simply hand our decision-making capabilities to a profit-driven tech-bot, but instead deliberate carefully, at community level, over how we should organise our own societies. The more fundamental problem is that AI promises the opposite: a quick technological fix that removes the need for any human or social deliberation whatsoever – with or without the EU’s attempt at regulation. It is on this basis that we call for greater attention to the contestation of AI, and for more sustained consideration of alternatives.
David Bailey is an Associate Professor in Politics at the University of Birmingham.
Masoumeh Iran Mansouri is an Associate Professor in Computer Science, also at the University of Birmingham.
How to be ‘anti-AI’ in the 21st century: overcoming the inevitability narrative, by Masoumeh Iran Mansouri and David J. Bailey, is available to read open access in Global Political Economy on Bristol University Press Digital.