Transforming Society ~ AI in care: Augmentation or depletion?

In the UK and further afield, we are witnessing increased enthusiasm among some policy makers, care sector stakeholders and the media for the ways in which artificial intelligence (AI) could enhance care provision. This sits within wider policy interest in AI: in January 2025, Keir Starmer launched the AI Opportunities Action Plan, which he hopes will ‘mainline… AI into the veins of this enterprising nation’. In 2019, £250 million was announced for the NHS ‘AI Lab’, whose remit includes social care (though this funding was subsequently reduced to £139 million in 2024). AI is already being used in care provision: ‘voice assistants’ provide reminders and advice; chatbots help people navigate services and triage requests for support; transcription software records assessments and then creates care plans; and predictive analytics analyse sensor data to detect changes in behaviour over time.

Yet the use of AI is often opaque. Some technologies that claim to use AI for autonomous functions require human supervision and decision making, while others underplay the AI capabilities they employ. This ambiguity reflects confusion about what AI really means. Functionally and broadly, AI is the ability of computers to emulate human thought and perform tasks using techniques such as machine learning, which enables systems to identify patterns in data and ‘learn’ without being explicitly programmed, and deep learning, a sub-category of machine learning that uses neural networks with many layers to model complex patterns in data. Fields of AI include Natural Language Processing, which uses machine learning to process text, and Generative AI, which produces new material and is used in chatbots such as OpenAI’s ChatGPT or Google’s Gemini.

When AI is advocated in speeches and policy documents, it is often aligned with New Public Management ideals of ‘economy’, ‘efficiency’ and ‘effectiveness’. In other words, the human labour AI requires is underplayed to ensure efficiency arguments land. Kate Crawford, in her book The Atlas of AI, provides a foundational critique of the perception of AI as ‘intelligent’, arguing that

AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational or able to discern anything without extensive, computationally intensive training with a large dataset or predefined rules and rewards.

Even without taking into account the embodied nature of AI, efficiency arguments in social care are countered by the nature of the work: care provision is more attuned to ‘nature’s time’, as opposed to ‘clock time’. Financial savings, like efficiencies, are also difficult to achieve, with the economic implications of using digital technology having been assumed rather than evidenced in care provision.

When it comes to the third ‘E’ of effectiveness, there is a risk that AI will render care a quantifiable, ‘datafied’ list of tasks to be completed. Furthermore, the hope that AI will improve effectiveness by predicting and preventing care needs from escalating is impeded by systemic issues in the sector. As noted by Glasby and colleagues, there are obstacles when introducing ‘prevention into an environment which is focused on dealing with crises’, with limited scope and resources for the necessary service redesign, training and culture change within care organisations.

More than economy, efficiency and effectiveness: What about ethics, equity and ecology?

In a recent review, we extend the argument to look at other ‘Es’ – ethics, equity and ecology. Starting with ethics, the emphasis on efficiency views care through a particular lens, aligned with ‘time and task’ approaches, where faster is better and ethical considerations are often overlooked. For instance, there are risks related to bias: rather than being ‘creative’, AI uses existing data and looks backwards, potentially reproducing bias and errors from the past – exacerbating discrimination in the present. AI also brings with it questions of informed consent, as understanding the complexity of AI and the implications of its use requires expertise.

‘Ethically questionable’ partnerships in this space have not fostered an environment of transparency. For example, many large language model producers refuse to provide information about what data their models are trained on, sometimes including copyrighted works. Further concerns arise when public money – whether through local procurement or national pilots – funds opaque, for-profit technology businesses.

Another important ‘E’ is equity, or fairness. Equity questions relate to the conditions of human labour that powers AI (as highlighted by Kate Crawford). These questions can be understood in terms of depletion. For Shirin Rai, in her recent book Depletion: The Human Cost of Caring, ‘depletion is harm’ and ‘measurable deterioration in the health and well-being of individuals and in the sustainability of households and communities’.

As an example, the 2021 White Paper People at the Heart of Care contained a description of Charlotte (a pseudonym):

Following a stroke, Charlotte has limited mobility and often finds it difficult to make herself understood… Charlotte is also involved in pilot-testing a technology that teaches Amazon Alexa to recognise her speech so that she can control lamps, technology and a tabletop fan with either her voice or the app’s touch interface … These communication devices have given Charlotte access to, and control over, her environment, which she hasn’t had in years.

Charlotte’s experience of AI improving her care contrasts with the experience of a young person working in a factory managed by Foxconn (Amazon’s supplier):

Xiao Fang, a 17-year-old studying computing, started work on the Amazon Echo production line last month and was given the job of applying a protective film to about 3,000 Echo Dots each day. She told researchers that her teacher initially said she would be working eight hours a day, five days a week, but that this had since changed to 10 hours a day (including two hours of overtime) for six days a week.

There are thus trade-offs – largely rendered invisible to end users of AI systems – between augmentation in one setting and depletion in another. Another form of depletion, beyond device manufacturing, lies in the labour which goes into data cleaning, coding, and classifying content. These forms of ‘ghost work’ involve poor and exploitative employment practices, kept out of sight to maintain the ‘façade of technological efficiency’.

A final and related ‘E’ is ecology, and the environmental resources that are depleted through the creation and use of AI. Aside from the raw materials required for the physical infrastructure, processes for storing and processing the data are energy-intensive and require vast amounts of fresh water in cooling systems. The World Economic Forum estimates that a ChatGPT search uses ten times more energy than a simple Google search. Resource use is geographically uneven: data centres accounted for 21 per cent of the energy used in Ireland, compared with 6 per cent in America. This depletion repeats patterns of colonial history, in terms of where extraction of material resources and exploitation of people occur. A recent investigation in the Guardian revealed that data centres are using water from the world’s driest areas. In 2022, Google operated its data centre in Finland on 97 per cent carbon-free energy; that figure drops to 4–18 per cent for its data centres in Asia.

Applying AI in care contexts thus not only fails to live up to promises of the New Public Management ‘Es’, but also brings with it issues of ethics, equity and ecology. When we presented these considerations at the University of Oxford’s Perspectives on Care seminar series, some audience responses converged around the question of ‘what is the solution?’ Should care be treated as unsuitable for AI while AI expands elsewhere? Should we regulate or nationalise AI companies and markets? Some projects take a solutions-based approach, for example, the Oxford Project: The Responsible Use of Generative AI in Social Care, which aims to understand the ‘ethical risks and implications’ of AI. Yet in looking for solutions regarding ethics, questions of depletion and negative impacts on equity will be difficult to overcome, given the current landscape of AI development, use and governance. These are questions that relate not only to AI and not only to social care, but to the systems of global production and profit extraction in which industries, sectors and our lives are situated.

Kate Hamblin is Director of the Centre for Care whose research explores care and technology.

Grace Whitfield is a Researcher in the Centre for Care at the University of Sheffield, with a focus on digitalisation and political economies of care.

James Wright, Visiting Lecturer at Queen Mary University of London, is the author of Robots Won’t Save Japan: An Ethnography of Eldercare Automation (Cornell University Press, 2023).

Acknowledgements:

This commentary draws on:

  • Our chapter: Whitfield, G., Wright, J. and Hamblin, K. (2024) ‘AI in care: a solution to the “care crisis” in England?’, in Paul, R. et al. (eds) Handbook on Public Policy and Artificial Intelligence. Edward Elgar Publishing, 383–396.
  • A recent seminar, ‘AI in Care: Augmentation or Depletion?’ Department of Social Policy and Intervention, University of Oxford Hilary Term Seminar Series 2025: Perspectives on Care (co-convened by Dr Rossella Ciccia and Prof. Mary Daly).

The Centre for Care is funded by the Economic and Social Research Council (ESRC), award ES/W002302/1, with contributions from the National Institute for Health and Care Research (NIHR) (Department of Health and Social Care). The views expressed are those of the author(s) and not necessarily those of the ESRC, UKRI, NHS, the NIHR or the Department of Health and Social Care.

Care Technologies for Ageing Societies, edited by Kate Hamblin and Matthew Lariviere, is available to read open access on Bristol University Press Digital.


The views and opinions expressed on this blog site are solely those of the original blog post authors and other contributors. These views and opinions do not necessarily represent those of the Policy Press and/or any/all contributors to this site.

Image credit: Alexander Sinn via Unsplash
