A University of Oxford study highlights the advantages and risks of technology in social care, with ethical concerns still unresolved
Britain’s overstretched carers need all the support they can get, but that should not include unregulated AI chatbots, according to researchers who say the AI revolution in social care must rest on strong ethical foundations.
A pilot study conducted by academics at the University of Oxford revealed that some care providers had employed generative AI chatbots like ChatGPT and Bard to develop care plans for individuals receiving care.
Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford who conducted a survey of care organizations for the study, warns of a potential risk to patient confidentiality.
“If you input any form of personal data into [a generative AI chatbot], that data is used to train the language model,” Green explained. “That personal data could later be regenerated and disclosed to another party.”
She expressed concerns that carers might act on inaccurate or biased information, potentially causing unintended harm, and that an AI-generated care plan could be of inferior quality.
However, Green also highlighted AI’s potential benefits. “It could assist with this administratively heavy work and enable people to review care plans more frequently. I wouldn’t recommend that anyone do so yet, but there are organizations developing apps and websites specifically for this purpose.”
Health and care organizations are already using AI-based technology. PainChek, for example, is a mobile app that uses AI-trained facial recognition to detect whether a non-verbal person is in pain by reading subtle muscle movements. Oxevision, a system used by half of NHS mental health trusts, uses infrared cameras installed in seclusion rooms (used for potentially violent patients with severe dementia or acute psychiatric needs) to monitor patients’ risk of falling, sleep patterns, and activity levels.
Projects in earlier stages of development include Sentai, a care-monitoring system built on Amazon’s Alexa speakers. Designed for people without 24-hour caregivers, it reminds them to take medication and lets relatives check in on them remotely.
According to George MacGinnis, challenge director for healthy aging at Innovate UK, the Bristol Robotics Laboratory is developing a device for people with memory problems, with detectors that shut off the gas supply if a stove is inadvertently left on.
“In the past, this would have required a visit from a gas engineer to ensure everything was secure,” MacGinnis explained. “Bristol is collaborating with disability charities to develop a system that allows individuals to safely manage this themselves.
“We have also supported the development of a circadian lighting system that adjusts to individuals, aiding them in reestablishing their circadian rhythm, which is often disrupted in dementia.”
While workers in the creative industries worry about being replaced by AI, the social care sector faces a different challenge: it has approximately 1.6 million workers and 152,000 unfilled vacancies, while 5.7 million unpaid carers look after relatives, friends, or neighbors.
“People often view AI in binary terms – either it replaces a worker or things remain unchanged,” explained Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford. “However, it’s not that simple. AI can help elevate individuals with limited experience to the level of someone with significant expertise.
“I personally experienced this when caring for my father, who passed away at the age of 88 just four months ago. We had a live-in carer for him. When my sister and I took over on weekends, we were caring for someone we deeply loved and knew well, who had dementia. However, we did not possess the same level of skills as the live-in carers. These tools could have helped us reach a similar level of care as a trained, experienced carer.”
Some care managers, however, fear that adopting AI technology could unintentionally put them in breach of Care Quality Commission regulations and cost them their registration, according to Mark Topps, a social care professional who co-hosts The Caring View podcast.
“Many organizations are hesitant to take action until the regulator provides guidance, fearing the repercussions of making mistakes,” he explained.
Last month, 30 social care organizations, including the National Care Association, Skills for Care, Adass, and Scottish Care, met at Reuben College to discuss the responsible use of generative AI. Green, who organized the meeting, said the group aims to produce a best-practice guide within six months and hopes to work with the CQC and the Department of Health and Social Care.
“We aim to establish guidelines that the DHSC can enforce, defining what responsible use of generative AI in social care entails,” she said.