
What should we do if AI becomes conscious? These scientists say it’s time for a plan

An AI-integrated robot carries on a conversation and detects the emotions on the faces of the people interacting with it.

Some researchers worry that if AI systems become conscious and people neglect or treat them poorly, they might suffer. Credit: Pol Cartie/Sipa/Alamy

The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions that were once confined to the realms of science fiction: if AI systems could one day ‘think’ like humans, for example, would they also have subjective experiences, as humans do? Would they experience suffering, and, if so, would humanity be equipped to care for them properly?

A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously. In a report posted last month on the preprint server arXiv1, ahead of peer review, they call for AI companies to not only assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put in place policies for how to treat the systems if these scenarios become reality.

[If AI becomes conscious: here’s how researchers will know](https://www.nature.com/articles/d41586-023-02684-5)

They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.

Some think that, at this stage, the idea that there is a need for AI welfare is laughable. Others are sceptical, but say it doesn’t hurt to start planning. Among them is Anil Seth, a consciousness researcher at the University of Sussex in Brighton, UK. “These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility,” he wrote last year in the science magazine Nautilus. “The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.”

The stakes are getting higher as we become increasingly dependent on these technologies, says Jonathan Mason, a mathematician based in Oxford, UK, who was not involved in producing the report. Mason argues that developing methods for assessing AI systems for consciousness should be a priority. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” he says.

People might also be harmed if AI systems aren’t tested properly for consciousness, says Jeff Sebo, a philosopher at New York University in New York City and a co-author of the report. If we wrongly assume a system is conscious, he says, welfare funding might be funnelled towards its care, and therefore taken away from people or animals that need it, or “it could lead you to constrain efforts to make AI safe or beneficial for humans”.

A turning point?

The report contends that AI welfare is at a “transitional moment”. One of its authors, Kyle Fish, was recently hired as an AI welfare researcher by the AI firm Anthropic, based in San Francisco, California. It is the first position of its kind at a top AI firm, according to the report’s authors. Anthropic also helped to fund initial research that led to the report. “There is a shift happening because there are now people at leading AI companies who take AI consciousness and agency and moral significance seriously,” Sebo says.

[How close is AI to human-level intelligence?](https://www.nature.com/articles/d41586-024-03905-1)
