wired.com

The Rich Can Afford Personal Care. The Rest Will Have to Make Do With AI

The burgeoning field of social-emotional AI is tackling the very jobs that people used to think were reserved for human beings—jobs that rely on emotional connection, such as therapist, teacher, and coach. AI is now widely used in education and other human services. Vedantu, an Indian web-based tutoring platform valued at $1 billion, uses AI to analyze student engagement, while a Finnish company has created “Annie Advisor,” a chatbot that works with more than 60,000 students, asking how they are doing, offering help, and directing them to services. Berlin-based startup clare&me offers an AI audio bot therapist it calls “your 24/7 mental health ally,” while in the UK, Limbic has a chatbot, “Limbic Care,” that it calls “the friendly therapy companion.”

The question is, who will be on the receiving end of such automation? While the affluent are sometimes first adopters of technology, they also know the value of human attention. One spring day before the pandemic, I visited an experimental school in Silicon Valley—one of a wave of new schools seeking to “disrupt” conventional education—where kids used computer programs for customized lessons in subjects from reading to math. Students there learn mainly from apps, but they are not entirely on their own. As the limitations of automated education became clear, this fee-based school has added more and more time with adults since its founding a few years ago. Now the kids spend all morning learning from applications like Quill and Tynker, then break into brief, small-group lessons on particular concepts, taught by a human teacher. They also have weekly 45-minute one-on-one meetings with “advisers” who track their progress but also make sure to connect emotionally.

We know that good relationships lead to better outcomes in medicine, counseling, and education. Human care and attention help people to feel “seen,” and that sense of recognition underlies health and well-being, as well as valuable social goods like trust and belonging. For instance, one study in the United Kingdom—titled “Is Efficiency Overrated?”—found that people who talked to their barista derived greater well-being benefits than those who breezed right by. Researchers have found that people feel more socially connected when they have deeper conversations and divulge more during their interactions.

Yet fiscal austerity and the drive to cut labor costs have overloaded many of the workers charged with forging interpersonal connections, shrinking the time they have to be fully present with students and patients. This has contributed to what I call a depersonalization crisis: a sense of widespread alienation and loneliness. US government researchers found that “more than half of primary care physicians report feeling stressed because of time pressures and other work conditions.” As one pediatrician told me: “I don’t invite people to open up because I don’t have time. You know, everyone deserves as much time as they need, and that’s what would really help people to have that time, but it’s not profitable.”

The rise of personal trainers, personal chefs, personal investment counselors, and other personal service workers—in what one economist has dubbed “wealth work”—shows how the affluent are fixing this problem, making in-person service for the rich one of the fastest-growing sets of occupations. But what are the options for the less advantaged?

For some, the answer is AI. Engineers who designed virtual nurses or AI therapists often told me their technology was “better than nothing,” particularly useful for low-income people who can’t catch the attention of busy nurses in community clinics, for example, or who can’t afford therapy. And it’s hard to disagree, when we live in what economist John Kenneth Galbraith called “private affluence and public squalor.”

Yet the contrast is sharp between how AI is used in the experimental school—nestled within an abundance of human attention—and how it is used in more deprived circumstances. In 2023, a Mississippi school district facing dire teacher shortages reported that students were learning geometry, Spanish, and high school science via a software program. But if students got stuck, reporters found, there was no human adviser on standby. Instead, the only option was to wait for the availability of a human instructor in the next town.

The concerns conventionally raised about AI are generally limited to privacy, bias, or job loss, and some companies in the socio-emotional AI space are working to address these worries. Hume AI, based in San Jose and New York and valued at $219 million, recently released technology that recognizes emotions from a user’s tone of voice; the tool is in use in hospitals to track patients’ mental health and in some new “AI companions.” Hume has also established a nonprofit, the Hume Initiative, to develop guidelines that “chart an ethical path for empathic AI,” focused on consent, equity, and transparency. But no one is talking about what happens when we limit human contact to those who can afford to pay a premium. Technology does not arrive on a blank slate; it intersects with existing inequalities, and in this case it amplifies the stratification of human connection. In 2025, the affluent will get their connective labor from humans. The rest will get theirs from a machine.
