Don’t ask AI to make life and death decisions

Sometimes, AI can actually be a matter of life and death.

Last year, a Belgian man tragically took his own life after allegedly being persuaded to do so by a chatbot. In the Netherlands, there is currently a debate about whether artificial intelligence (AI) should be allowed to support decisions about physician-assisted suicide. Elsewhere, researchers are using AI to predict how likely terminal cancer patients are to survive the next 30 days. This could allow patients to avoid unpleasant treatments in their final weeks.

I have witnessed firsthand the tendency to ask AI questions about life and death. When a professor at a university I attended learned that I was a computer scientist, she immediately asked, “So, can your algorithms tell me when the best time to kill myself would be?”

The woman was not in danger of harming herself. Instead, she feared she might develop Alzheimer’s disease in old age and longed for an AI model that could help her determine the optimal time to end her life before cognitive impairment rendered her unable to make important decisions.

Luckily, I don’t get requests like that very often. But I do meet a lot of people who hope that new technologies will remove existential uncertainty from their lives. Earlier this year, Danish researchers developed an algorithm called the “Doom Calculator” that could predict the probability of people dying within four years with an accuracy of over 78 percent. Within weeks, I noticed several copycat bots popping up online that could supposedly predict their users’ death dates.

From “Seinfeld” jokes to science fiction stories to horror movies, the idea of a sophisticated computer telling us when we’re going to die is nothing new – but in the age of ChatGPT, the idea of AI doing amazing things seems more realistic than ever. As a computer scientist, however, I remain skeptical. The reality is that while AI can do a lot, it’s far from being a crystal ball.

Algorithmic predictions like life tables are useful overall: for example, they can tell us roughly how many people in our community will die in a given period. What they cannot do, however, is provide the final word on an individual’s life expectancy. The future is not set in stone: a healthy person could be hit by a bus tomorrow, while a smoker who never exercises could buck actuarial trends and live to be 100.


Even if AI models could make meaningful individual predictions, our understanding of disease is constantly evolving. Nobody used to know that smoking caused cancer; once we found out, our health predictions changed dramatically. Likewise, new treatments can make previous predictions obsolete: According to the Cystic Fibrosis Foundation, the average life expectancy of people born with the disease has increased by more than 15 years since 2014, and new drugs and gene therapies promise even greater advances in the future.

If you want certainty, that might sound disappointing. However, the more I study how people make decisions based on data, the more I believe that uncertainty is not necessarily a bad thing. People crave clarity, but my work shows that people feel less certain and can make worse decisions when they have more information to guide their choices. Predicting bad outcomes can leave us feeling helpless, while uncertainty—as any lottery player knows—can give us permission to dream of (and strive for) a better future.

AI tools can, of course, be useful in situations where the stakes are low. Netflix’s recommendation algorithm is a great way to find new shows to binge-watch – and if it leads you to a flop, you can simply click away and watch something else. AI can also help when far more is at stake: when a fighter jet’s onboard computer intervenes to avoid a collision, for example, its predictions can save lives.

The problems start when we treat AI tools as replacements for our own agency rather than as aids to it. While AI is good at identifying patterns in data, it can’t replace human judgment. (Dating app algorithms are notoriously bad at judging compatibility, for example.) Algorithms also tend to confidently invent answers rather than admit uncertainty, and they can exhibit troubling biases depending on the data sets they were trained on.


What should we learn from all this? For better or for worse, we must learn to live with the uncertainties in our lives – and perhaps even accept them. Just as doctors learn to tolerate uncertainty in order to care for their patients, we all have to make important decisions without knowing exactly where they will lead.

This can be uncomfortable, but it’s part of what makes us human. As I warned the woman who was afraid of Alzheimer’s, it’s impossible for AI to quantify the value of a single lived moment – and we shouldn’t be too quick to outsource the challenges of being human to an unfeeling AI model.

The poet Rainer Maria Rilke once advised a young writer not to try to eliminate uncertainty but to learn to “love the questions themselves.” It’s hard not knowing how long we’ll live, whether a relationship will last, or what life has in store for us. But AI can’t answer these questions for us, and we shouldn’t ask it to. Instead, let’s try to appreciate the fact that the most difficult and meaningful decisions in life can still be made by us, and only us.


If you or someone you know needs help, the 988 Suicide & Crisis Lifeline in the United States can be reached by calling or texting 988. There is also an online chat at 988lifeline.org.

Samantha Kleinberg is an associate professor of computer science at Stevens Institute of Technology and author of “Why: A Guide to Finding and Using Causes.”
