EARLIER THIS MONTH the University of Nottingham published a study in PLOS ONE about a new artificial intelligence model that uses machine learning to predict the risk of premature death, using banked health data (on age and lifestyle factors) from Brits aged 40 to 69. The study comes months after a joint study by UC San Francisco, Stanford, and Google, which reported the results of machine-learning-based data mining of electronic health records to assess the likelihood that a patient would die in the hospital. One goal of both studies was to assess how this information might help clinicians decide which patients could most benefit from intervention.
The FDA is also looking at how AI will be used in health care, and earlier this month it put out a call for a regulatory framework for AI in medical care. As the conversation around artificial intelligence and medicine progresses, it is clear that we need specific oversight of the role of AI in determining and predicting death.
There are a few reasons for this. To start, researchers and scientists have flagged concerns about bias creeping into AI. As Eric Topol, physician and author of the book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, puts it, the challenge of bias in machine learning originates in the "neural inputs" embedded within the algorithm, which may include human biases. And even though researchers are talking about the problem, issues remain. Case in point: The launch of a new Stanford institute for AI a few weeks ago came under scrutiny for its lack of ethnic diversity.
Then there is the issue of unconscious, or implicit, bias in health care, which has been studied extensively, both as it relates to physicians in academic medicine and as it affects patients. There are differences, for instance, in how patients of different ethnic groups are treated for pain, though the effect can vary with the doctor's gender and cognitive load. One study found these biases may be less likely among black or female physicians. (Health apps on smartphones and wearables, it's also been found, are subject to similar biases.)
In 2017, a study challenged the impact of these biases, finding that while physicians may implicitly prefer white patients, this preference may not affect their clinical decision-making. That study, however, was an outlier in a sea of others finding the opposite. Biases exist even at the neighborhood level, which the Nottingham study examined: black people may have worse outcomes for some diseases, for instance, if they live in communities that hold more racial bias toward them. And biases based on gender cannot be ignored: Women may be treated less aggressively after a heart attack (acute coronary syndrome), for instance.
When it comes to death and end-of-life care, these biases may be particularly concerning, as they could perpetuate existing disparities. A 2014 study found that surrogate decisionmakers of nonwhite patients are more likely to withdraw ventilation than those of white patients. The SUPPORT (Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments) study examined data from more than 9,000 patients at five hospitals and found that black patients received less intervention toward the end of life, and that while black patients expressed a desire to discuss cardiopulmonary resuscitation (CPR) with their doctors, they were significantly less likely to have those conversations. Other studies have reached similar conclusions, with black patients reporting being less informed about end-of-life care.
Yet these trends are not consistent. One 2017 study that analyzed survey data found no significant race-related difference in end-of-life care. And as one palliative care doctor has pointed out, many other studies have found that some ethnic groups prefer more aggressive care toward the end of life, which may itself be a response to a systemically biased health care system. Even where preferences differ between ethnic groups, bias can still result when a physician unconsciously withholds some options, or makes assumptions about which options a given patient would prefer based on their ethnicity.
However, in some cases, cautious use of AI may be helpful as one component of an end-of-life assessment, possibly even reducing the effect of bias. Last year, Chinese researchers used AI to assess brain death. Remarkably, the algorithm was able to pick up on brain activity that doctors using standard techniques had missed. These findings bring to mind the case of Jahi McMath, the young girl who was declared brain-dead after a complication during surgical removal of her tonsils. Implicit bias may have played a role not just in how she and her family were treated, but arguably in the conversations around whether she was alive or dead. Still, Topol cautions that any AI used to assess brain activity should be validated before it is deployed outside a research setting.
We know that health providers can try to train themselves out of their implicit biases. The unconscious bias training that Stanford offers is one option, and something I’ve completed myself. Other institutions have included training that focuses on introspection or mindfulness. But it's an entirely different challenge to imagine scrubbing biases from algorithms and the datasets they're trained on.
Given that the broader advisory council Google just launched to oversee the ethics behind AI has already been canceled, a better option would be a more centralized regulatory body, one that could build on the proposal put forth by the FDA and serve universities, the tech industry, and hospitals alike.
Artificial intelligence is a promising tool that has shown its utility for diagnostic purposes, but predicting death, and possibly even determining death, is a uniquely challenging area, one that could be fraught with the same biases that affect analog physician-patient interactions. And one day, whether we are prepared or not, we will be faced with the practical and philosophical conundrum of having a machine involved in determining human death. Let's ensure that this technology doesn't inherit our biases.