
Can AI Make Us Healthier?

Professor Marzyeh Ghassemi presented at the AI for Good seminar series, offering a critical and thoughtful assessment of the current state and future potential of AI in healthcare.
Professor Marzyeh Ghassemi presents at the AI for Good seminar series

By Izzy Pirimai Aguiar

Professor Marzyeh Ghassemi empowered this week’s audience at the AI for Good seminar series with her critical and thoughtful assessment of the current state and future potential of AI in healthcare. Ghassemi is an Assistant Professor in the Departments of Computer Science and Medicine at the University of Toronto and a faculty member of the Vector Institute. Her interdisciplinary expertise contributed to a rich discussion about the state of AI in medicine and the various ways in which she and her collaborators are proactively addressing the challenges present in the field.

As Ghassemi pointed out, the healthcare system is fraught with deficiencies in both practice and knowledge. Our information about medical practice is biased and inconsistent. Despite the ethics training required of doctors, the biases in the healthcare system mirror those of our society. Minorities, women, low-income earners, and LGBTQIA+ people are all more likely to receive worse health care and less likely to be represented in the studies that inform and define medical practice. Treatment also varies from patient to patient and doctor to doctor: for example, 24% of patients with hypertension follow treatment pathways shared by no other patient, a sign of how unpredictable treatment recommendations are given a patient's condition.

In addition to the gaps in medical practice, the current body of medical knowledge is problematic as well. Randomized controlled trials (RCTs), the "gold standard" of medicine, are prized as unbiased reflections of medical truth, yet they inform only 10-20% of treatments, and about 10% of their results are later contradicted. RCTs are also poor predictors of the best treatment plan for an individual patient: only 6% of current asthmatics would have been eligible to participate in the RCT behind their very own treatment plan. Furthermore, the population that has historically been positioned to participate in such trials grossly misrepresents the patient population as a whole. "We don't want to generalize things that we learn on Yale and MIT undergraduates," Ghassemi noted.

In presenting data about the existing inequalities and injustices in healthcare, Ghassemi reminded the audience, "this is all before we apply an algorithm." Machine learning, and algorithms in general, are often critiqued for their tendency to be biased, targeting or under-serving vulnerable and underrepresented populations, so it was surprising to hear that an entire profession had these weaknesses prior to, and independent of, its use of algorithms. "We want to make sure we're improving care, improving practice, not just using discrepancies that exist in the data and making them worse; automating them." Ghassemi offered an optimistic perspective here, sharing her belief that properly designed AI can help identify such discrepancies in care and root them out.
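One concrete way to check whether a model is automating a disparity, rather than correcting it, is to audit its error rates by demographic group before deployment. The sketch below is purely illustrative and not code from the talk; the cohort, groups, and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: group 1 is under-represented and its labels are noisier,
# mimicking a population that historical data describes less reliably.
n0, n1 = 900, 100
X = rng.normal(size=(n0 + n1, 5))
group = np.array([0] * n0 + [1] * n1)
noise = np.where(group == 0, 0.5, 1.5) * rng.normal(size=n0 + n1)
y = (X[:, 0] + noise > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Missed positives (false negatives) concentrated in one group are exactly
# the kind of disparity a deployed model can silently automate.
for g in (0, 1):
    positives = (group == g) & (y == 1)
    fnr = np.mean(pred[positives] == 0)
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```

On data like this, where the under-represented group's labels are noisier, the false-negative rate tends to come out higher for that group; an audit of this kind surfaces the gap before the model reaches patients.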

However, given that both medical practice and medical knowledge are laden with inadequacies, the question becomes more fundamental than how we should use AI in healthcare: with what information should we even build such an algorithm? "I believe that we need to use complex models for complex data. We shouldn't just limit ourselves to one kind of data modality," Ghassemi encouraged the audience.
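As a toy illustration of what using more than one data modality can look like (a hedged sketch, not Ghassemi's pipeline; the notes, vitals, and outcome labels are invented), one simple strategy is to encode each modality separately and fuse the features by concatenation:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Two toy modalities for the same four patients: free-text clinical notes
# and numeric vitals ([heart rate, systolic blood pressure]).
notes = ["pt stable, pain controlled", "hypotensive, started pressors",
         "afebrile, tolerating diet", "desaturating, escalating O2"]
vitals = np.array([[80, 120], [95, 85], [78, 118], [110, 92]])
y = np.array([0, 1, 0, 1])  # hypothetical adverse-outcome labels

# Encode each modality separately, then fuse by concatenating features.
text_features = TfidfVectorizer().fit_transform(notes)
fused = hstack([text_features, csr_matrix(vitals)])

clf = LogisticRegression().fit(fused, y)
print(clf.predict(fused))
```

Real clinical models use far richer encoders and fusion schemes, but the principle is the same: no single modality has to carry the whole signal.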

Ghassemi has a wealth of research committed to the responsible implementation of AI in healthcare, and she presented results from a variety of papers spanning medicine, machine learning, and ethics. With excitement, Ghassemi shared the implementation details of models used to write X-ray-specific notes for radiologists and to predict interventions and treatments for patients in the ICU.
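To give a flavor of the second task, intervention prediction is commonly framed as reading a patient's vitals over time and emitting a risk score. The sketch below is an assumption-laden toy, not the published model; the architecture, shapes, and data are all placeholders.

```python
import torch
import torch.nn as nn

class InterventionPredictor(nn.Module):
    """Toy recurrent model: hourly vitals in, intervention risk out."""
    def __init__(self, n_vitals=6, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_vitals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, hours, n_vitals)
        _, h = self.gru(x)           # h: (1, batch, hidden) final state
        return torch.sigmoid(self.head(h[-1]))  # risk in [0, 1]

model = InterventionPredictor()
vitals = torch.randn(8, 24, 6)       # 8 patients, 24 hourly measurements
labels = torch.randint(0, 2, (8, 1)).float()  # synthetic outcomes

# One synthetic training step on the binary prediction target.
loss = nn.functional.binary_cross_entropy(model(vitals), labels)
loss.backward()
print(float(loss))
```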

Interestingly, Ghassemi offered an uncommon opinion on the field of interpretability and explainability in AI. While she supports the research area and its intentions as a whole, her support came with words of caution. In asking for AI models to be more interpretable, we are asking that these models become more trustworthy. "But there's a problem with trust. The problem is that trust has a real impact." Ghassemi pointed out that when decisions or models are easily explainable, people tend to over-trust them. Given that trust already plays a huge role in the healthcare setting, it is critical that models used to suggest treatments, predict interventions, or evaluate the state of a patient are not over-trusted. "We don't want to turn off people's alarm bells."
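One generic safeguard against over-trust (a standard check, not something Ghassemi specifically prescribed) is to measure whether a model's stated confidence matches its actual accuracy. The toy example below computes an expected calibration error on synthetic predictions from a deliberately overconfident model:

```python
import numpy as np

rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, size=1000)          # model's stated confidence
correct = rng.uniform(size=1000) < conf * 0.8    # accuracy lags confidence

# Expected calibration error: confidence-vs-accuracy gap, averaged over
# confidence bins weighted by how many predictions fall in each bin.
bins = np.linspace(0.5, 1.0, 6)
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    m = (conf >= lo) & (conf < hi)
    if m.any():
        ece += m.mean() * abs(correct[m].mean() - conf[m].mean())
print(f"expected calibration error: {ece:.3f}")
```

A large gap is a signal that confident-sounding outputs, however well explained, should not quiet anyone's alarm bells.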

There is clear potential in the ways AI can improve the current state of healthcare. However, as Ghassemi reminded us, "we want to make sure we're improving healthcare for all." When implemented thoughtfully and responsibly, algorithms can help alleviate some of humanity's greatest afflictions: bias, negligence, and injustice. Given that the FDA has already approved over 40 algorithms for use in medical practice, AI in healthcare and medicine is not a distant future but a present reality. It is our responsibility to ensure that these algorithms don't learn from and reinforce our hubris, but lift us out of it.