Another day, another academic paper highlighting the performance of a new machine learning tool for healthcare. But despite the cascades of literature, the vast majority of these AI innovations never make it into routine clinical practice. Why?
Ahead of the AI, Machine Learning and Robotics in Health Summit, we spoke with Dr. Martin Seneviratne, Digital Health Fellow at Stanford Medicine X. He discussed his recommendations on how to address the ‘implementation gap’ and further strengthen the link between binary and bedside.
“The most important part of a machine learning algorithm is what a clinician should do with the output,” said Dr. Seneviratne.
“A predictive model without a clear way to intervene for the patient will never make its way into practice.”
Dr. Seneviratne points to a number of existing prognostication algorithms which haven’t established a clear clinical use case.
In contrast, he highlights several quality-assurance algorithms that have been deployed in clinical practice – including one at Stanford which identifies patients in need of a palliative care review and makes that referral.
“A prediction coupled with a distinct clinical action is what forms part of a successful implementation recipe,” he said.
“To be clinically actionable, you need to have algorithms integrated with the EHR.”
Dr. Seneviratne will detail further examples of ‘clinically actionable’ AI research at the Summit.
Overcoming the data bottleneck
“There is a real shortage of labelled training data for healthcare. If we want to move toward a learning health system, we need to start routinely collecting data in a way that is conducive to machine learning,” Dr. Seneviratne argued.
“Algorithms are becoming the commodity. The training data is the rare asset. Large-scale data repositories are being created by the NIH [National Institutes of Health] and NHS [National Health Service], but every healthcare institution needs to be thinking about this.”
He will discuss his recommendations for reducing the data bottleneck at the Summit.
Opening the black box
“There is a lot of concern about the black box problem of AI in healthcare,” Dr. Seneviratne continued.
Healthcare is an industry in which people are used to asking ‘why?’ and expecting to understand the exact mechanisms that underpin any given area of practice.
“Then again, there are many medications – take lithium – where we might not understand the full mechanism of action, yet they have been shown to be safe and effective for many patients.
“We should have the same approach for digital therapeutics – study the logic behind them for sure, but also build empirical evidence to show they are safe and effective.”
Dr. Seneviratne will ignite discussion on the issue of ‘interpretability’ in deep learning algorithms at the event.
Cathy O’Neil’s popular book ‘Weapons of Math Destruction’ opened the world’s eyes to the dangers of algorithmic bias, including its tendency to “prop up the lucky and push down the downtrodden”. But Dr. Seneviratne highlights how, in a clinical context, the stakes are even higher.
“On the one hand you want decisions to be more data driven, but on the other, we must acknowledge data’s susceptibility to historic bias and the inherent dangers of this – particularly when it comes to sensitive healthcare decisions,” he said.
“Since it is likely that algorithms will become more widespread over time, it is important that we deeply consider the ways in which we can quantify and address bias.”
Dr. Seneviratne will discuss his expert recommendations for minimising algorithmic bias at the Summit.
Dr. Martin Seneviratne is among an esteemed line-up of speakers to address the AI, Machine Learning and Robotics in Health Summit – due to take place 20–21 November 2018 in Melbourne.
Learn more and register.