
How to ensure an AI-ready health workforce

28 Sep 2023, by Amy Sarcevic

With 85 percent of healthcare executives now having an AI strategy, and half already deploying the technology in some form, the Australian health system is set to look radically different over the coming years. But just how ready are frontline workers for its broader adoption across the sector?

Professor Ian Scott, Director of Internal Medicine and Clinical Epidemiology at the Princess Alexandra Hospital, says the idea that training alone can ready the workforce for AI’s ubiquity is a myth. He believes a range of factors should be considered to ensure the workforce accepts and uses the technology.

Does it actually address a clinical need?

While health applications for AI may be increasingly sophisticated, executives should always interrogate whether they help clinicians do their jobs better, Prof Scott said. In other words, are clinical outcomes improved as a result of their use?

“If the AI isn’t solving a genuine clinical need, there is not much point in introducing it. We don’t want to impose an AI tool that disrupts clinical workflows, unless the tool has solid evidence that using it confers tangible, meaningful improvements in care.

“Sadly, randomised trials for validating the efficacy of AI are sparse, so there is a bit of an evidence gap in demonstrating this,” he said.

A related concern is the potential for over-diagnosis or over-treatment, resulting from AI-based decision-making.

“We don’t want to over-detect minor abnormalities that don’t necessarily mean disease and subject patients to unnecessary biopsies, for example. This could do more harm than good,” Prof Scott said.

“We’ll need to think carefully and fine tune algorithms to get the balance right here.”

Will clinicians want to use them?

Before AI applications are rolled out as mainstream tools, Prof Scott said they should be thoroughly tested and optimised, to ensure they function well in the hands of clinicians.

“Unfortunately, clinicians have had bad experiences in the past with technologies that were imposed on them and which weren’t well designed.

“In the early stages of using electronic medical records, for example, there was a loss of productivity across the sector and it took a while to optimise the technology.”

Recommendations, predictions and prognoses the AI gives out must make clinical sense and inform decisions about what to do next, Prof Scott said.

“The outputs of the tool presented to the clinician must be easy to interpret and be actionable. These are complicated tools, and clinicians don’t need to understand all of the nuts and bolts under the hood of how they work, but they do need to know the outputs are clinically meaningful.”

Is the tool externally validated?

AI tools should be tested in their intended environment, to ensure the outputs are calibrated to local patient populations, Prof Scott said.

Tools that have worked well in one health setting have previously under-performed in others, given the different patient profiles in each, he noted.

“External validation is crucial for reassuring clinicians and patients the model has been tested in their environment with their population.

“It isn’t enough to validate AI models in a single environment and assume they’ll fit well in others. It may be that the model needs retuning to reflect a high proportion of Indigenous peoples, for example.”

Does the tool blend seamlessly into clinical workloads?

Clunky systems that take up time and interfere with clinical workflows will either not be used or be worked around, Prof Scott said.

“It’s vital the tools blend seamlessly into routine workflows. The AI’s interface has to be customised to the local needs of end users, and the tool itself needs to be calibrated to the population.

“Care must be taken to avoid cognitive overload and alert fatigue that desensitises workers to any alerts the AI gives out.

“The tool design and the end user testing will need to be done methodically.”

Will the tool compromise the patient-clinician relationship?

A criticism of AI and other health technologies is their potential to distract clinicians from quality person-to-person care.

“This has been a major downside of EMR,” Prof Scott said. “Some clinicians say they resent the amount of time they have to spend staring at a screen, instead of interacting and building a rapport with their patient.”

Building and maintaining patient trust is also a central issue in an era of shared clinical decision making, he suggested.

“Patients will rightfully want to understand how much of their care is being managed by AI versus the professional in front of them.

“Surveys suggest that most patients are happy for AI to assist decision making, but they want clinicians to be the final arbiter. Few are comfortable with receiving a diagnosis determined solely by AI.”

To maintain trust and ethics, clinicians should seek informed consent from patients about their intention to use AI to assist decision-making.

“We ideally need to advise patients at the beginning of any consult that AI will be used in the management of their care, with a brief explanation on how. This may involve reassuring patients about the tool’s accuracy and reliability.”

Is it clear who will be liable if there is a bad patient outcome as a result of AI error?

Should AI give an incorrect prediction leading to an erroneous clinical decision, clinicians will need to understand who is liable for any unfavourable patient outcomes that result. As yet, the legal and regulatory landscape surrounding AI tools remains unclear.

“We don’t yet know if it will be a shared liability and if so, how liability will be apportioned,” Prof Scott said.

“Theoretically, the more autonomous the tool, the more regulated it needs to be. Also, tools that directly impact the care of acutely ill patients will need a much higher evidence base of efficacy and safety than a tool that works at the back end, triaging or scheduling clinic appointments, where patients might be inconvenienced but not necessarily harmed.”

Do clinicians have sufficient AI literacy?

While clinicians won’t need to understand the technicalities of deep learning or convolutional neural networks, they will benefit from ongoing education aimed at improving their AI literacy, Prof Scott argued.

“Of course, they don’t need to know what a data scientist knows but it is important to understand basic concepts and the pitfalls and limitations of the technology.

“At the very least, clinicians should be prepared to ask questions of developers and implementers on behalf of their patients – for example, what sort of populations has this AI model been tested on?” he said.

Professor Ian Scott is Director of Internal Medicine and Clinical Epidemiology at the Princess Alexandra Hospital. Hear more from him at the AI in Health Readiness Forum, hosted by Informa Connect.

This year’s event will be held 25 October 2023 at the Ivy Sunroom, Sydney.

Learn more and register your place here.
