
AI in Healthcare: from performance to trust, governance and ethics

14 Jan 2026, by Amy Sarcevic

Artificial intelligence (AI) has long been debated in healthcare, but in recent years, the conversation has shifted.

According to Dr Yagiz Alp Aksoy (MD, PhD), a clinician at Royal North Shore Hospital and Senior Clinical Research Fellow at the Biomedical AI Centre, Centenary Institute, AI’s place in healthcare is no longer in question.

Instead, attention has turned to how these systems are governed, monitored and trusted when deployed in real clinical settings.

“The promise of AI in healthcare is enormous,” Dr Aksoy said. “But without strong ethical oversight and governance, these technologies can amplify bias, entrench inequities, or create new forms of harm.

“The real question now is not whether AI can work, but how we ensure it is used safely, transparently and responsibly. We need systems that clinicians and patients can genuinely trust.”

Dr Aksoy’s work sits at the intersection of clinical medicine, AI and health policy. He is a chief investigator on the $2.25 million Responsible and Ethical AI in Health Research (REP‑AI) project, funded through the National Health and Medical Research Council (NHMRC) Partnership Projects scheme.

This four‑year national study examines how AI is currently used in health research and clinical contexts, consults experts across Australia, and engages the public to understand community expectations around AI use in healthcare.

A central aim of the REP‑AI project is to translate ethical principles into practice. The team is developing practical tools and guidance to support human research ethics committees and health organisations in evaluating AI‑based studies and technologies.

The project is the only nationally funded initiative this year focused specifically on applied ethics and AI governance in healthcare.

From accuracy to accountability

Reflecting on this work so far, Dr Aksoy explained that regulators and health systems are increasingly moving beyond headline performance metrics such as accuracy or area‑under‑the‑curve scores.

“Regulators no longer care if your model is 95 per cent accurate in isolation,” he said. “What matters is whether its outputs are explainable at the point of care, whether there is meaningful oversight, and whether accountability is clear when something goes wrong.

“If an AI system is suggesting a diagnosis or risk estimate, a clinician should be able to understand why, rather than being asked to accept a black‑box result.”

Recent international developments have highlighted why this matters, with AI-enabled health tools now moving rapidly beyond traditional regulatory pathways. In the United States, for example, the launch of large language model–based health tools, such as ChatGPT Health, has coincided with shifts in how the Food and Drug Administration (FDA) oversees AI-enabled systems.

As AI increasingly operates outside conventional medical device frameworks, questions around validation, accountability and post-deployment oversight become more pressing.

“These developments underscore the need for governance models that extend beyond pre-market approval, incorporating ongoing monitoring, clear accountability and human-centred oversight once systems are deployed,” Dr Aksoy said.

Decision support, not decision making

While AI can enhance clinical decision‑making, Dr Aksoy cautioned against an over‑reliance on automated systems.

“The most defensible position right now is AI‑informed decisions where humans remain accountable,” he said. “This isn’t an anti‑AI stance. It’s a practical approach that protects patients and clinicians while enabling safe adoption.”

This concern extends to automation bias, where clinicians may defer to AI outputs even when they conflict with clinical intuition. “Good governance means designing systems that encourage critical thinking, not blind trust.”

Applicability to the real world

Another challenge lies in translating AI from curated datasets into everyday clinical environments.

“Clinical data is messy and fragmented,” Dr Aksoy said. “Doctors often walk into a room without knowing what they are going to see, and they rely heavily on context and judgment.”

“In contrast, AI models are often trained on highly structured, cleaned datasets. A model can look impressive on paper, yet struggle in real life because it has never encountered rare presentations, outliers or the complexity of day‑to‑day practice.”

Ongoing monitoring and model drift

Dr Aksoy emphasised that AI governance does not end once a system is deployed.

“Health systems are realising that AI behaves more like a living system than a static medical device,” he said. “Models can degrade silently over time as populations, workflows and treatments change.”

“For example, a sepsis prediction model trained before COVID‑19 could perform poorly afterwards due to shifts in admission thresholds and patient presentations. That doesn’t mean the model is broken; it means it needs active monitoring.”

He argues that regulation must incorporate post‑deployment surveillance, with clear triggers for review and recalibration.
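The monitoring-and-recalibration loop described above can be sketched in a few lines of illustrative Python. This is a minimal, hypothetical example — the metric (batch accuracy), the baseline figure and the tolerance threshold are assumptions for illustration, not part of the REP‑AI project or any regulatory framework:

```python
# Minimal sketch of post-deployment model surveillance: score the model on
# each new batch of outcomes and trigger a review when performance drops
# below the validated baseline by more than a set tolerance.

def batch_accuracy(predictions, outcomes):
    """Fraction of predictions that match observed outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

def needs_review(batch_acc, baseline_accuracy, tolerance=0.05):
    """True when performance falls below baseline minus tolerance."""
    return batch_acc < baseline_accuracy - tolerance

# Hypothetical model validated at 90% accuracy, monitored on a new batch
# drawn from a population that has shifted since training.
baseline = 0.90
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
actual = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]

acc = batch_accuracy(preds, actual)
if needs_review(acc, baseline):
    print(f"Accuracy {acc:.2f} below threshold; flag model for recalibration")
```

In practice, health systems would track clinically meaningful metrics over rolling windows rather than raw accuracy on single batches, but the principle is the same: a pre-agreed trigger, not ad hoc suspicion, prompts the review.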

Further insight

Dr Aksoy will explore these themes in greater depth at the AI in Health Regulation, Policy and Standards Conference, one of three conferences to be held at Connect Virtual Care on 31 March – 1 April 2026.

His presentation will focus on translating clinical AI into practice safely, with particular emphasis on governance frameworks, ethical oversight, accountability and ongoing monitoring.

One pass for Connect Virtual Care provides access to three conferences:

National Telehealth
Hospital in the Home
AI in Health Regulation, Policy and Standards

About Dr Aksoy

Dr Yagiz Alp Aksoy (MD, PhD) is a clinician at Royal North Shore Hospital and Senior Clinical Research Fellow at the Centenary Institute’s Biomedical AI Centre. He is a chief investigator on the NHMRC‑funded REP‑AI project, which focuses on responsible and ethical AI in health research.

His work spans clinical practice, public‑interest research and industry translation, with a focus on developing AI‑enabled prognostic models and governance frameworks that prioritise safety, transparency, accountability and real‑world applicability.
