Opinion article

Governing AI in healthcare: why smart governance unlocks productivity

In healthcare, the debate around AI is often framed as a trade-off between innovation and safety — but effective governance is what makes sustainable innovation possible.

Artificial intelligence (AI) is increasingly framed as a productivity solution for healthcare systems under pressure, even in the absence of robust real-world evidence of realised benefits. At the same time, AI is often treated as a governance challenge that must be tightly constrained or delayed. Too often, AI-related decision-making is framed as a choice: move quickly and accept risk, or govern carefully and sacrifice productivity. This is a false choice. In healthcare, both innovation and appropriate governance are essential. The real question is how to ensure AI governance in healthcare is safe, effective and responsive in times of rapid change, while also enabling implementation at scale.

AI systems are increasingly embedded in clinical, administrative, and operational workflows, often interacting directly with patient data and influencing decisions that affect safety, quality and equity. Poorly governed AI can introduce bias, amplify errors, expose health services to cybersecurity threats, and erode trust among clinicians and the public. For a sector that relies on professional judgement, institutional credibility, and social licence, strong governance is not optional.

Productivity and governance are not competing goals

What is less settled is how to establish context-relevant governance that optimises productivity. Productivity gains from AI are not guaranteed simply because tools exist or pilots succeed; they depend on governance and implementation pathways that are effective, efficient and responsive to real-world health system needs. When governance is slow or disconnected from operational realities, productivity gains fail to materialise. Worse, clinicians, managers and support staff are increasingly exposed to powerful, easy-to-access AI tools that sit outside organisational approval processes, and governance that is viewed primarily as a handbrake incentivises bypassing formal channels, increasing risk through unvetted use.

What should effective AI governance look like?

Effective AI governance should not be an additional layer of bureaucracy applied after the fact, nor a one-off approval hurdle. At a minimum, it should be proportionate to risk, clear about decision rights, and designed to support adoption. Low-risk administrative tools should not face the same barriers as high-risk clinical decision support. Clear accountability matters: when responsibility is diffused across committees or lines of authority are unclear, progress stalls and confidence erodes. Governance that can make timely, well-scoped decisions is central to both safety and productivity.

Equally important is integration. AI governance is likely to work best when embedded within existing organisational structures, rather than bolted on as a parallel process. Clinical governance, quality and safety systems, data governance and cybersecurity frameworks already exist to manage risk in healthcare. AI governance should extend these structures rather than duplicate them. Done well, integration reduces friction, improves consistency, and allows AI-related risks to be assessed alongside other clinical and digital risks.

Governance must scale in the AI era

Scaling governance is another challenge that cannot be ignored. Health systems are not introducing one or two AI tools, but dozens, often across multiple services and settings. Governance processes must be capable of scaling consistently without relying on heroic effort or goodwill. Governance that is expected to absorb AI responsibilities without appropriate resourcing will inevitably fail, either by becoming a bottleneck or by becoming superficial. Treating AI governance as critical strategic infrastructure, rather than administrative overhead, is essential.

Governance is a lifecycle responsibility

Governance cannot stop at approval. Many AI risks emerge after implementation, as data changes, patient populations shift or workflows evolve. Without mechanisms for monitoring performance, detecting model drift, and revisiting decisions, organisations risk false reassurance. Lifecycle governance, including clear triggers for review and withdrawal, is increasingly central to maintaining both safety and trust.

Trust and equity cannot be afterthoughts

Trust, equity, and social licence must also be part of governance. Public confidence in healthcare AI depends not only on outcomes, but on transparency and accountability. Equity is not just about whether AI systems are biased, but about who benefits from adoption and who is left behind. Governance frameworks that are overly complex or resource-intensive risk concentrating AI benefits in well-resourced settings, while smaller or regional services struggle to participate. Poor governance design can unintentionally entrench inequity, even when intentions are sound.

Governance as a strategic capability

The productivity potential of AI in healthcare will be realised because of governance, not despite it. Well-designed governance enables safe experimentation, supports consistent scaling, and builds confidence among clinicians, managers, and the public. Poor governance either freezes progress or drives risk underground. As health systems grapple with rising demand, workforce constraints, and fiscal pressure, the choice is not between governance and productivity. It is between governance that delays and incentivises unmitigated risk, and governance that enables and supports high-value AI implementation. Governance is a strategic capability that will influence whether AI delivers on its promise to improve productivity, efficiency, quality, and equity, or becomes another missed opportunity buried under avoidable friction, delays and lost trust.

About the author

Steven McPhail

Professor Steven McPhail is an internationally renowned health service innovator, health economist, clinician and researcher. He is Director of the Australian Centre for Health Services Innovation (AusHSI) and the Centre for Healthcare Transformation at QUT. Steve is passionate about empowering health services to deliver high-value, patient-centred care, particularly improving care for vulnerable members of our community and their families. As a health economist and clinician, he has spent more than a decade working collaboratively in strong multidisciplinary teams that bridge the divide between the health service, industry and academic sectors. Steve's work has been cited in policy-related documents from the World Bank and the World Health Organization. Interventions, patient assessment procedures, decision support tools and clinical care models he has developed or evaluated have led to changes in more than 100 health services on six continents.