Opinion article

Australia's AI blueprint to build trust and drive innovation

Australia’s new Guidance for AI Adoption marks a pivotal shift from abstract principles to practical governance—offering businesses a clear roadmap to build trust, manage risk and unlock innovation with confidence in an AI-driven economy.

Australia’s National AI Centre (NAIC) launched its Guidance for AI Adoption in October last year. This is a streamlined and expanded framework to help organisations harness artificial intelligence responsibly and confidently.

This is a significant step in our nation’s approach to AI governance.

The guidance moves beyond the government’s 2024 Voluntary AI Safety Standard, distilling its 10 safety guardrails into six essential practices (6AI), comprising Foundations and Implementation Practices.

The key question for Australian businesses is how to leverage this practical roadmap to build trust, manage risk and drive innovation with confidence.

Australia’s ‘middle ground’ approach is our strategic advantage

The Voluntary AI Safety Standard laid crucial groundwork, defining "what" responsible AI should look like with concepts including accountability and risk management, aligning with international benchmarks like ISO/IEC 42001 and the NIST AI RMF. 

Its purpose was largely preparatory: to prime Australian enterprises for potential future mandatory guardrails.

By contrast, the 2025 Guidance for AI Adoption replaces formalistic guardrails with a practical AI governance model based on six integrated practices:

  1. Decide who is accountable
  2. Understand impacts and plan accordingly
  3. Measure and manage risks
  4. Share essential information
  5. Test and monitor
  6. Maintain human control

This shift is profound, reframing responsible AI as a core imperative and a cornerstone of sustainable innovation for organisations.

The accompanying Implementation Practices provide the technical rigour, with elements like AI management systems and accountability maps ensuring observable and auditable governance. Crucially, the guidance integrates human-centred impact analysis and directly addresses the unpredictable behaviours of General Purpose AI (GPAI) and frontier models.

This pragmatic approach places Australia alongside other nations seeking a middle ground, distinct from the more prescriptive EU AI Act and the market-led framework of the US. 

Discussions with business leaders show the key challenge is balancing AI's transformative power with trust. 

This guidance addresses that tension, fostering innovation without stifling it, and building trust without unchecked development. This balance is our unique competitive advantage.

For organisations ready to move beyond policy statements into measurable governance, the Guidance for AI Adoption becomes the practical roadmap for enterprise-ready accountability in an AI-driven economy.

Why ‘voluntary’ guidance isn’t optional for AI adoption

However, from working with our customers on AI adoption, I see the guidance’s clear benefits and solid foundation, alongside areas where organisations need to proactively bridge the gap.

The framework's "light-touch" approach prioritises innovation, but its voluntary nature across much of the economy presents a critical challenge.

Unlike the EU AI Act, Australia relies heavily on goodwill and existing, often immature, technology-neutral laws. This can create legal ambiguities, particularly around:
  • Algorithmic discrimination: Proving intent when bias emerges from complex AI systems is difficult.
  • Liability and tort: Determining who is legally responsible for harm caused by autonomous AI is challenging, especially with complex development and deployment networks.

In addition, the guidance's sophistication could deepen the ‘AI divide’. Smaller enterprises often lack the resources to operationalise such frameworks, risking a scenario where only large firms lead in responsible AI.

 

In my discussions with C-suite leaders, I consistently hear that they seek practical governance that mitigates risk without stifling innovation.

Our experience aligns with the NAIC's guidance, and we advise leaders to address the same critical areas that regulators are focused on:
a. Sovereign AI: Defining what Sovereign AI means for Australia requires safeguarding national AI capability and data sovereignty. Fujitsu believes investing in inferencing infrastructure rather than building large language models locally is the path forward. This means ensuring that prompts, data and outputs are processed within Australian jurisdictions and subject to Australian law.

b. Risk and safety redefined: Focusing on where AI introduces new or amplified risks that existing laws may not cover, particularly around system bias, discrimination, deepfakes and misinformation.

c. Clarity on accountability and liability: Defining who is responsible when an AI system causes harm, particularly across complex supply chains involving developers, deployers and data providers.

d. Effective monitoring and auditing: Continuing to monitor and audit AI systems in deployment to ensure ongoing safety and compliance.

Trusted innovation starts with Guidance for AI Adoption 

Guidance for AI Adoption provides a crucial opportunity for Australian organisations to turn their ambition into impactful action that builds trust.

Businesses now have a clear, practical roadmap for building trust, managing risk and driving innovation with confidence.

Here’s how to get started:

  • Begin with the Guidance for AI Adoption: This distils AI governance into six clear, actionable practices.
  • Then move to the Implementation Practices: These guide organisations through the entire lifecycle, from setting accountabilities to continuous oversight.

The market, customers and future regulations will increasingly demand demonstrable commitment to responsible AI. This new guidance provides the pathway.

We encourage every Australian organisation to delve into the NAIC's Guidance for AI Adoption and its Implementation Practices.

Further insights on AI adoption have been drawn from recent discussions with business and policy leaders.

About the author

Mahesh Krishnan

Mahesh is the Oceania Chief Technology Officer at Fujitsu Australia.