Australia’s new Guidance for AI Adoption marks a pivotal shift from abstract principles to practical governance—offering businesses a clear roadmap to build trust, manage risk and unlock innovation with confidence in an AI-driven economy.
Australia’s National AI Centre (NAIC) launched its Guidance for AI Adoption in October last year: a streamlined yet expanded framework to help organisations harness artificial intelligence responsibly and confidently.
It marks a significant step in our nation’s approach to AI governance.
The guidance moves beyond the government’s 2024 Voluntary AI Safety Standard, distilling its 10 safety guardrails into six essential practices (6AI), delivered across two tiers: Foundations and Implementation Practices.
The key question for Australian businesses is how to leverage this practical roadmap to build trust, manage risk and drive innovation with confidence.
The Voluntary AI Safety Standard laid crucial groundwork, defining "what" responsible AI should look like with concepts including accountability and risk management, aligning with international benchmarks like ISO/IEC 42001 and the NIST AI RMF.
Its purpose was largely preparatory: to prime Australian enterprises for potential future mandatory guardrails.
By contrast, the 2025 Guidance for AI Adoption replaces formalistic guardrails with a practical AI governance model based on six integrated practices:
Decide who is accountable
Understand impacts and plan accordingly
Measure and manage risks
Share essential information
Test and monitor
Maintain human control
This shift is profound: it reframes responsible AI as a core business imperative and a cornerstone of sustainable innovation for organisations.
The accompanying Implementation Practices provide the technical rigour, with elements like AI management systems and accountability maps ensuring observable and auditable governance. Crucially, the guidance integrates human-centred impact analysis and directly addresses the unpredictable behaviours of General Purpose AI (GPAI) and frontier models.
This pragmatic approach places Australia alongside other nations seeking a middle ground, distinct from the more prescriptive EU AI Act and the market-led framework of the US.
Discussions with business leaders show the key challenge is balancing AI's transformative power with trust.
This guidance addresses that tension: it fosters innovation without stifling it, and builds trust without permitting unchecked development. Striking this balance is our unique competitive advantage.
For organisations ready to move beyond policy statements into measurable governance, Guidance for AI Adoption becomes the practical roadmap for enterprise-ready accountability in an AI-driven economy.
However, from working with our customers on AI adoption, I see both the guidance’s clear benefits and solid foundation, and areas where organisations will need to proactively bridge gaps themselves.
The framework's "light-touch" approach prioritises innovation, but its voluntary nature across much of the economy presents a critical challenge.
Unlike the EU’s AI Act, Australia’s approach relies heavily on goodwill and existing, often immature, technology-neutral laws.
This can create legal ambiguities, particularly around accountability and liability when AI systems cause harm.
In addition, the guidance's sophistication could deepen the ‘AI divide’.
Smaller enterprises often lack the resources to operationalise such frameworks, risking a scenario where only large firms lead in responsible AI.
In my discussions with C-suite leaders, the message is consistent: they seek practical governance that mitigates risk without stifling innovation.
Our experience aligns with the NAIC's guidance, and we advise leaders to address the same critical areas that regulators are focused on:
a. Sovereign AI: Defining what Sovereign AI means for Australia requires safeguarding national AI capability and data sovereignty. Fujitsu believes investing in inferencing infrastructure rather than building large language models locally is the path forward. This means ensuring that prompts, data and outputs are processed within Australian jurisdictions and subject to Australian law.
b. Risk and safety redefined: Focusing on where AI introduces new or amplified risks that existing laws may not cover, particularly around system bias, discrimination, deepfakes and misinformation.
c. Clarity on accountability and liability: Defining who is responsible when an AI system causes harm, particularly across complex supply chains involving developers, deployers and data providers.
d. Effective monitoring and auditing: Continuing to monitor and audit AI systems in deployment to ensure ongoing safety and compliance.
Guidance for AI Adoption provides a crucial opportunity for Australian organisations to turn their ambition into impactful action that builds trust.
Businesses now have a clear, practical roadmap for building trust, managing risk and driving innovation with confidence.
Here’s how to get started: we encourage every Australian organisation to delve into the NAIC’s Guidance for AI Adoption and its Implementation Practices.
The market, customers and future regulations will increasingly demand demonstrable commitment to responsible AI. This new guidance provides the pathway.
Further insights on AI adoption have been drawn from recent discussions with business and policy leaders.