Opinion article

AI control, not AI creation

The case for Australia's sovereign AI strategy.

Understanding Sovereign AI: Why nations are redefining digital control

What is Sovereign AI?

Sovereign AI refers to a nation’s ability to develop, control and operate artificial intelligence using its own infrastructure, data and talent to ensure that AI serves national interests rather than those of foreign entities.

While the concept may sound simple, its implementation is not. Major stakeholders broadly agree that Sovereign AI is essential for national security and economic stability; however, its specific interpretation and implementation priorities vary widely across governments, industries and commercial sectors.

Governments view Sovereign AI as a matter of national security and autonomy, while industry responses vary: some see it as a commercial opportunity, others as a market restriction. Essentially, Sovereign AI is an extension of digital sovereignty - control over data, compute and AI applications developed domestically.

Why Sovereign AI matters

In 2025’s geopolitical landscape, control over AI is as vital as control over energy or defence. Several reports note that AI sovereignty has become a strategic imperative, defining national competitiveness, governance and autonomy. Countries now treat control over data, compute and model governance as a core element of national power, since AI can influence economies, security decisions and even democratic discourse.

For Australia, domestic capacity reduces the risk of data exfiltration, keeps personally identifiable information and sensitive defence data under national laws, and mitigates dependence on foreign providers - a risk amplified by cloud concentration and potential geopolitical policy shocks.

The layers of sovereignty

True sovereignty spans the entire AI stack, from infrastructure to personnel, and not all layers are equally achievable domestically:

  • Data sovereignty: Sensitive national and citizen data held and processed within Australian jurisdiction.
  • Model sovereignty: Models trained or fine-tuned locally to reflect linguistic, cultural and ethical contexts.
  • Compute sovereignty: Domestically hosted, auditable and compliant infrastructure enabling autonomy.
  • Workforce sovereignty: Local skill and R&D capability to develop and maintain AI systems.
  • Supply chain sovereignty: Domestically manufactured components, such as GPUs, used to build and run compute infrastructure.

Replicating all of these layers is economically prohibitive. Even countries touted as leaders in AI struggle to achieve complete sovereignty across every layer. Sovereign capability thus requires balance – prioritising control points (data, inference, governance) rather than total isolation.

The two phases of AI: Training and inference

AI systems operate in two distinct phases: training and inference.

  • Training is the initial, capital-heavy process in which a model learns from vast datasets. It is extremely compute-intensive, and training an advanced foundation model can cost hundreds of millions of dollars.
  • Inference is the ongoing, recurring task of running these trained models to produce outputs. While less demanding than training, it still requires significant infrastructure investment.

This fundamental dichotomy raises the strategic question of whether a nation like Australia should aim to be an AI maker (creating and training models) or an AI taker (adopting existing models for inference).

Getting the right balance

Australia's pursuit of AI sovereignty is a strategic, not existential, question of where to focus within the global AI stack, particularly because no single country has achieved full-stack sovereignty.

Given this reality, Australia's pragmatic path lies in strategic control, not total reconstruction. The stronger arguments favour concentrating investment on a limited portion of the stack—specifically hosting and inference—rather than on costly, resource-intensive training and manufacturing:

  • Global infeasibility: Replicating the entire supply chain, from semiconductor manufacturing to advanced GPU production, is economically and logistically unrealistic for Australia in the near term; even global leaders lack full-stack sovereignty.
  • Data protection & compliance: Hosting inference infrastructure within Australia ensures that critical national data (government, health, defence) and API calls do not leave the country, preserving data sovereignty and domestic compliance.
  • Cost-efficient customisation: Instead of attempting to build foundation models from scratch, the more efficient path is fine-tuning proven global models on Australian-specific data (including Indigenous language or industry content). This avoids duplicating the vast R&D investment of global leaders and capitalises on their rapid model evolution.
  • Economic & skill growth: Establishing domestic inference clusters and operations retains operational expenditure locally and cultivates essential applied AI and ethical governance skillsets, fostering long-term digital resilience.

In essence, a hybrid model is the most viable path. Australia should leverage domestically-hosted, auditable cloud regions under Australian legal frameworks, blending global compute efficiency with local data protection, effectively achieving "sovereign cloud" capabilities.

Conclusion

For Australia, Sovereign AI is both a national imperative and a strategic opportunity. The nation’s focus should be on inference infrastructure, fine-tuning, data governance and skilled capability rather than replicating every layer of the AI sovereignty stack. Sovereignty lies not in isolation but in control, transparency and alignment with national values - ensuring that the digital intelligence shaping the country’s future is truly Australian in design, purpose and spirit.

About the author
Mahesh Krishnan

Mahesh is the Oceania Chief Technology Officer at Fujitsu Australia.