AI regulation in Australia: The Actuaries Institute's view

As experts in both advanced analytics and risk management, actuaries offer a crucial perspective on Australia's proposed AI regulations. Involving AI practitioners could be the key to creating effective guardrails that harness AI's potential while mitigating its perils.

Some have called actuaries “the original data scientists”. Whether or not you agree with the sentiment, the actuarial profession is well-placed to help society navigate the current era of burgeoning data, analytical tools and AI.

With a rich analytical history, a blend of contemporary technical and soft skills, and specialised qualification pathways in data science and risk management, actuaries have never been in greater demand as business and government look to extract value from AI while managing its risks.

It is through this practitioner’s lens that the Actuaries Institute responded to the recent Federal Department of Industry, Science and Resources (DISR) proposals paper on introducing mandatory guardrails for AI in high-risk settings. The paper aims to create a prospective, protective regime, with controls applied pre-emptively to reduce and manage the threats of ‘high-risk’ AI before they bite.

It is AI practitioners like actuaries who will ultimately drive the success or failure of this regime, as they will be the ones asked to take action – building controls, procedures, checks and balances into AI systems as they are developed and deployed.

Our response identifies two critical questions that practitioners must be able to answer if this regime is to succeed:

  1. Can practitioners confidently and consistently categorise an AI system as high-risk or not?
  2. Can practitioners confidently and consistently describe what they need to do if an AI system is defined as high-risk?

If we cannot confidently answer “yes” to both of these questions, the guardrails will not deliver the intended protection. At present, we cannot.

While the Institute credits the Federal Government for the substantial progress made and believes the proposals paper is an important step forward, there is still much to do.

Defining ‘high-risk’ and General Purpose AI (GPAI) – where should the guardrails apply?

The guardrails are proposed in two situations – high-risk AI and GPAI – but within these there appear to be some gaps, waste and confusion.

First, gaps. The dimensions defining ‘high-risk’ cover sensible territory: human rights, health and safety, legal effects, impacts on groups or broader society. However, there are noticeable omissions, including economic or monetary effects, privacy and cybersecurity – areas in which we think the public would expect to see protections applied.

Second, waste. GPAI is defined too broadly, as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration into other systems”. This describes many very low-risk AI systems in use today. We suggest the definition be suitably narrowed so that it no longer captures models that operate internally within companies in low-risk ways but could theoretically be repurposed.

Third, confusion. The principles defining ‘high-risk’ include a threshold test (via principle (f)), but there is little detail on how this threshold is to be applied. We believe this will create practical challenges. Notably, AI systems can fail in unpredictable ways, and many change over time.

As a result, the likelihood or severity of impact is unlikely to be predicted reliably, and trained practitioners given the same information may reach different but equally valid conclusions. We question whether this approach will lead to consistent or reliable decision-making by practitioners.

The guardrails themselves – what are practitioners asked to do?

The proposals paper outlines 10 mandatory guardrails for high-risk AI systems. These are described in high-level terms without sufficient detail for a practitioner to confidently apply them.

In previous submissions on AI regulation we have cautioned against this type of one-size-fits-all approach, instead advocating a broad menu of guardrails linked to risks, applied as needed to address the specific risks posed by individual AI systems.

While the proposed guardrails are reasonable “headlines” that most would accept, they represent a compromise – broad requirements which might not fit every situation. We illustrate the limitations of this in our submission:

  • We demonstrate that several examples of AI harms from the proposals paper would likely not be effectively controlled by the guardrails.
  • We suggest the guardrails may be challenging to apply and may do little to manage the risks of some GPAI models.
  • We identify situations where the blunt application of the guardrails as written may lead to direct harm to the public, including in safety systems and in consumer notifications.

We do not use these examples to suggest narrow areas for improvement, but to make a critical point about the regime’s structure – a one-size-fits-all approach often fits nothing well. We think Australia can aim higher.

Successful AI regulation needs practitioners

The proposals paper represents an important step forward for Australia, clarifying the general direction for AI regulation and containing welcome progress on many fronts. The next step requires involving those who will be tasked with implementation and who are thus best placed to identify possible flaws: AI practitioners like actuaries. We ignore practitioners at our peril.

About the authors

Elayne Grace

As actuaries spearhead advanced analytical techniques across sectors, Elayne Grace is a passionate advocate for AI’s potential and the importance of responsible implementation. With 30 years of international experience across financial services and broader consulting, Elayne is a frequent presenter and contributor to public policy papers and government expert panels. She was acknowledged in the Australian Financial Review's 100 Women of Influence awards for her commitment to shaping public policy on emerging societal issues such as climate change, intergenerational equity and responsible AI.

Chris Dolman

Chris Dolman is working to ensure AI is developed safely and responsibly. He is currently the Data & AI Risk and Ethics Principal at Telstra, helping to ensure Telstra’s AI ambitions are underpinned by strong and scalable risk, governance and ethics routines. Chris was named the 2022 Actuary of the Year in recognition of his leading work in responsible AI across research, policy and practice, and he is a Fellow of the not-for-profit Gradient Institute.