As experts in both advanced analytics and risk management, actuaries offer a crucial perspective on Australia's proposed AI regulations. Involving AI practitioners could be the key to creating effective guardrails that harness AI's potential while mitigating its perils.
Some have called actuaries “the original data scientists”. Whether or not you agree with the sentiment, the actuarial profession is well-placed to help society navigate the current era of burgeoning data, analytical tools and AI.
With a rich analytical history and a blend of contemporary technical and soft skills, backed by specialised data science and risk-management qualification pathways, actuaries have never been in more demand as business and government look to get value from, and manage the risks of, AI.
It is through this practitioner’s lens that the Actuaries Institute responded to the recent Federal Department of Industry, Science and Resources (DISR) proposals paper on introducing mandatory guardrails for AI in high-risk settings. The paper aims to create a prospective, protective regime, with controls applied pre-emptively to reduce and manage the threats of ‘high-risk’ AI before they bite.
It is AI practitioners like actuaries who will ultimately drive the success or failure of this regime, as they will be the ones asked to take action – building controls, procedures, checks and balances into AI systems as they are developed and deployed.
Our response identifies two critical questions that practitioners must be able to answer if this regime is to succeed: do we know where the guardrails apply, and do we know what we are being asked to do?
If we cannot confidently answer “yes” to both these questions, the guardrails will not succeed in creating the intended protection. Currently, we could not.
While the Institute credits the Federal Government for the substantial progress made and believes the proposals paper is an important step forward, there is still much to do.
Defining ‘high-risk’ and General Purpose AI (GPAI) – where should the guardrails apply?
The guardrails are proposed in two situations – high-risk AI and GPAI – but within these there appear to be some gaps, waste and confusion.
First, gaps. The dimensions defining ‘high-risk’ cover sensible territory: human rights, health and safety, legal effects, and impacts on groups or broader society. However, there are noticeable omissions, including economic or monetary effects, privacy and cybersecurity – areas in which we think the public would expect to see protections applied.
Second, waste. GPAI is defined too broadly, as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration into other systems”. This describes many very low-risk AI systems in use today. We suggest the definition be suitably narrowed so that it no longer captures models that operate internally within companies in low-risk ways but could, in theory, be repurposed.
Third, confusion. The principles defining ‘high-risk’ include a threshold test (via principle (f)), but there is little detail on how this threshold is to be applied. We believe this will create practical challenges. Notably, AI systems can fail in unpredictable ways, and many change over time.
As a result, the likelihood or severity of impact is unlikely to be predicted reliably, and trained practitioners working from the same information may reach different but equally valid opinions. We question whether this sort of approach will lead to consistent or reliable decision-making by practitioners.
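To illustrate the point, consider a deliberately simplified sketch in Python. The likelihood and severity estimates, the scoring rule and the threshold below are all hypothetical and are not drawn from the proposals paper; the sketch only shows how a precise-looking test still turns on subjective inputs.

```python
def is_high_risk(likelihood, severity, threshold=0.5):
    """Flag a system as high-risk when likelihood-weighted severity crosses a threshold."""
    return likelihood * severity >= threshold

# Two practitioners assess the same system with equally defensible input estimates,
# yet reach opposite classifications.
print(is_high_risk(likelihood=0.6, severity=0.9))  # True: the guardrails apply
print(is_high_risk(likelihood=0.4, severity=0.9))  # False: the guardrails do not apply
```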
The guardrails themselves – what are practitioners asked to do?
The proposals paper outlines 10 mandatory guardrails for high-risk AI systems. These are described in high-level terms without sufficient detail for a practitioner to confidently apply them.
In previous submissions on AI regulation we have cautioned against this type of one-size-fits-all approach, instead advocating a more nuanced alternative: a broad menu of guardrails linked to risks, applied as needed to the specific risks posed by individual AI systems.
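To make that alternative concrete, here is a minimal Python sketch of a risk-linked menu; the risk categories and guardrail names are hypothetical illustrations, not items from the proposals paper or our submission.

```python
# Hypothetical risk categories and guardrails, for illustration only.
GUARDRAIL_MENU = {
    "privacy_harm": ["data-minimisation review", "privacy impact assessment"],
    "unfair_outcomes": ["bias testing across affected cohorts", "human review of contested decisions"],
    "safety_harm": ["pre-deployment stress testing", "incident monitoring and rollback plan"],
}

def guardrails_for(identified_risks):
    """Select only the guardrails relevant to the risks a specific AI system poses."""
    selected = []
    for risk in identified_risks:
        selected.extend(GUARDRAIL_MENU.get(risk, []))
    return selected

# Example: a credit-decisioning model assessed as posing privacy and fairness risks.
print(guardrails_for(["privacy_harm", "unfair_outcomes"]))
```

The structural point is that guardrails are selected to match the risks identified for each system, rather than applied wholesale.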
While the proposed guardrails are reasonable “headlines” that most would accept, they represent a compromise – broad requirements which might not fit every situation. We illustrate the limitations of this approach with examples in our submission.
We do not use these examples to suggest narrow areas of improvement, but to make a critical point about the regime’s structure – a one-size-fits-all approach often fits nothing well. We think Australia can aim higher.
Successful AI regulation needs practitioners
The proposals paper represents an important step forward for Australia, clarifying the general direction for AI regulation and containing welcome progress on many fronts. The next step requires involving those who will be tasked with implementation and who are thus best placed to identify possible flaws: AI practitioners like actuaries. We ignore practitioners at our peril.
Policymakers also need to deepen their efforts to align approaches across jurisdictions, develop international standards and promote ethical principles that ensure AI is developed and deployed for the greater good.