As experts in both advanced analytics and risk management, actuaries offer a crucial perspective on Australia's proposed AI regulations. Involving AI practitioners could be the key to creating effective guardrails that harness AI's potential while mitigating its perils.
Some have called actuaries “the original data scientists”. Whether or not you agree with the sentiment, the actuarial profession is well-placed to help society navigate the current era of burgeoning data, analytical tools and AI.
With a rich analytical history, a blend of contemporary technical and soft skills, and specialised data science and risk-management qualification pathways, actuaries have never been in greater demand as business and government look to get value from AI while managing its risks.
It is through this practitioner’s lens that the Actuaries Institute responded to the recent Federal Department of Industry, Science and Resources (DISR) proposals paper on introducing mandatory guardrails for AI in high-risk settings. The paper aims to create a prospective, protective regime, with controls applied pre-emptively to reduce and manage the threats of ‘high-risk’ AI before they bite.
It is AI practitioners like actuaries who will ultimately drive the success or failure of this regime, as they will be the ones asked to take action – building controls, procedures, checks and balances into AI systems as they are developed and deployed.
Our response identifies two critical questions that practitioners must be able to answer if this regime is to succeed: do we know where the guardrails apply, and do we know what they ask us to do?
If we cannot confidently answer “yes” to both questions, the guardrails will not create the intended protection. Currently, we could not.
While the Institute credits the Federal Government for the substantial progress made and believes the proposals paper is an important step forward, there is still much to do.
Defining ‘high-risk’ and General Purpose AI (GPAI) – where should the guardrails apply?
The guardrails are proposed in two situations – high-risk AI and GPAI – but within these there appear to be some gaps, waste and confusion.
First, gaps. The dimensions defining ‘high-risk’ cover sensible territory: human rights, health and safety, legal effects, impacts on groups or broader society. However, there are noticeable omissions, including economic or monetary effects, privacy and cybersecurity; areas in which we think the public would expect to see protections applied.
Second, waste. GPAI is defined too broadly, as “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration into other systems”. This describes many very low-risk AI systems in use today. We suggest the definition be narrowed so that it no longer captures models that operate internally within companies in low-risk ways but could, in theory, be repurposed.
Third, confusion. The principles defining ‘high-risk’ include a threshold test (via principle (f)), but there is little detail of how this threshold is to apply. We believe this will create practical challenges. Notably, AI systems can fail in unpredictable ways, and many change over time.
As a result, the likelihood or severity of impacts is unlikely to be predicted reliably, and trained practitioners working from the same information may reach different but equally valid conclusions. We question whether this approach will lead to consistent or reliable decision-making by practitioners.
The guardrails themselves – what are practitioners asked to do?
The proposals paper outlines 10 mandatory guardrails for high-risk AI systems. These are described in high-level terms without sufficient detail for a practitioner to confidently apply them.
In previous submissions on AI regulation we have cautioned against this type of one-size-fits-all approach, advocating instead for a more nuanced model: a broad menu of guardrails linked to risks, applied as needed to address the specific risks posed by individual AI systems.
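To make the idea concrete, the sketch below shows one possible way of expressing such a menu as a simple mapping from identified risks to the guardrails that address them. It is an illustration only: the risk and guardrail names are invented for this example and are not drawn from the proposals paper or our submission.

```python
# Illustrative sketch only: hypothetical risk and guardrail names, not taken
# from the DISR proposals paper or the Institute's submission.

# A "menu" linking each identified risk to the guardrails that address it.
GUARDRAIL_MENU = {
    "privacy_harm": ["data_governance_review", "privacy_impact_assessment"],
    "unfair_bias": ["bias_testing", "human_oversight"],
    "safety_failure": ["pre_deployment_testing", "incident_monitoring"],
}

def applicable_guardrails(identified_risks):
    """Return the combined set of guardrails for the risks a specific AI system poses."""
    guardrails = set()
    for risk in identified_risks:
        guardrails.update(GUARDRAIL_MENU.get(risk, []))
    return sorted(guardrails)

# Example: a system assessed as posing privacy and bias risks attracts only
# the guardrails relevant to those risks, not the full list.
print(applicable_guardrails(["privacy_harm", "unfair_bias"]))
```

Under a structure like this, each AI system attracts only the controls relevant to the risks it actually poses, rather than every guardrail by default.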
While the proposed guardrails are reasonable “headlines” that most would accept, they represent a compromise – broad requirements which might not fit every situation. We illustrate these limitations with examples in our submission.
We do not use these examples to suggest narrow areas of improvement, but to make a critical point about the regime’s structure – a one-size-fits-all approach often fits nothing well. We think Australia can aim higher.
Successful AI regulation needs practitioners
The proposals paper represents an important step forward for Australia, clarifying the general direction for AI regulation and containing welcome progress on many fronts. The next step requires involving those who will be tasked with implementation and who are thus best placed to identify possible flaws: AI practitioners like actuaries. We ignore practitioners at our peril.