
Opinion article

Procuring AI: Risks organisations must consider

Procuring AI solutions presents new and unique risks that organisations must consider to ensure the successful implementation of an AI product.

Artificial intelligence is transforming organisations. As the IT industry continues to incorporate AI-enabled features and develop new applications, organisations are increasingly focused on procuring these technologies to unlock their benefits. But AI solutions present new risks and challenges that organisations must consider to ensure successful deployment. Fortunately, these risks can be mitigated or addressed throughout the procurement and contracting process with effective governance, careful attention to data, IP and privacy issues, and legal compliance.

Governance

Effective governance is essential for maximising the benefits of AI and reducing potential risks, helping to build trust both with customers and within organisations.  

The supplier's governance and risk management processes can have a significant impact on the technology an organisation is implementing. For example, if a supplier lacks effective data governance, it could inadvertently expose the organisation to allegations of data misuse, or to problems with accuracy or bias in decisions, heightening the risk of legal and reputational damage.

It's important to evaluate the supplier's governance frameworks during the procurement process. The supplier's practices should align with the organisation's own processes to minimise risks and ensure seamless integration. Organisations might also consider whether to require certification under a recognised AI standard (such as ISO/IEC 42001).

AI governance should be clearly outlined in any contract, either by adopting an existing framework or requiring the supplier to develop one.

Capability and performance

AI's capabilities can be broad, and its results, based on complex probability calculations, are often unpredictable. For this reason, when defining requirements for the solution in the contract, it can be effective to specify the desired outcomes the technology should deliver. These will vary depending on the application but may include outcomes relating to relevance, accuracy, bias, system performance and scalability.

An agile or iterative approach to implementation is also likely to work better for many AI solutions, where it is difficult to specify detailed requirements from the outset. The contract should also provide for continuous testing throughout the life of the application as the technology and data change over time; one way to operationalise this is sketched below.
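To make this concrete, below is a minimal sketch of an outcome-based acceptance check that could be re-run on every model or data update. It assumes the solution can be called as a simple `generate(prompt) -> str` function; the golden dataset, exact-match scoring and 90 per cent threshold are illustrative assumptions, not terms from any real contract.

```python
# Minimal sketch: outcome-based acceptance check for an AI solution.
# Assumes the solution is callable as generate(prompt) -> str; the
# golden dataset, scoring method and threshold are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str  # reference answer agreed with the supplier

ACCURACY_THRESHOLD = 0.90  # hypothetical contractual outcome

def run_acceptance_suite(generate: Callable[[str], str],
                         cases: list[EvalCase]) -> float:
    """Score the solution against the agreed golden dataset."""
    passed = sum(
        1 for case in cases
        if generate(case.prompt).strip().lower()
           == case.expected.strip().lower()
    )
    return passed / len(cases)

def check_release(generate: Callable[[str], str],
                  cases: list[EvalCase]) -> None:
    """Re-run on every model or data update, not just at go-live."""
    accuracy = run_acceptance_suite(generate, cases)
    if accuracy < ACCURACY_THRESHOLD:
        raise RuntimeError(
            f"Accuracy {accuracy:.2%} is below the contractual threshold "
            f"of {ACCURACY_THRESHOLD:.0%}; escalate under the contract."
        )
```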

Data, IP and privacy 

Large volumes of data are central to the success of any AI system, but this introduces new risks for organisations, including risks relating to IP, privacy, data residency and training on organisational data.

In terms of IP risks, AI output that reproduces source or training material could infringe copyright. Numerous cases are working their way through the courts in which creators allege that AI companies have infringed copyright by training or running models on their content. Organisations might seek to shift this risk to the supplier in the contract.

Breaching privacy laws is another risk. Both input and output data may contain personal information, creating both legal and reputational risks for organisations. For example, Australian privacy laws require organisations to take reasonable steps to ensure that personal information used or disclosed (including generated content) is accurate, up-to-date, complete and relevant. The solution should be designed to meet privacy requirements.

Data residency is a common consideration in the procurement process. While some AI application suppliers host data in Australia, many rely on offshore large language models. Contracts should address the rules around any data leaving Australia and, if data is processed offshore, organisations might seek to require that it be deleted immediately after processing.

Training on organisational data is another risk to be aware of. Suppliers may want to use client data to improve their AI systems. Organisations need to assess whether this is appropriate given the sensitivity of their data, and they might introduce additional safeguards in the contract, such as requiring the data to be de-identified or aggregated first; a simple de-identification sketch follows.
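As an illustration of one such safeguard, the sketch below pseudonymises direct identifiers and redacts contact details from free text before records are shared. The field names and regex patterns are assumptions for illustration; a production system would need a properly vetted de-identification approach.

```python
# Minimal de-identification sketch for records shared for supplier training.
# Field names and regex patterns are illustrative assumptions only.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"(\+61|0)[\d\s-]{8,12}")

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted, one-way token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def deidentify_record(record: dict, salt: str) -> dict:
    cleaned = dict(record)
    # Pseudonymise direct identifiers rather than passing them through.
    if "customer_id" in cleaned:
        cleaned["customer_id"] = pseudonymise(cleaned["customer_id"], salt)
    # Redact contact details embedded in free text.
    if "notes" in cleaned:
        text = EMAIL_RE.sub("[EMAIL]", cleaned["notes"])
        cleaned["notes"] = PHONE_RE.sub("[PHONE]", text)
    return cleaned
```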

Legal compliance 

AI in Australia is currently regulated by a broad array of existing laws, but the Federal Government is considering new rules for organisations using high-risk AI. International laws, such as the EU AI Act, might also apply if the AI is used by consumers outside Australia.

Contracts should require suppliers to comply with all relevant laws and to provide legally compliant deliverables. It can also be worthwhile to build in specific compliance measures that future laws may require. These include transparency, so users know when they are interacting with AI or when outputs are AI-generated; record-keeping, so records of conversations, source content and metadata are retained; and contestability, so users have a process to raise concerns about AI-generated outputs.
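To illustrate the record-keeping measure, the sketch below appends each AI interaction, together with its source content and metadata, to a simple audit log. The record fields and JSON-lines format are assumptions, not requirements of any specific law or standard.

```python
# Minimal record-keeping sketch: append each AI interaction, with source
# content and metadata, to an audit log. Fields are illustrative only.
import json
import uuid
from datetime import datetime, timezone

def log_interaction(log_path: str, prompt: str, output: str,
                    model_version: str, sources: list[str]) -> str:
    """Append an auditable record of one AI interaction; returns its ID."""
    record = {
        "id": str(uuid.uuid4()),  # reference for contestability requests
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "sources": sources,        # source content the output relied on
        "ai_generated": True,      # supports transparency labelling
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```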

About the author

Simon Newcomb is a Partner at Clayton Utz.