
Trustmark needed to ensure ethical application of AI: Chief Scientist

A certification system, or “trustmark”, is needed to send a clear signal that consumers expect ethical behaviour in the application of artificial intelligence (AI), according to Australia’s Chief Scientist, Dr Alan Finkel AO.

Addressing CEDA’s NSW event on the use of artificial intelligence in the workplace, examining its impact, opportunities and regulation, Dr Finkel said a combination of a lack of information and a lack of foreknowledge unnerves us about AI.

“The knowledge seems concentrated in a priesthood: a cabal. The number of AI experts today is tiny – an estimated 22,000 worldwide are qualified to the level of a PhD,” Dr Finkel said.

“So, what is this priesthood capable of creating? Not knowing makes us uneasy. And so does finding out.”
 
Dr Finkel said we also give up our data today without knowing what others might be able to do with it tomorrow, adding to this sense of unease.

“For example, when you uploaded your photos to Facebook, did you expect that Facebook would be able to scan them, name all the people, identify your tastes in food and clothing and hobbies, diagnose certain genetic disorders, like Down Syndrome, decide on your personality type, and then package all that information for advertisers?”

Dr Finkel said banning AI as a response to our concerns would be a tragic mistake for Australia.

“It wouldn’t halt progress, because I find it difficult to believe that China and France and the United Kingdom and the United States and every other nation that has staked its future on AI would now step back from the race,” he said.

“A ban would simply discourage research and development in the places where we most want to see it: reputable institutions, like CSIRO’s Data61, and our universities.”
 

Nor, he said, would we want an AI “free-for-all” that allows unscrupulous, unthinking and just plain incompetent people to do their worst.

“We want rules that allow us to trust AI, just as they allow us to trust our fellow humans,” Dr Finkel said.

“What we need is an agreed standard and a clear signal, so that we individual consumers don’t need expert knowledge to make ethical choices, and so that companies know from the outset how they are expected to behave.” 

“So my proposal is a trustmark. Its working title is ‘the Turing Certificate’, of course named in honour of Alan Turing.

“Companies can apply for Turing certification, and if they meet the standards and comply with the auditing requirements, they would then be allowed to display the Turing stamp.

“Then consumers and governments could use their purchasing power to reward and encourage ethical AI.

“Independent auditors would certify the AI developers’ products, their business processes, and their ongoing compliance with clear and defined expectations.”

He said the Turing stamp would have to be granted to both organisations and their products, as is common in the manufacturing industry. He proposed a voluntary system with responsible companies opting in.

“I have to be very clear that a voluntary system does not mean self-certification,” Dr Finkel said.

“It means that the companies would voluntarily submit themselves to an external process. Smart companies, trading on quality, would welcome an auditing system that weeded out poor behaviour. And consumers would rightly insist on a stamp with proven integrity.”

Following Dr Finkel’s address, Daisee Co-Founder and Chief Executive Officer Richard Kimber said AI is a bigger event than the internet in terms of its impact on business.

“As much as it relies on the internet, the implications for business and how we’re organised are, I think, far greater,” he said.

“The impact is wider and it cuts across every industry. There’s not one industry that’s going to escape AI.”
 

However, he said businesses are not yet well set up for AI: there is a lack of understanding of what AI really is, and applying it is hard.

On the impact of AI on future work, Mr Kimber said AI is not yet a challenge to humans, but it will change what we need to know and how we need to learn. Human skills such as empathy, creativity and critical thinking will remain important, and alongside STEM learning we will still need to focus on the humanities and on how humans interact with machines.

Also addressing the event was IBM Australia and New Zealand Head of Automation, Deborah Walker.

“The rise of artificial intelligence is directly related to the phenomenon of data, which is effectively the world’s new natural resource,” Ms Walker said.

She said AI has the potential to help us defeat cancer, reverse climate change and manage the complexities of a global economy. However, some worry about the abuse or misuse of the technology and the potential automation of jobs.

 

Ms Walker pointed out that automation is not new, citing the impact of the steam engine on jobs and transport. While some jobs were replaced, the steam engine also created a new profession: the engineer.

We need to make sure that the shift of processes from humans to technology is handled equitably, she said.

“Restless reinvention of technical skills will be the norm. In addition, all workers will need the adaptive skills to be able to work alongside increasingly capable machines,” she said.

Ms Walker said we need to focus on “STEAM”, with the Arts added to STEM, as future work will require people who can challenge ideas, create, adapt and be resilient.
 