
Trust is needed to get the benefits of Artificial Intelligence

If we are to achieve the benefits of Artificial Intelligence (AI), we have to get people comfortable sharing their data, Microsoft Corporation Vice-President of Regulatory Affairs David A. Heiner has told a CEDA audience.

Speaking in Melbourne, Mr Heiner explored trends as well as risks and considerations in AI.

“We really need to build AI systems in a way that earns trust,” he said.

“The promise of AI…is that it can enable us to make better decisions and that's a really big idea, because we're making decisions all day long in almost every field of human endeavour.

“One of the interesting insights of cognitive psychology over the past 10 or 15 years is that people are actually not that good at making decisions; we have all kinds of implicit biases.

“Computers don't have those kinds of biases. Computers can reason over vast amounts of data. They never forget anything and they're really good at probabilistic reasoning. 

“What computers don't have is empathy; they don't have an inbuilt sense of fairness.

“What we really want to do is marry the best aspects of computers with the best aspects of people.

“(But) there is a fear of the unknown…what are these computers? How do they make decisions? What's this all about?

“There's a set of issues that AI raises, and we need to be very thoughtful about how people interact with these systems.

“One is just the safety and reliability of these systems.

“Self-driving cars are the obvious example, but if you have a surgical robot or something, you want that to be safe. If you're making lending decisions, you want to make sure your system is reliable.

“Just as a basic practice, we want these systems to be fair and non-discriminatory. You might ask yourself, how could a computer discriminate?

“You wouldn't think it would have the kinds of biases that people have, but discrimination could arise in two ways.

“You might have a system that's built on data that's not representative of society as a whole.

“For instance, if you built a facial recognition system or an emotion detection system on a hundred thousand images of Caucasian faces and then tested the system on Asian faces, it likely wouldn't work as well.
 
“Or it might be that you have data that is representative of society as a whole but to the extent that society reflects structural racism or sexism, that will be reflected in the data.

“We have to think about, well, do you want to change the algorithm to present a more positive image? These are value decisions that we need to make as a society.
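To make the first failure mode concrete, one pre-training check is to compare each group's share of a training set against a reference share for the population the system will serve. The sketch below is a minimal Python illustration; the group labels, counts and reference proportions are invented placeholders, not real demographic data.

```python
# Minimal sketch: flagging a non-representative training set before
# training. All group labels, counts and reference shares below are
# invented placeholders, not real demographic statistics.
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return, per group, the difference between its share of the
    training sample and its share of the reference population."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in population_shares.items()
    }

# A face dataset heavily skewed toward one group (hypothetical numbers).
sample = ["caucasian"] * 90_000 + ["asian"] * 6_000 + ["other"] * 4_000
reference = {"caucasian": 0.60, "asian": 0.25, "other": 0.15}

for group, gap in representation_gaps(sample, reference).items():
    # A large negative gap means the group is under-represented.
    print(f"{group}: {gap:+.1%} relative to reference")
```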

“What can we do to address this? The first thing we need to do is attract more diverse people to the computer industry, and that's something we need to do for a wide variety of reasons.

“These systems are mostly being built by young white men. We need to attract greater diversity, and we need to develop analytical techniques that tell AI researchers: do you have a good data set to train your system, and if you do, is the output nevertheless biased?
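One analytical technique of the kind he describes is a demographic-parity check on a trained system's outputs: compare the rate of favourable decisions across groups. The sketch below is an illustrative Python example with made-up lending decisions; it is not a Microsoft tool or any specific method from the research community.

```python
# Minimal sketch: a demographic-parity check on model outputs.
# The predictions and group labels are made-up lending decisions
# (1 = approve), purely for illustration.
def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group approval rates), where gap is the spread
    between the highest and lowest approval rate across groups."""
    tallies = {}
    for pred, group in zip(predictions, groups):
        n, pos = tallies.get(group, (0, 0))
        tallies[group] = (n + 1, pos + (1 if pred == 1 else 0))
    rates = {g: pos / n for g, (n, pos) in tallies.items()}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)                      # {'a': 0.8, 'b': 0.2}
print(f"parity gap: {gap:.0%}")   # 60% here: a red flag worth investigating
```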

“There's a whole community of academic research going on right now called Fairness, Accountability and Transparency in Machine Learning, and Microsoft is very supportive of the work of that group.

“Some of the leading people in that group are actually with Microsoft Research.

“We're working…(to) develop guidelines for developers who may not be thinking about these kinds of questions. Here are the steps you need to take to build a system that's fair to everybody.”

Mr Heiner also discussed the issue of privacy.

“With the digitisation of our lives, we have all this data around. How is it being used?” he said.

“It raises the stakes on both sides. The benefits are greater than ever before…but then of course we're disclosing more sensitive data about ourselves.

“The whole privacy paradigm today is based on ‘you make a decision to share something and get some benefit’.

“But with an AI system, things can be inferred about us that are true but maybe we had not chosen to disclose. 

“Facebook could look at someone's…Facebook feed and figure out their sexuality.

“Spotify in the US can look at what music you listen to and figure out if you're more likely to support Trump or Clinton. These things correlate.

“We need to be very thoughtful about privacy as well and part of that is building the best possible controls so that people can make decisions. 

“We need to be very transparent about what these AI systems are doing and that's another challenging topic because they're very hard to explain.”

Mr Heiner said an organisation called the Partnership on AI has been created to work through these questions with Microsoft, IBM, Google, Apple, Amazon, Facebook and others.

“The idea is to set up working groups across these companies with civil society, with academics, and think through what are the best practices relating to privacy, transparency, security, reliability, accountability, develop them…and maybe over time some of that could even be standardised,” he said.
 