The greatest value of AI lies in areas of society with greatest need

AI will have its greatest impact by solving the big issues that face our communities, states and countries, Microsoft Australia National Chief Technology Officer, Lee Hickin, said during an interactive CEDA livestream.

Speaking on the uses of AI as a force for good, he said one of the greatest limitations, both in Australia and globally, is restrictions on data sharing and open data frameworks.

“I take a personal responsibility for ensuring that the AI solutions we work on, we deliver and we develop are considered around an ethical and principled framework and fundamentally underpinned by this idea that AI needs to be democratised,” he said.

“If AI is going to be used for good, everybody needs to have access to AI, it needs to be equitably distributed, trusted by all, done in an ethical way and largely understood, so we drive this program of democratisation of AI.

“Making sure that AI is not just locked up in complex, difficult tools that only data scientists and those that have access to high-end computing services can access, but it's really more equitable and available to all through the development of more flexible and usable tools that we all have potentially the capability to use.”

Mr Hickin said investment in AI skills often focuses on the scientific, analytical side, but that there is a huge opportunity to invest in leadership’s understanding: how boards, ministers or service workers understand the ethical capabilities and constraints of AI.

He said that while we are making strides in AI and machine learning models, there are further constraints around the data that feeds these models.

“Often it's old, often it's driven by biases that were relevant at the time of the data capture, and the biggest issue we’re seeing today is this construct around data drifting.

“As you teach AI with a data set, it learns, and then as it learns, different data comes in and it changes its perspective based on that data, so you end up with this drifting issue,” he said.
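To picture the drift Mr Hickin describes: a model is trained on one snapshot of data, and the data arriving later gradually stops looking like that snapshot. A minimal, illustrative sketch (with made-up numbers rather than anything from the livestream) of how a team might monitor that shift for a single feature:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough measure of how far the 'actual' (incoming) data has drifted
    from the 'expected' (training-time) data for one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical feature: values the model was trained on vs. values arriving months later.
rng = np.random.default_rng(0)
training_data = rng.normal(loc=50, scale=10, size=5_000)
incoming_data = rng.normal(loc=58, scale=12, size=5_000)

print(f"PSI = {population_stability_index(training_data, incoming_data):.2f}")
# A PSI above roughly 0.2 is commonly read as a signal to re-validate or retrain the model.
```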

Speaking on the impact of AI on the workforce and learning, Swinburne University Director of the Centre for the New Workforce, Dr Sean Gallagher, said that these challenges are “both sides of the same coin”.

“Most work in most organisations today is routine and predictable tasks, and because it's routine, that means it can be codified; that means it's possible that algorithms could be written, and with increasing amounts of better data, those algorithms can be turned into AI,” he said.

“We're seeing AI disrupt work in two ways. The first is that AI is displacing tasks where AI is actually doing the work – think about call centres, chat bots, all the way through to cancer diagnosis.

“The key thing here is that no matter how menial or sophisticated the work, if it's routine and predictable, it’s vulnerable to being displaced by AI.

“Modelling shows that 22 per cent of jobs in Australia are going to be impacted by AI over the next five years.

“Looking to education, so that's the supply side of the labour market: where tasks are relevant to work, skills are relevant to talent, so skills are kind of the supply-side counterpart to tasks.

“AI is being increasingly widely used in education and helping people learn better and I'm blown away by how AI is baked in almost everywhere in terms of higher education, but we're using it in particular on the school side to better understand the skills we need as workers, as the economy continues to transform.”

An area leading in the application of AI, according to Presagen Co-Founder and CEO, Dr Michelle Perugini, is healthcare.

“The healthcare sector is actually quite unique because it is a highly regulated industry already and it already operates under a very strong risk and regulatory framework, so in some ways I see the healthcare framework being really reminiscent of how AI is going to be regulated into the future,” she said.

However, Dr Perugini said some of the limitations around development and expansion of AI stem from use of data.

“Challenges that are going to impact not only on AI development in healthcare but also more generally in other industries, some of those challenges that I see kind of fall into two different classes,” she said.

“One is around the actual building of AI.

“The inherent biases that exist within data sets are a huge consideration here, and one of the big challenges that we've had to overcome, and many others will also, is access to good quality data.

“What happens in the healthcare sector is there's a lot of restrictions around data privacy and sharing data, so how then do you build an effective, scalable AI that is built on a diverse enough data set if you can't move data from different countries and different clinical environments into a central location to build the AI? Very difficult.

“There's data privacy considerations, there’s data ownership issues, who owns the data? How willing are they to give that data?

“There's a whole range of issues that come up in the healthcare sector that will also become relevant in other industry sectors that are looking to build AI.

“There's this big piece around education of society, around the potential of AI to positively improve things like healthcare outcomes, things like education, agriculture; all of these different industries have an enormous potential for AI to really have a positive impact.

"There needs to be that protective risk framework and the right narrative that allows people to understand exactly how their data is being used, for what purpose, and for the greater good of the society in general.”