
By Alice Trend 11 July 2022 6 min read

Making artificial intelligence (AI) real and practical is the motto of the National AI Centre's (NAIC) newly launched Think Tanks, headed by Judy Slatyer, Professor Didar Zowghi, and Aurelie Jacquet.

“Our NAIC Think Tank leaders bring a wealth of expertise and a track record of leadership impact,” said Stela Solar, Director of the National AI Centre.

“We are so excited to have them lead the AI Think Tanks, to help push the boundaries and explore what is possible with trusted AI development, adoption and innovation in Australia.”

The Think Tanks are made up of a diverse representation of Australia’s AI Ecosystem, with membership from small and medium business across sectors, leading international AI experts, Government, First Nations Australians, humanitarian and workers’ rights organisations, the Business Council of Australia, and leading AI research organisations.

Each of the Think Tanks will provide considered advice, recommendations, and counsel in support of the NAIC's charter to create trusted and practical paths to AI adoption and innovation in Australia.

Meet the National AI Centre Think Tank Leaders

In this short series, you’ll get the chance to hear more from each of the Think Tank leaders.

First up, meet Judy Slatyer, who will spearhead the Responsible AI Think Tank. Judy is a highly respected leader with experience across start-up, corporate, government and not-for-profit settings in Australia and internationally. Previously CEO of the Australian Red Cross and COO of the Worldwide Fund for Nature, Judy is Entrepreneur-in-Residence at CSIRO's Data61, a member of the Net Zero Emissions and Clean Economy Board and The AI Coalition (part of The B Team Australia), and holds Non-Executive Director roles at Booktopia, WWF-Australia, Natural Carbon, and Talent Beyond Boundaries.

"With great power comes great responsibility," explains Judy, invoking the well-known adage to describe the importance of Responsible AI.

"AI is capable of the most amazing improvements in medicine, science, the environment, customer service, productivity, new businesses, new careers - the list goes on.

"Combined with big data, the ability of AI to predict, assess, review, analyse and compare is like never before. AI has tremendous potential and can benefit many fields.

"Yet, as we know, by the very nature of how machines become 'intelligent', they're also able to replicate and scale the worst of human biases, to discriminate with laser-like efficiency and scale, to exacerbate unfair and unjust outcomes, and, most concerningly, to generate unintended consequences despite best intentions.

"AI is also changing how we work and live.

"For all these reasons, that adage is highly relevant today, and highly applicable to artificial intelligence."

Your resume and experience show your passion for humanitarian issues and the environment, how do these apply to Responsible AI?

"Working in humanitarian and environmental fields, I've seen every day how technology can make significant improvements in the environment or in people's lives," said Judy.

"Drones can deliver blood in remote areas or help to prevent illegal wildlife trade. Digital technologies are bringing transparency to supply chains, which can help tackle problems like modern slavery and enable small producers to get a fair share of returns from the full value chain.

"Greater predictive and spatial capabilities allow us to predict the impact of natural disasters and reduce the risk to lives by taking action earlier," explained Judy, who is also a member of the Climate Leaders Coalition.

"However, despite all these wonderful examples, if not applied responsibly, these technologies can also worsen the situation for those experiencing vulnerability and cultural alienation, or create environmental havoc faster and more destructively."

Judy explains that this is not a new story: it has always been this way with emerging technology. When cars replaced horses and carts, new dangers arrived, requiring novel processes, training, and governance.

"My personal interest in Responsible AI was triggered by seeing the huge potential it has to solve the biggest challenges facing humanity. I also read a sobering book by Virginia Eubanks called Automating Inequality, which was a call to action… if I can do something, I will."

What does adopting Responsible AI do for industry – what are the benefits? 

Trust and integrity are the central underpinnings of long-term business success. Edelman's most recent Trust Barometer put it this way: "We have studied trust for more than 20 years and believe that it is the ultimate currency in a relationship that all institutions - business, governments, NGOs and media - build with their stakeholders."

In an AI world, where decisions made by machines increasingly impact the lives of millions, even billions, of people, the importance of building trust will only grow.

“Customers, employees, investors, and stakeholders are going to be asking - can decisions made by machines be trusted? Are they fair? Are they explainable? Are they accurate? And who's accountable?” asked Judy.

“How industry embeds and uses artificial intelligence is going to be critical to building that trusted relationship or destroying it.”

Judy expressed her pleasure in seeing so many companies starting their AI journey, but stressed that, as a nation, Australia is already somewhat behind other countries, as Australians can be quite ambivalent about whether they trust AI.

"A 2020 survey found that the public are most confident in Australian research and defence organisations to develop and use AI, and to regulate and govern it, and least confident in commercial organisations," said Judy.

“By bringing all these parties together, and proactively tackling these issues, Australian business has the potential to become known - maybe even famous - for the quality and responsibility of its AI deployment.”

How can companies operationalise Responsible AI? 

“This is such a big question,” exclaims Judy.

“It varies depending on sector, stage of digital capability, maturity in customer engagement, talent and people skills.  

“There are many guides on how companies should approach the deployment of Responsible AI and increasingly there are platforms and software offerings which have the core requirements built in.  

“There are also agreed principles focussing on essential attributes such as transparency, fairness, explainability, and accountability,” said Judy.

The Responsible AI Think Tank will consider and develop all these aspects and focus on the identification and acceleration of cultural and leadership attributes that can facilitate adoption of Responsible AI in Australia.

How will the Think Tanks bring people together to set the agenda for responsible AI in Australia, and how is this different to what’s been done before? 

"Stela Solar, the Director of the National AI Centre, is leading an approach that is about listening, including, learning, and pragmatic action, which flows nicely into the work of the Think Tanks," said Judy.

"With a focus on having people with a wide variety of backgrounds and skills contributing to the dialogue at each Think Tank, we will develop ambitions for each year and meet virtually to work towards delivering agreed outcomes."

What can we learn from other countries around the world in this space?  

“Just as we have much to contribute – like our significant expertise in responsible AI - we also have much to learn,” said Judy.

India is building an ecosystem of institutions and companies dedicated to responsible AI, and is facilitating discussions among Southeast Asian nations about coming together to promote and regulate AI in the region.

Equally, South Korea is emerging as a global leader in successfully harnessing trustworthy AI by making human-centred design a key element of its 'New Deal' strategy.

Many countries are working on how best to embed responsible AI in regulation and there are lessons to be learned from large global corporations and the work that they have done in this space.

“At home, we can learn from important initiatives like the Indigenous Data Sovereignty principles for embedding Responsible AI,” said Judy.

“Our focus will be on how Australians and Australian industry can learn about, embed, and hold each other to account to use AI in a way which is responsible. Coming together as a nation will allow us to make the most of the great opportunity in front of us.”
