
10 January 2023 · 10 min read

Most people interact with artificial intelligence (AI) every day. Yet many Australians report not understanding or trusting it. Our new podcast, Everyday AI, aims to challenge this by exploring how AI is shaping our everyday lives. The six-part series features experts from across the globe sharing how AI is changing the game for creative industries, health, conservation, sports and space.

To launch the podcast, we put the call out on Instagram for your questions. Our AI expert and host of the series, Professor Jon Whittle, answered.

Jon is the Director of our Data61, which is the largest collection of AI researchers and engineers in Australia. He's worked in AI for about 25 years.

"One of the things I've noticed is there can be a lot of hype around the term artificial intelligence. We've got Hollywood depictions of robots coming to take over the planet, and so forth," Jon said.

"So we've launched this new podcast, which is really looking to do two things. Firstly, to show that AI is not going to take over the world in that way. And secondly, to show it has real positive benefits to society. And it's actually being used here and now. It's not just a futuristic technology."

So, let's get some answers from Jon to your intelligent questions.


Should we be afraid of AI?

I think the answer to that is yes and no. There's no doubt that AI can do amazing things for society. We've got lots of examples of that in the podcast.

To give you one example, I interviewed a student nurse who wears a smartwatch. She'd been wearing it for about three years when the AI algorithms on the smartwatch noticed a change in her heart rate patterns and flagged it with her. She went along to her GP and explained this, and it turned out she had a pretty serious thyroid condition. In fact, half of her thyroid had disintegrated.

Now that's not to say a GP wouldn't have picked that up. But I think the smartwatch picked it up, perhaps more quickly than a GP might have if you were just going in for a regular visit. So that's just one example of where AI can really make the world a better place.
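The podcast doesn't go into the watch's exact algorithm, and it isn't public, but the basic idea of flagging a sustained shift away from someone's baseline can be sketched very simply. Everything below, including the function name, the thresholds and the toy data, is made up for illustration.

```python
import statistics

def flag_heart_rate_shift(daily_resting_hr, baseline_days=60, recent_days=14, threshold=2.0):
    """Flag a sustained shift in resting heart rate.

    Compares the average of the most recent days against a longer-term
    baseline, measured in baseline standard deviations. All numbers here
    are illustrative, not what any real smartwatch uses.
    """
    baseline = daily_resting_hr[-(baseline_days + recent_days):-recent_days]
    recent = daily_resting_hr[-recent_days:]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    shift = (statistics.mean(recent) - mean) / stdev
    return abs(shift) > threshold, shift

# Toy example: a stable baseline around 62-64 bpm, then a jump to ~72-74 bpm.
history = [62 + (i % 3) for i in range(60)] + [72 + (i % 3) for i in range(14)]
flagged, shift = flag_heart_rate_shift(history)
print(f"flagged={flagged}, shift={shift:.1f} standard deviations")
```

Real devices combine far more signals and far more careful statistics, but the principle is the same: learn what normal looks like for one person, then flag a persistent departure from it.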

On the other hand, there are also examples of AI gone wrong. One of the classic examples is a system called COMPAS, which was used by parole boards in the United States to predict how likely offenders were to reoffend. It turned out that the system was racially discriminatory. So, things can certainly go wrong.

I think we have to get the balance right. And we have to figure out how we can build these technologies in a way that does positive things and doesn't do negative things.

Is AI going to become more intelligent than humans?

That's an interesting question. On the podcast, we interview Alison Gopnik, a professor at the University of California, Berkeley, who is a child psychologist. She studies how children and babies learn, and she's also looking at artificial intelligence. The point she makes is that they're very, very different types of intelligence.

Artificial intelligence, or at least the most popular form of it nowadays, is essentially large-scale statistical pattern matching. These algorithms take lots and lots of data that you give them and look for patterns in that data.

So, for example, if you want to train an AI system to recognise cats in images, you give it lots and lots of images of cats. And then eventually it will be able to take a new image and say, “that's a cat” or “that's not a cat”.
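To make that concrete, here's a toy sketch of that kind of training. Real systems use deep neural networks trained on millions of real photos; in this stand-in, random feature vectors from scikit-learn play the role of images and their "cat" labels, so the sketch runs on its own without any dataset.

```python
# A toy stand-in for "show it lots of labelled cat / not-cat examples".
# Real systems train deep neural networks on millions of real images;
# here synthetic feature vectors play the role of images.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each 64-number vector is an image, labelled 1 = cat, 0 = not a cat.
X, y = make_classification(n_samples=2000, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" is just fitting a statistical model to the labelled examples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Given a new "image", the model answers: cat or not a cat?
print("accuracy on unseen examples:", model.score(X_test, y_test))
print("one new example:", "cat" if model.predict(X_test[:1])[0] else "not a cat")
```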

But these AI systems don't tend to generalise very well. So, if you then gave it pictures of dogs, it wouldn't be able to recognise them as dogs. According to Alison, that's the direct opposite of how children learn. Toddlers can look at maybe one or two cats and generalise from that. Show them another cat that's a completely different colour or shape, in a completely different context, and they're still able to recognise it as a cat.

So, I think these two forms of intelligence are different. I would see them as complementary rather than one taking over the other.

Is AI going to take our jobs?

The key word here for me is collaboration. AI is already changing the nature of work, but I don't know that we need to be completely dystopian about it. There probably will be some jobs that will no longer exist. But I think in most cases it will become a collaboration between the AI and the human.

To give you an example, there's a podcast episode on AI and health where I interview Helen Fraser, a radiologist at St Vincent's Hospital. Her team has been doing some really interesting work on getting an AI to automatically detect breast cancer in mammograms. She's shown that the AI system is actually 20 per cent more accurate than radiologists. But that doesn't mean radiologists are out of a job. What it does mean is that radiologists can be freed up from the run-of-the-mill cases. There are always going to be more difficult cases that require human expertise. So when they deploy that system, the AI will work alongside the human radiologists. That's the kind of collaborative intelligence I mean.

What's the most exciting thing about AI?

The stuff I'm really excited about now is what is called generative AI. This is AI that arguably can be creative.

There have been huge advances in that field, particularly over the last 12 months. There are now systems that can generate art for you, generate music for you, and do other creative things.

In fact, we got an AI to generate the theme song for the podcast, which was fun. It was really a collaborative endeavour, whereby a human musician fed snippets of music we thought would be suitable for the podcast into the AI system. The AI system then took those and created multiple pieces of music. But the human musician would say things like, “I like that bit”, “I don't like that bit”. Then they'd go through a collaborative process with the AI, coming up with the podcast theme together.

What do the next few decades hold for AI?

It's hard to say, of course, because it's the future. And if one thing is difficult to predict, it's the future. But I certainly think the last 10 years have been a story of larger and larger data sets, and of AI algorithms that can deal with that scale.

This has led to systems like ChatGPT, which came out recently. This is where you can go online and have a conversation with an AI, and it almost feels like you're having a conversation with a human. They're very impressive, very sophisticated systems. But, at the end of the day, they're only looking for patterns in data. They don't have any knowledge about what they're doing. That's why there are lots of examples of ChatGPT getting facts wrong.
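A vastly simplified way to see "patterns without knowledge" is a word-level Markov chain. It's nothing like ChatGPT in scale or architecture, but it makes the same point: the text it produces comes entirely from statistics of what it has seen, with no understanding of what the words mean. The tiny corpus below is invented for the example.

```python
import random
from collections import defaultdict

# A crude word-level Markov chain: it generates text purely from
# statistical patterns in its training text, with no knowledge of
# what any of the words mean.
corpus = (
    "ai can detect patterns in data . ai can not understand the data . "
    "people can understand the data . people can detect patterns too ."
).split()

# Count which words tend to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate text by repeatedly sampling a likely next word.
random.seed(1)
word = "ai"
output = [word]
for _ in range(12):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
```

Run it and you get sequences that look locally plausible but can assert things the source text never meant, which is a crude analogue of a large language model confidently getting a fact wrong.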

I think there's going to be a lot of interest over the next 10 years in the community around how we can actually put knowledge into these systems. If we could combine those two different kinds of approaches, we could have something really impressive.

Will AI replace doctors and nurses?

I see it more as a collaborative endeavour. Going back to the AI and health episode of our podcast, there's a CSIRO researcher called David Ireland, who's been working on chatbots in a health and wellbeing context for quite a long time. For example, chatbots that can talk with you and analyse your speech patterns as you talk are really useful in speech pathology. Chatbots that can talk with children with autism spectrum disorder and use that to help them with social anxiety are another example. None of these are necessarily replacing the experts; they're freeing up the experts to do the more interesting and less run-of-the-mill tasks.

What's being done to prevent problematic biases in AI?

As mentioned, there have been lots of examples of these kinds of problematic biases coming up. The good news is that over the last five to 10 years, the AI community has really been looking at that problem in depth.

Lots of countries, for example, have now brought out AI ethics guidelines. Australia has one. In fact, Australia was one of the first countries to bring out a set of these guidelines, back in 2019, with the Australian AI Ethics Framework. It has eight high-level principles that AI developers should think about. They're things like: is the AI system explainable? That is, can it explain why it made a certain decision? Is it contestable? If an end user is using an AI system, they should know they're using one and that it's an AI decision, so they can contest the decision if necessary.

I think there's still some way to go, as these ethics principles are quite high-level. In fact, some of the work my own research team is doing is asking how you can translate those principles into much more concrete things that AI experts and developers can use in their day-to-day work.
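As a purely hypothetical illustration of what "more concrete" could look like, a team might turn a few of the high-level principles into a checklist they answer before releasing a system. This is not CSIRO's tooling or the official framework, just a sketch of the idea.

```python
# Hypothetical sketch only: one way high-level ethics principles could
# become concrete checks a team answers before releasing an AI system.
RELEASE_CHECKLIST = {
    "explainability": "Can we explain why the system made a given decision?",
    "contestability": "Do users know it's an AI decision, and can they challenge it?",
    "fairness": "Have we measured error rates across different groups of people?",
    "accountability": "Is a named person responsible for the system's outcomes?",
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the principles that still need work before release."""
    return [name for name in RELEASE_CHECKLIST if not answers.get(name, False)]

outstanding = review({"explainability": True, "contestability": False})
for name in outstanding:
    print(f"Unresolved: {name} - {RELEASE_CHECKLIST[name]}")
```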

How quickly will AI actually change the way we work today? What areas are likely going to be the most affected?

AI has already changed the way you work and play today. Even simple systems we're all very familiar with, such as automated mapping that gets you from place A to place B, or recommendations from your video streaming service about what you might watch next, are using AI. Even search engines make heavy use of AI. And we're starting to see AI move beyond those run-of-the-mill tasks.

Each episode in the podcast covers one industry that’s seeing some impressive changes, whether that's healthcare, sport, space or conservation. For example, there are AI systems now that will listen to birds around you and tell you what they are. That's really useful for conservationists.
