
By Alison Donnellan 24 November 2019 3 min read

Artificial Intelligence (AI) is one of those technologies, like advances in gene editing or quantum computing, which has the power to change life itself. It has the potential to transform economies, unlock new societal and environmental value and accelerate scientific discovery. With AI estimated to generate $13 trillion in economic activity globally by 2030, the global race to lead in AI is well and truly underway.

Below, we've curated a series of video recordings of panels and interviews on artificial intelligence (AI) from Australia's premier technology and science showcase, D61+ LIVE 2019.

Moderator: Brad Howarth

Speakers: Dr Simon Barry, Acting Director, CSIRO's Data61; Dr Tim Finnigan, Director, CSIRO Energy; Associate Professor Denis Bauer, CSIRO e-Health Research Program; Dr Terry O'Kane, CSIRO Climate Science Centre; Dr Jen Taylor, CSIRO Agriculture and Food

About: CSIRO scientists discuss how AI is already achieving impact as a force for social and environmental good, from predicting the climate in the coming decades and using AI to better understand how and why droughts occur, to identifying the gene that causes motor neurone disease. This panel discusses the fascinating possibilities and outcomes already emerging as the result of AI research at the national science agency across the climate, health, energy, agriculture and biosecurity domains.

Introduction
0:06
Alright, ladies and gentlemen, welcome back to the final panel session. Good to see you all sticking around, are you still all energetic
0:12
and enthusiastic? [cheers] That group is, we like them, who can do better?
0:18
Are we still all energetic and enthusiastic? [loud cheers] Love it, let's keep that
0:23
going for the next hour. I know it has been a big two days, there has been a huge amount of information to take in; hopefully it's been stimulating your
0:29
thought processes; you've got a ton of new knowledge to take back to your organisations afterwards. I'm not going to say we're saving the best for last
0:36
because we haven't actually delivered on that yet and I don't want to set the bar too high, but I'm certainly excited about what we have to offer today. So for those
0:42
of you who weren't in the session that I chaired this morning, my name is Brad Howarth, I'll be your moderator for this panel session where we are here to discuss AI
0:49
for social and environmental good. So let me be very clear, if you thought you were
0:54
here for the AI for social and environmental evil panel, you are at the wrong event. Everything we're going to be talking about here now is about the
1:02
positive, wonderful things that AI is doing out in the world. My group of scientists here will be doing that work for me, I just have to ask the questions,
1:10
they're the ones who do all the heavy lifting, which is good. We'll be discussing how AI is already achieving impact as a force for social and
1:16
environmental good, from predicting the climate in coming decades and using AI to better understand how and why droughts occur, to identifying the gene
1:24
that causes motor neurone disease. My panel will discuss the fascinating possibilities and outcomes already emerging as a result of AI research at
1:33
the National Science Agency across climate, health, energy, agriculture and biosecurity. Like I said, I don't know very much about
1:40
any of this, which is why I'm so grateful that you are all here. Let me tell you a little bit about my panellists. At the far end we have Dr. Terry O'Kane, the
1:47
Principal Research Scientist in the CSIRO Climate Science Centre. Terry's applying machine learning in climate science to understand future
1:55
climates and how and why extreme events like droughts occur. Associate Professor Denis Bauer is a Principal Research Scientist in the CSIRO e-Health Research
2:05
Program, she applies machine learning in biomedical research to understand the origins of disease and help develop fundamentally new treatment regimens
2:13
such as gene therapy. Then we have Dr. Tim Finnigan, the Director of CSIRO Energy
2:18
and an advocate for the increased use of digital simulations in energy networks and the use of AI to optimise energy resources across the network. Then we
2:27
have Dr. Jen Taylor, Deputy and Science Director for Agriculture and Food at CSIRO, who leads a research group using genomic and computational science for
2:35
improved crop performance. Then finally next to me, Dr. Simon Barry, Acting Director for CSIRO's Data61, previously also Research Director of the
2:44
Analytics and Decision Sciences Research Program. I feel like quite a fraud, I'm the only one who is not a doctor, and probably never will be either.
2:53
Alright, let's dive into it, so maybe it's a good opportunity perhaps to tell us a little bit about some of the work that
2:59
you're doing. Maybe Denis, I might start with you, if you can talk a little bit perhaps about the work you're doing in using
3:05
machine learning in genomics, please. Absolutely, thank you, and hi everyone. So
Denis
3:11
my research group focuses on using machine learning to understand the secrets that are hidden in the genome. You see, the genome really holds a
3:20
blueprint for every single cell and organ in our body, and as such, holds vast
3:25
amounts of information about our future disease risks and the mechanisms that keep us healthy. So from our perspective,
3:34
understanding all of this is really exciting and I'm really passionate about doing this, but at the same time it's also quite daunting. There are three
3:44
billion letters in the genome and every single one of them could be potentially contributing to disease. So for example, one misspelling in the
3:54
genome can cause life-threatening diseases like Huntington's or ALS.
3:59
Therefore, understanding all of this is no longer a job for eyeballing or simple statistics, but really needs sophisticated machine learning, and I
4:07
hope to share that with you today. Now Terry, I live in Melbourne, and it's hard enough for the Bureau of Meteorology to tell me what's gonna be
4:13
happening tomorrow, let alone 10 years from now, so tell me a little bit about the work that you're doing then, using artificial intelligence, machine learning,
4:20
to model climate when we're looking forward over periods of decades. Yeah, so it really is blue sky research. That was pun intended? No - well, yes - but we
4:32
do two things, we try to reconstruct the climate over the last sixty years.
4:37
We think we can go back and have sufficient atmospheric, or estimates of
4:43
atmospheric, observations going back that far. Because we have a
4:48
paucity of ocean observations, we can really only get a reasonable estimate of
4:55
the ocean from 2004 onwards, where the Argo profiling arrays came into play,
5:00
and so it's highly uncertain, so we treat this in a probabilistic sense,
5:08
so rather than one estimate of the climate from, say, 1960 to present, we
5:13
treat it as an ensemble, so we have a hundred realisations of this to try and get some estimate of the uncertainty in what we think the climate over the past
5:23
several decades was, and we use that information to then run that same set of
5:30
models forward into the future and again try to ascertain some estimates of the
5:37
uncertainty and also have, if you like, a four dimensional probabilistic estimate
5:44
of what the current climate is and what we think it will be. So where ML - machine learning - comes into play, we have these vast data sets; we really want to reduce
5:54
it down to just the important causal relationships
5:59
between the various modes that live within the climate system, so that's
6:04
our approach.
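As a minimal illustration of the ensemble idea Terry describes (a hundred realisations of the past climate, summarised probabilistically rather than as a single estimate), here is a toy numpy sketch; the random-walk "realisations" are invented stand-ins for real model runs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years = 100, 60  # 100 realisations spanning ~1960 to present

# Toy stand-in for 100 model reconstructions: a shared warming trend
# plus member-specific noise that accumulates over time.
trend = np.linspace(0.0, 0.8, n_years)                        # degrees C
noise = rng.normal(0, 0.05, (n_members, n_years)).cumsum(axis=1)
ensemble = trend + noise                                      # shape (100, 60)

# The probabilistic estimate: ensemble mean and spread for each year.
mean, spread = ensemble.mean(axis=0), ensemble.std(axis=0)
print(f"final-year estimate: {mean[-1]:.2f} +/- {spread[-1]:.2f} C")
```

Jen, I imagine that a lot of this would probably tie somewhat to the work that you're doing, given you're using machine learning and artificial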
6:11
intelligence to look at how crops are adapting to climate, so tell us a little bit about the work that you're doing and how you're going about trying to
6:17
understand these changes. Sure, so in agriculture and food we're using machine
6:23
learning and AI really to make decisions, often multiple decisions in a single day, to understand quite a complex system about
6:32
how our food and fibre is grown, so it's something I guess we all take for granted, that we can get food and we can get clothes and we get
6:39
building materials and things like that, but if you think about it, it's actually a very complex system, so what we need to understand to do that successfully is, we
6:46
need to understand the environment that that thing is growing in. So let's take wheat as an example: we've already heard from Denis that wheat
6:55
has its own genome and any single misspelling can cause disease. That wheat grows in soil, and a single gram of soil can contain up to a billion different
7:08
taxa of microbes that do or do not help that wheat plant grow, and then you've got climate, and we've heard from Terry that climate is incredibly blue
7:17
sky, and thinking about how we can do that, so if you put yourself in the
7:23
shoes of a wheat farmer, they've got to make a decision in April when they sow that wheat plant about what its future is going to be. If they've sowed
7:33
it too late and there's not enough rain to help it establish, then they
7:38
lose that crop, and they are betting the farm sometimes. If they sow it too early, then they risk frost in
7:48
five or six months' time, and can you tell me when the frost is gonna hit? So they could lose the crop to frost. So they've got to weigh up those decisions and
7:56
balance that whole system and the reason I really find that fascinating is that
8:02
unlike ever before, we now have so much data about climate, soil, plants, organisms,
8:07
everything like that, to help them figure this out and we really need complex systems to help us work on that, but the other fascinating thing you'll
8:16
find about this is that we've been doing agriculture for thousands of years, it's one of the first things humans did together, other than maybe dancing, Simon...
8:25
Not sure where you're gonna go with that. [laughs]
8:30
So I find it fascinating that we're still learning from this system, it's a very important system, we all need to eat. Tim, I guess the
Tim
8:40
energy network is probably a little bit more modern by comparison to some of the systems that we're dealing with here, so how are you applying AI and ML, or how
8:46
can they be applied in order to do things like give us a more secure and reliable energy network? Thanks, well you would think it was more
8:54
modern but it's actually not quite as modern as we all might expect or hope it to be. One thing it is, is a large machine, so if you think of the
9:04
electricity grid of Australia, it extends from the northern tip of Australia all
9:10
the way down the East Coast to the southern tip of Tasmania by way of the Basslink cable and across to South Australia. Some have called it
9:20
the biggest machine in the world; it really is one large single machine that
9:26
has to operate as a perfectly orchestrated system. It doesn't always, so
9:32
we're all aware of problems that it has faced, it's changing all the time, so the
9:38
way that machine was built and planned in the 60s was from a small
9:44
number of central coal-fired power generators,
9:49
distributing electricity out to the end-users at the consumption point. All that's been completely changed in the last few years, from a few
9:59
dozen power sources to now over 2 million in the network, largely due to
10:04
rooftop solar that's been added. Those rooftop solar systems are often controlled by smart inverters, so all of a sudden we've
10:13
gone from a small number of power generators to literally millions in the system, all trying to feed in, all needing to be controlled and balanced in this massive,
10:22
some say the world's biggest, machine in real time and you would think it was
10:28
more modern than it is, that it was all sort of somehow systematically automated, and if you've seen the Wizard of Oz movie, I'm sure everyone over the years
10:36
has seen that, I kind of picture it much more like the man at the end with the curtain and the levers, pulling the levers to get the system to
10:44
work, and in some ways it's a bit like that still, there's a fault in a
10:51
line or a lot of clouds coming over solar farms and somebody's got to actually make a call to say turn the coal plant up or down or the gas plant
10:59
on or off. Where machine learning comes in is that we've now started to put in place the frameworks for being able to simulate
11:08
that massive machine digitally, heading towards a digital twin; that's work
11:13
that's underway now. If we can get that set up and validated in the
11:21
future, that opens the possibility to apply machine learning to this massive
11:26
machine and resolve all these manual operations that it currently requires.
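Tim's "turn the gas plant up or down" call, automated against a simulated grid, might look something like this toy sketch; all figures and names here are invented, not the real digital twin:

```python
import random

random.seed(1)

def shortfall_mw(solar_capacity, cloud_cover, demand):
    """Toy 'digital twin' step: solar output falls as cloud cover rises."""
    solar = solar_capacity * (1.0 - cloud_cover)
    return demand - solar  # remainder to be met by dispatchable plant

for hour in range(6):
    cloud = random.random()  # stand-in for a cloud forecast
    gap = shortfall_mw(solar_capacity=900, cloud_cover=cloud, demand=1000)
    # The manual operator call, expressed as a simple rule:
    action = "turn gas up" if gap > 400 else "hold gas steady"
    print(f"hour {hour}: cloud={cloud:.2f}, shortfall={gap:.0f} MW -> {action}")
```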
11:32
Big opportunity, that's one example. So we've got four very big problems
11:37
that are being tackled here using this technology, but Simon, this is really just the tip of the iceberg isn't it, in terms of what we're seeing
11:42
machine learning applied to, so can you tell us about a few of the programs that are also running within Data61 that are utilising this technology and where it's
11:49
being put to good use? I think one of the opportunities for Data61 is that we actually work with all the parts of CSIRO, so actually a lot of
11:59
the examples we've had there, Data61 is actually working in those places, so we're bringing that expertise there. I think machine learning,
12:08
it has a major... it's a major opportunity for science, so science has the conceptual side, which is, we have gravity, you
12:16
have concepts and I suppose processes, but you can also generate data and you can actually learn directly from data, and as we get the ability to...
12:24
as we get more data, we can use machine learning to almost learn directly, and the amazing thing about machine learning is it actually lets us
12:30
generate data more efficiently, we can generate data from images, we can generate data from simulations, we can generate data in all sorts of different
12:37
ways, so it's opening up amazing opportunities in lots of places. Examples could be in things like aircraft.
12:45
Aircraft now will have 50 systems in them that generate data all the time; every second they're generating data about their state, but
12:53
what does that actually say about the state of the whole aircraft? So how do you actually learn that, how do you actually deal with that in
12:59
real time? If you get a... in biosecurity, if there's an outbreak of a disease, if you find something in a certain place, what does that
13:05
actually tell us about where it actually came from? Now previously you'd think that's quite a hard... if you think of all the possibilities about how
13:11
something got somewhere, but if you have data about where trade goes to, you have data about where people look, where people
13:18
don't look, you can actually use the power of computation to actually explore all of those scenarios. I think there's lots of places, there's
13:27
conceptual models of how the world works, but as we get data they can be tested, so it's like we can accelerate science, and so there's a huge number of
13:36
places that we're using that. I like that idea of accelerating science because we're really talking about doing things we could never do before, aren't we? So how
13:44
far down the path have we got to, because machine learning and AI is an evolving science in its own right, so in terms of the tools that we've built today to be
13:51
applied to the problems that are being solved here, how much more capacity do we have with regard to developing the tools themselves to take
13:57
the work that this group is doing even further? I think it is a very interesting question because, to some extent, learning and making inference from data
14:06
has probably got 100 years of tradition, and some of that is well established, whereas I think
14:13
just in the last 10 years the power of deep learning, which came from these neural nets which were, I suppose, understood from the
14:20
80s and the 90s, but once you actually brought that together with technology from GPUs - from graphical processing units - which are
14:28
basically coming out of the video gaming industry, that actually opened up whole new opportunities, and so the way technology
14:34
intersects with new ideas is amazing, so I think there is still a long
14:40
way to go in our ability to learn autonomously, our ability to learn through artificial intelligence. I do feel gratified though,
14:47
just on that last statement that all the time I've spent playing video games is actually helping to develop the artificial intelligence space, so it's
14:53
nice to know that that was time well spent. I would encourage you to continue with that, that's very important. Good, thank you. Denis, I want to come back to
Dr Denis
15:00
something that we talked about in the opening, motor neurone disease. Now this is obviously a critical illness that we need to be looking at, so how is the work
15:07
that you're doing, and the work of your colleagues, helping to deal with a disease of that nature. ALS is the motor neurone disease that
15:14
Stephen Hawking had, or that you might be familiar with from the ice bucket challenge, and as such, it's a really heinous disease where, on
15:22
average, the time from diagnosis to death is only two years, and there's
15:27
currently no treatment, certainly no cure. So therefore, we're part of the international consortium, the largest single disease consortium in the world
15:37
due to the prominence of the disease, that really looks at what is the origin of this particular disease, which genes are involved, and how could we
15:46
potentially find treatment - drugs - that could treat this disease. Doing that
15:52
requires an enormous amount of data; as I was saying, we have three billion letters in the genome, each one of them could be contributing to the disease,
16:01
therefore investigating which one truly is associated with that, we need to have
16:06
a number of samples that allows us to build a robust model, so
16:12
Project MinE has 22,000 individuals, which is staggering. Therefore the
16:17
resulting matrix of 22,000 times 80 million features that comes from, on
16:24
average there are 2 million differences between you and the person sitting next to you, so therefore in a cohort of 22,000 that
16:33
averages out to 80 million, so 80 million times 22,000, that's one point seven
16:39
trillion data points, and any machine learning, current or in the past,
16:44
is not designed for actually dealing with that. Therefore, because we were thrust to the forefront of this, we had to build
16:54
our own machine learning model that deals with this ultra-high-dimensional data that others have not dealt with before. So, for example, the Googles and
17:04
the Yahoos of the world typically deal with tens of thousands of features, whereas 80 million is orders of magnitude larger than that, so
17:14
therefore it's nice to know that a little Australian team is at the forefront of big data analysis.
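The scale Denis quotes is easy to check as arithmetic:

```python
individuals = 22_000
variants = 80_000_000          # feature columns across the cohort
print(f"{individuals * variants:.2e} data points")  # 1.76e+12, ~1.7 trillion
```

Terry, the climate, an incredibly complicated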
17:24
system that we seem to be still learning about every day in terms of the complexities and how things inter-operate, how far down the path have you got
17:31
then with regard to building solid and reliable models, and can you see a pathway ahead that takes you to something that will progressively get
17:39
better over time? The difference between doing a climate forecast and a weather forecast is that a weather forecast, you're taking
17:50
a lot of observations now, you're synthesising them into a model,
17:56
assimilating them, and then you're running forward for a relatively short period of time, so the inherent biases or model errors don't have time...
18:07
those errors grow slower than the errors in the initial conditions, the uncertainty in the observations. In the climate forecasting space we have a
18:17
paucity of observations because we really rely on knowing the ocean very
18:24
well and at reasonable resolution, and we don't have observations at depth in the
18:30
ocean, and then we're running the model forward for up to a decade, so the inherent errors in the model manifest relatively quickly within that forecast,
18:41
and they're the real problems with how we forecast, so you become very aware of the limitations of your model. One of the
18:52
approaches that we take is to generate very large data sets and then try to use
18:57
machine learning to pull out what are, if you like, the relevant mechanisms
19:04
within that probability distribution, and then verify them against what
19:11
observations that we have from history. So it's a difficult problem; also, the
19:20
other complicating factor is we actually don't really know the dynamics that drive the major climate teleconnection modes, so if you think of the El Niño
19:30
Southern Oscillation which drives a lot of the large-scale weather in Australia, its relation to something like the Indian Ocean Dipole,
19:40
the causal relationships between these large teleconnections, we only really know them empirically, so machine learning offers us a tool to try
19:51
and do a purely data-driven approach to pull out and
19:56
mathematically underpin those relationships. Tim, you're talking about modelling this huge machine that you described in the notion of building a
Digital twin
20:04
digital twin, so I guess trying to replicate the physical world in the virtual world. If you can build that capability what will it enable the
20:11
energy industry to actually do, how will we benefit from having that digital model in place? One thing is to avoid overinvestment in the future grid,
20:21
so there's a lot of discussion about what the national electricity grid will look like in ten years or 20 years or 30 years, given that it was designed decades
20:30
ago for a different purpose, so once we start looking at the distribution of
20:35
energy sources in that grid, and again, as I said before, we had a dozen or two
20:41
central generators in the past and now we have millions, so coal is still
20:47
there, gas, solar, wind, rooftop solar, hydro, we're adding batteries, adding hydro
20:53
storage, all kinds of different contributions into this network, this
20:59
simulator will be able to help us decide where the poles and wires and transmission systems need to be placed in the future to optimise that system
21:08
without over-investing, without gold plating as they call it, which has happened before without a clear view of the future; we just say, "well,
21:18
there could be a lot of solar farms out there because it's sunny, let's put a whole lot of power transmission lines in now to enable that," only to find
21:27
out that dynamically that's not the best situation, so it will avoid over
21:32
investment and that can be in the billions in savings; that's just one benefit, there's a number of others, so in the situation where you have faults in
21:42
the system or environmental impacts, for example bushfires, so with climate change we're expecting more and more intense bushfires, higher peak
21:52
temperatures, how will a grid system in those environmental conditions operate
21:57
in the future and how can it potentially repair itself if it is integrated with
22:03
some machine learning. If a fault knocks out a certain part of the grid,
22:09
can machine learning help the system repair itself or reroute power so that we don't get a South Australia statewide blackout - we actually have a smart enough,
22:20
well-trained grid that can figure out other pathways for power to flow, so there's a number of benefits to it.
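The rerouting idea Tim sketches can be pictured as a path search over the network graph. A minimal sketch, with an entirely invented toy topology:

```python
from collections import deque

# Toy grid: nodes are substations, edges are transmission lines.
grid = {
    "gen": ["a", "b"],
    "a": ["gen", "city"],
    "b": ["gen", "c"],
    "c": ["b", "city"],
    "city": ["a", "c"],
}

def live_path(graph, start, goal, faulted=frozenset()):
    """Breadth-first search for a route that avoids faulted lines."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in faulted:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # blacked out: no route survives

print(live_path(grid, "gen", "city"))  # ['gen', 'a', 'city']
fault = frozenset({frozenset(("a", "city"))})
print(live_path(grid, "gen", "city", fault))  # reroutes via b and c
```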
22:29
I suppose adding to that, we're doing work with New South Wales Transport where we're looking to build models, and a model is
22:36
a digital twin, as a representation of the Sydney traffic system. Are you telling me you get to build model trains?
22:46
That makes it sound even more exciting. That's the same issue, they're interested in understanding the
22:52
system, they're interested in forecasting demand, they're interested in when something happens, understanding what has happened and how they can respond to it,
22:59
and I think that's the opportunity in many systems, is to find representations which are predictive, because once you have that, they can
23:06
actually be used, they're actionable. Jen, you talked about helping farmers make better
Working with farmers
23:12
decisions, how far down the path have you actually got to delivering that information to farmers then, are we at the point now where you're actually
23:19
able to work with them directly in order to help guide them in terms of what their future plans will be? In some areas; I think farmers have been working hard
23:28
to get efficient in their practices for a while now but there's a lot more urgency in the system. I think everything needs to get more efficient, more
23:37
sustainable so farmers need to make some fairly complex decisions about how they
23:42
use their water, whether they need to worry about pests or pathogens and all that sort of stuff and there are systems around that do that, I guess one example
23:52
I'll point to is the whole planet is thinking about what is the most efficient way to make protein, and that's something that we've been working
24:01
on, and aquaculture in particular, I think as Genevieve said this
24:06
morning, aquaculture is quite old in this country, but just recently aquaculture
24:12
and its sustainability credentials have been growing and growing and growing, so there's a booth outside that's been talking about how they help aquaculture
24:21
farmers monitor the water quality in their pond and then they can make essentially almost instantaneous
24:29
decisions about how to manage that system and prevent waste for their production system, so that's one example. Wouldn't have been able to do
24:37
that without sensors and without a way to interpret that stream of data that's coming in. That's the exciting bit about this; we're not talking about
AI in agriculture
24:46
machine learning and AI because it's just the latest tool, we're talking about being able to do things we could never actually do before, aren't we?
24:52
So I mentioned - sorry, Simon. I was just going to say, alongside the agriculture one, I
24:57
suppose the really exciting part about that is a lot of the machinery within agriculture now generates data, it records what it's done, and
25:06
so if it adds fertiliser it records that, it records what it harvests, it knows where that happened, and so that's exciting in the sense you've now got data that
25:14
you never had before, but the even more exciting part about that is actually the machines are programmable so you can actually write programs to actually get
25:21
the machines to do things, so you can actually run experiments on your farm in an automated way and so your ability to learn about
25:28
your property and your production system, it opens up in ways you've never had before, so it's a really interesting time. There's a statistic of a 70 percent uptake of unmanned
25:38
tractors and things like that rolling around farms and making decisions about water, pests, pathogens, without essentially a farmer walking up and down
25:47
the field trying to make decisions about where chemicals go, and things like that. It's just so much more precise and efficient, which is fantastic for the
25:56
whole system. Terry, I want to come back to you for a moment, maybe pop another couple of questions in before we go out to the audience; I
Drought prediction
26:02
understand you've done some fairly groundbreaking work when it comes to drought prediction as well, so tell us a little bit about that.
26:09
I think it's more that we now understand how little we know about droughts, so I think
26:19
drought's one of those words where everyone intuitively has a sense of what
26:24
they think a drought is, but from a meteorological perspective, if you look back in the history of droughts, they're pretty different, and
26:34
the, if you like, the large-scale drivers that set up the
26:39
conditions for a particular drought at a particular location at a particular time can be quite disparate, so the work that we've been doing is to try and find
26:51
in a mechanistic sense, what are the common underlying drivers going back in time.
26:57
So I wouldn't say that we've succeeded, but we have a better understanding of what we don't know, which is a lot. One of the topics I'd like to get into then is
Machine learning in genomic research
27:07
how AI and machine learning is changing the research project process itself for the better.
27:12
Denis, I'm guessing the kind of work that you're doing, it just would not have been humanly possible to do this prior to having the tools available
27:18
that you have now. Yes, so genomic research in the past has typically been very heavily statistics-based, where you look at each location in the
27:28
genome independently and make some inference about which should be involved in disease or not, but we know that complex diseases
27:35
are multigenic diseases, so there will be multiple locations in the genome working together in order to create the disease. Therefore you are never able to capture
27:45
this on a full genome - 3 billion letters - on the full genome scale by using
27:50
standard methodologies. Therefore we're using machine learning, particularly random forests, which build individual decision trees that look
27:59
at subsets of the data and then ultimately come up with a model that captures the whole 3 billion letters of the genome and makes that
28:08
inference. So from that perspective, really understanding the genome - and we're
28:14
certainly not there yet - but we're on the path of understanding how the genome influences our disease risks, and what we know though, is that almost everyone in
28:23
the audience here will have one mutation or more that should be clinically actionable, so it could be a future disease risk or it could be something
28:32
relatively trivial, like how quickly you metabolise a certain drug. And all of this
28:37
should be readily available to your clinician and to you yourself in order to make the best decision around your healthcare.
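A toy version of the random-forest approach Denis describes, with trees fit on subsets of features and combined into one model. The real cohort is 22,000 genomes by roughly 80 million variants and needed purpose-built tooling; this sketch just uses scikit-learn on a tiny synthetic genotype matrix:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic genotypes: 200 people x 1,000 variant sites, coded 0/1/2
# (copies of the alternate allele). Entirely invented data.
X = rng.integers(0, 3, size=(200, 1000))
# Plant two truly "causal" variants so the forest has signal to find.
y = ((X[:, 10] + X[:, 500]) >= 3).astype(int)

forest = RandomForestClassifier(
    n_estimators=300, max_features="sqrt", random_state=0
).fit(X, y)

# Rank variants by importance; the planted sites should surface near the top.
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top variant indices:", top)
```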
28:46
And all of this knowledge is currently not there, but with machine learning, we're slowly growing it. Good, so I thought we might go out to the audience now because
Audience question
28:54
you've been very patiently sitting there, and I want to give you an opportunity to be involved. Do we have a question? I can see a hand up there, if we can just bring
29:01
the mic down, and once again ladies and gentlemen, if you could just tell us who you are and where you're from please. [mic test] 1, 2, you there?
29:08
Sorry, it's Stephan Wagner from AusIndustry, I'm just going to ask a question which is a little bit around ethics; given that the great
29:16
unwashed of the public have to trust data, and AI in and of itself is already a very
29:23
complex scheme for most in the public to understand, how do you then use
29:29
AI to mitigate against fraudulent activities in manipulating
29:35
data? We've seen it happen in the automotive sector, we've seen it happen in pharmaceuticals, I'm sure there are other case examples where even scientists have
29:43
manipulated data, so is there a chance here that this would actually give more confidence back to the rest of the public, that the data can't be twisted
29:52
and we can believe and engender that trust over the long term?
29:59
That's a great question. I suppose I'll start, science has got, at its foundation,
30:05
the idea of peer review, and I think if you look at the history of the last hundred years of science, it is around understanding the nexus between
30:14
the data that you have and why that lets you tell something about the world, or let you make conclusions that are reasonable or have
30:24
a level of confidence to them. I think some of the ethical problems that you see in machine learning, particularly as used commercially, are
30:31
that people just take any data and do anything to it and then claim that it gives them certain answers, and so at least science itself,
30:39
the science method, has actually been very careful about that. I think actually on the other side, to some extent
30:46
maybe science can become too religious about that in a certain sort of way, so it needs to be open about using the information that's available, and how
30:53
it does it, but I think at least in terms of your question, there is a set of pretty strong protocols within the profession I suppose,
31:03
to look at how you manage that, I don't know what other people think. And also, AI is about being able to deal with multiple
31:08
independent sources of data at the one time, and so I've seen cases where in fact it's the reverse, where AI has picked up
31:15
anomalies in data that, when looked at on its own, seems perfectly reasonable, but when put in the environment, the whole data fabric as
31:24
we say, it just doesn't seem to fit. Now, as Simon says, there's a sense of judgement that
31:30
has to be applied: maybe there's something there, we're looking at complex systems, but also maybe something doesn't fit, so I've actually seen the opposite, where
31:39
it's actually brought truth to the system, when you look at it as a whole. Adding to that, at the end of the day, your data is the point of truth, it's a point of truth
31:49
about the world, and to some extent, the power of AI is having large amounts of data, and so it does test our theories and assumptions and our beliefs,
31:58
and that's where learning comes from. In our space we can take very large high dimensional data sets, use machine learning tools to
The power of AI
32:08
generate some reduced order model, and then do predictions with it and test
32:14
whether we can predict a historical period. Jen, I want to come back to you
Trust in data
32:19
for just a second, I mean the question I guess goes to that of trust in data. You're working with farmers, and farmers are often third, fourth, fifth generation, been
32:28
on the land for a long time; how do you go about having that conversation where you're telling them, "Maybe that's the way you've been doing it but the data says
32:34
maybe you should do it this way instead?" Yeah, it's a very complex question, I think, because you're right, people have been doing this for a while, and often
32:41
looking over the fence is what they trust, what their neighbours have been doing is what they trust, and also I think we need to think very carefully
32:50
about the question of data ownership, so I'm not necessarily going to go there, but I guess when you're talking to someone because you want
33:00
them to change practices and various other things, you do have to build trust. The great thing about being able to handle multiple streams of data, as
33:08
I've said, is that you can essentially build in their knowledge and their contextual understanding and their lived experience,
33:15
and once you do that, and as Terry said, you show that you can predict back in time, then it validates in some sense your predictions going forward in time.
33:24
So the bit I really love about working in data is connecting people with data, and when you do that, they start to see themselves in the data and they start to
33:34
trust it, so it's about grounding it and
33:39
respecting history to predict the future, if that makes sense. Good, do we have another question out on the floor? Looking for a hand, there, go
33:48
ahead please. Hi there, my name is Clement Yoong, I'm from the New South Wales Government Department of Customer Service, I've got
33:55
two questions around climate change. The first one is, what are we doing proactively to mitigate the risk of climate change with AI - Tim, you've talked
34:05
about energy efficiency, balancing the load, those are all great measures, but can
34:10
you elaborate on some examples of how we can use AI to better address climate change risks? And the second one is around the energy consumption for
34:19
increased computational power; I know that's kind of an emerging field and obviously with a lot of IoT, as well as AI, an increase of computer
34:30
power, what's everyone's thoughts about that? Who wants to grab that one? Tim?
34:37
I didn't quite catch all of that, especially the second part, but I think the first part
34:44
was around climate change and, despite the load balancing and automation of the grid, what we are actually doing to combat climate change
34:53
in the electricity system. So one of the things is just integration of renewable
34:58
energy into the grid, that's a massive challenge at the moment and a huge bottleneck to decreasing our emissions from the electricity system.
35:07
Keep in mind electricity is only about a third of the energy in Australia so there's a lot of emissions and climate change impact from other parts
35:15
of the energy sector, as well as transport, industry, and our export energy sector as
35:22
well, but in electricity there is a backlog of large-scale wind
35:29
and solar projects sitting on the fringe of the grid, waiting in a queue to
35:34
get connected and we just don't have enough capability in predicting and
35:43
simulating and designing the grid system to accommodate that input, so
35:48
that's one area that, if we can work on it, will allow a quicker uptake of more
35:54
renewables which obviously would have a benefit for climate change. Also the
36:01
other side of it is not demanding the energy that we don't actually need,
36:07
so energy efficiency, so the further we can go down to the micro scale inside the home basically, and improve the distributed energy resources
36:19
or the demand response at that scale then we reduce the energy that actually
36:24
has to be generated and that'll drop emissions as well. So this
36:30
digitalisation of the whole system will have a net benefit in terms of combating
36:37
climate change for Australia, our own impact. The second part of the question was sort of related to what I was just talking about, the residential scale.
36:47
It was more about the increase in power usage because of using AI and
36:53
IoT devices. I mean overall, I guess there might be an increased demand on energy.
36:59
Yes, that's a really interesting and growing topic at the moment, so for
37:08
everyone here, it used to be - you just had your manual power meter on your house
37:13
and you got one data point every three months or so when the meter reader would come along and read the numbers and that was it. But now with smart meters,
37:22
you've got live real-time data, so literally hundreds of thousands of data points every minute, and what we're getting in CSIRO from those
37:33
meters is a real-time, high-sampling-rate frequency and voltage signal,
37:41
and with that we're using machine learning techniques to disaggregate those frequency signals to work out what
37:48
appliances inside the house are operating in which way, so by just looking at frequency and voltage we can tell the washer went on,
37:56
or the air-conditioner went on and that's starting to build up a massive data set to teach us how consumers are actually behaving inside
38:06
the home with their electricity use. That's a project called the National Energy Analytics Research Program, it's a $50 million project, Data61 of
38:15
course is again involved in that, with the energy market operator and the
38:21
Department of Energy. To get AI to really have a benefit you need a lot of
38:27
data, so this is... the start of it is building up this massive data set right to the consumer level, so that's one thing we're doing there.
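A toy stand-in for the disaggregation Tim describes: match each switch-on event seen at the meter to the nearest known appliance signature (all numbers invented):

```python
import math

# Invented signatures: (frequency dip in Hz, voltage dip in V) observed
# at the meter when each appliance switches on.
signatures = {
    "washer": (0.020, 1.5),
    "kettle": (0.030, 2.5),
    "air-conditioner": (0.050, 4.0),
}

def identify(event):
    """Nearest-signature match for one switch-on event."""
    return min(signatures, key=lambda name: math.dist(signatures[name], event))

print(identify((0.048, 3.7)))  # -> air-conditioner
```

Terry, would I be right in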
Climate variability
38:37
thinking perhaps that some of the work you are doing could feed into planning processes, particularly with regard to infrastructure
38:42
resilience? Yeah, so I suppose here's the point where we differentiate
38:48
between climate change and climate variability. So I think climate change is,
38:54
we're reasonably good at that, because it's really the forced response and if
39:01
you look at the IPCC projections they pretty much match the large-scale
39:08
observations. Where it gets difficult is when we start to talk about climate variability, so over the next decade, two decades, and that's when we're really in
39:18
the climate forecasting space, where we're actually trying to predict the
39:23
phase, if you like, of the relevant large scales within the climate system
39:29
and that's a difficult problem because... and this is really at the cutting edge
39:35
of some of the machine learning methods that we're applying, because these
39:40
phase transitions are really regime changes, and so one of the starkest ones that Australia went through was in the late 1970s, when the warming and
39:52
drying over South Western Australia coincided with a shift in the
39:58
mid-latitude jet, so these kinds of relatively fast changes in the jet
40:06
structure, or if you like, the variability in the climate system, are very difficult to predict, but they can have very large consequences in the near
40:14
term climate, and so that's really where these new machine learning tools that we're
40:20
applying are giving us not just a deeper insight, but also a chance to try and
40:26
extract these relatively weak but skilful predictive signals within our
40:32
large data sets. Right, do we have another question out there, hands up?
Changing protein sources
40:41
Thanks, Jason Schiemer from IP Australia, I'm actually just wondering, you've mentioned various things like changing protein sources, for example aquaculture,
40:51
and there have been other talks about meat substitutes and things like that; there's also talk here about climate variability and long-term
40:58
trends; is anyone looking at who's farming what and where rather than just what they're doing on their farm? So I got some of that, if there's anyone looking at who's...
41:09
Who's farming what and where, are we growing things in the wrong place, maybe? Are we growing the wrong crop, using water that we don't have to make rice, for example?
41:19
Yeah, so it's an excellent question, and I guess there's been a lot of discussions about how we might do
41:26
this as a system, as a national system. Look, there are those middle
41:32
level decisions sometimes made; often it's market-driven, you can't run an unprofitable farm; eventually resources will get to you and
41:43
our farmers...[inaudible] Yeah, exactly, there's no subsidies and there's no crop
41:49
protection in this country so you can't get away with it, but I think you're talking about middle level decisions, and look, I think the
42:00
conversation in this country is really rapidly maturing in a very short timescale about how we deal with how agriculture and biodiversity
42:08
can be allies, and how we use agriculture as a lever to actually
42:14
improve our natural resources rather than just being seen as a sink for it. So
42:21
yeah, that's a pretty ambitious conversation but I think there's a lot of levers in the system to start making it happen. Obviously water utilisation is
42:29
going to be the major driver, discussions around that, so there are rapidly maturing conversations about it but the key driver at the moment is the market
42:37
driver and there's some key sustainability signals, so you saw the
42:43
alternative protein signal, as you mentioned, that's an example of
42:49
a huge driver. Consumers are becoming increasingly aware and sensitive, which is fantastic, to knowing more about how their food is produced, and I encourage
42:58
everyone to learn more about that; I think it
43:04
will drive the market in the right direction. I suppose this is a slightly tangential comment, but I think
43:11
some parts of agriculture are becoming more intensive, all of your tomatoes are grown in greenhouses, and that's very strong automation, so we're
43:19
using smaller amounts of land to generate more food. You're seeing things happening on the urban side, whereas I think you'll find in other
43:27
parts of Australia, if you travel, you see the expansion of almonds, so to some extent there's specialisation in certain things, cotton coming south.
43:35
The tomatoes are a really good example, because Sundrop Farms runs closed systems, solar-powered, using salt water, a
43:43
completely closed system, and they do that because it's a sustainability
43:49
credential thing but also it's a good economic decision for them so that's a market driver.
43:55
I'll grab another question from the floor in a moment, but I want to insert one here myself. The topic we're talking about is the use of AI for good, but as we know,
44:03
a lot of good intentions often get railroaded or derailed by malicious actors. How do you go about ensuring then, that you don't lead to unintended consequences?
44:13
One of the topics we've talked about is the connecting of sensors up into networks, therefore putting sensors onto equipment, but theoretically every piece of
44:21
equipment we connect up now becomes connected potentially to a malicious actor, so how do we go about ensuring that the work that we're doing for good
Cybersecurity
44:27
doesn't end up having consequences that we didn't plan for?
44:33
Anyone want to have a go at that? Silence descends upon the panel. I'll just make a comment, I'm not sure if I can in any way ensure that we don't have unintended consequences,
44:44
but again with the grid system, if you automate that and you're right that
44:49
every internet router in someone's house becomes a target point for hackers
44:54
to get into that system, and that can proliferate, so in the
45:00
planning for things like a digital twin, or potentially automating parts or all of the grid, national security and cybersecurity are a top-line part of that
45:12
whole planning process. I think it's just vigilance from the start
45:19
in looking at the architecture and how everything will be set up, so every step of the way it's protected.
45:24
Otherwise, I actually don't have an answer for how you make it foolproof so you absolutely ensure that there's no risk. In Data61, we've got a program, a group
45:39
around cybersecurity research, and we do work on, say, medical devices, even quite particular devices, so I suppose it comes down to the sort of hazard
45:51
assessment that people need to undertake, in terms of, for this device, what are the risks, but I think, as we all know, it's becoming more complex,
46:01
the risks are becoming different, so I think people need to keep revisiting
46:07
what those things are. In terms of actual science, I suppose you always have to be assessing
46:15
what's the ethical basis of the things you do, so if we do research that involves human subjects, and it might even be just to interview them,
46:23
that actually gets approved by a panel, so there is oversight over that, but I think the important part of the question is, the landscape is changing, the things we do
46:31
are changing, so is our conceptual basis for why we think it's safe still good enough, and so we've got to keep working on that.
46:39
To quote Uncle Ben from Spider-Man, with great power comes great responsibility. Alright, do we have another question out on the floor?
Technology adoption
46:45
We might just grab one more before we start to wrap up - I can't see very well from here, do we have anyone with a hand up? No? Alright, well I'm going to put another one of my
46:55
own in then. I might just add to that last point if that's alright, so I was just thinking
47:01
with great power comes great responsibility, but the thing that I really like about some of the AI conversations, particularly the ones I've
47:06
had in TSR is technology adoption and just the realisation that we actually do have to understand how the technology is adopted, so in the farmer
47:15
case you actually do have to go on a farm and talk to them about, if we develop this will you use it, and some of the answers are surprising. What they're
47:22
thinking and how they will use it and what that will mean, and I think the fact that we do that now, that the technology adoption question happens, means
47:33
it really informs some of the things we do need to think through, and that's the data that we're using to think that through.
Autonomous systems
47:41
So a lot of what we're talking about here is using artificial intelligence and machine learning to help us work with incredibly complicated datasets and
47:48
reach findings and conclusions from that. Often though, when people talk about AI, they're thinking about the notion of a fully autonomous system, so how far away
47:56
are we from actually starting to put more autonomous technologies into play in some of the areas that you're working? I'll take this one, so I often hear
48:08
the analogy that there is a continuum: machine learning, the next one is AI, and then there's fully generalised artificial
48:15
intelligence, think of the Terminator charging in, and I don't think that's a
48:21
continuum that really is a realistic way of looking at it, like machine learning is not the poor cousin of AI, it's horses for courses.
48:30
The question is, can a certain technology answer the question that you have? If yes then good, if no then let's invent and come up with new
48:41
technologies, so therefore I don't think it's the goal of going to a fully autonomous system in any capacity if there's no purpose
48:51
around it, if the current technology can already answer the question that we have.
48:56
So, we are at the dawn of the AI era, so to speak, we're very quickly moving down
Future capabilities
49:03
a pathway of utilising these technologies, but there's still so much more that can be done that we haven't explored yet. What are you most
49:09
excited about, if you look forward over the next three to five years, in terms of the capabilities that are likely to be delivered to you, what really gets
49:16
your blood pumping around that? For me, the most exciting thing is that we will
49:24
have so much data in different disciplines that we can actually link together. So for example, CSIRO is part of the MRFF-funded OUTBREAK consortium,
49:33
which looks at antimicrobial resistance and the rise there and how we can be
49:39
protecting ourselves, because antimicrobial resistance is going to be one of the largest challenges that we're going to face going forward, because it
49:47
will mean that if our current drugs are not effective anymore, then
49:54
effectively any surgery could be a death sentence, any graze or cut could be a death sentence or any infection could be a death sentence, so this is
50:03
something that as a society, as a community, we need to come together and really address that problem, and I think data, education and technology will be the
50:12
answer around that. Terry, what about for you then, in terms of the complexity of the system that you're working with, what will you be able to do five years from
Complexity
50:19
now that you can't do today? So I think the exciting thing is that AI and machine learning rely on a high-quality continuous stream of data coming in, and
50:29
we've had that for atmospheric observations for a few decades now, since the late 70s with the satellite period, and oceanography is moving into a
50:41
new phase where we do currently have a pretty good coverage
50:46
for the first time of the upper thousand meters of the global ocean. There's new
50:53
profilers coming on board that will take that down to about 2,000 meters, and it's really this information in the subsurface ocean
51:00
that's the key to understanding the predictability in the climate system.
Future for Australian farmers
51:08
Jen, what about for you then, if you think about the work you're doing with Australian farmers today, what will you be able to do for them three to five years from now that you can't do today, as a result of your capabilities in AI
51:16
and ML? I guess two things, one is understanding the system they're
51:22
operating in much better, and really informing their decisions so they can just generate value for their communities, for their regional communities, and that's
51:31
what really this is about for me, bringing that value back and making those decisions that really capture that value, that are good for them, are good for the
51:41
planet, and are good for what we need as a society to feed ourselves. I think that's really important, we've got no choice in that, so I think making those decisions as informed
51:51
as possible with that quadruple bottom line, it's just really exciting that we can do that now. The second thing that excites me about AI and machine learning
51:59
is when I think about these technologies globally and what they can do, it's
52:04
about unlocking creativity to me. If I think of a farmer that's no longer
52:10
spending four days driving a tractor around the field, then what they're doing is actually using their brain and I'm excited about that, that they're freed up to do much more
52:21
creative, interesting, human type activities. I kind of like the idea of driving a tractor but I've never done it, so what would I know.
Future of energy
52:32
Tim, what about for you, I'm kind of getting the sense that if we don't start utilising these tools, we may not have a robust and reliable energy grid moving
52:40
forward, would it be right to think that? Yeah, that'd be right, and it's not a secret. I think the, well I'll move away from the National Grid, but it is
52:48
true that the tools we have in place now are not fit for purpose in
52:54
actually managing that system going forward. It's a bit fragile, to put it
52:59
that way, so, not that AI is going to solve that overnight, but digitisation,
53:05
simulation tools, better predictability through models are important in the near term. Other things I think we'll see coming on in energy are
53:14
around optimisation of distributed energy systems, so if you go right out to the
53:20
other scale, where we've got wind or solar, being able to predict those better to
53:26
manage the flow into the network, and I'll use an example from CSIRO, we're
53:32
just commercialising a technology for cloud solar forecasting, so you
53:39
set out an array of cameras pointing up at the sky and all oriented in a
53:44
specific way, and it can be over quite a large scale, so a city scale, and these
53:51
obviously take pictures at high frequency and high resolution and all that data is integrated using machine learning methods, to provide a very
54:01
comprehensive picture and forecast of the solar irradiance coming
54:07
through in the near term, and that allows solar to be able to predict or to feed
54:12
into the market at a better price and be more competitive, so it's that type of thing as we move to more clean energy sources, just coming up with AI
54:23
or machine learning driven enablers to allow them to compete better in that market.
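A minimal sketch of the forecasting step: regress near-term irradiance on camera-derived cloud-cover features. The data here is entirely synthetic; the real system works on high-frequency sky imagery, not four summary numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Invented training data: fraction of sky covered as seen by each of
# 4 cameras now, versus irradiance measured 10 minutes later (W/m^2).
cloud_fractions = rng.uniform(0, 1, size=(500, 4))
irradiance = 1000 * (1 - cloud_fractions.mean(axis=1)) + rng.normal(0, 30, 500)

model = LinearRegression().fit(cloud_fractions, irradiance)
print(model.predict([[0.1, 0.2, 0.15, 0.1]]))  # forecast for a mostly clear sky
```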
54:30
Good, well that brings us to time, actually we're just slightly over time, so I guess it's probably appropriate for me to now ask everyone in the audience to please
54:36
thank my panellists for this afternoon, it's been a fantastic discussion
54:42
and we're all going to exit the stage, but I'm going to leave Simon here because he's gonna have the final word for the day. So, ladies and gentlemen, thank you all for listening,
54:49
panellists, if you'd like to join me offstage; Simon, over to you, thank you. Thank you.
AI for Environmental and Social Good panel - D61+ LIVE 2019

Speaker: Dr Amir Dezfouli

About: An interactive AI-powered computer game that clinicians can utilise to help diagnose a patient's mental health. The game gives medical professionals the ability to see a patient's brain processes in response to tailored stimuli, unlike traditional mental health assessments, which only measure a patient's responses to direct questions about their mental state. Here, lead researcher Dr Amir Dezfouli, a neuroscientist and machine learning expert at CSIRO's Data61, explains how the platform works, its challenges, and its benefits.

D61+ LIVE 2019 Interview: Dr Amir Dezfouli, Research Scientist, CSIRO Data61 - AI for Health

Moderator: Brad Howarth

Speakers: Prof Genevieve Bell, 3A Institute, Dr Stefan Hajkowicz, CSIRO’s Data61, Emma Martinho-Truswell, Oxford Insights, Adrian Turner

About: Artificial intelligence (AI) and machine learning systems are being rapidly adopted around the world. By 2030, AI is estimated to generate $13 trillion in economic activity across the world. This panel discussion explores how we must develop and deploy AI technologies today, and how this will critically shape our future world. Hear from some of the region’s top AI experts about the questions we need to be asking ourselves as a society, and how we can capitalise on the trillion-dollar AI opportunity.

Introduction
0:00
Alright, ladies and gentlemen, welcome back. Good morning, my name is Brad Howarth. I'll be leading you through our panel discussion now, so I'll do it sitting
0:09
down because it'll be easier to. It's a great pleasure to be here. I was actually thinking, when I got invited to come along and chair this panel, what a
0:16
fantastic experience it would be. I don't know if any of you ever play fantasy football, but this is kind of like fantasy football for me, and I essentially have,
0:23
basically my entire forward line here. Put Michelle Simmons, maybe a couple of others on there, and we'd have most of the team packed out, so it's a brilliant
0:30
opportunity for me to now get here to sit and interrogate these wonderful minds that we have available to us.
0:35
So, for my sins, I actually started off as a journalist, having
0:41
originally failed engineering back at RMIT in the early 1990s, and as a result
0:47
I try and get away from numbers as far as I possibly can and focus more on words, so when I was asked here to look at this panel, I looked at the title
0:54
and we looked at the idea of winning the AI race, and it's interesting that we should perhaps talk about the notion of what a race is, and what a race involves.
1:02
A race generally involves a favored competitor, the sort of entrant whose progress we're watching throughout, and we'll assume in this
1:09
case that that favored competitor would be Australia, and we'll be looking at whether Australia will be winning this race. We'll look at the notion of a competing field,
1:16
and therefore we need to understand who our competitors are, and what our relative position is. A race often obviously involves the notion of a
1:23
finish line, and I think when it comes to the race of AI that finish line is probably something that lies a long way into the future, and therefore is perhaps
1:29
not something that we will ever actually cross, so therefore we need to understand what our comparative positioning is within the field. So to discuss these
1:37
ideas, I've got this wonderful panel here and I'll introduce them to you now. For those of you that aren't familiar, on the far end we have Adrian Turner.
1:45
Obviously quite familiar to many of you from his role at Data61 over the last few years, now out on his own being the entrepreneur that he was born to be.
1:54
Sitting next to him - anyone who was here this morning obviously will be very familiar with her - Genevieve Bell, Principal Scientist and Strategy, oh sorry, that's,
2:02
I'm reading the wrong one, Distinguished Professor and Director of the 3A Institute, ANU and Senior Fellow Intel Corporation.
2:09
Dr. Stefan Hajkowicz. Did I get that right? You did, how did you know how to do that? Because you told me! And I listened.
2:15
And I practiced. All right, I'll mess it up every time from now on though, but we got that right: Principal Scientist in Strategy and
2:22
Foresight at CSIRO. And stepping in at short notice, Susan Keay, Research Director of Cyber-Physical Systems at Data61. You
2:30
might have noted from your agenda that Emma Martinho-Truswell was meant to join us, unfortunately she hasn't been able to do that, so thank you Sue for stepping in.
2:37
So, I've set up the premise here, I guess, of discussion around the concept of the race, and I'm particularly interested in this notion of where do we
2:45
actually sit within the field, but if we're going to be discussing AI probably what we need to do is set a grounding of what we are actually discussing, so
2:52
Genevieve I might throw this to you in the first place, and to misquote Raymond Carver, what do we talk about when we talk about AI? Wow, that's always a good
What is AI
3:00
question. I sometimes think there are probably as many definitions of AI as there are people in any given room, and it occasionally runs the risk of being a
3:06
bit like innovation, everyone talks about it, no one actually means it. Listen, I think there's a couple of different ways you need to think about
3:13
it, one is that you can't ignore the fact that we have nearly a hundred years of science fiction telling us what AI should feel like, and it's the bit that
3:21
is the sort of scary robots coming alive and nothing good will come of it, the notion of a fully sentient, fully thinking system of systems - there's that
3:31
piece. At a technical level, I mean I always think you have to go back to the first definitions of AI, which were really about computational systems that
3:37
understood human language, that were able to understand symbols and abstractions - so effectively to listen and see - and then be able to perform tasks that humans
3:48
could perform and learn for themselves. I think that's, for me, a useful definition. I'd say by the time we sit in 2019, AI is often also sometimes used as
3:55
shorthand for machine learning, the different kinds of algorithmic work. I think you probably shouldn't ever talk about AI divorced from a context of data,
4:03
so what fuels it, what does it need, what does it produce, I think those are conversations that probably thread a little bit more
4:10
together. Could anyone do better than that? Sure. The only extra thing: in our work on AI we often differentiate between narrow
What is AGI
4:19
artificial intelligence, which is like the strawberry-picking robot - a very complex task for a computer and robotics to solve, but
4:29
it is solving a specific task, versus artificial general intelligence which is the robot without clear objectives that helps design its own objectives. So AGI
4:38
is something we haven't really been able to build a tech roadmap out to within the 2030 time frame for a lot of our work, not to say it can't happen, but
4:47
we're more focused on narrow artificial intelligence, and the only sort of other addition: like Genevieve, I'd see machine learning as pretty central to
4:54
artificial intelligence, but it's the ability of the machine to learn and problem-solve and design its strategies without explicit guidance from a human
5:00
being. That's what's getting all the world's countries out of bed on AI, is that capability that the machine can generate its own solution to a problem
5:09
without explicit guidance from us, which takes us further and further out of the picture. Got it, right, I think we've got some reasonably good definitions there. I
5:16
won't get everybody to define that because as Genevieve said we could be here all day defining. The proposition of a race though, as I said - any race that's
5:24
being run, we want to know our position within that race. But my question here is, What is the metric that we should be looking at when we try to assess our
5:31
comparative strength in AI? And whereabouts in the field we might be? There's a very obvious headline metric that gets played out quite often which
5:37
is level of government investment, but it's not a metric that looks particularly flattering in Australia, when you compare us to what we might
5:43
perceive happening in Germany or China or other places in the world. So how should we be assessing our relative positioning when it comes to the AI race?
5:51
Sue, do you want to comment on that? Actually, I think that's a pretty good metric, and I guess the question then is, Do we qualify to enter the race? Have we paid our ticket? Stefan?
How should we assess our relative positioning
6:02
I'd say that's true, it's part of it, but I'd say how much useful application we can see of AI within the Australian economy, to boost productivity
6:10
and make our lives better, would be a really important one. How much evidence can we get that Australians are getting their lives improved by the use
6:17
of AI. A massive opportunity is AI in earlier diagnosis of cancer. 150,000
6:22
Australians are diagnosed with cancer, a significant number of them late stage. AI - computer vision used in skin cancer detection - can bring that forward; that's a
6:31
case of a useful application of AI. So the amount that we can see it doing that I think for me is a core metric, and then depth of capability and where we're
6:39
pushing the boundary forward. But you know, we don't have to win this race everywhere, we can afford to lose in some places and let the tech get developed
6:46
offshore, and then select where we want to be the world leaders in it. Well I think a nuance to what Stefan has suggested though, is that we need to be
Why is there a winner in a race
6:55
able to distinguish between the benefits that have accrued from technology that we've developed here, as opposed to those imported technologies, because yes, we
7:02
could import technology to do some of that medical imaging work and that's great, but how much more benefit would we get if we had been developing the
7:10
technology to do that here, and I don't think we have a good metric on that. Adrian, I can sense you want to jump in there. I do, because inherent in your
7:18
question is that it's a race, that there's a winner in a race, and that the
7:25
time matters, the speed matters in a race, and I think we should also be questioning why is that so, around AI? And the reason it's so,
7:35
is touching on Genevieve talking about the importance of data, it's the importance of data, but also data economics, and the current vertically
7:43
integrated platform model favors scale, and favors first to scale, and
7:52
you see that in terms of, you know, the big companies getting bigger, with seven of the top ten companies by market capitalization in the
8:00
world now being platform companies. So I think an interesting question, and certainly one that Data61 is very focused on is, is there a different model?
8:11
Like, are there advancements in machine learning - federated machine learning, privacy-preserving machine learning - that
8:19
could unlock a different sort of data economic model that doesn't have these
8:24
winner-take-all dynamics, which, I actually think, is necessary for more
8:32
equitable sharing of wealth in a networked, AI-driven world. But also
8:38
for a middle power like Australia, it's critical to think differently about the underlying and resulting economic structures too.
Private sector investment in AI
8:48
Cool. I mentioned at the top that government funding is one of the metrics we often look to, but there is obviously a lot of focus on what government is doing in Australia and the
8:54
support the government is providing, but you just identified that seven of the top ten companies in the world are heavy users of AI, so surely there's a very strong
9:01
impetus for the commercial sector to be playing this game as well. Do we see that
9:07
in Australia, do we see the level of investment from the private sector really matching what private organisations are doing in other parts of
9:12
the world today? By and large, no. Our corporate sector is not spending close to them. Amazon spent thirty-four billion Australian dollars, thirty billion dollars,
9:20
on research. A lot of it is in AI. Alphabet (Google), Microsoft, IBM - similar
9:29
sort of expenditure levels. We're not seeing, even when adjusted for the size of the companies, we're not seeing Australia's largest companies spend at
9:36
that level, so we're not stoking the future pipeline of capability for Australia with R&D investments to the same extent these other companies are.
9:43
Ultimately Amazon can come and compete against Coles and Woolies in the Australian retail sector with advanced AI-capable natural language processing
9:50
capabilities and data science which lets it do things a lot more cheaply and more efficiently, and that's the sort of challenge that we're going to
9:57
continually get if we're not developing R&D capability - we're going to be subject to offshore competition. So I think it is a challenge that, in Australia,
10:05
from my point of view, the ASX 250 are not investing in R&D and developing deep AI capability that builds their business out to 2030,
10:14
anywhere near to the extent of the tech giants that we can see offshore, or large European companies from a lot of different sectors. So we've looked at R&D
10:22
data, and that's a pretty strong story that we get back: the percent of revenue Australian companies invest in R&D is fairly small, and that
10:31
would apply to AI as well, and I think as we look to the longer-term future there is a negative consequence for Australia for that.
10:37
I think we need to make a distinction though, because a lot of the
10:44
deployments today are around driving operational efficiency and doing things that we do today, we're doing them faster and cheaper.
10:50
I think what's super exciting and compelling about what Genevieve speaks about, is when we start thinking about what new possibilities exist, what types
11:01
of new value can we create, and what are the implications societally, economically. And what we need to be careful about with these numbers
11:12
is, if I'm deploying an SAP ERP system that happens to be using machine
11:17
learning, and I get quizzed am I deploying AI, a company is going
11:24
to go, yeah, but are they unlocking and creating new value where it didn't exist before, you know, maybe not. And I think that's Australia's opportunity here, is
11:33
to think about that new value creation. And there's some ways of thinking about that question differently, right? Which is to say, it's not all about
The race
11:42
next generation machine learning algorithms, sometimes it's about what are the data sets that are inherent inside those organizations, how do you want to
11:49
think about what is the data that sits inside our government, our private sector, our public sector, and how do you want to think about a
11:58
different kind of language? I mean I kind of rebel against the notion of a race, because I think that suggests there's an obvious fixed end point where everything
12:04
is clear and you can determine a winner. I think one of the most interesting lessons we have surely learned from the last 20 years of watching technology
12:12
unfold, is that if you called that race too early you would have thought Australia was an excellent country for internet uptake and adoption, and also an
12:18
excellent place for high-speed Internet connectivity. I think we might want to ask that question a little bit differently, because it turned out that
12:24
wasn't a hundred meter dash, it was actually a marathon, and we're not in such good shape at the 13 mile mark. I think there are interesting ways of
12:33
saying, if you want to constitute this as a race, which race is it? I think Sue's right to say there are probably multiple races, and thinking
12:40
about which one we want to compete in. Do we want to be building another tech giant, or do we want to be saying, we have particular questions inside our national
12:48
boundaries that ought to be addressed, and AI would be useful for them. Rather than saying, should we be building the next tech giant? So for me there's a
12:54
question about where's the end of the race, which race do you think it is, and to Adrian's point, is that the most useful metaphor? Because I wonder in saying it's
13:04
a race, what are the pieces that you'd want to be successful at, right? I mean, you know, many of us in this room grew up with good Australian race stories, we
13:11
could have the Phar Lap story, the Stawell Gift story, the Betty Cuthbert story, lots of stories about what it means to win races, but I think that then
13:18
undersells some of the things we are doing well. So if you look at what's been happening using next-generation automation in Australian mineral
13:27
extraction, so both BHP and Fortescue have had really interesting experiments in terms of turning over from human driven
13:35
vehicles to autonomous vehicles, and there are real early lessons about what that looks like. It means that Australia in fact has, at this point, one of the
13:44
largest deployments of autonomous vehicles on the planet. That's happening within our shores right, and we don't actually talk about that when we talk
13:50
about AI, but we could. So you know there's a race where in fact we are well ahead of the competition, but it's not the traditional one, so I think for me
13:56
there's a bit about what's the story we want to tell, and why - that becomes interesting. Sorry Sue. I was just going to add something, I know you want to move on. On an optimistic note, Australia is a country of SMEs, and most
Australia's AI story
14:09
Australians are employed by small and medium-sized enterprises, and on the plus side we see a lot of grassroots investment and activity in the AI
14:20
space, and an example of that is the community up in Queensland, where, you know, an AI meetup group very rapidly got to more than 2000 members - people who
14:29
are interested in having their own AI startups and supporting one another to learn about the latest developments so that they can apply them in their
14:36
businesses. So that's a positive. So how do we take those areas where we are
14:41
strong, such as autonomous vehicles in mining, and translate that capability into other sectors in order to perhaps build out the new leaders? What's the
14:49
component that we need to be focusing on then in order to be able to spread that knowledge? So I think the first thing is having a point of view. So it's implicit
14:58
in the question that we know those things, and I think we have a sense, but I think getting really clear and more directive as a country around where we think we
15:08
can lead. And then, coming off of that: these systems need data, and if you look
15:16
at the structure of the economy that strength of SMEs is also our weakness in a sense, because how do you drive data scale in a world full of
15:26
vertically integrated platforms, how do SMEs compete? And if you look at a couple of areas, so let's take health and I'm sitting on the national genomics
15:36
mission steering committee - that's fundamentally a machine learning, you
15:42
know, big data problem - a genotype-phenotype data problem - which, if we get it right,
15:48
will restructure the healthcare sector, from crisis interrupt-driven health care to preventative health care. But then you go, actually we've got a population of
15:58
twenty five million people, and if you go down to a particular disease vector you don't have statistically significant sample sizes to be doing interesting
16:05
things, so that then leads you to, we need to create multilateral, multi-country
16:12
data sharing agreements for things like anonymized genomics data, but there's
16:19
going to be whole other classes of industries that are going to need to do the same thing. I think we kind of do it in security, but even then - you know, at
16:29
Data61 we were approached in the last six months by another country, saying could we pool our cybersecurity research data. So there's a big data
16:38
piece here that I think will need to be more prominent for us to achieve the potential where we identify areas we can be world leading. And I think there's
What it means to be successful
16:47
an interesting other thread to that Adrian, too right, which is that what it suggests is what it means to be successful in this quote-unquote race
16:53
isn't just about the technology, it's also about how do we think about our regulatory framework, how do we think about who our partners are, how do we
17:00
think about what it would mean to imagine pooling data with other countries, other organizations, how do you frame and structure that, and those
17:08
aren't technical questions, right, they are questions that are also about regulatory frameworks and process and goals. And I think one of the challenges
17:16
is that we keep talking about an AI race, and we keep thinking about who's got the best technology, rather than saying who has the best point of view - that was the
17:23
word you used, but who has the best notion about what the end state might be? Who's thinking about this more broadly? Because if you wanted to say that's
17:30
where Stefan was going, right: what does it mean to think about next-generation automation, what does it mean to think about productivity uplift - that's
17:37
not about putting technology into places, that's about how do you change your workforce composition, and how do you change your workforce
17:42
training, how do you think about different skill sets, how do we then think about how goods and services and people all move, and then you realize
17:49
that winning a race - it actually involves all the other stuff too, like the training regimes and all the other pieces there, and I think we're not very
17:58
good at bringing that into the conversation, because it makes it more complicated, but it's more opportunity too. It's systems - systems
18:05
thinking which we're inherently not good at, and it's also, you think about how we got here, we got here through combinatorial innovation. Data, computer
18:17
algorithms coming together, miniaturization, all of these things
18:23
coming together. It is important that we get this right, and we do have a point
18:29
of view, because right on the edge now, and it's starting to come into focus, is not only cyber-physical systems, but bio-digital systems, where
18:39
we're evolving to be able to, you know program life, in a sense, that uses these
18:44
underpinnings. So the most valuable thing I think we can do as a country, is
18:49
have that point of view. So we're talking about something that gets very big, very quickly with regard to the complexity of the issue that we're trying to tackle here.
18:57
And yet yesterday it was discussed that we don't collaborate particularly well in Australia. So to what extent is our failure... With each other or with others? Sorry? I said, with each other or with others?
19:08
Can you collaborate with yourself? Some organisations don't even collaborate within themselves. There you go. So is that part of the issue here, that we need to
19:17
maybe start thinking about some of these broader more structural, socially structural, issues before we can really get on to the right footing to compete? I
19:24
don't, I don't think we do ourselves any favors in the way that we partition things into separate industry sectors, so yes, you know we have the mining industry
How can we work together
19:33
leading the world in the development of autonomous technologies, but how is that
19:39
information shared across to manufacturing, services such as construction, and other areas where it'd be really useful? We actually just don't
19:47
have good mechanisms for that, and that's leaving aside you know whether research and industry collaborates particularly well together.
19:54
I think it's a bit like the invention of electricity in the late 1800s - this is a technology that cuts across every single sector. A lot of
20:01
Australians got great jobs in the electricity sector, building out our electricity network which was magical technology at the time, but it also cut
20:08
across every single other industry sector and changed how manufacturing, agriculture, mining all worked and enabled them. And I think that's where
20:16
we are at with AI - the same sort of shift is going on, so it does tend to
20:21
cut across, there's a huge up-scaling upgrade. But picking up on the earlier discussion, I do think there is this issue of sovereign capability, you know,
20:29
that actually we can't just import a lot of the AI we need - it's getting used in mission-critical infrastructure, defense and cyber security applications in Australia
20:39
now, so we do need to think about what AI capabilities we need in terms of the hard tech infrastructure, and then our skills. Like, we think a whole
20:48
new workforce of AI specialists has got to get built by the year 2030, and then
20:54
pretty much most jobs get reshaped and changed - not as many disappear
20:59
as we initially thought, but a lot of jobs are reshaped and changed by AI. So we've got to build an entirely new workforce in machine
21:06
learning, natural language processing, data science, predictive analytics, a lot
21:11
of sub-fields of AI where we desperately need more talented people that we do not yet have. So how do we get them? Train them up with Genevieve - Genevieve's gonna teach them all.
21:22
Excellent, good, so no pressure then! There's got to be a couple of pieces, one
Education
21:27
is we do need to start thinking now about what that set of skills will look like and we know that means not just what happens in universities, but what
21:35
happens in primary school and secondary school. It's about how do you expose people to both the new technology, but frankly some of the old systems that we
21:44
already know you need, in order to be able to think about many of the things that Stefan rattled off. You actually don't just need to have a
21:50
technical education and a STEM education, you also actually need to be taught about how to ask critical questions. You actually need to have an awareness about
21:58
how social systems work, not just technical systems, so I think there's a you know an argument for a more integrated education system through high
22:04
school as well as the university sector. I would say that one of our challenges has been that we keep thinking that if we just change what we
22:10
do with that, that fixes everything, and the reality is a whole lot of the people whose jobs will change complexion and composition are not gonna go back to a
22:19
university. So part of it is, what's the role universities need to play in taking what they know and bringing it out to the broader world. So whether it
22:27
is the kinds of micro-credentialing and micro educational experiences that Australian universities are deploying, whether it's what Bill Simpson-Young is
22:34
doing with the Gradient Institute in terms of education, there's a whole lot of work a whole lot of us need to do to just sort of level-set the conversation again,
22:42
and then it's about how do you do on-the-job training, how do we think differently about what skills acquisition looks like. It's a whole
22:49
broader conversation. And then there's a piece about how do we all educate ourselves as citizens, about what all of this means for us, because I think
22:55
there's a bigger conversation there too. On that front then if I look to my colleagues in the media we often see AI portrayed as the robots will take our
23:04
jobs, very quickly moving through to the Terminator will come and kill us. I'm personally very excited about the Terminator movie coming back with Sarah
23:10
Connor, but assuming that stays within the realm of fantasy for the moment. Do we have an issue here that we just don't really want to talk about it? Or that we
23:19
don't know how to talk about it in such a way that we can actually build the enthusiasm in the community? Sue, I'll throw that to you because
23:24
you mentioned before working with grassroots organizations. So how is that conversation coming about - how is it, perhaps, that people out on the
23:32
streets are actually embracing this? Yeah, I think sometimes the experts in the area do avoid talking about it. I've been in conversations
People embracing AI
23:40
where people have actively tried not to use the word robot, because they think there's going to be negative connotations, and I think we may as well
23:47
shut up shop if we're going to do that. I think we just actually have to get people more familiar with the technology by talking about it and unfortunately I
23:55
don't think we're very good at telling those stories, and I'm not quite sure what the reason is for that, but yeah, I think we should not be avoiding using that
24:03
terminology. We need people to feel comfortable with it. It's a
24:08
conversation we need to have. We work with a steel company north of Brisbane called Watkins Steel, a father-and-son company that's been around for about 80 years,
24:14
and they went from making steel products. I met a guy who worked on the factory floor doing welding of steel - he left school in grade 10, but he had
24:23
transitioned, and the whole company was transitioning, they'd gone from making steel, to steel design work and he was sitting in front of two big computer
24:30
screens, he was off to a tech conference in California for 3D visualization and he'd got skilled up in graphic design. And I'm sure AI will start to feature in
24:38
his job. But it was interesting to me, this was somebody who left school in grade 10, but he's benefiting from this AI and digital revolution, and I think
24:47
that we need to make this real for people like that, you know, this is not inaccessible, this is a job opportunity and a career pathway, and he's got a
24:55
better salary, he's got a family, things are going well - and this company didn't lose a single employee through this transition from steelmaking
25:03
which is a very sort of hard traditional business, through to steel design and technology, and eventually all it will do is steel tech and design work, it
25:11
won't actually make any steel products within a few years - so that kind of transition is possible and we need to look at how it happens. It's part of winning the AI race.
25:20
They won the AI race, if you want to look at it that way. I think it's also about contextualising,
25:27
right? So we developed a drought map to provide decision support
25:36
tools around investment for water management. Well you talk about
25:41
predictive maintenance or failure prediction for water pipes, or you
25:48
talk about medical image diagnostics, this team's doing great work
25:53
around Alzheimer's, so I think the way that we need to talk about it is in
25:59
context, so that the general population gets a sense of the benefits that are
26:05
coming from the technology, not just for technology's sake. But having said that, there has to be a conversation in the country around
26:12
what's the sovereign technology, are we investing to push the science forward in
26:18
some of these areas as well. That's for the general population - but if we bring it
26:23
back to the business community, and we think about it as a system,
26:29
this economy is undergoing structural change right now, it is, and I think
26:36
there's not a huge window for a lot of our incumbent industry to really think
26:43
differently about the business that they're in, come back to thinking about the data assets that they have, and I think that starts with boards, and what
26:52
sort of questions are boards asking to challenge operating teams in these
26:57
businesses? And then if you think about it from a systems point of view, you go well who are the boards accountable to? Shareholders, who are the shareholders?
27:06
What are shareholders? Big superannuation funds. So there's a whole systems opportunity here, and I do think, and I see firsthand, the
27:15
conversation changing a lot in the last 12 to 24 months in those sort of
27:21
environments. We have a productivity dilemma in Australia. Productivity is how much goes into the sausage factory and how much comes out. The efficiency with
27:28
which we convert inputs to outputs - and it's crucial for long-term wealth creation. Any economist will say this, but productivity growth rates are about half
27:36
what they have been on long-term average, and we know we need to roughly double them to see us and our children have the same enjoyment of continued
27:44
improvement of quality of life. There is nothing coming into the Australian economy that's new like AI; AI is the thing that is going to see this happen
27:52
right across all our industry sectors, like electricity did in the late 1800's, it's what will lift productivity again. So part of winning
27:59
this race is this absolute urgency we have to boost industry productivity across all sectors, which means as Adrian says, it means boards waking up to what
28:08
it can do, and how to do it, but I think then also developing ourselves for the future. You know, we're just adopting AI and using it cleverly in our
28:17
existing industries as an imperative now, but let's not forget how good the world is gonna get at AI. All the investments that are being made in
28:24
deep R&D capability are gonna build something. I think Australia wants to choose things we're good at. You mentioned mining, which we think is
28:31
definitely one; agrotech is another. We are leading the world in AI for the great outdoors - digging, searching, planting, doing stuff with dirt. We're very
28:42
good on those sorts of applications with AI, and we can start to build whole industries around this as well - but also boost existing industry productivity.
28:50
Let me... Sorry, Genevieve. Can I - in football, as you said - put a flag on the play?
AI and society
28:57
Listen I think it's really easy to be reductive and think about the health of a society only by economic metrics, and I always find myself faintly uneasy that
29:05
way, where we talk about you know we will be better because we'll have improved productivity and productivity gains are one measure of a healthy society, but
29:11
they're not the only one. And thinking about an entire new set of technical systems that have implications for how we are made sense of, how we are seen, how we
29:22
are received, is part and parcel of the conversation we should be having too - right,
29:27
it's not just about economic productivity, it's also about how do we want to think about citizenship, participation, equity; it's also about how
29:34
do we want to think about what are the implications for technical systems like this, for creating things that aren't necessarily strictly about industry or
29:40
productivity gains, but about how do we want to think about where climate sits in that, and sustainability - and I would also add a whole lot of other
29:47
things that are slightly more intangible, around things that make healthy societies that are about well, other social things - they're about community
29:55
building, they're also about things like art and religion and other pieces of that. When we looked into the
30:03
ethics of AI, I became convinced AI is a boon to the criminal justice system
30:08
for example - a lot of error that happens via humans in the criminal justice system we can actually start to correct via AI, which can work smarter, faster and
30:17
to rules better without human bias, if we put the right data into it, if we do it
30:22
well. And you and I both know the systems that have been built already that don't do that. We do. We do. But we've learned from systems like COMPAS that didn't work
30:29
and got corrected, and I think that was interesting, we saw the bias in the data and we could correct for it. But legal friends of mine - who
30:38
know the criminal justice system and the failings of human beings in that system - do say this. So it's not just a chance to uplift
30:47
productivity as you say, this is a chance to uplift the quality of the human experience in Australia. If we do it well. And we could also get it wrong.
30:54
I want to pause there for a second. In a moment I'm gonna go to the floor so if you have any questions that are starting to buzz away in your brain
30:59
get them ready because I think we have a roving mic that we'll be able to come around to you. Before we do that though, just let me play devil's advocate for a
31:05
moment, because a lot of what I'm hearing here in discussion about the bright and shiny future of AI, in many ways sounds very similar to the stories I was
31:11
writing at The Australian in the late 1990s about the benefits of digitalization. And yet 20 years later, in a highly digitalized society, we have a
31:19
productivity slump, wage stagnation and rising issues with regard to mental health. So how do we avoid the perpetuation or the continuation of
31:29
that scenario? How do we go about ensuring that as we move through this AI revolution that we don't end up creating something that we didn't really want in
31:36
the first place? Well, we kind of got here with the internet in a sense, right, because the sorts of discussions that Genevieve is alluding
31:45
to didn't take place. Like, there was a sense of this liberating thing called the internet, the network, and then what cropped up was, you know, basically
31:55
a surveillance-driven advertising model to build socio-economic profiles. And now we're in a state of unintended consequence from some of that, and so I
32:05
personally believe it's about having the conversations, and I think it's about not underestimating the population's desire to have those conversations, and to engage in
32:14
those conversations - and they're domestic conversations. But also, in a networked world, you can argue that geographic boundaries matter,
32:23
they absolutely matter, they will always matter, but they matter less in a
32:28
connected world, where you've got other sorts of you know boundaries being drawn
32:34
that are not geographic boundaries, could be ideological boundaries, could be, you
32:40
know it could be societal values boundaries, but I think we've got to be
32:46
having the conversation, and a deeper, richer conversation as a country right now. We're in the driver's seat and, when it comes to technology, we can
32:54
really shape what it looks like, and maybe we didn't do enough of that previously. Well today you know despite all these fantastic new
The gap between rich and poor
33:03
technologies, we're not seeing an even distribution of the benefits, and so we haven't had such a wide gap between rich and poor for many, many years, so I
33:12
think you can only expect to have less social cohesion and more troubles if we
33:18
can't somehow bridge that gap. Right. So to the floor then, do we have anyone... I've
33:23
got a hand up in the middle here, I can't see too well with the lights in my eyes but if we can bring the microphone across. We've got another one at the front so
33:30
we'll come to that in a moment, and then I'll be relying on my mic runner to pick out the rest. Go ahead please. I would like to ask a very practical
33:37
question. Oh yes, if you can please let us know who you are and where you're from. It's Joe. I'm from
33:43
Orica, we are a mining technologies, mining explosives company in Australia.
33:48
We try to integrate AI and machine learning to improve the mining
33:54
efficiency. But regarding AI and the general situation in Australia, I would
33:59
like to ask a very practical question. With this current economic situation in
34:04
Australia, and very weak Australian dollar, how can we convince the tech giants like Google, Amazon to open some of their offices in Australia - to
34:16
benefit from the cheap labor that they can access? Because when you look at Canada, which is our sister country, with the same culture and same social system,
34:26
we see that a lot of offices of Amazon and Google are moving to Toronto, for example. Why can't we do the same thing in Australia to create more jobs and
34:36
opportunities for people who are interested in that, and passionate about it, rather than seeing a lot of Australians moving to the United States to
34:44
follow those types of careers? Well, Genevieve, at least Intel invested in Australia by bringing you back here, which was very kind, which was a good thing. We had to steal her. Borrowed.
Future of tech in Australia
34:54
I think one of the interesting things has been watching the complexion of where big tech companies are putting their next generation hubs. So we see a
35:03
growing presence of Microsoft in Sydney. Salesforce has just made an announcement this week about where they're going to put people. We see
35:08
Amazon building up its centers here, and I think that's as much a positive reflection on our education system, on the kind of skills that are here,
35:18
and about the kind of opportunities there are here, so I think there are signs of positive activity. Yeah, and my sense is that is a
35:26
priority - that is a national priority - to attract multinationals here, and there are success stories. So Boeing is an example: its biggest R&D facility outside
35:39
of the US is in Australia. CSIRO has a long-standing relationship with Boeing
35:46
and you know north of a hundred million dollars worth of collaborative research undertaken together. So I think it is shifting, and I know it's a
35:57
priority for the country. It's big in all of our work, and it's something I think a lot about too and wonder why. I know Canada had a really purposeful
36:04
strategy to make that happen, and what it's achieved, and that's been great. We are seeing it happen a bit internally in Australia there, if we look at the stock
36:12
market, and we take the ASX 200 and look at the index of performance of all of those, and we get the infotech companies like WiseTech, REA Group,
36:22
Technology One to name a few - and Carsales would be another one - and we see a split. In about 2017, the infotech companies start rising a bit like the Nasdaq
36:31
and the Dow Jones did sort of 15 years ago, that is beginning and I think if we
36:36
look at some of our large tech companies - incredible rapid growth, and it's really
36:41
quite exciting to see it happen with jobs and salaries going with it as well. It could be sort of really about to launch and explode, so I think watch
36:49
Australia's tech sector, watch our platform companies - I think we're about to see them launch. You know, about mid-2017 is when the lines on the graph
36:58
really start to depart. I'll cover it more in my speech this afternoon, but that is I think a really important trend for us to look at. But then we need to
37:06
start to answer your question and ask why - what conditions do we create here that really attract the pure, the real R&D parts of these
37:15
companies to set up in Australia and draw upon and build our workforce for advanced digital tech R&D, so my kids get good jobs doing this as well, we need to
37:23
answer that question. One more thing, so if you look at other parts of the world, they're very, very good at creating new industry and
37:30
turning ideas into products, into companies, into global franchises, there's a function called product management that doesn't
37:39
exist as strongly here. And it's an artifact of history that a lot of
37:45
multinationals view Australia as a sales and marketing outpost, versus doing
37:51
primary R&D and product development. And that function sits across a deep understanding of what's possible technically, strong
37:59
technical aptitude, with a point of view of where the market's going, and bringing those things together. And it's something where I think there's
38:08
opportunity, more opportunity in the university sector to produce people with
38:13
those capabilities. I really agree. I think Australia does all this amazing research, and all the bits and pieces, but never puts them together to create the iPhone
38:20
which the customer actually wanted, so you know being an insider for 20 years, that's where I think one of our failings is. We do awesome research in all sorts
38:28
of ways, but no one puts it together into a consumer product. Is that how you would see it Adrian? I think it's the
38:39
integration and the packaging, but it's also thinking about markets - where are markets going - it's thinking about a value chain, where to enter in a value chain, it's
38:47
thinking about pricing. So with a lot of the new data driven industry there's usually elements that are not, you know you don't charge for them, you give them
38:55
away, so thinking about you know different criteria for valuation, and I
39:01
know some of the platform companies, it took a long time for analysts here to understand the differences in the businesses and the economics, and they
39:11
were undervalued and misunderstood, and I think that's turning around as well. But lots of opportunity for universities here as well I think. Great. Question at
39:20
the front, can we get the mic? Got it? Where's our microphone gone?
39:28
Nope? Not sure. Microphone's up there. Shout it out. Can you yell? Yes, shout it out.
39:35
First of all thank you so much for an erudite and stimulating
39:48
discussion. My question's about ethics and it's about whether we should be winning
39:53
the race, or running a good race. The Human Rights Commission recently
39:59
concluded an inquiry into the ethics of AI which was I think probably the first
40:05
time in the world a human rights commission had done that. Dr. Evatt was the president of the General Assembly when the Universal
40:12
Declaration of Human Rights was introduced and was a big proponent of that. Do you think Australia can continue to play that sort of game on
40:21
the international stage? The 3A Institute is doing some of that, thank you, but do you
40:27
have faith in the current systems of governance to deliver that sort of good
40:33
race? What a great question. And yes - though the thing is, for me, if we
40:40
look at the metrics of what winning looks like, which is Australians with better quality of life in the future, we've got to win the ethics and the tech
40:48
development all at the same time. You know, without ethics we won't win this race in a genuine sense, we won't end up with the benefits of AI, so
40:57
yeah, absolutely crucial. Ethics in itself may become a bit of an industry for us
41:03
as well, doing ethical AI, people buy AI that is more ethical, I think that
41:08
putting that into the mix is important too, so the two kind of go together, but it's something to be very aware of. You know, our first
41:17
project was on the ethics of AI, and the Australian Government will publish some principles around that soon, but you know we've seen it as pretty crucial to
Ethics
41:25
winning. I think one of the challenges there however, is that ethics is another one of those loaded terms, where the relationship between ethics,
41:34
morality and values on the one hand, and ethics standards and policy on the other,
41:40
is a complicated one right, so what it means to think about something that is ethical or not is more contested than it sounds. I spent a lot
41:48
of time at Intel, where the engineers I worked with used to say, just tell us what's ethical, like the five things, we'll just build it. You're like, okay,
41:54
let's take a step back there for a moment. And so one of the things about ethical decision-making is that often by the time you get to, well, you
42:02
know, the trolley car problem, it's the what's-the-least-worst-choice-you-could-make, and that's not where we want the technology to be, but the preceding set
42:10
of conversations about what we regard as ethical, is not straightforward. You know in the arc of our lifetimes in Australia we have had conversations that
42:19
spanned what was in the law, versus what the population believed was morals, versus what some people believe was ethical or not, and those were not simple
42:27
conversations. In the last 60 years those have included in Australia, the use of the death penalty, ideas about death with dignity, ideas about our Aboriginal
42:37
sovereignty, ideas about land rights, ideas about damming the Franklin - all of those were litigated as conversations that were sometimes about
42:44
values conversations, often about morality, frequently glossed as ethical, and in any single one of those, even at the point that we reached a resolution
42:52
at law, we didn't have an agreement in the Australian population about what was ethical, and what wasn't. And I think one of the really complicated problems we
43:00
have in talking about AI and ethics is un-bundling that - what is the
43:05
ethical stance, and how does that then relate to standards, policy and the law - is not as straightforward as saying we should have AI and ethics. And frankly
43:14
it is one where, and I do think about Doc Evatt, and I think about the Declaration
43:19
of both the original League of Nations and the human rights framework and of thinking about what that might have looked like had it started in a society
43:26
that wasn't driven by the individual. And about what it might mean to imagine different kinds of values. Now, the interesting piece about that
43:34
conversation starting in Australia, is that we're one of two countries in the world where there is a conversation about data sovereignty, that isn't about sovereignty
43:42
in the sense that my two colleagues are using it, but in the sense of we have people who've been inhabitants of this country for 60,000 years and their
43:49
notion about what is data, and what it means to access that, provides a completely different way into a conversation about privacy, trust, risk, as
43:57
well as sovereignty, ownership and transparency, and there are opportunities there for conversations that are much more interesting and in some ways
44:04
more sophisticated than what's happening. I do think, however, to be responsible, a
44:09
conversation about AI and ethics actually has to be preceded by a conversation about what it is we believe we are as a nation, and I don't think
44:17
that's an easy conversation, and I don't think it's a simple one, and I actually don't think even in this room there would be agreement about what it means to be
44:23
an ethical Australian in the 21st century. So to come back to that, how do we balance this then? On
44:29
the one hand we're talking about the need to create a sense of urgency because of the wonders that lie ahead of us that we'll miss out on if we don't
44:36
move quickly enough, but on the other hand, if we move too quickly, then we create systems and processes and unintended consequences. So how do we
44:42
balance that out? It wouldn't be the first time we've said we didn't want to move with undue haste, right - let's think about how we manage new medical technologies.
44:50
Let's think about how we think about certain kinds of forms of biosecurity. Let's think about how we think about certain kinds of drug regimes. On the one
44:56
hand, on the other hand we know what happens in Australia when you move hastily and do it because everyone else does it, you end up with rabbits, and then
45:04
cane toads, and then goats. That wasn't CSIRO, cane toads, we got blamed for it.
45:10
No in fact CSIRO gave delightful advice to the government organization. We told them not to do it
45:15
and then everyone thought it was us. Yeah, no indeed. But you know, we know what happens when you introduce something, because then you go, Oh, It'll be
45:22
fine. But so Google's company DeepMind approached the National Health Service in the
45:28
United Kingdom with a machine learning algorithm that was gonna trawl through people's private health records, and that could be used to predict acute
45:35
kidney injury - which is, if there's something wrong with one of your kidneys, you want to know so you can manage the condition, and that can save
45:42
your life potentially, so it's really important, really useful. But the NHS pulled them up and said No, you have not got permission from all these people to
45:49
use their private data. Now, the algorithm was just gonna trawl over it and as far as I could tell there was no risk of any exposure or incorrect use of people's
45:57
private data, but it was held back. The consequence is, probably some people are going to end up dead because they don't know they have
46:02
acute kidney injury, and we could have done it. So holding back had another ethical consequence as well, and I think that's where we can't just say let's
46:10
hold back because it could be unethical - by doing that we actually incur a cost somewhere else. AI, I'm convinced, in cancer diagnosis for example, if AI can
46:19
be used - we don't even need AI, we just need basic data analytics - actually to start saving human life in Australia with cancer diagnosis, doing it earlier.
46:27
But you know, if we hold back on it, we prevent those positive outcomes as well. So there's trade-offs. It's not the sort of thing where we can say let's hold back
46:34
and wait till we're sure it's ethical, there's really positive applications we can make with AI in Australia right now. And right there is the classic case that
46:42
any philosopher will tell you is what an ethical dilemma looks like. It is, and it's exactly the right piece of it: how do you then imagine a
46:49
trade-off between two outcomes, neither one of which is unambiguously good. And
46:55
thinking about what an ethics framework is, isn't just about stating a series of principles that you agree are all, you know, wonderful and good - it's also
47:04
about what do you do in a situation where you're making that trade-off. I was wondering how long it would take us to get to the trolley problem. And if you
47:09
haven't watched the series called The Good Place, I definitely recommend you do so. Adrian? And also, if we take this to the next step - so it's positive, we agree,
47:18
right, there's an agreement in principle on things, then how do you
47:23
encode that in systems? And that's where you've got, you know, Australian groups
47:29
like the Gradient Institute that are starting to develop tools to visualize
47:34
the trade-off decisions, and make them explicit. And there I think Australia has
47:41
an opportunity in developing those sorts of tools.
47:47
So, you're at the - can you still say coalface? Are we still allowed to say that word? - you're at the coalface when it comes to the development of cyber-physical systems.
47:52
Carbon-neutral coalface. Carbon-neutral coalface, clean coalface, whatever, okay. The solar face. Anyway, where do these considerations sit then in terms of the research programs
48:02
that you develop, at what point do you start having discussions with your researchers about ethical consequences of the work that you're doing? Well, I
48:08
guess we can't do any work that's going to impact on people without having formal ethics clearance, and that can be quite a long and
48:18
complicated process, to get approval from ethics committees to be able to conduct that research. So it has to be very central to mind when developing these
48:28
things. Can that then be scaled out? We talked before about the fact that many of the world's most valuable companies are private sector organizations who are
48:34
not necessarily so well regarded when it comes to the ethical outcomes of the work that they've done. Is there some chance then perhaps that we can take the
48:42
learnings we already have from the research sector and apply that in the commercial world? Yeah, I think so. Has it been done? Genevieve, have you ever seen
48:51
that work? Yeah, I mean there's a number of obviously large global companies who
48:56
are starting to think through a set of technical systems that had potential none of them imagined. I mean, it's an interesting proposition, right - if you
49:04
were 20 years ago building a tech company that you thought was solving a very narrow band problem. I'm going to deliver things that can be flat packed.
49:10
Which was the Amazon value proposition. From when Bezos initially made that company it was, I will sell things that you can stick in an envelope, so it was
49:17
sort of like, what could I flat-pack? That was his kind of, you know, starting proposition. Flash forward more than 20 years: that is a company that has a
49:27
physical and digital footprint in at least 50 percent of American homes, and many more globally. What it means to go from a company that thought they were
49:35
selling things you could stick in an envelope, to a company that effectively has an object that can listen to the activity in 50 percent of
49:41
American homes, is a very different kind of end point. I think for companies like Amazon and Google and Facebook but also companies like Microsoft, IBM, Intel,
49:52
all those companies have been grappling for the last couple of years with how it is that you want to frame those conversations up, both in terms of how do
50:00
you start to have a conversation about what are the consequences, unintended and otherwise, of your technology, and then also the other questions, which I
50:07
think are equally complicated, inside government and inside the public sector, about if you decide you have an ethical guideline or an ethical set of
50:16
principles or a principle about those things, who's enacting them? Who's held accountable? What's the chain of reporting that goes up? What are the consequences of
50:26
violating that? Where are the conversations gonna get staged when you get to a moment where you're trading things off, to Stefan's
50:32
point. And I think we've been very good at articulating the beginning of that
50:40
I think we need scenario planning, you know. I mean, I don't think Facebook had in its imagination what would happen with Cambridge Analytica in the US election, and its
50:48
eventual 7 billion Australian dollar fine that got leveled against it. You know, those sorts of outcomes weren't really being envisaged, but there was almost
50:55
some sense of inevitability, given the power and the size of the tool they created and what would get done with it, so I think we need to start
51:03
to generate scenarios around these emerging technologies to try and think about these things earlier, and maybe get better outcomes. I suspect there's
51:10
also a call here, this is not just about how do you change the conversations in industry, I think it's about how do you change the conversations in government too.
51:16
What does it mean to think about up-skilling our regulators, our policymakers
51:21
and our politicians to have these conversations too. Great. We might go back to the floor then; I saw one hand go up, but I can't see much because of the lights.
51:30
Hi, it's Peter Carney, I work for Toll, which is a logistics company. Not just for
51:39
Toll, but for Australian companies generally, who's missing from management? Who do we need extra in the team to be able to take
51:49
advantage of all these great developments, what conversations should we be having, and who should we be collaborating with? If
51:58
you had a chance to influence Australian companies, what would you say?
52:03
We work with Australian companies a lot, and what we hear is, you know, we've heard of this thing called AI and data science, we know it really matters, we've got a strategy, but
52:11
then it starts to get really blurry about what's actually happening. Our response is really to start to improve that; we're working on a data-driven
52:18
organisations paper, which tries to look at what a data-driven enterprise does, what it looks like, what its capabilities are, and how it works. We're trying to
52:25
put evidence behind it, and explain what the business models look like
52:32
that actually work in this world. So there's an education challenge ahead of us to try and give them the next level of detail so that they can
52:39
make these technologies work in their businesses. There's a preceding problem too, which is about how do you ensure you have the greatest diversity of voices in
52:48
the room. Think of some of the pieces that Adrian was flagging earlier, about challenges with data sets in particular, but also the whole future
52:56
we are moving to. It's not just about having people who are good at data science, or tech, or foresight, all of which are excellent things to have; it's
53:05
about how you get a set of voices in the room that represent a broader range of lived experiences and
53:12
backgrounds, and disciplinary practice, because frankly you actually need to have the people who are going, huh, I have no idea about that thing that you've just
53:20
said. The challenge with that, I will say, having run those kinds of organisations in my time in the US, is that the more diversity you have in the room, the
53:28
harder you have to work to manage the inevitable conflict that comes with it. It's one of the things we never talk about when we talk about having diverse
53:34
voices in the room, is that you actually have to work on the inclusion piece too, because as soon as you have people who come from different backgrounds, you
53:39
don't have the same shared shorthand, you don't have the same shared frame of reference, and it's really easy then to either argue just for the sake of it, or
53:48
not listen. And I think one of the challenges we actually have in any workforce of the future, and any management team of the future, is how you
53:55
actually create the space to have the kind of conversations that are necessary, which actually means having a whole lot of voices in the room who aren't usually
54:01
there, that people don't know how to listen to. And so not just having more voices, but having a set of practices that are better at making
54:07
space for them, is an interesting part and parcel of the challenge we're facing. I think it's everybody's responsibility in a management and
54:16
leadership team, to lift their digital literacy. It's not about plugging in a
54:22
digital expert. Having said that, I was having a conversation in the last week
54:28
with one of the big global recruiting firms who were saying, if you think back
54:34
over the last five or ten years, the path to CEO for a big multinational
54:40
incumbent company was usually through the CFO, so you do a
54:47
stint as CFO around the world, you get moved around, and you're in pole position to be CEO. They're saying that's no longer the case. The
54:56
path now is, call it Chief Digital Officer, call it somebody who's responsible for growth, usually M&A, probably marketing and brand
55:06
as well. That function is the fast path to leadership, and what they were saying was
55:12
in the past you'd build a methodical, 20-year career collecting experiences;
55:18
now there's a sense from incumbents that there's an urgency, that there's
55:24
structural change going on in their industry, there's competition coming laterally - they don't have time, and so you've got people who can collect the
55:33
right set of experiences over a short period and then kind of zoom straight to that position in big global companies. Good, and that
55:45
just brought us to the point where my little countdown clock has turned red and got angry with us, so that tells us that we need to wrap this up. So ladies and
55:52
gentlemen, if you could please join me in thanking my fantastic group of panellists today. Adrian, Genevieve, Stefan and Sue. I'll be back a little bit later to host another panel
56:01
looking more deeply into some of the issues around AI for social good, but in the meantime, thank you all for listening.
Winning the AI Race panel - D61+ LIVE 2019

Speaker: Dr Mahesh Prakash

About: Dr Mahesh Prakash on Spark, Data61's bushfire prediction modelling tool. Spark is an open framework for fire prediction and analysis. It takes our current knowledge of fire behaviour and combines it with state-of-the-art simulation science to produce predictions, statistics and visualisations of bushfire spread.



0:02
My name is Mahesh Prakash and I'm one of the group leaders in Data61, and my
0:08
group is mainly focused on hazards and how they actually impact infrastructure,
0:14
so mainly in a city context, and we also look at climate risk in that context.
0:19
Most of Australia's population is based either along the coastline or lives in
0:25
bushfire-prone areas, so most Australians, probably about 70 to 80% of them, are
0:31
going to be impacted either by floods or fires. Queensland is one of the most
0:35
disaster-prone states in Australia, so there was a lot of interest in that area, and
0:41
then in Sydney, because it's a financial hub, I can see banks coming to our booth and
0:46
asking, can you actually predict the impact of climate
0:51
risk on house prices? So it actually affects the community a fair bit, and it also makes a
0:56
big difference to things like infrastructure investment and so on, so
1:01
yeah, that's kind of the reason why I think it's quite important.
D61+ LIVE 2019 Interview Dr Mahesh Prakash

Moderator: Alezeia Brown, CSIRO’s Data61

Speakers: Dr Cheng Soon Ong, CSIRO’s Data61, Dr Sue Keay, CSIRO’s Data61, Dr Olivier Salvado, CSIRO’s Data61

About: Hear from our subject matter experts on the latest developments in artificial intelligence and machine learning, what the future has in store, and the big research questions that remain.

0:01
All right, good afternoon everybody and thank you very much for joining us today. We're here to have a panel to talk about AI and how it can
0:10
amplify the human experience. One of the numbers that you may have seen advertised recently is the prediction that AI will generate $13
0:18
trillion in economic activity by 2030. Closer to home in Australia, there was a report
0:24
released by AlphaBeta last year that said AI would be a $315
0:29
billion opportunity for Australians to take hold of, and so today I'm joined by some experts in Data61's AI and Machine Learning group
0:38
to discuss how it can amplify our experience. Directly to my left is
0:43
Dr. Cheng Soon Ong, who is the Principal Research Scientist within Data61's Machine Learning Group. Next to him we have Dr. Sue Keay, who you may have seen
0:52
speak earlier today. Sue Keay is the Research Director of our Cyber Physical Systems Group, which includes robotics and cybernetics, and last but definitely
1:01
not least we also have Dr. Olivier Salvado, who is the Group Leader of
1:06
Computer Vision and Imaging at Data61. My name is Alezeia Brown and I'm the
1:12
Product Lead for a machine learning program called StellarGraph, which does machine learning on networks, and which you can see further down. So, let's get to it.
1:20
What I'd like to do is throw over to our panellists, because AI is such a varied
1:26
area, and I want them to tell us who they are and what they do. Cheng, let's
1:31
start with you. So my research area is in machine learning, and I spend quite a lot of my time working with scientists in other fields of science
1:40
such as genomics, systems biology, medical imaging and astronomy. Now, what
1:47
that means is essentially as science becomes more data intensive we're starting to use machine learning more and more to analyse the scientific
1:56
data sets and to use machine learning to make predictions. It's kind of important
2:01
to remember that machine learning is purely about prediction. We have some data, let's say your genome, and you want to make a prediction about something
2:09
like your risk of disease. Now, this prediction is never perfect and the word prediction is also somehow not very accurate, but this
2:17
is kind of the thing that machine learning is trying to learn from data.
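To make that concrete, here is a minimal sketch of "learning a predictor" in code, using scikit-learn on invented stand-in data; the features, labels and model choice are illustrative assumptions, not the actual genomics pipeline discussed on stage:

```python
# Minimal sketch: learn a predictor from data, then report a risk probability.
# The data here is synthetic stand-in data, not real genomic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # 500 "patients", 20 numeric features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The prediction is never perfect, so report a probability rather than a bare label.
risk = model.predict_proba(X_test[:1])[0, 1]
print(f"predicted disease risk: {risk:.2f}")
```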
2:22
Now, I'm also very pleased to be able to co-lead an activity that
2:27
CSIRO is putting forward; it's called the Machine Learning and Artificial Intelligence Future Science Platform which is a strategic investment
2:36
that we're making over the next three years with tens of millions of dollars to try and push this agenda forward of using machine learning and artificial
2:45
intelligence for scientific discovery, so I'll tell you more about this later on.
2:52
So I'm the Research Director for Cyber Physical Systems within CSIRO's Data61,
2:59
and I like to think of cyber physical systems as just a flash way of saying that we work with bringing the digital and the physical together.
3:09
The sort of research that happens within my program includes robotics and autonomous systems, distributed sensing systems, imaging, computer vision and cybernetics
3:17
and all those things can be applied across all different industry sectors, so
3:23
it's just fascinating to see where a lot of this technology might take us. Another way that I like to think of the work that our research program does is by
3:32
simply calling it embedded artificial intelligence, so it's artificial intelligence that's being applied to physical objects. I'm the Group Leader for
3:43
Imaging and Computer Vision, I'm working with Sue, and basically we develop machine learning methods to analyse images. We have three categories:
3:53
we have technologies for when the videos or the images are coming from fixed cameras in a facility like this one, for example; the second category is
4:03
where the camera is mobile, so either on a robot or in an augmented reality
4:09
setup; and in the third category we have a bunch of projects where
4:15
we work with dedicated imaging equipment, such as magnetic
4:22
resonance imaging or other scanners, to measure and quantify objects that are
4:27
known. So we either try to understand people's behaviour by
4:34
extracting their skeleton as they perform actions in 3D in an environment, or try
4:41
to localise where the different cameras are in unison, or finally measure
4:48
things happening, like rare events or objects in a scene. So there are
4:55
lots of projects on, and lots of opportunity. Isn't it just amazing how broad
5:01
machine learning and artificial intelligence can be? Just with the four people on this stage, they're covering such a wide array of technologies and
5:08
potential applications. The thing that gets me really excited is how we can actually grow this particular industry and grow the data science capability
5:16
within Australia. So within the program that I work on it's all about trying to grow the data science capability within the public service by using graph
5:27
technology to take lots of different data sets, connect them, and create a three-dimensional model so you can then gain greater insights from the
5:35
relationships. It's the kind of thing that gets me up in the morning.
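As a rough illustration of that idea of connecting data sets through their relationships, here is a toy sketch; it uses the generic networkx library rather than StellarGraph's actual API, and the records are invented:

```python
# Toy sketch: link records from separate data sets into one graph, then query it.
import networkx as nx

G = nx.Graph()
# Nodes from two different data sets: a company register and a list of people.
G.add_nodes_from([("ACME Pty Ltd", {"kind": "company"})])
G.add_nodes_from([("J. Smith", {"kind": "person"}), ("A. Jones", {"kind": "person"})])

# Edges from a third data set, e.g. a register of directorships and holdings.
G.add_edge("J. Smith", "ACME Pty Ltd", relation="director")
G.add_edge("A. Jones", "ACME Pty Ltd", relation="shareholder")

# Once connected, cross-data-set questions become simple graph queries.
for _, person, attrs in G.edges("ACME Pty Ltd", data=True):
    print(person, "-", attrs["relation"])
```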
5:41
Cheng Soon, what's the thing that gets you up in the morning, what excites you about this industry? I think it's worth thinking back a little bit. What we
5:48
think of as scientific discovery today is roughly four hundred years old; it's almost exactly four hundred years ago that Francis Bacon proposed this
5:56
idea of scientific discovery, where on one side you have data and on the other
6:01
side you have knowledge, and to go from data to knowledge you do this thing called observation, you observe the world to gain knowledge from data, and machine
6:09
learning so far has really been focused on applying computational tools
6:15
to go from data, to understanding the data, to gaining knowledge. Now, there's the reverse
6:20
side which is to use the knowledge that we gained - remember I said machine learning is about building a predictor - when you have a predictor you have this
6:28
knowledge embedded in your machine learning tool. You can use it to go from knowledge to data, which is, you can ask questions like
6:37
where should I measure, what type of data should I get, where is the most value for money that I can use to go that way, and that side of
6:45
scientific discovery is not just limited to science; I mean, it's applying everywhere, you can use it in industry, health services, government
6:53
wherever you like, this question of where we should measure such that we gain the best bang for our buck.
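One simple way to make "where should we measure next?" concrete is uncertainty-based active learning: ask the current model where it is least sure, and spend the measurement budget there. A hedged sketch with invented data:

```python
# Sketch: choose the next measurement where the current model is least certain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_labelled = rng.normal(size=(20, 5))        # the few measurements we already have
y_labelled = (X_labelled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(1000, 5))          # candidate points we could measure next

model = LogisticRegression().fit(X_labelled, y_labelled)

# Predictive entropy is highest where the model is most unsure.
p = model.predict_proba(X_pool)
entropy = -(p * np.log(p + 1e-12)).sum(axis=1)

best = int(entropy.argmax())
print(f"measure candidate {best} next (entropy {entropy[best]:.3f})")
```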
7:03
Olivier, when we think about autonomous vehicles, obviously computer vision plays a big part of that; what else can you use that tech for, and what's going on in that space at the moment?
7:11
So these techniques are used for not just autonomous cars; they also apply to
7:18
autonomous vehicles that are used for agriculture, for example surveying,
7:25
surveillance with geographical imaging, but there are lots of
7:31
applications where we want to understand behaviour of people, we want to detect if
7:38
it's a safe environment for people to work, or to detect how many objects
7:44
are in a scene, for example the hospitality industry or restaurants,
7:51
how many customers are present in a restaurant at a given point in time, or in a factory where many, many people are moving sensitive assets.
8:04
There are other applications too; one of them is
8:12
agriculture, which uses a lot of computer vision to estimate the yield of
8:18
crops, to move tractors autonomously, and to understand the science behind
8:26
the plants and the food that we are eating. Another big area that has a
8:31
lot of projects going at the moment is trying to quantify radiology.
8:37
Medical imaging produces a lot of MRI, PET scans, CT scans, and that creates lots
8:45
of images, and the current way, or the traditional way, is for radiologists to
8:53
spend a lot of time doing very tedious reviews of hundreds of slices of the brain
9:01
or the body. All those tasks can be quantified by AI now, and there are an
9:06
awful lot of startups and projects that demonstrate that medical imaging can be
9:14
quantified to support and help the radiologists to do the more interesting stuff and to increase the number of people that can use those technologies,
9:23
those medical technologies. Just for the rest of the audience here, so computer vision, is that camera technology that's observing the world around us? Is it
9:33
technology that's crunching the information from it, is it a combination of both? Right, so we are looking at standard cameras, and sometimes specialist cameras, like a
9:43
hyperspectral camera, which takes not just red, green and blue, but hundreds
9:49
of different colours. A lot of it comes from video, but there is also data coming from LiDAR
9:56
with laser scanning for 3D, stereo imaging, and of course all the medical imaging with its 3D datasets, so we get a
10:06
cube of data, sometimes over time, which of course complicates everything we do
10:12
and you need more memory and more computers. Excellent, and I guess one of
10:18
the things that we find when we think about software development or machine learning is that what we do isn't always very visible, but Sue,
10:26
when we think about robots, there's such a visible thing, that tangible thing that people can really get a hold of; what are some of the big developments that you've
10:33
seen in your space? So one of the biggest changes is going to be I think how we
10:38
view what robots are, so I guess up until now there's been a lot of work
10:45
in developing different types of robots that can do different things, but we're now starting to get to the point, as we've seen with self-driving cars, where
10:54
you can just turn commodity products into robots, and I think that that's
10:59
going to open up a lot of opportunities. So you can imagine if you could make your chair a robot, there might be a lot of interesting uses you
11:08
could put that to. But rather than us actually building everything from scratch, it's more about how we can work with what is actually already in existence
11:16
and serving a useful function, and just making that autonomous. That's super cool, can you give the
11:24
audience some other examples about how we might take a normal object and then kind of turn it into a robot? Is there some tech that we've got that does that?
11:31
One example would be in the logistics area where you might
11:37
need to move material from one spot to another spot, so you can imagine that if
11:42
you were shopping, you might look at how you can have an autonomous shopping trolley that, rather than having a human pushing it around,
11:51
could actually be finding its own way around, and without having to reconfigure all the shops then you could just use the existing technology but make it a
12:01
robot, and that also makes it a lot cheaper, because at the moment we use what are called AGVs, or automated guided vehicles, and typically
12:09
each unit would cost about $100,000 at minimum and then you'd have to pretty
12:14
much design your factory or warehouse or your logistic space to suit that AGV,
12:21
whereas if you could just make the material and the trolleys and other things that you already have in your warehouse into robots, you can imagine
12:31
there are a lot more things that we could do. If we're talking about how AI and machine learning can amplify our own human experience, and we've got
12:39
an ageing population, do you feel that this kind of tech might be really supportive and helpful for them, if we can take things that they're used to
12:45
and automate them to make their lives easier? Yeah, I used an example in my talk this morning of companion robots, and I do think that
12:54
we're going to see a transition from home assistants into robots, because really all that's doing is making your home assistant vision-enabled, using
13:02
some of the technologies that Olivier specialises in, and then making them mobile so that they can be walking around or moving around the house and
13:12
keeping an eye on things. You could probably appreciate that you could achieve the same ends by wiring your house up with sensors and
13:21
putting cameras everywhere but not many people feel very comfortable with that, but they probably feel quite comfortable having an assistant that is wandering
13:29
around, just making sure that things are okay but it's not like having eyes on you all the time. That brings a really interesting topic into the
13:38
forefront; a lot of the time we speak about ethics in machine learning and AI but that starts to bring in the concept of trust, so how do we trust the robots, how
13:47
do we trust the data that has been generated by algorithms? A lot of the work that we do in my team is in high-consequence situations,
13:55
where, based on the data that is presented and the predictions, you might have to make a decision that could lead to a high-consequence outcome, so we do a lot of
14:03
research into how can you help people trust the data, how do they need to interrogate the data. I'd really love to throw the question out to
14:11
the rest of the panel and see, in your work, has that come up, and what are the kinds of things that you're experiencing - Cheng Soon? I think one of the things
14:19
about trust is this question of being able to understand what the machine
14:26
learning algorithm is predicting, such that it helps us make good decisions.
14:31
So you can do this in multiple ways; one way we try to do this, and this is a very
14:36
statistical view, is that when the machine learning algorithm makes a prediction it also reports its uncertainty, or its confidence if you
14:45
like, whichever way you prefer, about this prediction, such that you can use it in downstream decision-making. Now, how to calibrate that uncertainty is a big and
14:54
difficult question because part of it is to understand the consequences of the
15:01
decision. I like to use this analogy - I don't know how many of you remember from high school statistics that there's type 1
15:09
and type 2 error, there are two types of error that you can make when you make a decision, you can make false positive and false negative decisions. I could
15:16
never remember what that is, so the way to remember it is there's the story of the boy who cried wolf, the villagers first made a type 1 error
15:26
and then they made a type 2 error. First they believed that there was a
15:31
wolf when there wasn't a wolf, and then, when the wolf finally came, they didn't believe that the wolf had come, and in some sense when
15:39
you use machine learning predictions to make decisions you make both those kinds
15:45
of errors, and you have to trade one off against the other because you cannot get both of those errors down to zero, and I think this is the kind of fairness-type
15:53
question we have to answer.
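The wolf story maps directly onto a classifier threshold: lower it and you raise the false alarms (type 1 errors); raise it and you miss real wolves (type 2 errors). A small sketch with synthetic scores makes the trade-off visible:

```python
# Sketch: the false-positive / false-negative trade-off as a moving threshold.
import numpy as np

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=1000)      # 1 = a wolf really is there
# Synthetic "wolf scores": noisy, so the two classes overlap.
scores = np.clip(labels * 0.3 + rng.normal(0.35, 0.2, size=1000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    alarm = scores >= threshold
    fp_rate = np.mean(alarm[labels == 0])    # type 1: cried wolf, no wolf
    fn_rate = np.mean(~alarm[labels == 1])   # type 2: wolf came, no alarm
    print(f"threshold {threshold}: false positives {fp_rate:.2f}, "
          f"false negatives {fn_rate:.2f}")
```

Sweeping the threshold pushes one error rate down only by pushing the other up; choosing the operating point is exactly the consequence-weighing the panel describes.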
15:59
That kind of leads to maybe not just trust but trustworthiness: how do you build trustworthiness into the system so people can actually take a decision? With autonomous vehicles,
16:08
Olivier, you must have to deal with this all the time as part of computer vision, so if an image is presenting itself and you've done some kind of processing, how
16:16
do you build that trustworthiness that people will act or understand or take advantage of it? That's a big topic, and a chance to explain
16:25
a few ideas which also apply to that case. The thing is that
16:30
most of the machine learning that is used at the moment is supervised, so that means you have very large data sets which have been
16:41
labelled, which means they have been put in two buckets, and you train a computer
16:46
over millions of hours of computation to recognise the patterns in the data that
16:52
are associated with either the blue team or the red team, and the computer,
16:57
when you present a new data set, just tells you which one is the label, the
17:04
most likely, according to what has been seen in the training data, and this is
17:11
how things work most of the time. The problem is that for developing
17:17
autonomous vehicles, for example, you would have hours and hours of video and then the system would be able to learn from those videos, and if you
17:26
compare that to a human: you get a teenager, and we can learn to drive a car in
17:32
probably 10 hours, and then assuming you trust your teenager, that teenager would
17:38
probably be able to drive that car in any number of different countries, whereas the computer, even though you spend those millions of hours
17:45
training on that particular data set that is coming from a German database
17:51
with German signs on roads, driving on the right, the same program
17:56
driving a car in Sydney would not be very trustworthy; I wouldn't trust that
18:03
car enough to sit in the passenger seat. I think there is a big gap between what the computer can actually do, which at the moment is just to reproduce
18:13
what has been seen in the training data, with what is the expectation of people, so
18:19
when you get into a car, the car has been seeing millions and millions of hours of
18:26
driving - that's what Google is doing, they have cars driving everywhere to get the training data sets - and trust would come when perhaps we get the next level of
18:39
intelligence, where the system would have some kind of understanding of the basic
18:44
rules or use common sense. At the moment the machine learning system, the best one,
18:51
is probably much stupider than the worst rat; a rat is much more
18:58
intelligent than any system you can develop, so this is a big topic of research. That's really interesting when obviously the work that CSIRO and Data61
19:06
does is very much based in deep research, and while we've made such great headway it still sounds like there's so much more that we need to do. Sue, as the
19:15
Research Director of the Robotics Group, what are some of the key areas that you think we need to start research on, or are kind of the next emerging areas? Well, if
19:25
I say the application of computer vision to robotics then Olivier's going to ask me for more money, but obviously we are increasingly applying techniques
19:35
like deep learning to robotics, and to good effect. It increases the speed
19:44
with which robots can learn the tasks that they need to accomplish, and decreases the amount of complicated coding that needs to go in
19:52
and also enables the robot to really modify its behaviour depending on
20:00
the circumstances, in ways that haven't been possible before deep learning was applied, and that really has only happened in the last five years so
20:08
that's been a significant change to the way a lot of that research is conducted, and just to give you a little preview of a talk that you have to go to tomorrow
20:17
morning by one of our researchers, David Howard, I think another really interesting area is the application of machine learning to the design of our
20:25
robots, so David has been applying machine learning to first work with our
20:31
material scientists to look at what composition of material we should use to
20:36
create a robotic component depending on what we want that component to do, and then applying machine learning to determine the shape that that component
20:45
should be to achieve its objectives, and so you can imagine we can come up with
20:50
some pretty funky designs using machine learning that perhaps would take a human some time to think of, and then finally David is using machine learning
21:00
to actually get that robotic component to train itself up on how it's supposed
21:05
to behave, so really we're just skimming the surface now on the sort of
21:11
opportunities that are going to be available to a lot of these physical systems by applying machine learning.
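One way to picture machine-learning-assisted design is as a search over design parameters against a scoring function. The sketch below uses a simple evolutionary loop and a made-up objective; it is an illustration of the general idea, not the actual CSIRO design pipeline:

```python
# Sketch: evolutionary search over component design parameters (toy objective).
import numpy as np

rng = np.random.default_rng(3)

def score(design):
    # Made-up stand-in objective, e.g. stiffness vs weight for a component.
    length, thickness, curvature = design
    return -(length - 2.0) ** 2 - (thickness - 0.5) ** 2 - 0.1 * curvature ** 2

population = rng.uniform(0, 3, size=(50, 3))
for generation in range(100):
    scores = np.array([score(d) for d in population])
    parents = population[scores.argsort()[-10:]]       # keep the 10 best designs
    children = parents[rng.integers(0, 10, size=40)]   # copy parents...
    children = children + rng.normal(0, 0.1, size=(40, 3))  # ...and mutate them
    population = np.vstack([parents, children])

best = max(population, key=score)
print("best design (length, thickness, curvature):", np.round(best, 2))
```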
21:20
Someone literally asked me at a conference last week whether machines could actually start designing things. So would you say that's now an active area of research, that you could use machine learning to design new products, or
21:28
potentially houses or areas that we live in? Yes, as Olivier was pointing out with
21:34
this afternoon's afternoon tea, the lamingtons - round pink lamingtons don't work for me, but Olivier did suggest that there was
21:41
probably some machine learning experiment behind them. That's a test; we're testing to get some more data on that straight away.
21:48
Excellent, and Cheng Soon, what about you? If we think about where the research is going in the area that you work in, what are some of
21:54
the things where you see the research going in the next two, three, five years? There are different ways to answer that question. In some sense, I work a
22:03
lot with people who look at the fundamentals of machine learning, so these questions about what is learnable, what are the limits of learning,
22:11
how do we encode the things that we take for granted, the intuitions we have, into a function, often called the loss function, such that the
22:22
machine learning algorithm can attack this problem, so those are the very fundamental questions, but I think given all the successes that machine learning
22:30
has had so far, I think it's somehow surprising when I say that actually we
22:35
can only solve a very, very small number of problems with machine learning. If you have a problem that you can convert into something called a classification
22:43
problem, something with a small number of categories, machine learning can solve it; but when you move beyond that, for example if you want your classification to be
22:53
somehow unsure, to allow more than one possible category for a
22:58
particular object, this already becomes quite a hard problem, so in fact a lot of the machine learning research today is about extending the types of problems we
23:09
can solve with machine learning, so instead of going from classification, where you either have to be a dog or a cat, maybe we can look at
23:18
hierarchical things - it's a cat, and maybe inside the cats we have tigers, house cats and different types of cats, so these small fine-grained things
23:26
are what we need as a society, and these are essentially open problems in machine learning.
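A minimal two-stage sketch of that coarse-to-fine idea: one classifier picks the broad category, and a second refines it within that category. The labels and data are invented for illustration:

```python
# Sketch: coarse-to-fine classification in two stages (invented data and labels).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 8))
coarse = (X[:, 0] > 0).astype(int)                   # 0 = dog, 1 = cat
fine = np.where(X[:, 1] > 0, "tiger", "house cat")   # subtypes within "cat"

coarse_clf = LogisticRegression().fit(X, coarse)
is_cat = coarse == 1
fine_clf = LogisticRegression().fit(X[is_cat], fine[is_cat])

x_new = rng.normal(size=(1, 8))
if coarse_clf.predict(x_new)[0] == 1:
    print("cat, specifically:", fine_clf.predict(x_new)[0])
else:
    print("dog")
```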
23:36
Olivier, what about yourself? To go back to my story about the driving, I think the fundamental question that lots of people are thinking about is, what kind
23:44
of techniques do we need to invent so that a computer can learn by itself, to drive a car in ten hours like a human does, and there are two
23:55
broad categories: there are people thinking it's just a matter of complexity, so you just throw in more neurons and more layers,
24:03
slightly different tuning of things, and eventually it would come out of this black box; and then other people say this is not possible with that approach,
24:13
you would just be able to find patterns. We need something new to generalise the
24:18
training and to go to general intelligence. This is one big question.
24:25
Another big question related to that is how you do the training when
24:30
you don't have a lot of data, or when you don't know what you are looking for? Medical applications are especially acute in that because the data set is
24:41
so much more limited: there is lots of variation, and nobody knows where. It's really very difficult to know what you need to learn. A doctor might tell you
24:51
what they think it is, but you actually don't know what it is. So learning under uncertainty, with very limited data, almost no training data set, this is
25:00
a very exciting area. Certainly in the area that I've worked in, which is linked with
25:06
law enforcement, there is this really interesting challenge that there isn't a ground truth, so machine learning tends to be quite data hungry and you need to
25:15
have lots of data to train it on, and you also need to have what is known, what is a ground truth, and if you don't have that ground truth, how do you then train
25:23
the model? This is where we start thinking that we've got to give it the information we do have, so things that we determine are high risk, or things
25:30
that we might determine are most likely, and that's a big challenge for us at the moment that we're trying to unpack, is how do you train things to happen when
25:39
you don't have the underlying absolute ground truth, because if we think about what we do as humans, sometimes we go, "I reckon that this is the way that we
25:47
need to go forward," and you have a gut feel, whereas machines obviously don't have a gut to go on, so we've got to give
25:54
them an approximation. What I want to do for the moment is, because we've got 45 minutes scheduled for this, we want to make sure that you all get the
26:02
opportunity to ask us questions, so I'm wondering, is there anyone in the crowd that would like to pose questions to the panellists while we're up here, otherwise
26:10
we could keep banging on for the next half an hour. Do I have any hands that have any questions that they want to ask? We do have one question down there, I
26:19
think I've got a roving mic - Matt, I think there's a guy over here to your left.
26:30
I am Siva from TCF Services, I've got a question regarding trust and ethics; the ground
26:35
truth that you are talking about, how do you establish that, or are there some metrics that we are looking to establish so that the research can be progressed
26:45
in that direction? That's a really great question, we could probably throw that out to everyone to have a comment. I'll start off, so in the area that I
26:54
work in, there isn't always a ground truth of what has actually happened. If
26:59
you take the example of imports: we've got people that potentially need to check mail and items that come in - is there anything dodgy coming in? We can't
27:07
check every package so there isn't a ground truth to say what was a hundred percent okay and what was not 100 percent okay, so you need to start with
27:15
some kind of risk factors, and to Olivier's point, one of the things that we're trying to do is work on how do you establish a ground truth when you only
27:22
have an indication of risk. It's certainly an open area but I'd love to discuss with you after. Does the panel have any other comments on that space?
27:30
Perhaps it's worth mentioning another way to get around this problem of
27:36
ground truth is to create your own data, so if you have enough knowledge
27:42
about what you are looking for, let's say for example, you
27:47
want to find lesions in the lung, I see my colleague Julia is working on trying to
27:52
detect lung cancer, so if you discuss with a doctor and you come up with a model of
27:58
lesions that you expect to find in the lung, what you could do is take normal lungs, of which you have a lot, and then create a million different
28:08
lesions where you think lesions would be, and because you created the data, you
28:13
would have the perfect ground truth, and then you can use this massive data set
28:18
to train your method. Of course this is only going to work if your
28:25
model and what you're creating is realistic, but that's a good way to bootstrap
28:30
your training: maybe do most of the training on that big synthetic data
28:36
set and then finish on your real data, which is more expensive to gather.
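A toy version of that bootstrapping idea: plant synthetic "lesions" into normal images so that every training example comes with a perfect label by construction. The bright-blob lesion model here is a deliberate oversimplification:

```python
# Sketch: manufacture perfect ground truth by planting fake "lesions" in normal images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def make_image(with_lesion):
    img = rng.normal(0.0, 1.0, size=(16, 16))    # a "normal" scan patch
    if with_lesion:
        r, c = rng.integers(4, 12, size=2)
        img[r - 2:r + 2, c - 2:c + 2] += 3.0     # crude bright-blob lesion
    return img.ravel()

labels = rng.integers(0, 2, size=2000)           # perfect labels, by construction
images = np.stack([make_image(bool(l)) for l in labels])

detector = LogisticRegression(max_iter=1000).fit(images, labels)
print("accuracy on synthetic data:", detector.score(images, labels))
# In practice you would then fine-tune and validate on the scarce real data.
```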
28:41
Just to add, there are also two parts to this question about ground truth. One part is this: machine learning has an algorithm, but at the
28:51
same time it uses data, so one challenge we have deploying machine
28:58
learning is to understand the biases we have in that data set, because very often the data that you use to train a machine learning model isn't actually
29:08
the kind of data you're interested in. You often train on some data set, maybe one collected in the USA, but you want to apply the model to the
29:16
Australian context, and that bias that you have from the American data is something you need to figure out how to correct for. The other part, and maybe Sue can
29:24
comment a bit further on this, is very often we don't have ground truth for every single step of the thing we want. We might have to take a
29:33
sequence of five or six steps to reach that final outcome and where we might be
29:38
able to measure ground truth at that final outcome, but we don't have ground truths on all the earlier steps, and this is very common in robotics where
29:46
there's a goal you want to achieve, you get points if you reach the goal, but nobody tells you all the intermediate steps that you have to do to reach the
29:53
goal, and so ground truth is tricky, because machine learning is data hungry.
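That robotics setting, where only the final goal is rewarded, is the classic sparse-reward problem. A minimal tabular Q-learning sketch on a toy corridor shows how the intermediate steps get learned even though only the last one carries a label:

```python
# Sketch: learning intermediate steps from a goal-only reward (tabular Q-learning).
import numpy as np

rng = np.random.default_rng(6)
n_states, goal = 10, 9              # a corridor of 10 cells; reward only at the end
Q = np.zeros((n_states, 2))         # action 0 = step left, action 1 = step right

for episode in range(300):
    s = 0
    for _ in range(200):            # step cap so early random episodes terminate
        a = rng.integers(0, 2) if rng.random() < 0.5 else int(Q[s].argmax())
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        reward = 1.0 if s_next == goal else 0.0   # no labels for earlier steps
        Q[s, a] += 0.5 * (reward + 0.9 * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == goal:
            break

print("greedy policy (1 = step right):", Q.argmax(axis=1))
```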
30:02
Do you want to add something? Do we have any other questions at this point? Yes, the gentleman over here in the middle.
30:09
Hi, John Murray from PainChek, where do you think we are in the evolution of unsupervised machine learning or semi
30:18
supervised machine learning, and how are you seeing that being applied? I think it's
30:28
a bit linked to the ground truth problem, and so there are lots of people thinking about developing techniques which do not require
30:37
ground truth, which don't require labelled data, so one exciting field in
30:43
imaging is self supervision, and what you try to
30:48
do is... I think it comes from observing a baby learning about the world:
30:55
a baby would build very quickly in his
31:00
brain or her brain a model, a physical model of the world. We know that at one point the baby's brain understands that if you drop a bottle it will go
31:10
down, not up, and you can see that because, if you drop a bottle
31:17
and then you trick the bottle into going up, the baby is going to laugh, because he knows that the bottle should go down. And so
31:26
there are techniques that try to move into that direction, and for example you could take a video and then from one frame try to predict what is going
31:38
to be the next frame, and you can try to do that with frames that are further and further along, and the idea is that if you
31:49
can predict that very well, somehow in the model you would have a physical model of what's going on, you will know that if you see a
31:57
person walking then that person should keep walking in the particular direction,
32:02
so I think this is a step in the direction of trying to to be a bit more general, to have a bit more of a general model to understand and to generalise your
32:15
learning, and that's self-supervision.
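A compact PyTorch sketch of that next-frame idea: the "label" is simply the following frame, so no human annotation is needed. The random clip below is a stand-in for real footage, and the tiny network is purely illustrative:

```python
# Sketch: self-supervision by predicting the next video frame (no human labels).
import torch
import torch.nn as nn

torch.manual_seed(0)
video = torch.randn(100, 1, 32, 32)      # stand-in clip: 100 frames of 32x32 pixels

model = nn.Sequential(                   # tiny conv net mapping frame t -> frame t+1
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    inputs, targets = video[:-1], video[1:]   # supervision comes from the data itself
    loss = nn.functional.mse_loss(model(inputs), targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print("final next-frame prediction loss:", float(loss))
```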
32:23
Any other questions from the audience so far? One down the front.
32:33
Hi, you mentioned before about the bias that comes in from using certain data sets, I was wondering if you had
32:41
any ideas about how we can remove the innate extra bias that we all bring to the research we do?
32:48
That's a really great and layered question, I know that there's a number of institutes that are now looking at unpacking bias in an ethical way, and
32:56
building machine learning algorithms that are ethically considered. I think before we answer the technical question on the data, it's also interesting to
33:04
consider that building these algorithms and tools for the future requires lots of different data sets but also lots of diversity in
33:11
teams, to try and acknowledge and understand what is that bias, and how do we remove it. Some of the great work that I see within Data61, we don't just have
33:20
data scientists and engineers, we have UX researchers, social researchers, product
33:26
people - people that are not just data experts but also people experts - so we can identify where we might have gaps in our
33:33
knowledge that we need to fill. I might throw it over to Sue: would you care to comment on some of the questions around bias and ethics?
33:44
Well I'm not sure there's a good answer for that yet, really in terms of how you can... apart from by having diverse teams
33:52
involved when you're first looking at how you're going to solve
33:58
that problem. It's a pretty difficult thing to unwind, particularly because a lot of our data sets inherently do have bias, and sometimes it's not
34:06
easy to recognise or remove. A hundred percent. So what we're finding is that you need anthropological knowledge as well as the technical knowledge, because a lot of
34:16
where machine learning has gone wrong is because it has always looked at how society has acted or has been processed before, not what we want it to be in the
34:25
future, and so I think when you're first designing these systems, it's imagining what we want the future to look like, and what the gaps might be that
34:33
we need to consider as we design the systems for the future, and I know that there's a master class tomorrow which is kind of thinking about that exact
34:41
problem, which is about future-state design: not just taking the status quo, but considering what the potential future could look like. While we
34:49
wait for - I'm sure there's a couple of other hands going up in the audience - but I'd love to poll the audience ourselves. I'd just love to get a raise of hands,
34:57
who here has a data science or technical background and is actively working on machine learning today? Can I get a... so there's a few hand waves.
35:07
Okay, and I guess for the rest of the audience, are you people that are
35:13
investing or looking at bringing machine learning into your business? Hands raised
35:18
for that? Okay, and otherwise just really curious about how our fundamental
35:23
humanity might change with artificial intelligence and machine learning, anyone in that? There's a few, yes, one definitely, a hand up there, excellent, and a few over here.
35:33
I guess for the panellists, I'd really love to ask you a question, if we're here to discuss artificial intelligence and how it can amplify our
35:41
own humanity, what do you see as the ideal future, do you have a vision of
35:46
what an AI enabled future looks like for everybody? I'd love to just go through the panellists and get their thoughts. Cheng
35:53
Soon, let's start with you. I think the way I see it is machine learning and
36:00
artificial intelligence is a technology very much like all other technologies we
36:05
have. If you think back to the time before the slabs of glass that we
36:11
carry around with us, that is a piece of technology we have now learned to live with, so I think if anything, it's another technology that we're going to use and
36:20
understand well, and I think what's super important for me is for everybody to be somehow educated and understand a little bit about the impacts of using
36:32
machine learning systems in our society, because it is here and now, it is being
36:38
deployed as we speak, it is used to recommend books that we read, it is used
36:43
to make decisions on social security, it is used in a lot of situations that,
36:49
whether we like it or not, it's going to be part of society. Now I think as part
36:54
of an educated society in my ideal world, we all understand the implications of
37:00
what this means. Now, a lot of these implications are subtle, and I'm not claiming I know the answers to most of them, but I think it's
37:08
important as a society to know the implications of deploying these systems.
37:13
Thank you. Sue, what about you? I agree with Cheng Soon that it's critical that we bring everyone
37:20
along on this technological journey with us, and so improving
37:25
technological literacy at all levels of society is critical, but I think that
37:33
alongside that we also need to be thinking big, so where I see the real advantages of the sort of technologies that we're developing within CSIRO is
37:41
that we have an opportunity to not only augment human potential so that we can
37:48
do more things than we've ever been able to imagine before, but that we can do it
37:53
at scale, and so if we can really push these technologies forward then we should open our minds to the possibilities of what
38:02
we can do and what we want the world to look like, because I think we should be applying these technologies to having more inclusive societies, to giving
38:12
people from disadvantaged circumstances the ability to flip that, so that they have the tools within their own hands to help make the world a better place, so
38:20
that we can help protect the environment and actually restore the environment in
38:25
areas where we haven't been doing such a great job, so that we can tackle the big questions like climate change, war, famine, these are all the sorts of
38:35
things that we really should be having a hard think about: if we can really push this technology forward, what can we
38:43
do with it? I mean, already these technologies are going to be used to help us create a lunar global village and take the first steps towards having
38:51
missions to Mars, so the sky is the limit, and that's what we should be aiming for; we should be aiming to be building the planet that we would almost
39:00
like to live on. Perhaps in addition to that I think the current state of
39:09
the technology at the moment is a bit like medicine was, two, three or four
39:15
hundred years ago, when it was a bit 'Wild West': people would do experiments without entirely understanding what was going on,
39:25
and there were few laws. I'm not confident posting anything on
39:30
Facebook, and I'm not comfortable sending my saliva to 23andMe to scan my DNA, and I think the legislation and guidelines on
39:43
ethics are lagging the current technology, so I expect that it's going to take a
39:50
while, but eventually we would have robust laws that allow deploying AI,
39:57
and where people would be confident to share their data, they would be
40:03
confident to know what's going on with the data, they would be comfortable
40:08
knowing that their insurance is going to cover them if something happens that was not planned. Medicine came up with this kind of deontology, where
40:20
all the doctors have to swear the Hippocratic oath, and I
40:27
think it will end up in the same situation; it's a technology that can have a lot of applications, but the current
40:37
legislation is completely lagging the capabilities that we have at the moment. Absolutely, and I think the thing that always comes loud and
40:46
clear through these kinds of conversations, which I love having, is that it's always human first but digital by design, and that this isn't just
40:53
building technology for technology's sake - we are trying to make the world a better place that is more inclusive, and this is one of the
41:00
technological tools that we can use to get that to happen. I'm also really fascinated that a lot of the questions are around bias and ethics and
41:08
semi-supervised and supervised learning so it's not just let the robots go and do their thing; everyone else is also really interested in that human
41:16
first but digital by design. We're almost out of time for today but I recognise that many of you may have lots of other questions; I know that
41:25
whenever I walk away from a session I get five that pop into my head. There is a networking event later on today that starts at around 5pm, many of the
41:32
panellists will still be around, so if you do want to come and approach them and ask them a question, please do. Otherwise, there is a website called Expert
41:40
Connect that enables you to connect with researchers from not only Data61 but all around the globe that do this research, and so please do reach out
41:48
either through LinkedIn or through the networking event. So please put your hands together in thanking Dr. Olivier Salvado, Dr. Sue Keay and Dr. Cheng Soon Ong.
Artificial Intelligence panel: D61+ LIVE 2019

Artificial Intelligence Roadmap

Without a coordinated effort to drive AI development and adoption in Australia, valuable economic opportunities to build on and capitalise on our capabilities in AI and related domains will be missed.

CSIRO is uniquely placed to drive Australia’s national AI program, positioned at the intersection of government, industry and the research community, and with deep connectivity into the nation’s digital and domain capability.

Artificial Intelligence: Solving problems, growing the economy and improving our quality of life outlines the importance of action for Australia to capture the benefits of artificial intelligence (AI), estimated to be worth AU$22.17 trillion to the global economy by 2030.

To learn more about the adoption and development of AI technology in Australia, download the AI Roadmap.