
10 February 2021 News Release

Scientists ran three experiments in which participants played games against a computer. As the machine learned from patterns in participants' responses, it identified and targeted vulnerabilities in their decision-making to steer them towards particular actions or goals.

CSIRO scientist Dr Amir Dezfouli, a neuroscientist and machine learning expert, who spearheaded the research, said it highlighted the potential power of AI and underscored the need for proper governance to prevent potential misuse.

"Although the research was theoretical, the implications of this research are potentially quite staggering."

"AI and machine learning offers significant benefits across many areas including health. Ultimately, how responsibly we set these technologies will determine if they will be used for good outcomes for society, or manipulated for gain. This research advances our understanding of how people make choices and will help us better detect and avoid patterns which could be misused," Dr Dezfouli said.

The research was conducted in partnership with the Australian National University (ANU) and, in Germany, the University of Tübingen and the Max Planck Institute for Biological Cybernetics.

Dr Jon Whittle, Director of CSIRO's Data61, said the discovery highlighted the growing importance of ethics within AI and machine learning.

"This research is further proof that AI technologies are powerful, with tremendous potential for societal benefit, but also ethical risks," Dr Whittle said.

"Like any technology, AI could be used for good or bad, so proper governance is critical to ensure that AI and machine learning are implemented in a responsible manner. Organisations need to ensure they are educated on what these technologies can and cannot do and be aware of potential risks as well as rewards."

Last year, CSIRO worked with the Australian Government to release an AI Ethics Framework to provide a foundation for both awareness and achievement of better ethical outcomes from AI.

The framework was designed to guide Australia's first steps in the journey towards integrating policies and strategies that support the positive development and use of AI.

The research was published in the journal Proceedings of the National Academy of Sciences (PNAS).

About the research

The paper, "Adversarial vulnerabilities of human decision-making", was co-authored by Amir Dezfouli, CSIRO's Data61, Richard Nock, CSIRO's Data61 and Australian National University, Peter Dayan, Max Planck Institute for Biological Cybernetics and University of Tübingen, Germany.

The research explored whether it is possible for machines to learn how to influence humans. It comprised three main experiments, covering action selection, response inhibition, and social decision-making, which were used to develop an adversarial framework. Based on this framework, the researchers found that it is possible to teach a machine to induce certain behaviours and outcomes from its interactions with humans. They also found that the framework can be used to generate non-adversarial behaviour and to build trust.
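To make the setup concrete, the following is a minimal, illustrative sketch of this kind of adversarial interaction. It is not the authors' actual method (the published work learned models of human behaviour from experimental data); here the "participant" is a simple Q-learning agent, the adversary greedily spends a limited reward budget, and all names and parameter values (TARGET, BUDGET, ALPHA, BETA) are assumptions chosen for illustration only.

```python
import math
import random

# Illustrative sketch only: a toy adversarial bandit interaction.
# The simulated "participant" is a Q-learning agent; the adversary
# tries to steer its choices toward a target option by deciding,
# trial by trial, when to hand out a limited budget of rewards.

TARGET = 0              # option the adversary wants chosen (assumed)
TRIALS = 100            # number of interaction trials (assumed)
BUDGET = 25             # adversary may give out at most 25 rewards (assumed)
ALPHA, BETA = 0.3, 3.0  # learning rate and choice sharpness (assumed)

def softmax_choice(q):
    """Pick an option with probability proportional to exp(BETA * Q)."""
    weights = [math.exp(BETA * v) for v in q]
    r = random.random() * sum(weights)
    return 0 if r < weights[0] else 1

q = [0.0, 0.0]   # participant's value estimates for the two options
spent = 0
target_picks = 0

for t in range(TRIALS):
    choice = softmax_choice(q)
    # Greedy adversary: reward the target option while budget lasts,
    # never reward the other option.
    reward = 1 if (choice == TARGET and spent < BUDGET) else 0
    spent += reward
    # Participant updates its estimate of the chosen option (Q-learning).
    q[choice] += ALPHA * (reward - q[choice])
    target_picks += (choice == TARGET)

print(f"Target chosen on {target_picks}/{TRIALS} trials, "
      f"using {spent}/{BUDGET} rewards")
```

Even this toy adversary reliably biases the simulated participant toward the target option, which conveys the core idea: a model of how a learner updates its choices is enough to plan a reward schedule that steers those choices.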

Background information

About CSIRO's Data61

CSIRO's Data61 is the data and digital specialist arm of Australia's national science agency. We are solving Australia's greatest data-driven challenges through innovative science and technology. We partner with government, industry and academia, through the D61+ Network, to conduct mission-driven research for the economic, societal and environmental benefit of the country. Our research expertise includes artificial intelligence and machine learning, robotics, cybersecurity, privacy-preserving technologies, blockchain and analytics.
