Artificial Intelligence (AI) is increasingly used by industry and government organisations to make consequential decisions that affect people’s lives.
AI can be highly efficient at achieving business objectives, from automating data-heavy information gathering and creating personalised customer experiences to using natural language processing to improve support services.
But unless it is explicitly designed with appropriate checks and balances, AI can also have unintended negative impacts, including privacy loss, data breaches, and ethical issues.
The Australian Responsible AI Index recently found that although 82 per cent of businesses believed they were practising AI responsibly, fewer than 24 per cent had actual measures in place to ensure they were aligned with responsible AI practices.
To help bridge the gap between the Australian AI Ethics Principles and the business practice of responsible artificial intelligence (RAI), the National AI Centre (NAIC) has worked with Gradient Institute to develop 'Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources.'
The report explores some of the practical steps needed to implement the Australian Government’s eight AI ethics principles, explaining each practice and its organisational context, including the roles that are key to successful implementation. Practices such as impact assessments, data curation, fairness measures, pilot studies and organisational training are some of the simple but effective approaches outlined in this report.
Read the report to learn how to implement Australia's AI Ethics Principles to create responsible AI practices.