
Artificial Intelligence (AI) is considered a top strategic technology in many organisations due to its ability to transform and automate processes and analyse huge amounts of data. Despite its potential, there are serious concerns about its ability to behave and make decisions in a responsible way.

Compared to traditional software systems, AI systems involve a higher degree of uncertainty and greater ethical risk due to autonomous, opaque (black box) decision making. Ethical issues can arise at any stage of the AI development lifecycle, from planning right through to monitoring.

We have evaluated the effectiveness and limitations of existing AI risk assessment frameworks to provide advice for companies looking to develop responsible AI.

Taking a risk-based approach to operationalising responsible AI

Our resources include an evidence-based guidance catalogue for operationalising responsible AI in your company, as well as an ethical risk assessment tool for assessing risks against AI ethics principles.
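To make the idea concrete, below is a minimal sketch of how a risk assessment against AI ethics principles might be structured as a likelihood-and-impact matrix. The principle names, 1-to-5 scoring scales, threshold value and `RiskItem` structure are illustrative assumptions for this post, not the actual tool.

```python
from dataclasses import dataclass

# Hypothetical principle list for illustration; substitute the AI ethics
# principles your organisation has adopted.
PRINCIPLES = [
    "fairness",
    "privacy and security",
    "reliability and safety",
    "transparency and explainability",
    "accountability",
]

@dataclass
class RiskItem:
    principle: str    # the ethics principle the risk is assessed against
    description: str  # what could go wrong
    likelihood: int   # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int       # 1 (negligible) to 5 (severe) -- assumed scale

    @property
    def rating(self) -> int:
        # Classic likelihood x impact risk-matrix score (1-25).
        return self.likelihood * self.impact

def assess(risks: list[RiskItem], threshold: int = 12) -> list[RiskItem]:
    """Return risks at or above the mitigation threshold, highest first."""
    return sorted(
        (r for r in risks if r.rating >= threshold),
        key=lambda r: r.rating,
        reverse=True,
    )

if __name__ == "__main__":
    risks = [
        RiskItem("fairness",
                 "model under-performs for a minority group", 4, 4),
        RiskItem("transparency and explainability",
                 "black-box decisions cannot be explained to users", 3, 3),
    ]
    for r in assess(risks):
        print(f"[{r.rating:>2}] {r.principle}: {r.description}")
```

In practice, a tool like this would feed each flagged risk into a mitigation plan tied to the relevant lifecycle stage; the simple multiplicative rating here is just one common way to prioritise.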

A recent example is our work with the NSW Government to develop its new AI Assessment Framework. You can read more about our approach and methodology.

Companies looking to co-develop innovative tools and technologies that address responsible AI issues and opportunities can partner with us in a number of ways to access and develop specialised solutions.


Work with us on responsible AI research

Get in touch to discuss your responsible AI needs.
