Key points
- Until now, there has been no specific guidance to help investors analyse AI-related environmental, social and governance (ESG) risks and opportunities.
- We’ve co-developed a framework to help the investment community assess responsible AI practices and integrate ESG considerations.
- Investors can download the report and framework for free on our website.
Many investors use environmental, social and governance (ESG) frameworks to assess non-financial factors such as climate change, human rights and corporate governance.
However, with the meteoric rise of businesses using AI, can investors identify whether companies are implementing AI responsibly?
Partnering with boutique active equities fund manager Alphinity, we interviewed 28 listed companies to find out.
Nothing to see here: AI policies seldom disclosed
While most ESG reports are public, we found that only a small percentage of companies publicly disclose their responsible AI (RAI) policies.
Forty per cent of interviewed companies had internal RAI policies, yet only 10 per cent shared these publicly. Despite this, 62 per cent of companies were developing or had already implemented an AI strategy.
Global companies were more advanced than Australian companies in implementing these strategies.
Even companies doing considerable RAI work didn’t reflect it in their external reporting. Some failed to mention AI in their risk statements, strategic pillars and annual reports, despite expressing enthusiasm about the technology in discussions and investing significantly in exploring it.
What impact can AI use have on reputation?
Many companies expressed concern about AI’s potential to damage their reputation and erode consumer trust, as well as about regulatory consequences.
We found that companies with good overall governance structures were more likely to balance the threats and opportunities of AI, approaching the new technology with a healthy curiosity.
Conversely, companies with weak overall governance were unlikely to show leadership in developing and implementing RAI, which could limit the opportunities AI offers. For example, some companies we interviewed restricted employees from using AI tools such as ChatGPT, while others took an educational stance.
ESG: the way to see
So, if public reporting isn’t commonplace and a balanced view of threats and opportunities is needed to mitigate harm and leverage AI benefits, where can investors look for answers?
We found that a strong track record of ESG performance gives investors confidence. Companies that carefully consider how their actions affect people, their reputation and their standing in society are likely to approach new technologies like AI with the same care. These companies generally have well-respected boards, robust disclosures and firm ESG commitments. They’re also likely to implement AI responsibly and in a measured way.
Because AI is evolving so rapidly, good leadership on existing topics such as cyber security, diversity and employee engagement is a useful signal that a company will also consider the impacts of AI thoughtfully.
But wait, there’s more
While looking to existing ESG frameworks is a handy stopgap for investors, we’ve created something much more robust.
Our report, The intersection of Responsible AI and ESG: A framework for investors, gives the investment community a framework for assessing RAI practices and integrating ESG considerations.