How can we ensure that AI systems, including ChatGPT, are developed and adopted in a responsible way?
The concept of a chatbot can be traced back to 1950, when computer scientist Alan Turing proposed the Turing Test, which aimed to determine whether a machine could exhibit intelligent behaviour indistinguishable from that of a human.
In 1966, Joseph Weizenbaum created ELIZA, the first known chatbot, which was designed to simulate a psychotherapist by responding to user inputs with pre-programmed responses. In the following decades, advances in natural language processing and machine learning led to more sophisticated chatbots, such as PARRY and ALICE.
In the early 2000s, the rise of messaging platforms and mobile devices made it easier for businesses to integrate chatbots into their customer service systems. In recent years, advancements in AI, such as deep learning and natural language processing, have made it possible for chatbots to handle more complex and natural conversations with users, leading to the widespread use of chatbots in various industries, including finance, healthcare, and e-commerce. Despite the increasing popularity of chatbots, users aren't sure if they should trust them, and organisations don't fully understand the risks and how to mitigate them.
Globally, significant effort has gone into programming solutions that focus on privacy, fairness, and explainability. However, there is a lack of responsible AI governance and engineering guidance for assessing and mitigating the ethical risks of chatbots against the full set of AI ethics principles.
Creating patterns for responsible AI
Based on the results of a review, we analysed successful case studies and generalised best practices into a set of guidelines, or
patterns, that a range of industries - including the finance industry - can use to shape the development of their AI products.
We applied these responsible AI patterns to the development of chatbots for the financial sector using IBM Watson Assistant, and we discuss how this approach can be used to address a variety of responsible AI risks.
Successful identification and mitigation of chatbot risks
From planning and conversation design through implementation, testing, deployment, and monitoring, responsible AI practices can be embedded at every step along the way. Using this approach, we successfully identified and mitigated risks throughout chatbot development.
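To make the idea of embedding such practices concrete, here is a minimal, hypothetical sketch of two guardrails wired into a chatbot's response pipeline: redacting personal data before a reply is shown or logged (a privacy pattern), and disclosing that the user is talking to a bot (a transparency pattern). The function names, patterns, and rules are illustrative assumptions, not part of the IBM Watson Assistant API or of the responsible AI patterns themselves.

```python
import re

# Illustrative PII patterns; a production system would use a vetted
# detection service, not two regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Privacy guardrail: mask detected PII before the reply is
    logged or displayed."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

def add_disclosure(reply: str) -> str:
    """Transparency guardrail: disclose that the user is chatting
    with an automated system."""
    return reply + "\n(You are chatting with an automated assistant.)"

def respond(raw_reply: str) -> str:
    """Pipeline step that applies both guardrails to every reply."""
    return add_disclosure(redact_pii(raw_reply))

print(respond("Your statement was sent to jane@example.com."))
```

The point of the sketch is structural: each responsible AI practice becomes an explicit, testable stage in the pipeline, so risks can be checked during testing and monitored after deployment rather than handled ad hoc.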
Financial services organisations can now work with us to access these skills and resources for their own AI development.