CSIRO Australia’s National Science Agency AI for better insurance Enhancing customer outcomes amid industry challenges Citation Bratanova A., Kaur S., Banyard S., Chamikara M.A.P., Walker G., Chen H. & Hajkowicz S. (2025). AI for Better Insurance: Enhancing Customer Outcomes amid Industry Challenges. A Consulting report for the Insurance Council of Australia by CSIRO, Australia. Copyright © Commonwealth Scientific and Industrial Research Organisation 2025. To the extent permitted by law, all rights are reserved, and no part of this publication covered by copyright may be reproduced or copied in any form or by any means except with the written permission of CSIRO. Important disclaimer CSIRO advises that the information contained in this publication comprises general statements based on scientific research. The reader is advised and needs to be aware that such information may be incomplete or unable to be used in any specific situation. No reliance or actions must therefore be made on that information without seeking prior expert professional, scientific and technical advice. To the extent permitted by law, CSIRO (including its employees and consultants) excludes all liability to any person for any consequences, including but not limited to all losses, damages, costs, expenses and any other compensation, arising directly or indirectly from using this publication (in part or in whole) and any information or material contained in it. CSIRO is committed to providing web accessible content wherever possible. If you are having difficulties with accessing this document, please contact csiro.au/contact. Acknowledgement of Country CSIRO acknowledges the Traditional Owners of the lands, seas and waters, of the area that we live and work on across Australia. We acknowledge all Aboriginal and Torres Strait Islander peoples and their continuing connection to their culture and pay our respects to Elders past and present. CSIRO is committed to reconciliation and recognises that Aboriginal and Torres Strait Islander peoples have made and will continue to make extraordinary contributions to all aspects of Australian life including culture, economy and science. Acknowledgements The authors would like to acknowledge members of the Insurance Council of Australia AI Working Group for contributing their time to this project through the workshops. We also extend our gratitude to Simone Dossetor, CEO of Insurtech Australia, for generously sharing her time and expertise with the research team. Additionally, we thank our colleagues from various CSIRO business units for their valuable input, consultation on aspects of the research and review of early drafts. Augmentation with generative artificial intelligence We used ChatGPT, Elicit and Microsoft Copilot as extended search tools, to generate and augment images and to assist with copyediting tasks in this report. This approach allows the research team to focus on tasks requiring creativity, analytics, reasoning, comprehension, and social and emotional intelligence. Insurance Council CEO foreword Australia is at the relative start of its economic digitisation journey. While major corporations have deeply embedded technology in their processes and Australians are accustomed to the world at their fingertips, many industries and individual businesses have a way to go. Emerging technologies are transforming industries, reshaping business models, and reconfiguring the way consumers interact with products and services. Artificial intelligence (AI) is one of the most powerful enablers of this transformation and it is one that presents both tremendous opportunity and the need for careful, responsible stewardship. The general insurance industry, like many others, is exploring the potential of AI to improve services and streamline operations. But the way we adopt these technologies matters. We must take a consumer-centric, values-led approach to the development and deployment of AI, ensuring that the benefits of innovation are delivered fairly, safely, and transparently. This paper represents a first step in building a shared understanding of AI in the context of general insurance. It is designed to be a reference point for insurers, regulators, policymakers, and consumers to consider the role of AI, and the safeguards that must accompany its use. AI is already being applied in several areas across the industry and while there are limits to its suitability and use case, we are seeing early use cases in underwriting and pricing, claims triage, fraud detection, and customer service automation. These applications are helping to improve customer experience and outcomes, efficiency and accuracy, reduce manual workloads, and deliver faster service to customers. However, without the proper governance, the deployment of AI could come with significant risks. AI could present additional data privacy concerns, cybersecurity vulnerabilities, unintended bias in automated decision-making and limited transparency – all of which can impact consumer trust and create new forms of unintended complexity. The broader regulatory environment also remains a consideration as insurers navigate an increasingly complex environment. All these factors underscore the importance of robust testing, governance and ongoing human oversight as AI is deployed within our industry. The insurance industry understands its responsibility to innovate in a way that enhances, not compromises, consumer outcomes. That means working closely with government and regulators to establish fit-for-purpose guardrails and actively engaging with research agencies like CSIRO to promote safe and responsible adoption of AI. We want to pursue these opportunities because the opportunities are significant. AI can support more tailored products, faster and enhanced claims experiences, advanced risk management, and smarter, data-informed decision-making across the entire insurance value chain. It also has the potential to help the industry better understand and respond to complex and emerging risks – from climate change to cyber threats. Andrew Hall Executive Director and CEO Insurance Council of Australia Contents Insurance Council CEO foreword........................................................................................................i Executive summary...................................................................................................................................21 Introduction...........................................................................................................................................42 Industry challenges and AI adoption........................................................................................62.1 General insurance industry today ..........................................................................................................................62.2 AI as the next chapter in the insurance transformation.......................................................................................62.3 Shifting consumer expectations.............................................................................................................................82.4 Growing cost pressures and natural peril risks...................................................................................................103 Use cases, risks and opportunities for AI adoption in insurance................................123.1 Five focal use cases................................................................................................................................................123.1.1 Automated claims processing and triage................................................................................................123.1.2 Fraud detection and prevention..............................................................................................................143.1.3 Enhanced underwriting and risk assessment.........................................................................................153.1.4 Natural disaster impact prediction and response...................................................................................163.1.5 Operational control and compliance.......................................................................................................173.2 Opportunities: transforming customer experiences and driving sustainable choices....................................183.3 Risks and their implications for consumers.........................................................................................................203.4 Governance and guardrails: regulatory responses to AI in insurance...............................................................234 Advancing AI adoption in the insurance industry.............................................................265 Conclusion............................................................................................................................................29Appendix....................................................................................................................................................30References .................................................................................................................................................32 Executive summary The insurance industry, both in Australia and internationally, serves as a cornerstone of the financial system, providing essential protection against risks to individuals, businesses and communities. This research report, prepared in collaboration with the Insurance Council of Australia (ICA), explores the challenges and opportunities presented by artificial intelligence (AI) adoption in the insurance sector. Consumers in focus Rising costs: Driven by worsening climate events, urban expansion into high-risk zones, and rising costs, the insurance premiums have increased by double digits over the past two years, adding to the financial strain caused by the already rising cost of living and making more Australians reconsider their insurance coverage. Decreasing trust: There is growing scepticism about the insurance industry which aligns with a broader decline in confidence in Australian institutions. Generational shift: Younger generations are more inclined to live without insurance, challenging traditional notions of its necessity. Evolving expectations: Many customers demand seamless, technology-enabled experiences for their insurance needs, along with personalised solutions. Trust in AI: Despite a limited understanding of its capabilities, many Australians are concerned that AI may bring more drawbacks than benefits. Climate change: By 2030, one in 25 Australian homes could become uninsurable, limiting coverage in high‑risk areas and raising concerns for vulnerable groups. AI adoption is occurring at a critical time for the insurance industry, as rising costs and evolving risks – including those related to climate change – intensify financial pressures and social vulnerabilities. Increasing living expenses, higher insurance premiums and growing underinsurance trends are further straining consumers. Amid these challenges, consumer expectations are shifting towards more personalised, efficient and transparent services. The industry is responding to these pressures by evolving, aiming not only to protect Australians from unexpected financial impacts but also to reinforce its role as a trusted partner in fostering resilience and recovery. Five AI use cases: ICA’s challenges and opportunities In consultation with the ICA AI Working Group, this report identified five priority AI use cases for the Australian general insurance industry. Each use case explores the specific challenges and opportunities associated with AI adoption, addressing distinct industry pain points and highlighting possibilities to streamline operations, enhance customer experiences, and improve risk prediction and management. However, adopting AI also brings challenges such as data privacy concerns, cybersecurity vulnerabilities, biases in automated decision-making and limited system transparency. Regulatory compliance remains a critical consideration as insurers navigate an increasingly complex landscape of evolving national and international guidelines. Five prioritised AI adoption use cases 1. Automated claims processing and triage 2. Fraud detection and prevention 3. Enhanced underwriting and risk assessment 4. Natural disaster impact prediction and response 5. Operational control and compliance Seven key areas for advancing AI adoption for better insurance in Australia This report presents seven areas to guide the safe, ethical, and effective adoption of AI in the Australian insurance sector. Delivering better insurance for all: Leveraging AI‑driven solutions has the potential to enhance affordability and accessibility, ensuring underinsured consumers benefit from adequate coverage. AI innovations can also address evolving risks, such as those linked to climate change, by helping to create tailored insurance products. Strengthening governance for responsible AI adoption: Enhancing already comprehensive frameworks for responsible AI adoption can help strengthen oversight mechanisms, ethical standards and accountability for high-risk applications. Incorporating regular audits, model explainability and transparent reporting as core governance components can support ethical and effective AI use. Fostering collaboration and resilience: Promoting industry-wide cooperation through dedicated working groups, data-sharing initiatives and partnerships with research organisations can drive innovation, improve risk modelling capabilities and strengthen collective responses to hazards such as climate risks and cybersecurity threats. Adopting AI strategically and proactively: Moving from proof-of-concept AI projects to scalable implementations can unlock significant opportunities for automation, real-time decision-making and predictive analytics. These advancements can optimise operations and deliver improved consumer outcomes. Building AI skills for a future-ready workforce: Expanding investments in AI literacy and training programs can help bridge the skills gap and prepare leadership and operational staff for the challenges and opportunities of AI adoption. Sector-wide collaboration to share resources and expertise can further enhance workforce readiness and foster a future-ready industry. Becoming a trusted partner through transparent AI: Increasing transparency and fairness in AI-driven processes can reinforce consumer confidence. Clear communication about AI’s role in improving affordability, efficiency and equity can respond to public concerns and highlight the industry’s commitment to ethical practices. Innovating insurance: Harnessing AI to create new products, such as dynamic pricing models and AI liability insurance, can help tackle emerging challenges like climate-related risks and enhance competitiveness while aligning with societal and environmental priorities. The way forward AI holds transformative potential for the Australian insurance industry, enabling greater operational efficiency, tailored customer experiences, and advanced risk management. By approaching challenges responsibly, the industry can effectively harness these capabilities. 1 Introduction The general insurance industry, both in Australia and globally, is undergoing a significant transformation driven by technological advancements, evolving consumer expectations and challenging economic conditions. Over the past two years, this shift has been further accelerated by the rapid development and integration of artificial intelligence (AI). Less than a decade ago, no machine could reliably perform tasks like language or image recognition at a human level. Today, AI systems outperform humans in test environments for tasks such as language comprehension, image analysis, speech recognition and handwriting interpretation [1]. Meanwhile, generative AI (GenAI) is bridging the gap between human and machine capabilities in knowledge work, opening new markets for products and skill sets [2] while presenting opportunities and challenges across all sectors of the economy [3, 4]. Like many other industries, AI is being increasingly deployed in the insurance industry to enhance efficiency, accuracy and customer satisfaction. In insurance, AI-driven tools: • automate parts of the claims processing, enabling faster and more accurate assessments while reducing human error [5, 6] • support customer service through AI chatbots and virtual assistants • enhance fraud detection systems, helping identify suspicious claim patterns and mitigating potential losses [7]. Despite advancements, many opportunities remain untapped. AI can help streamline operations, cut costs and deliver tailored solutions while driving innovation in products and markets [8, 9]. And as the technology evolves, the horizon of what is possible with AI in insurance expands further. Alongside these opportunities come significant challenges. Key technological risks include data privacy and security concerns, hallucinations, biases and integration difficulties with legacy systems. In addition lack of transparency and flexibility in AI models is a major problem in many automated decision‑making systems, such as in the Robodebt case, where insufficient oversight, transparency and adaptability led to significant failures [10]. Non-technological difficulties include resource allocation for AI development, compliance with evolving regulations, balancing human expertise with AI capabilities and effective AI project management. Mitigating these risks requires robust governance systems focused on well-defined use cases and adoption strategies. This report examines the opportunities and challenges associated with AI adoption by Australian insurers to deliver better insurance outcomes while supporting the general industry in uplifting governance with a goal to enhance consumer experiences and foster more innovative, efficient insurance solutions. Informed by consultations with the Insurance Council of Australia (ICA) AI Working Group, the report identifies five key AI use cases and seven actionable recommendations to strengthen the industry-led guardrails, enhance customer outcomes, and promote safe and responsible adoption of AI, paving the way for a resilient, future-ready general industry that protects and serves the community (Figure 1). Figure 1: Overview of the research approach and results Focal industry challenges 1. Improving the affordability and availability of insurance products 2. Enhancing consumer outcomes beyond improving affordability 3. Addressing natural peril risk Analysis of research and industry literature Analysis of company reports Ten use cases for AI adoption in insurance in Australia WORKSHOP 1 • Shared understanding of use cases • Use-case specific risks and opportunities for AI adoption • Prioritising use cases WORKSHOP 2 Uplifting the governance and guardrails for AI adoption in insurance Five prioritised AI adoption use cases 1. Automated claims processing and triage 2. Fraud detection and prevention 3. Enhanced underwriting and risk assessment 4. Natural disaster impact prediction and response 5. Operational control and compliance Advancing AI adoption in the insurance industry 9 Delivering better insurance for all 9 Strengthening governance for responsible AI 9 Fostering collaboration and resilience 9 Adopting AI strategically and proactively 9 Building AI skills for a future‑ready workforce 9 Becoming a trusted partner through transparent AI 9 Innovating insurance ICA AI Working Group consultations 2 Industry challenges and AI adoption 2.1 General insurance industry today The general insurance industry in Australia encompasses a wide range of insurance services and products designed to protect against property and financial risks. These include home building and contents insurance, motor vehicle insurance, travel insurance, liability covers (including medical indemnity), and various other types of coverage, but exclude life and health insurance [11]. The sector plays a crucial role in ensuring financial protection for individuals and businesses, safeguarding assets from various risks. With 88 companies and 86 million active policies in general insurance as of July 2024, the industry also makes a substantial economic contribution [12]. The sector employs approximately 46,000 people directly [13], with thousands more engaged indirectly in related services such as brokerage, legal services and technology development. The insurance industry’s supply chain comprises multiple interconnected parties, including underwriters, brokers, claims adjusters and reinsurers. It also involves a broad range of goods and service providers, including repairers, builders, legal firms and financial institutions. Each of these components plays a role in ensuring that policies are accurately priced, distributed and serviced. This entire supply chain is undergoing a significant transformation driven by technological advancements. And, as the next section details, while the adoption of digital platforms has already streamlined many aspects of the insurance lifecycle, the integration of AI stands to revolutionise it even further [14]. 2.2 AI as the next chapter in the insurance transformation The insurance industry has undergone profound digital transformation over the past decade, driven by technologies such as big data analytics, the Internet of Things and cloud computing [15, 16]. These tools have revolutionised risk assessment, enabled dynamic pricing models, streamlined claims processing and fostered innovation in products and services [16–18]. Digital platforms and mobile apps now facilitate seamless transactions, paperless claims and improved agent workflows, increasing efficiency and enabling the rise of InsurTech companies that are reshaping the competitive landscape [15, 19]. AI marks a new turning point in this transformation, especially with the wave of advancements in generative AI (GenAI), which reached the ‘peak of inflated expectations’ in 2023 [4, 20]. Financial institutions, including insurers, were early adopters of AI [21, 22]. A recent Australian Securities and Investments Commission (ASIC) review of 624 AI use cases across banking, credit, insurance and financial advice found that 57% of use cases were implemented or under development within the past two years, with 61% of licensees planning further AI expansion (Figure 2) [23]. Figure 2: Number of use cases of AI by deployment year as reported by ASIC licensees Note: *in development as of December 2024. Source: ASIC [23] AI applications in insurance are established in areas like fraud detection, claims processing and customer service, with machine learning and natural language processing tools delivering improved accuracy, efficiency and customer satisfaction [5–7, 24]. For example, predictive models and optical character recognition have enabled the analysis of millions of claims documents, expediting fraud detection and claim responses [25]. Similarly, natural language processing has been used to cut loss data extraction time in half, enhancing underwriters’ interactions with brokers and clients [26]. Innovations in image recognition further accelerate claims processing, while dynamic pricing models based on individual risk factors improve affordability and availability [19, 27]. The advent of GenAI is expanding possibilities for insurers. For instance, GenAI can facilitate near real-time claims processing, with reports of 70% accuracy in document interpretation [28]. AI-powered predictive analytics and real-time decision-making can support loss avoidance and streamline operations during catastrophic events, while tools like telematics can enable tailored offerings, such as insurance for automated and connected transport [8, 29]. AI adoption also presents opportunities for innovation in new insurance products, such as liability coverage for algorithmic errors, regulatory compliance assurance and micro-insurance tailored to specific customer needs [9, 30]. Beyond consumer benefits, AI can foster employee satisfaction by automating repetitive tasks, allowing staff to focus on innovation and complex decision-making. Research also shows that AI can accelerate organisational growth, drive job creation and expand the diversity of skills within firms [31, 32]. Despite its promise, AI adoption has experienced notable failures and reported problems. An estimated 80% of AI projects fail to progress beyond pilot stages – double the failure rate of non-AI information technology (IT) projects – due to weak governance, immature digital foundations and poorly defined human–AI interaction [33, 34]. AI also introduces risks that affect both operations and customers. Systems reliant on personal information are vulnerable to misuse, bias and errors. Challenges in automated decision‑making (ADM) – such as inflexibility, over-reliance on automation, lack of transparency and insufficient human oversight – are particularly relevant for insurers [10]. As technology advances, shifting consumer priorities are reshaping the perception value of, and access to, insurance products. In Australia’s general insurance sector, rising costs, climate risks and evolving consumer behaviour create both challenges and opportunities for AI adoption. The next section examines how these factors drive AI implementation. 2.3 Shifting consumer expectations Consumers are under growing financial strain due to rising living costs that coincided with two consecutive years of double-digit premium increases [35] (Figure 3). These price hikes disproportionately impact individuals with high-risk properties and greater exposure to risks [30]. Fewer customers are automatically renewing their insurance policies [36]; many are downgrading their insurance coverage, or forgoing it altogether to save money [30]. Younger generations are especially likely to forgo coverage entirely [30]. Lifestyle shifts, such as remote work turning homes into hybrid spaces, have changed property usage patterns, impacting the type and scope of insurance coverage required. These financial pressures and changing needs underscore a growing demand for affordable and trustworthy insurance products [37, 38], which AI can help insurers design and deliver. Figure 3: Consumer price indexes as a percentage change from corresponding quarter of previous year: Consumer Price Index (CPI) overall, insurance and financial services Note: ‘Insurance’ includes all insurance products as defined by the Australian Bureau of Statistics (ABS), not only general insurance. Source: ABS [39] Consumer expectations in the insurance sector are evolving rapidly. Many customers now demand seamless, technology-enabled experiences for policy purchases and claims processing, along with personalised offers tailored to their needs. To meet these demands, insurers are investing in digital transformation initiatives, including AI-powered customer service tools [7]. At the same time, as highlighted by the Royal Commission into the Robodebt Scheme, consumers are seeking access to challenge the decisions made with the involvement of automation, including AI [10]. Overall for insurers, balancing rising operational costs with evolving consumer expectations and maintaining affordable premiums is becoming increasingly challenging [40]. While personalisation enhances customer satisfaction, it requires insurers to collect and analyse more personal data, raising ethical concerns about privacy and data security [7, 41, 42]. Although many consumers prioritise cost savings over privacy, insurers face increasing pressure to address these concerns responsibly to maintain trust. Following the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry, the responsibility for asking the right questions now lies with insurers, while policyholders need only ensure their responses are accurate. This shift underscores growing expectations for transparency in data usage and fairness in pricing [43]. These trends add complexity for insurers, requiring them to innovate with AI while maintaining affordability, protecting privacy and upholding fairness – essential elements for earning and sustaining customer trust in a rapidly transforming marketplace. Lack of trust in Australian institutions and in automation challenges AI adoption. Australians are among the most concerned about AI, sharing similar sentiments with respondents from the United Kingdom, Canada and the United States of America (USA) [44, 45]. Many feel they lack a clear understanding of AI or knowledge of which products and services incorporate it, fostering a perception that AI may bring more drawbacks than benefits [46, 47]. A 2023 Roy Morgan survey found that 57% of Australians believe AI ‘creates more problems than it solves’, with job losses cited as the primary concern [48]. Cybersecurity risks further exacerbate public distrust in AI adoption as individuals are being increasingly targeted [49]. This distrust in AI coincides with growing scepticism towards Australian corporate and government organisations, including insurance institutions. Many perceive that insurance companies prioritise profits, fuelling doubts about the value and fairness of their products [50, 51]. These issues create a significant trust challenge for organisations implementing AI-driven tools. Concerns about losing access to human agents underscore the importance of balancing AI innovation with human oversight to maintain consumer confidence. Skills gap among consumers and the insurance workforce is hindering AI adoption. Like other industries in the Australian economy, insurance faces a shortage of technology skills [52]. The shortage of cybersecurity skills alone is projected to double by 2030 [52]. A 2019 study by the Australian Institute of Company Directors and the University of Sydney revealed that only 3% of board directors had a background in science or technology, and just 35% felt equipped to evaluate the ethical and practical implications of advanced technologies [53]. The insurance industry is actively working towards bridging the skills gaps [54] and AI can help overcome some of the shortages through task automation and augmentation [55]. However, the shortages in AI expertise still pose risks to the Australian insurance industry (Figure 4). Filling these gaps is essential for the industry to successfully adopt and manage AI technologies. Risks associated with the skills gaps in AI development and adoption Increased operational costs Delayed implementation of AI initiatives Increased operational risk Escalated data privacy and security risks Ineffective AI solutions Challenges in compliance and ethical standards Employee mental health concerns Reputation risks Missed opportunities for innovation Figure 4: Risks associated with the skills gaps Australia’s digital divide further compounds these challenges. A 2023 study revealed that over half of customer interactions with general insurers in Australia are conducted via digital-only channels, a sharp increase from 35% in just one year [56]. This shift creates significant challenges for individuals and community groups with limited connectivity and digital literacy [57, 58] and highlights questions about how insurance products can be effectively delivered to the population. Looking ahead, these trends indicate growing pressure on insurance uptake and trust in insurance institutions among Australian consumers. The industry faces a whole-of-society challenge in provisioning risk mitigation within an affordable envelope. Enhancing transparency and demonstrating a genuine commitment to customer wellbeing will be pivotal in rebuilding trust and fostering broader insurance adoption. As insurers strive to meet these demands, external hazards such as climate change, natural isasters and cybersecurity risks are reshaping the environment for AI adoption and affordability. 2.4 Growing cost pressures and natural peril risks Rising operational costs are impacting AI adoption in insurance sector. One notable contributor is the rise in construction costs, which grew by over 30% between 2020 and 2024, adding nearly $100,000 to the average cost of a completed home. This surge has been driven by soaring material prices and escalating labour costs, posing challenges for home owners and insurers alike, who must cover higher replacement values (Figure 5) [59, 60]. In response to elevated construction costs, broader inflationary pressures, and rising climate-related risks globally, reinsurers and direct insurers have faced mounting cost pressures alongside heightened return expectations from investors. These additional costs have been managed partly by reinsurers absorbing a share and partly by passing them on through the repricing of reinsurance premiums. Claims costs have also risen, driven by more frequent and severe weather events, higher property repair expenses and evolving customer expectations for improved claims service. Fraud is another growing cost to insurers and consumers alike [61, 62]. While AI offers opportunities to streamline operations, mitigate fraud and manage costs, its adoption is no guarantee of cost relief, especially given the significant investments required for effective implementation. Insurers remain under intense cost pressure as they balance financial viability and long-term sustainability with the need to keep premiums affordable for customers. Figure 5: Building construction output prices Source: ABS [59] Climate change challenges demand AI solutions in insurance. The frequency and severity of extreme weather events, such as bushfires, floods and storms, have increased markedly in recent years and are expected to grow further. Today, one in 12 properties in Australia – 1.36 million properties – faces some level of flood risk, while 5.6 million properties are at risk of bushfire [13]. Insurance claims related to catastrophic events have increased by almost 50% over the past five years and are projected to grow by 5% annually by 2050 [13]. These escalating losses burden insurers, drive up premiums and create difficulties in balancing affordability for consumers. Climate change is also expected to reduce the availability of insurance, particularly in high-risk areas. Researchers estimate that by 2030, one in 25 homes in Australia could become uninsurable [63]. Underinsurance, or a total lack of insurance, remains prevalent among lower socio-economic groups and in regional and remote areas, disproportionately placing these communities at risk. As disasters strike, the financial losses borne by these groups exacerbate their situations, perpetuating a harmful cycle of vulnerability and underinsurance that amplifies the risk of future losses. Stakeholders are increasingly demanding that insurance companies disclose and address environmental, social and governance (ESG) issues, including climate risk. Customers, investors, regulators and socially conscious employees are scrutinising the industry and calling for meaningful progress in these areas [64]. Amid these challenges, AI‑driven innovations are offering solutions to better predict and manage natural peril risks (Box 1). Technologies such as IBM’s hurricane forecasting advancements and Australia’s AI-powered smoke detection and bushfire response systems demonstrate the potential of AI to enhance early warning systems and mitigate climate‑related risks. By leveraging such innovations, insurers can take proactive steps to reduce losses, stabilise premium costs and make coverage more accessible to vulnerable communities. Box 1: Risk prediction and management using AI AI is increasingly used for predicting and managing the risks associated with climate‑related events, leveraging vast datasets to improve detection and mitigation strategies. Globally, AI models are transforming the prediction of extreme weather events and risk assessment, leading to significant advancements in early warning systems. For example, AI-enhanced hurricane forecasting has reportedly improved the precision of intensity predictions by nearly 40% compared with traditional models [65]. A notable example occurred in July 2024, when Hurricane Beryl was moving across the Atlantic Ocean. Google DeepMind’s GraphCast accurately predicted a sharp turn from southern Mexico towards Texas a week earlier than traditional methods [66]. Similarly, GraphCast outperforms traditional numerical models in accuracy and compute efficiency, predicting hundreds of weather variables globally at 0.25° resolution for the next 10 days in under one minute [67]. In Australia, AI initiatives are addressing critical challenges like bushfire detection and response. Forest Fire Management Victoria (FFMVic) launched a project in 2023 to evaluate AI-powered smoke‑detection software. Through the 2023–24 and 2024–25 fire seasons, and with the possibility to extend the trial, this initiative has deployed ground cameras at a selection of FFMVic’s 64 fire lookout points to detect smoke early, particularly during unmanned periods, enhancing rapid response capabilities [68]. Similarly, Spark – CSIRO’s AI-enabled framework – is being used to model and predict bushfires across Australia [69, 70], alongside other operational fire simulators [71]. AI has significant potential to model, evaluate and prepare for compounding weather events. AI-enabled systems have been developed that can analyse the cascading effects of extreme weather and simulate multiple ‘what‑if’ scenarios for different characteristics of climate‑related disruptions [72]. AI can support understanding and response to polyrisk situations, such as extreme weather compounded by infrastructure failures – for example, prolonged heatwaves paired with power outages or challenges in bushfire prevention. A study in Florida demonstrated the use of neural network‑based approaches to assess the cascading impacts of natural hazard events across interconnected systems, evaluating ‘co-resilience’ in infrastructure networks such as electricity, transportation and communication [73, 74]. AI tools can enable dynamic risk modelling and scenario planning to map preventive measures and optimise targeted responses, minimising consumer impact. While AI models for climate risk prediction show great promise, they also face limitations. For instance, early flood prediction models were criticised for overestimating risks, leading to unnecessary disruptions. Challenges also remain in achieving consistent accuracy across diverse geographical regions and varying data quality [75–77]. Despite breakthroughs like GraphCast’s ability to forecast hurricane landfalls three days earlier than traditional models, the warning time for events such as tornadoes remains short –currently around 15 minutes – with ongoing efforts to extend it using AI [78]. Moreover, AI’s reliance on historical data poses a difficulty in accounting for rare or unprecedented events – a critical issue as climate change introduces new and unpredictable weather patterns [77, 79]. These challenges underscore the importance of rigorous development to ensure AI models are both reliable and trustworthy in natural peril risk management. Cybersecurity has become a significant challenge for the Australian insurance sector as digital transformation accelerates. With increasing reliance on digital platforms and cloud-based services, insurers have become prime targets for cyberattacks. These incidents compromise sensitive customer data and cause substantial financial losses through service disruptions, reputational damage and remediation efforts. AI integration further intensifies concerns around privacy and cybersecurity. According to a KPMG survey, 85% of insurance-sector CEOs identified cybersecurity as a major hurdle for implementing AI technologies [80]. To address these risks, insurers are enhancing internal security systems, educating customers and expanding their portfolios with cyber insurance products that help businesses mitigate the financial impacts of cyberattacks [81, 82]. Adoption of these products is steadily growing among Australian businesses but insurers remain vulnerable to emerging threats, including AI-enabled attacks such as deepfake scams [83] (see Section 3 for further discussion). The success of AI adoption in the Australian insurance industry hinges on carefully evaluating specific use cases, as the risks and opportunities vary significantly across different applications. Section 3 delves into practical use cases of AI in insurance, highlighting the unique risks and opportunities associated with each, and outlining strategies to leverage AI effectively in practice. 3 Use cases, risks and opportunities for AI adoption in insurance 3.1 Five focal use cases To better understand the risks and opportunities for AI adoption in the insurance industry, we examined how they vary across specific use cases, each presenting a unique mix of challenges and opportunities that shape AI’s overall potential for insurers. Our approach included a review of academic and industry literature, the application of CSIRO AI strategy tools and an analysis of reports from insurance companies on the New York Stock Exchange (NYSE). This process identified 10 high-potential AI use cases, which were refined and prioritised during two workshops with the ICA AI Working Group in November 2024. Through polling and in-depth discussions, the group selected the top five most relevant applications for the Australian general insurance industry. The working group then identified the specific risks, opportunities and guardrails associated with each use case. Figure 6 presents the finalised set of use cases identified through these deliberations. Sections 3.1.1 to 3.1.5 provide an overview of each use case as identified by the workshop participants, highlighting the associated risks, opportunities and recommended guardrails. Figure 6: Five focal use cases for AI adoption in insurance 3.1.1 Automated claims processing and triage Table 1: Use case at a glance: automated claims processing and triage OPPORTUNITIES • Efficiency: faster claims processing and accurate repair allocations • Customer experience: clear communication and enhanced service quality • Best use: ideal for high-volume, low-complexity claims RISKS • Technical risks: cybersecurity, data leakage and high compute costs • Operational challenges: poor model maintenance and siloed GenAI governance • Model risks: bias, false positives and lack of transparency GUARDRAILS • Ethical guardrails: comprise checks, training and explicit customer consent • Operational oversight: embeds accountability, cost monitoring and customer tracking systems • Technical safeguards: ensure model monitoring, data security and transparency audits ADOPTION Adoption over the past year: medium to high Likelihood of adoption over the next two years: medium to high Automation and escalation within the claims process using AI models allow for the assessment and approval or rejection of claims based on predefined criteria. This offers the potential to accelerate the process, reduce operational costs and support the prioritisation of high‑risk or complex claims for assignment to human agents. Five prioritised AI adoption use cases 1. Automated claims processing and triage 2. Fraud detection and prevention 3. Enhanced underwriting and risk assessment 4. Natural disaster impact prediction and response 5. Operational control and compliance Opportunities Operationally, AI systems can enable deeper analysis of claims trends and introduce efficiencies through faster claims processing. Enhanced liability verification capabilities ensure faster processing of positive customer cases, while greater accuracy in repair allocation help get customers to the right repairer the first time. For customers, the technology enables better outcomes through an improved experience across multiple channels, including clearer communications with consistent language for liability decisions. The system also allows for freeing up claims handlers’ time, so they can focus on higher priority, more complex work and personal customer support. It can improve the accuracy and quality of customer interactions. Risks Technical risks include cybersecurity concerns, such as the vulnerability of personal data and the potential for external data leakage. Operational challenges include poor maintenance of models after‑implementation, insufficient model training, and the risk of treating GenAI used for triage and automation as a siloed activity, rather than being governed by best practices established for classical AI systems. Bias emerges in multiple forms, including conflicting objectives between customer indemnity and insurer financial outcomes, and varying levels of awareness about vulnerabilities in claims handling. Model-specific risks include errors in triaging claims (false positives), model bias and lack of transparency. These risks are particularly concerning as the use case may fall into a high-risk category under AI regulations, potentially attracting public concern and negative media coverage of negative experiences, even if such incidents are rare. The cost associated with development and implementation of AI systems, along with cybersecurity challenges, could disproportionately impact smaller insurers, limiting their ability to adopt AI technologies. At the same time, the scaling opportunities provided by AI can benefit smaller industry players by supporting competitive balance. However, they also increase competitive pressure on larger insurers, who can no longer rely on the scale of human capital as a source of competitive advantage. Guardrails Foundational elements of guardrails to minimise these risks include ethical checks and assessments, mandatory workforce training covering both technical and ethical considerations, and clear requirements for customer consent where relevant. Operational oversight requires defined accountability for AI outcomes, robust customer monitoring systems and specific non-repair cost tracking (including time, lifecycle and potential model errors) to detect cost overruns. Technical safeguards include comprehensive model monitoring, data security measures and third‑party audits, with a strong focus on transparency of data sources to mitigate potential bias. Governance frameworks should involve risk, audit and compliance teams while ensuring human ownership of decisions – emphasising augmentation rather than automation when using AI. Broader governance requirements include pro-social framing of AI use cases, disclosure of AI usage and regular reviews of AI model risk management. A dedicated assurance and independent review process for GenAI may be necessary to manage the unique and emergent risks associated with this newer technology. Flood waters in Lismore, NSW (2022) 3.1.2 Fraud detection and prevention Table 2: Use case at a glance: fraud detection and prevention OPPORTUNITIES • Efficiency: scaled operations and reduced need for fraud investigations • Customer benefits: lower premiums as fraud-related costs decrease • Fraud prevention: AI-identified fraud trends and validated asset quality RISKS • Training data limitations: AI may miss new or AI-enabled fraud patterns • Human expertise loss: automation risks eroding manual knowledge • Customer impact: delays and anxiety from enhanced, error‑prone verification processes GUARDRAILS • Bias prevention: excludes protected features and tests for embedded bias • Human oversight: ensures investigators retain authority, supported by clear dispute processes • Collaboration: uses data-sharing to improve fraud detection and ensure responsible use ADOPTION Adoption over the past year: medium to high Likelihood of adoption over the next two years: medium to high Use of AI to analyse patterns in claims data to identify suspicious activities indicative of fraud enables insurance companies to detect and prevent fraudulent behaviour more effectively. By reducing fraud, these systems help lower operational costs, which can translate into more affordable premiums for customers and ensure resources are allocated to genuine claims, ultimately benefiting both the customers and the broader community. Opportunities The implementation of AI-driven fraud detection systems enables scalable operations, allowing insurers to process more cases while reducing the overall number of investigations required. This efficiency gain could benefit customers by potentially lowering premiums as fraud-related costs decrease. Operationally, AI enhances fraud prevention capabilities by rapidly identifying fraud trends and validating both identity and asset quality, such as confirming vehicle conditions. Risks A significant technical challenge emerges from the potential limitations of historical training data. Specifically, AI systems trained on historical fraud patterns may be unable to detect new AI-enabled fraudulent activities. Additionally, insurers may not have sufficient volumes of training data, which may create blind spots in fraud detection capabilities. These risks are compounded by the potential for bias in the AI systems, and errors in the models which could accumulate and become chronic over time. As fraud detection becomes more automated, there is a risk of losing valuable human knowledge and pattern recognition of fraud previously gained and embedded in well-understood manual processes. This creates a concerning ‘black box’ scenario where businesses remain responsible for investigations but may lack the deep understanding needed to effectively oversee them. Further technical risks arise from handling sensitive data and the potential for incorrect claim rejections due to insufficient or biased training data. The investigation process itself presents challenges, as genuine customers might face anxiety and delays during enhanced verification procedures. Guardrails As a technical guardrail, protected features and their proxies must be excluded from AI models, with thorough testing for bias embedded against protected attributes. Regular AI system audits and sufficient human intervention points are critical, including oversight by fraud investigators who retain ultimate decision‑making authority. This would be underpinned by existing internal and external dispute-resolution processes, including oversight by the Australian Financial Complaints Authority (AFCA). To ensure effective human oversight, AI systems, along with the data used for training and assessment, must be explainable and transparent. Additionally, industry-wide collaboration to responsibly share data on fraudulent claims can enhance fraud detection effectiveness while promoting responsible implementation practices. 3.1.3 Enhanced underwriting and risk assessment Table 3. Use case at a glance: Enhanced underwriting and risk assessment OPPORTUNITIES • Customer benefits: tailored pricing and improved risk awareness. • Operational gains: enhanced data reclassification and anti‑selection measures • Professional development: AI enables upskilling of intermediate underwriters RISKS • Bias and precision: risk of uninsurable areas and market inaccessibility • Operational risks: loss of human expertise in underwriting decisions • Transparency challenges: contestable rejections pose reputational and operational risks GUARDRAILS • Human oversight: tasks portfolio managers with ensuring expert review of AI decisions • Key metric: uses combined ratio measures to AI portfolio performance • Auditing: undertakes regular checks of AI accuracy and fairness over time ADOPTION Adoption over the past year: medium to high Likelihood of adoption over the next two years: high AI can help refine risk models by integrating various non-traditional and unstructured data sources (e.g. telematics data from vehicles), enabling more precise risk assessment and tailored pricing. This can ensure fairer premiums and promote a safer community by incentivising risk-reducing behaviours, such as safer driving or better property maintenance. Opportunities For customers, the benefits include more precise and tailored pricing, with improved claims discounting capabilities (though this is not strictly related to underwriting). The technology also enables broader and more nuanced occupation and career considerations in risk assessments while enhancing risk awareness and mitigation strategies. From an operational perspective, AI supports better pricing by reclassifying legacy claims data using GenAI. It enhances selection and anti-selection measures, contributing to a stronger book of business that benefits both insurers and customers through more accurate risk assessments. Additionally, AI offers the potential to upskill intermediate underwriters, fostering professional development within the industry. Risks A fundamental concern is the potential for bias within AI systems, further complicated by transparency and contestability issues in underwriting decisions. Specific concerns include individual decisions lacking alignment with broader strategic contexts, potentially affecting overall portfolio management. A notable risk is that if the system becomes too precise, certain areas may become uninsurable, raising critical questions about market accessibility. Technical risks include cyberthreats, while operational risks focus on the potential erosion of human expertise in underwriting. The challenge of maintaining transparency in underwriting decisions, particularly when defending rejections, poses both operational and reputational risks. Guardrails Guardrails include requiring human underwriters to maintain a portfolio manager–style role, ensuring expert oversight of AI-driven decisions. The integration of GenAI should be carefully managed within existing business processes, with regular assessments of outcomes over time. A key metric introduced is the combined ratio for AI portfolios, focusing on bottom-line performance as a measure of effectiveness. Additionally, regular audits of the AI solution are recommended to monitor and ensure ongoing accuracy and fairness. 3.1.4 Natural disaster impact prediction and response Table 4: Use case at a glance: natural disaster impact prediction and response OPPORTUNITIES • Disaster response: improved assessment, warnings and faster servicing • Customer benefits: reduced gaps, quicker claims and immediate payments • Loss mitigation: enhanced loss prevention and early warnings RISKS • IP risk: hesitance from large players over intellectual property (IP) loss • Bias impact: risk of unfair disaster response prioritisation and allocation • Risk-based pricing: might negatively impact insurance affordability for some customers GUARDRAILS • Segmented data: protects IP and sensitive community information • Aggregated analysis: enables insights while addressing bias and privacy • Participation support: facilitates insurer collaboration without compromising proprietary data ADOPTION Adoption over the past year: medium Likelihood of adoption over the next two years: medium AI enables insurance companies to predict the impact of natural disasters, such as bushfires and floods, on insured properties and communities by analysing satellite imagery, weather patterns and historical loss data. This facilitates the development of proactive response plans and the delivery of timely, effective support, helping customers and communities recover more quickly and efficiently. Opportunities Insurers can better prepare for disasters and mitigate their financial impact, lessening the burden on both the insurer and the insured. AI enables greater risk identification for stakeholders and supports synchronised analysis across insurance, emergency services and government agencies. In natural disasters, where safely deploying teams is challenging, aerial-imaging AI enhances impact assessments and provides a clearer picture of disaster zones. AI technology also supports pre-emptive warnings and quicker servicing, particularly during catastrophic events. For customers, AI helps reduce protection gaps, facilitates faster decision-making at claims time and improves supply chain management for accessing essential resources, such as building materials and temporary accommodation. It also enables immediate payments for those impacted by events such as floods. Most importantly, AI contributes to loss avoidance, mitigation and minimisation while offering early warnings to keep people safe. Risks Risks include the potential loss of intellectual property (IP) when it is shared along with the data, leading to hesitance from large players who have heavily invested in their own data collection systems. Bias is a critical risk factor, particularly in determining which impacts are considered and who is affected by natural disasters. This bias could influence response prioritisation and resource allocation, potentially disadvantaging certain groups or regions. More accurate risk-based pricing may increase premiums for higher risk areas and households, reducing insurance affordability. Guardrails Guardrails include segmenting data to ensure the appropriate handling and analysis of sensitive information. Given the risks to IP and participation hesitancy from insurers, segmented data approaches allow organisations to retain control over proprietary information while contributing to collaborative disaster response efforts. This segmentation also helps manage community impact risks by ensuring sensitive community data are partitioned and protected. Another guardrail is aggregated analysis, which enables collaborative insights while protecting individual stakeholder interests. This approach addresses bias risks by providing a comprehensive overview of disaster impacts, maintaining privacy and data protection standards. 3.1.5 Operational control and compliance Table 5: Use case at a glance: operational control and compliance OPPORTUNITIES • Cost reduction: centralised compliance checks and scalable population analysis • Customer service: complaints handled and vulnerable customers supported • Consistency: augmented decisions, ensuring clarity and compliance adherence RISKS • Over-reliance risk: AI may weaken monitoring and regulatory reporting • Lack of human judgment: essential for compliance, especially in subjective scenarios • Model limitations: difficulties with context in nuanced compliance situations GUARDRAILS • Regulatory acceptance: ensures the AFCA and regulators support AI system outputs • Auditable processes: establish transparent, auditable AI compliance frameworks • Governance standards: develop implementation guidance for emerging AI technologies ADOPTION Adoption over the past year: low to medium Likelihood of adoption over the next two years: medium AI can be used to monitor and support compliance with regulatory requirements and internal policies by automating the detection of anomalies, tracking policy adherence and generating real-time compliance reports. This can improve operational accuracy, reduce regulatory risks and ensure that insurers remain aligned with evolving legal standards. For customers, this translates to greater trust and confidence in the insurer. Opportunities At a foundational level, using AI can reduce costs by linking compliance checks and reporting to single sources of truth. It enables greater assurance through analysis of larger populations and scales controls from small sample checks to all claims or calls. A significant operational benefit is sustainable profitability for smaller organisations, even with large compliance overheads. Customer service quality can be improved by ensuring all complaints are identified and actioned, and by enabling vulnerable customers to access specialised support. AI can augment human decisions with consistent, well-referenced rationales (e.g. legal frameworks and product disclosure statements), while improving visibility of communication timing standards. Additionally, AI can support better complaint management by quickly identifying themes helping address customer concerns efficiently. Risks A significant concern is over-reliance on, and inappropriate use of, AI, which may lead to missed or inaccurate operational monitoring internally and deficiencies in reporting to regulators. Reduced dependence on human control is particularly concerning in regulatory compliance, where human judgement and accountability remain essential. Guardrails To mitigate the identified risks, the critical guardrail for this use case is proactive collaboration with the AFCA and regulators regarding AI system outputs. This can support the successful development and adoption of AI-assisted compliance processes and establish a framework for transparent and auditable processes. Additionally, the need for GenAI governance standards is critical, as this type of AI technology remains nascent and requires more detailed implementation guidance for effective use in this context. 3.2 Opportunities: transforming customer experiences and driving sustainable choices The adoption of AI in insurance offers a range of opportunities that can transform the industry by improving operational efficiency, enhancing risk management and delivering better consumer outcomes. The following opportunities are observed across the prioritised use cases and can address longstanding challenges and unlock value for both insurers and their customers. Enhancing operational efficiency and cost management AI can introduce significant efficiencies by automating core processes, reducing manual workloads and improving decision-making accuracy. In claims processing and triage (Use Case 1), AI can accelerate claims assessments, improve repair allocation and reduce operational costs by prioritising low-complexity cases for automation, while reserving human agents for more complex claims. Similarly, in fraud detection (Use Case 2), AI enables scalable operations by quickly identifying suspicious trends and reducing unnecessary investigations, helping insurers process higher volumes of claims efficiently. Improving risk management and resilience AI enhances insurers’ ability to identify, predict and manage risks with greater accuracy. AI can leverage a broader range of non-traditional and unstructured data sources to refine risk models, enabling more precise pricing and fairer premiums for customers (Use Cases 3, 4). This improves insurers’ financial resilience, facilitates proactive disaster response planning and helps minimise losses, potentially benefiting both insurers and vulnerable communities. AI’s predictive capabilities also help insurers better assess fraud and liability risks, ensuring resources are allocated appropriately (Use Cases 3, 4). Enhancing customer experience and trust AI adoption can deliver improvements in customer service by ensuring faster, clearer and more accurate interactions. In claims processing (Use Case 1), AI can improve communication through consistent and transparent decision-making processes, and reduce fraud-related costs (Use Case 2), which can contribute to more affordable premiums for customers. AI can also enable insurers to provide timely support during natural disasters, facilitating resource allocation and early warnings to at-risk communities (Use Cases 1, 2, 4). Additionally, AI can strengthen consumer trust through better compliance management. For example, in operational control and compliance (Use Case 5), AI can improve complaint handling, ensure adherence to regulatory requirements and help identify vulnerable customers for specialised support. By delivering consistent, well-referenced decisions aligned with legal and policy frameworks, AI can help foster confidence in insurers’ fairness and transparency (Use Case 5). Supporting innovation and professional development AI adoption encourages innovation by enabling deeper analysis of data trends and supporting the creation of tailored insurance solutions. This can include more personalised pricing models in underwriting (Use Case 3) and proactive resource management during disasters (Use Case 4). Furthermore, AI technologies can help upskill the insurance industry workforce by augmenting human decision-making processes, fostering professional growth and higher quality outcomes (Use Cases 3, 5). Beyond operational improvements, AI presents an opportunity for insurers to drive positive societal and environmental change. By leveraging AI’s predictive and analytical capabilities, insurers can encourage responsible behaviours and contribute to sustainable practices. Box 2 highlights how AI can help shift the industry from a reactive to a proactive approach in tackling challenges such as underinsurance and climate risks. Box 2: Driving positive change with AI The insurance industry holds a unique position to influence the behaviours of individuals and organisations through its role as a financial risk manager. By structuring policies that incentivise healthy behaviours, sustainable practices and risk-reducing actions, insurers can drive positive change. AI can significantly enhance this capacity, enabling a shift from a reactive ‘detect and repair’ model to a proactive ‘predict and prevent’ approach, particularly in managing climate change risks. A ‘predict and prevent’ strategy using AI can operate in three key areas, provided sufficient data are available: • Predicting underinsurance: AI can help identify underinsured households or businesses, enabling better advice and more effective risk-sharing within portfolios. • Mitigating incidents: by analysing risk factors, AI can recommend cost-effective prevention measures to reduce the likelihood of future claims. • Infrastructure and government interventions: AI can help predict systemic vulnerabilities and advise governments or infrastructure managers on preventive measures to mitigate large-scale risks. However, researchers emphasise that integrating mitigation measures into pricing structures requires further study. Additionally, insufficient data-sharing poses a significant barrier to realising the full potential of these analytics [63]. While AI has the potential to improve insurance accessibility and pricing, inadequate oversight of AI-driven decisions could inadvertently harm marginalised groups at the edges of insurability. Greater scrutiny of AI-based pricing and trends is essential to avoid macroprudential risks. Addressing climate risks requires not only adaptive measures but also proactive efforts to prevent the impacts of climate change altogether. Initiatives like the Taskforce for Nature-related Financial Disclosure (TNFD) encourage insurers to adopt ‘nature-positive’ strategies, benefiting both business and the environment [84]. And while surveys demonstrate that insurance industry leaders recognise that handling of climate risks impacts company reputation, competitiveness and financial standing, more action is required to actually shift the businesses towards sustainable operations [64]. Premium incentives for environmentally friendly goods and services also remain an untapped opportunity for the insurance industry. As sustainability reporting standards evolve – particularly in Europe – data on the environmental impact of products and services will become more accessible, allowing insurers to adjust premiums accordingly. However, such incentives must carefully consider socio‑economic disparities, as lower-income groups may struggle to afford eco-friendly options. Striking the right balance is essential to avoid exacerbating inequalities while promoting sustainable choices. 3.3 Risks and their implications for consumers While AI offers vast potential, it introduces a range of risks and challenges. While there are multiple classifications of AI-related risks in the literature, we identified seven categories of risks. These risks are interrelated and can amplify one another (Figure 7). Figure 7: Interrelated groups of risks and concerns associated with AI adoption in insurance Financial and AI investment risks Financial risks stem from the economic aspects of AI adoption, including compliance costs, litigation, computational costs and costs of errors in decision‑making processes, such as underpayment of claims or undetected fraudulent claims. Stricter AI transparency and fairness requirements can also increase operational expenses and reduce revenue margins. Examples of financial risks, highlighted in the use cases, include high compute costs and cybersecurity challenges (Use Case 1). Delays or inaccuracies in claims processing and fraud detection (Use Case 2) further exacerbate financial burdens as they can to potential revenue losses and increased customer dissatisfaction [85]. Operational risks Operational risks can arise from implementing and maintaining AI technologies. These include system failures, design defects and inadequate data quality. For example, reliance on AI for automating core processes such as underwriting and title insurance has introduced risks such as misapplied technologies and failure to meet customer expectations [86, 87]. Automated claims processing (Use Case 1) highlights operational challenges such as poor maintenance of models after implementation, errors in triaging claims and insufficient oversight. These risks can lead to biased or incorrect decisions, which, if publicly exposed, could generate negative media coverage. Fraud detection systems (Use Case 2) illustrate the operational problem of relying on historical data, which may fail to detect new fraud patterns. Additionally, the loss of human expertise in manual fraud detection processes creates a ‘black box’ scenario where insurers are responsible for outcomes without a deep understanding of the AI model’s operations. Another group of operational risks comes from Use Case 3, which highlights the risks of erosion of human expertise and difficulty maintaining transparency in decisions, especially when justifying rejections. This group of risks would require robust solutions for dispute resolution for decisions made with AI assistance. Compliance risks The evolving regulatory landscape presents significant compliance and management challenges for insurers adopting AI technologies. Insurers are navigating complex frameworks, including data protection laws, privacy laws, anti-discrimination guidelines and AI-specific regulations. Ambiguous policy wording and limited exclusions related to AI usage can create uncertainty, leaving insurers exposed to regulatory risks. Company reports highlight difficulties in maintaining approved procedures, ensuring data security and adapting to these evolving standards. Failure to meet these obligations can result in fines, reputational harm and operational setbacks [86]. Compliance monitoring (Use Case 5) highlights the risk of over-reliance on AI or inappropriate application of AI, which may result in missed or inaccurate reporting to regulators. This issue is particularly concerning in handling vulnerable customers and complaints, where human judgment remains essential. Limitations in the ability of AI models to address subjective controls or principles-based regulations can lead to regulatory scrutiny and reputational harm. Compounding these issues, high failure rates of AI initiatives – often caused by poor management practices, lack of strategy or inadequate governance – further exacerbate the risks. Such failures not only hinder compliance efforts but can also delay service delivery, reduce trust in AI-driven solutions and negatively impact customer outcomes [33, 34]. False positives and predictive accuracy concerns Privacy and cybersecurity concerns Financial and AI investment risks Operational risks Compliance risks Reputational risks Bias and discrimination concerns AI adoption in insurance Reputational risks AI-related failures can severely damage trust and a company’s brand. Biases in claims triaging (Use Case 1) and underwriting decisions (Use Case 3) pose reputational risks, particularly when outcomes can appear unfair or discriminatory. For example, overly precise risk assessments in underwriting could render certain areas uninsurable, raising concerns about equity and accessibility. While misuse of statistical models and competitive disadvantages in AI capabilities are frequently cited as reputational concerns, cyberattacks and other vulnerabilities also amplify reputational risks [88]. Fraud detection systems (Use Case 2) risk public backlash when genuine customers are subjected to unwarranted delays or enhanced verification processes, eroding trust in the insurers. Bias and discrimination concerns AI systems risk reinforcing biases through proxies for protected attributes in datasets [89]. Over-personalisation of products can reduce comparability, complicating consumers’ ability to evaluate options and make informed decisions [29]. In automated claims processing (Use Case 1), conflicting objectives between customer indemnity and insurer financial outcomes may introduce bias into decisions. Similarly, natural disaster prediction models (Use Case 4) risk bias in prioritising response efforts, potentially disadvantaging certain regions or communities. False positives and predictive accuracy concerns AI systems in fraud detection and risk assessment often generate false positives, overwhelming teams and allowing genuine threats to go unnoticed. A 2024 survey revealed that 64% of respondents identified false positives as a major issue, with 42% experiencing them in 41–80% of cases [90]. Such inaccuracies harm customers by delaying claims processing or leading to wrongful claim denials. Predictive inaccuracies in underwriting (Use Case 3) or natural disaster models (Use Case 4) may also result in the misallocation of resources or erroneous premium calculations, further impacting operational effectiveness and customer satisfaction. Privacy and cybersecurity concerns AI adoption amplifies privacy and cybersecurity risks, as insurers rely on AI to process sensitive customer data. Cyberthreats, including data leaks, unauthorised access and adversarial attacks, pose significant risks across use cases. Automated claims processing (Use Case 1) and fraud detection (Use Case 2) are particularly vulnerable, as breaches or adversarial attacks can compromise decision‑making and erode consumer trust. Additionally, natural disaster prediction models (Use Case 4) face risks of IP loss and cyberattacks targeting proprietary data systems. Overall, the range and complexity of cybersecurity and privacy risks associated with AI adoption in the insurance sector require careful attention. These risks are further amplified by ongoing privacy reform, including the recent requirement to notify customers when ADM is used for decisions that can impact their rights or interests [91, 92]. Additionally, future decisions on Privacy Act 1988 (Cth) reforms introduce further uncertainty, particularly regarding consent, which must be unambiguous and up to date. The implementation requirements for these changes are not yet clear, adding to the complexity of compliance and risk management. The Appendix provides a detailed discussion of the most prominent challenges in privacy and cybersecurity that insurers need to consider in their AI adoption processes and strategies, while Box 3 offers further insights into the privacy and cybersecurity risks specific to the five prioritised use cases. Box 3: Cybersecurity and privacy risks associated with AI adoption While all AI applications carry some of the risks outlined in this report – and potentially new, unforeseen risks – the use cases prioritised in the workshops are particularly significant. Each use case presents specific privacy and cybersecurity challenges that require robust safeguards and mitigation strategies to prevent potential damage. Some risks are well documented today, others are foreseeable in the near future and some remain unpredictable. Table 6 provides an overview of current and potential future risks associated with these five use cases. For example, in Use Case 1, current risks include attackers manipulating input data to deceive AI models, resulting in incorrect claim decisions, such as unwarranted approvals or denials. Techniques like ambiguous language or adversarial phrases can exploit the model’s vulnerabilities [93]. Additionally, these models, which are trained on vast amounts of sensitive data, are susceptible to model extraction attacks [94], posing the risk of catastrophic data breaches. Future risks include increasingly sophisticated adversarial attacks that may become harder to counter as AI technology advances. Table 6: Current and future privacy and cybersecurity risks associated with the five use cases Note: Current can be interpreted as the immediate concerns, and Future can be seen as the potential or emerging concerns that are likely to intensify or materialise over time. An effective AI solution would address both concerns comprehensively by implementing robust data protection measures, proactive threat detection, and ongoing compliance strategies. Overall, AI adoption in insurance presents a broad spectrum of risks. For consumers, these translate into concerns about fairness, privacy and trust. However, these risks can be effectively managed with good risk management practices and the use of guardrails. Section 3.4 discusses the guardrails around AI adoption in insurance and the critical role of robust governance, ethical practices and transparency in ensuring that AI benefits both insurers and consumers. USE AUTOMATED CLAIMS PROCESSING & TRIAGE FRAUD DETECTION & PREVENTION ENHANCING UNDERWRITING & RISK ASSESSMENT NATURAL DISASTER IMPACT PREDICTION & RESPONSE OPERATIONAL CONTROL & COMPLIANCE CASE RISK CURRENT FUTURE CURRENT FUTURE CURRENT FUTURE CURRENT FUTURE CURRENT FUTURE Privacy violations 3.4 Governance and guardrails: regulatory responses to AI in insurance Regulation of AI in the insurance industry has evolved in two distinct waves. Initially, attention focused on machine learning and its ability to extract patterns from large datasets. This enabled new business insights but raised concerns about data governance and discrimination. The second wave began in 2023, with the emergence of AI foundation models, such as large language models, prompting more comprehensive regulatory responses covering broader ethical and societal implications. International frameworks: technology‑neutral guidelines The International Association of Insurance Supervisors (IAIS), representing supervisors and regulators of insurance in over 200 countries [95], published a foundational regulatory framework through 24 technology-neutral principles in 2011, revised in 2019 [96], that emphasise good governance, risk management and customer protection. In July 2025, the IAIS issued an Application Paper to support supervisors in applying insurance core principles and ensuring consistent global oversight of AI use in the sector. The IAIS principles leave space for both regulators and insurers to use AI. As IAIS members, Australian regulators – the Australian Prudential Regulation Authority (APRA) and ASIC) – adhere to these principles. The International Monetary Fund (IMF) regularly assesses Australian regulators (2012 [97], 2019 [98]) on their compliance with the principles. Australian legislation, including the Insurance Contracts Act 1984 and anti-discrimination laws, supports these principles, creating a robust, technology-independent regulatory foundation [89]. Evolving sector-specific guidance From the 2010s, regulators began issuing specific guidance for machine learning applications in insurance. In 2019, the USA’s National Association of Insurance Commissioners (NAIC) established AI regulatory principles emphasising fairness, accountability and proportionality [99]. To increase the adoption of these principles, the NAIC provided a model bulletin for each commissioner to adapt for their insurers [100]. The NAIC urged insurers to govern third-party AI systems with the same rigour as internal systems, promoting transparency in vendor contracts. By 2024, 17 states of the USA had adopted or were adopting these principles [101]. Similarly, the European Insurance and Occupational Pensions Authority (EIOPA) [102] issued governance principles in 2021 [29] underpinned by broader European Union regulations such as Solvency II, the General Data Protection Regulation (GDPR) [103] and the ePrivacy Directive [104]. These principles address fairness, human oversight and data transparency while advocating tailored regulation proportional to AI’s risk and impact. Emerging ethical and responsible AI regulations and standards Over the past two years, several countries have introduced AI-specific regulations with an increasing focus on ethical AI, fairness, explainability and human oversight (Table 7). Earlier, in 2019, the Australian Government released Australia’s AI Ethics Principles [105]. In 2023, the International Organization for Standardization (ISO) released the first international standard on AI, ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System, which specifies requirements for AI management systems within organisations from establishment through to maintenance and continuous improvement [106]. In this context, the Australian Government released the Voluntary AI Safety Standard: Guiding safe and responsible use of artificial intelligence in Australia in August 2024 [107]. It sets out 10 voluntary guardrails for AI aligned with the USA’s National Institute of Standards and Technology (NIST) AI Risk Management Framework, offering guidelines for accountability, transparency and risk management. In September 2024, the Australian Government also released a discussion paper on mandatory guardrails for high-risk AI [108]. They were essentially the same; however, mandatory guardrails bring mandatory reporting requirements. The primary challenge is to identify what is high risk. The discussion paper focused on potential adverse impacts on people and society, whether they be rights, physical and mental health, litigation or the economic, social, environmental or legal fabric of society. Table 7: AI-specific regulations introduced in 2023-2024 REGION REGULATION NOTES ON RELEVANCE FOR INSURANCE European Union AI Act (2024) Prohibits harmful uses of personal data, such as behavioural social scoring, and imposes stricter requirements for ‘high‑risk’ AI, including health and life insurance applications USA Artificial Intelligence Research, Innovation, and Accountability Act (2024) [109]; NIST AI Risk Management Framework [110] Focus on high-impact AI applications, requiring transparency and annual reporting United Kingdom AI Regulation White Paper (2024) Promotes an agile, principle-based regulatory approach, fostering innovation while safeguarding consumer rights Canada Artificial Intelligence and Data Act (2023) [111] Emphasises fairness, safety and human oversight, complemented by voluntary codes of conduct In government services, legislative reform has been in development since the Robodebt Royal Commission to establish a consistent framework for ADM in government operations [112]. Currently under consultation, the regulation is expected to serve as a landmark for the use of algorithms, including AI, across all service sectors of the Australian economy. A range of changes to the legislative environment surrounding AI adoption is expected to emerge alongside the developments in the privacy reform and new cybersecurity legislation, which is currently well underway [92, 113–115]. As these legislative updates are introduced, they will likely establish new standards for accountability and governance, impacting how insurers manage ADM processes and handle sensitive customer data [91, 116]. Industry players and government organisations are actively developing practical guidelines and recommendations to consider emerging challenges. For instance, the Australian Human Rights Commission, in collaboration with the Actuaries Institute, has published guidance on the use of AI and its implications for discrimination in insurance pricing and underwriting [117]. There are also increasing commitments from government bodies to oversee and monitor the adoption of AI. The IMF’s 2019 review of Australian insurance regulation recommended enhancing enterprise risk management to address AI-specific risks. ASIC’s Corporate Plan (2023–27) [118] commits to monitoring AI’s impact on financial services, focusing on potential consumer harm and algorithmic misuse. Key challenges in AI regulation and implementation gaps Despite significant regulatory progress, challenges persist in aligning AI principles with practical applications in the insurance industry. The IAIS FinTech Advisory Group [119] identified key regulatory challenges, including: • Bias and discrimination: indirect discrimination remains hard to detect, especially when protected attributes are excluded from datasets. • Explainability: AI systems often lack transparency, complicating oversight and limiting consumers’ ability to contest decisions. • High-risk designation: clear metrics and definitions are needed to identify and regulate high-risk AI applications consistently. • Human-centric focus: while decisions like premium adjustments are reversible, high premiums or denied claims can cause cascading financial and social harm. Despite these challenges, there is broad agreement that existing regulations are generally sufficient. The real need lies in translating these frameworks into actionable guidance. The Geneva Association argues that additional regulation may be unnecessary given the sector’s heavy regulation and the reversibility of insurance decisions [120]. However, practical considerations remain. A 2024 KPMG survey revealed insurer demand for clearer metrics to evaluate compliance with ethical and fairness standards [80]. Similarly, the USA’s NAIC has called for metrics to measure the effectiveness of existing guidelines [99]. In Australia, AI in insurance has largely operated in sandbox environments, promoting innovation while limiting broader application. Transitioning from sandboxes to real-world use will require rigorous standards and metrics to evaluate fairness, ethics and operational success. Similarly, transitioning from voluntary AI standards to mandatory guardrails for high-risk applications will require clearer definitions and practical support. And while frameworks from the IAIS, EIOPA and NAIC provide a strong foundation, practical application remains challenging. Practical guidance would require: • detailed recommendations to integrate principles into underwriting, claims and customer interactions • harmonised approaches for aligning global principles with localised regulations and practices for consistency • metrics development for measures of ethics, fairness and overall AI outcomes • heightened focus on risk assessments to ensure fair outcomes, especially for population groups on the edge of insurability. Internal guidelines and measures to mitigate AI related risks Insurers operate within sophisticated risk management frameworks and stringent regulatory regimes, including APRA’s risk management framework which, to a certain extent, covers AI applications [26]. However, companies are also implementing internal strategies, incorporating voluntary AI standards and guidelines, as well as governance mechanisms to better mitigate risks associated with AI adoption. Internal AI committees are being established to oversee model development, assess potential biases and ensure ethical considerations are integrated at every stage. A growing suite of technical and procedural tools supports the operationalisation of responsible AI. Specific measures, such as regular model audits, explainability tools and bias detection systems, are being used to verify AI’s reliability and fairness, particularly in underwriting and claims management. Regular stress testing ensures models perform reliably across various conditions. Firms are also increasingly leveraging third-party validations and external reviews of algorithms to bridge governance gaps. Cross-functional teams comprising actuarial, ethical and legal experts are also fostering robust model governance while enabling innovation. Despite these steps, the establishment of robust AI adoption guardrails remains challenging. Rapid advancements in technology and regulation, coupled with the complexity of ensuring fairness and transparency at scale, present ongoing difficulties. As AI systems grow more sophisticated, biases embedded in historical data or external third-party models can result in unintended errors. A lack of standardised frameworks for measuring AI accountability further complicates compliance with AI ethics guidelines and evolving regulations. Another key challenge for internal governance systems is ensuring that higher risk AI applications are identified and managed with appropriate guardrails, while allowing innovation in lower risk areas to proceed without overly restrictive policies. Talent and expertise also remain critical, as specialised skills in data science, AI ethics and regulatory compliance are essential for effective AI governance. Many firms struggle to bridge this expertise gap internally, leading to reliance on external vendors. This reliance raises additional governance concerns, particularly regarding data privacy and security, as well as around the management of vendors. The technology vendor landscape is moving fast, requiring frequent software contract reviews and upgrades, especially in the light of potential market consolidation around the providers of AI foundation models. Overall, many Australian insurers are actively exploring or building internal AI governance systems that embed ethical AI principles, governance frameworks and oversight systems to ensure responsible adoption. Nevertheless, governance gaps remain and can potentially slow AI adoption [121]. Addressing transparency at scale, mitigating unintended biases and overcoming talent shortages remain ongoing challenges in developing resilient and ethical AI governance systems in the industry. 4 Advancing AI adoption in the insurance industry The Australian insurance industry has shown resilience and adaptability in navigating digital transformation over the past decade. Backed by strong regulatory frameworks and consumer protection mechanisms, it is well positioned to leverage AI for transformative outcomes. However, with the rise of GenAI, new risks and complexities are emerging. Addressing these requires a proactive, collaborative and ethical approach, focusing on consumer benefits while mitigating risks. This section outlines seven key areas for advancing AI adoption to improve insurance in Australia. Delivering better insurance for all The insurance industry should develop and commit to a pro-social AI vision that prioritises customer-centric use cases and delivers societal benefits. This vision would consider key challenges, particularly those affecting vulnerable customers, while ensuring industry sustainability. To remain relevant and effective, insurers need to evolve beyond traditional models, leveraging AI to scale operations during catastrophic events and maintain the financial capacity to pay future claims. Addressing these challenges requires collaboration beyond the insurance sector. AI can identify emerging trends and risks, but solutions also depend on partnerships with governments. While localised measures and greater risk-sharing can mitigate some problems, insurers play a crucial role in advocating for larger scale initiatives – such as buy‑back schemes, rezoning and community-level mitigation infrastructure – to reduce exposure to catastrophic risks. The industry can also take a proactive educational role by helping communities understand controllable risks and adopt effective mitigation strategies. These efforts can extend to scenarios like managing flood events or addressing supply chain disruptions. By deploying AI to enhance predictions and streamline responses, insurers can prevent the creation of ‘insurance vacuums’, where coverage becomes unsustainable, necessitating government intervention. Advancing AI adoption in the insurance industry 9 Delivering better insurance for all 9 Strengthening governance for responsible AI 9 Fostering collaboration and resilience 9 Adopting AI strategically and proactively 9 Building AI skills for a future‑ready workforce 9 Becoming a trusted partner through transparent AI 9 Innovating insurance Strengthening governance for responsible AI adoption The insurance industry’s ongoing digital transformation has laid a strong foundation for adopting AI technologies. Existing frameworks for cybersecurity, privacy and compliance provide critical guardrails to protect consumers and maintain trust. However, as AI evolves, emerging risks necessitate enhanced governance to ensure responsible and effective implementation, along with ongoing quality control and human oversight of AI models. This includes ensuring data quality, refining model algorithms and managing biases within the models. Developing practical recommendations, publishing best practices and establishing clear guidelines for AI adoption will be crucial steps forward. Fostering collaboration and resilience Collaboration within the insurance sector and with external stakeholders is vital to navigating the complexities of AI adoption. Establishing industry working groups focused on AI ethics, bias mitigation and incident analysis could foster collective knowledge and drive actionable improvements. These groups might also develop benchmarking frameworks to assess AI’s impact, ensuring continuous learning across the sector. Given the rapid pace of technological advancement, industry leaders must collaborate and adopt a ‘learn as you go’ approach to understanding AI as well as implementing and governing it effectively [122]. Existing collaborations through bodies like the ICA and the Actuaries Institute provide a foundation but could be expanded through formalised channels for sharing use cases, best practices and lessons learned. A structured incident-sharing and analysis centre could offer a platform to review adverse AI outcomes, helping insurers refine their practices. Industry standards, such as the Expert Report Best Practice Standard, provide a model for formalising best practices [123]. Partnerships with AI thought leaders, research organisations and think tanks can ensure alignment with cutting-edge practices and trends. Technical collaboration, including federated learning models and data standardisation initiatives, could improve predictive capabilities while protecting sensitive information. Collaboration is also critical in tackling broader societal challenges like climate change and community resilience, with insurers uniquely positioned to influence both policyholders and policymakers through AI-driven risk prediction and mitigation. Regular industry forums could further enhance collaboration, providing a platform for sharing insights, discussing problems and showcasing successful customer-focused AI applications. Where relevant and appropriate to share within competition law, these forums would encourage open dialogue, data‑sharing and collective problem-solving, ultimately supporting the ethical, transparent and effective integration of AI across the insurance sector. Adopting AI strategically and proactively AI’s transformative potential lies in its ability to enhance operational efficiency, improve customer experiences and unlock new avenues for innovation. However, realising this potential requires both strategic vision and the courage to advance beyond initial experimentation. Many organisations, in Australia and globally, remain stalled in proof-of-concept stages, hesitant to fully embrace AI due to ethical, regulatory and operational complexities [53]. While caution is understandable, it is essential for insurers to move forward to avoid missing opportunities for productivity and growth. As AI technology matures, the focus should shift from being a first mover to evolving alongside the technology, delivering meaningful and sustainable value. By investing in innovation and adopting measured, well-informed risks, Australian insurers can build resilient, future-ready operations prepared to harness AI’s full potential while maintaining trust and meeting evolving consumer and regulatory expectations. Building AI skills for a future‑ready workforce The rapid evolution of AI highlights the critical need to build a workforce capable of effectively managing these technologies. Many organisations face a significant skills gap in AI and related technical areas, particularly among leadership roles. To remedy this, insurers can focus on AI literacy and targeted upskilling programs for both leadership and staff. Investing in training opportunities that build technical competencies while fostering a workplace culture that values innovation can help attract and retain top talent. Collaboration within the sector to share resources, expertise and best practices for staff training will further enhance the industry’s collective capability to leverage AI responsibly and effectively. Becoming a trusted partner through transparent AI Building consumer trust is fundamental to the successful adoption of AI in the insurance sector. Clear communication about when ADM is used and on the benefits of AI, such as improved affordability and faster claims processing, can help alleviate consumer concerns. Demonstrating a commitment to privacy requirements and ethical AI use, particularly in ensuring fairness and mitigating bias, further strengthens this trust. Transparency is especially critical when engaging with vulnerable groups. Insurers can prioritise fairness by ensuring that AI-driven decisions are explainable and aligned with customer-centric objectives. Additionally, leveraging AI to support downward pressure on costs of premiums can reinforce the industry’s commitment to the affordability and accessibility of insurance products. Trust-building extends beyond consumers to include regulators and other stakeholders across the insurance value chain. Active engagement with regulators helps establish a grounded understanding of AI applications in insurance, emphasising transparency, responsibility and consumer-centric priorities. This approach positions the insurance industry as a trusted partner to governments, fostering resilience and meeting the evolving needs of communities. Innovating insurance AI presents significant opportunities to expand product offerings and improve operations. For instance, AI liability insurance can address emerging risks related to algorithmic errors and compliance breaches, meeting the evolving needs of businesses and individuals in a digital landscape. Additionally, predictive analytics and dynamic pricing can enhance affordability and accessibility, tailoring insurance products to diverse customer segments. AI also holds significant potential in tackling climate change challenges. By predicting and mitigating risks, insurers can support communities in adapting to environmental pressures while contributing to broader societal objectives. These initiatives not only bolster the industry’s reputation but also establish insurers as proactive leaders in risk management and community resilience. The Australian insurance industry is uniquely positioned to leverage AI to deliver transformative outcomes. By fostering collaboration, investing in skills and prioritising ethical and consumer-centric approaches, insurers can navigate the complexities of AI adoption effectively. Aligning strategies with regulatory requirements and customer expectations will help ensure that AI benefits both the industry and the communities it serves, paving the way for a resilient, future-ready sector. 5 Conclusion This report has examined the opportunities and challenges presented by AI for the Australian insurance industry, highlighting its potential to improve processes, address key challenges, and create better outcomes for consumers. Drawing from insights shared during consultations with the ICA’s AI Working Group, the work has explored how AI can support affordability, accessibility and resilience in the sector. AI has started to reshape the way insurers operate, enabling efficiencies in claims processing, fraud detection and customer service. It also offers opportunities to tackle broader challenges, such as preparing for, and mitigating, the effects of climate change. However, with these opportunities come risks, including concerns around data privacy, fairness and transparency. This report has highlighted the need for insurers to leverage their existing strong management systems and carefully navigate these risks while adopting AI in ways that are transparent, responsible and consumer focused. Looking to the future, the insurance industry has an opportunity to evolve into a more collaborative and customer-centred sector. AI can help deliver personalised and proactive insurance solutions, ensuring products meet the unique needs of consumers while maintaining affordability and fairness. By leveraging technology to support communities during crises and to mitigate risks, insurers can enhance their role as reliable and supportive partners in times of need. The work underscores that the future of insurance can be one of stronger connections between insurers and their customers. By embracing AI thoughtfully, the industry can help create an environment where insurance is more than a financial safety net – it enhances its position as a trusted partner in building security and resilience in people’s lives. Appendix Key cybersecurity and privacy risks associated with AI adoption in insurance. THE RISK DESCRIPTION OF THE RISK AND WHAT IT MEANS FOR INSURERS DATA SENSITIVITY AND PRIVACY Privacy violations To function effectively, AI systems often require large amounts of personal data. If the data are collected or used without obtaining clear and informed consent from customers, this may violate privacy laws such as the GDPR [124]. Moreover, if the training data are not properly anonymised and secured, there is a risk that individuals could be re-identified, leading to privacy breaches [125]. Data leakage Data leakage involves the unintentional exposure of sensitive information to unauthorised parties, posing serious privacy and security risks in fields such as insurance. In AI-driven systems, data leakage can occur due to various vulnerabilities, such as improperly secured data storage, weak encryption protocols or flawed data transfer practices [126]. Unauthorised data access Unauthorised individuals (employees or outsiders) may attempt to access sensitive customer information or proprietary algorithms utilised in AI models. Such breaches can result in regulatory penalties under laws and undermine customer trust. Failure to protect data can result in significant penalties. Under the Australian Privacy Act 1988 (Cth), the Office of the Australian Information Commissioner (OAIC) can impose fines of up to A$50 million or 30% of a company’s adjusted turnover for serious breaches [127]. ADVERSARIAL ATTACKS Backdoor attacks Backdoor attacks embed concealed malicious code or triggers within AI models, enabling attackers to alter outputs under certain conditions [128]. For instance, in insurance AI systems, a backdoor could be integrated to approve fraudulent claims when specific triggers are present in the input data. This could lead to considerable financial losses before the problem is detected. Data poisoning and manipulation Data poisoning attacks occur when attackers intentionally manipulate the training data of an AI model to change its outputs [129]. In the insurance sector, for example, adversaries might introduce false information into claims databases used to train fraud detection algorithms. This can cause the model to mistakenly classify fraudulent activities as legitimate. Code injection Code injection is a security vulnerability where malicious code is introduced into an AI system by exploiting weaknesses, such as insufficient input validation (sanitise or filter user-provided input before processing inputs). In the context of AI-driven chatbots, adversaries may embed malicious code within user inputs to compromise the underlying system, gain unauthorised access to back-end infrastructure or exfiltrate sensitive data [130]. Prompt injection and exploitation Prompt injection attacks exploit vulnerabilities in AI systems by crafting inputs that alter the intended behaviour of natural language–based models [131]. Specifically, in the context of insurance chatbots, attackers may use carefully constructed input phrases to manipulate the AI into disclosing sensitive information or executing unauthorised operations. Model evasion Model evasion refers to the process of crafting adversarial inputs that exploit the vulnerabilities of an AI model, causing it to generate incorrect outputs or classifications without requiring direct modification of the model’s parameters or architecture. In the context of insurance, this tactic may involve fraudsters subtly altering claims data to circumvent fraud detection systems, thereby enabling fraudulent activities while remaining undetected. BIASES (MODEL FAIRNESS) Model tampering to discriminate or exclude groups AI models can be manipulated or unintentionally trained on biased datasets, leading to fairness issues and, consequently, discriminatory outcomes for certain groups [132]. In the insurance industry, this could result in unfairly high premiums or the denial of services to individuals based on characteristics such as race, gender, high risk or socioeconomic background. These biases can harm the affected individuals, expose insurers to potential legal consequences and damage the company’s reputation. THE RISK DESCRIPTION OF THE RISK AND WHAT IT MEANS FOR INSURERS ETHICS AND TRUST Privacy violations during extensive personalisation Extensive personalisation may require collecting and retaining detailed personal data, raising privacy concerns about how these data are used, stored and eventually destroyed (Australian Privacy Principles – APPs 11.1 and 11.2 [133]) in accordance with data retention policies and legal requirements [134, 135]. If insurers use AI to analyse personal habits (e.g. tracking driving patterns) or health data without consent, they risk violating privacy expectations and applicable laws, such as those governing data retention and disposal. Ethics violations due to personalisation Insurers may risk breaching ethical standards and data protection regulations when customers are not adequately informed about collecting and utilising their data for AI-driven personalisation [136]. This encompasses issues such as repurposing data beyond their original intent and sharing data with third parties without explicit consent. Transparency issues due to opaque AI decisions AI models, such as deep neural networks, frequently operate as ‘black boxes’, offering little insight into their decision-making processes [137]. In the insurance sector, this lack of transparency poses significant challenges, especially when customers demand explanations for outcomes such as claim denials or premium determinations. OPERATIONAL SECURITY Insider threats Insider threats involve employees or contractors misusing their access to harm AI systems by tampering with models, stealing data or introducing vulnerabilities [138]. For example, a disgruntled worker might disrupt operations by altering AI models or leaking sensitive information. Insufficient monitoring Lack of proper monitoring can leave suspicious activities and security breaches undetected. AI systems need continuous oversight to identify anomalies, such as unusual data access or deviations in model outputs. Without adequate monitoring, response to cyber incidents may be delayed, amplifying their impact [139]. For insurers, this could lead to prolonged fraud exposure, data breaches, financial losses and regulatory penalties. Model theft and IP violations AI models are valuable intellectual property (IP) and vulnerable to theft through cyber espionage, where attackers steal proprietary algorithms for competitive or illicit use [140]. In industries such as insurance, stolen models can enable competitors to replicate innovations, threatening the original company’s market position. REGULATORY COMPLIANCE Current AI models may violate the evolving regulatory and legal landscape The regulatory frameworks for AI are evolving to address data privacy, algorithmic transparency and anti‑discrimination. Insurance companies using AI without explainability or fairness may violate laws such as the GDPR or the proposed European Union’s AI Act, facing fines, legal issues and costly system overhauls. AI VULNERABILITIES Chatbot exploitations Deepfake technology can produce realistic synthetic audio or text, enabling attackers to impersonate individuals convincingly [141]. In insurance, this could be used to exploit voice-activated systems or chatbots, such as mimicking a policyholder’s voice to approve unauthorised actions or access sensitive data. Hallucinations AI hallucinations occur when models generate plausible but incorrect information [142]. In insurance, this could result in chatbots providing inaccurate policy details, erroneous claims advice or fabricated terms, potentially leading to disputes, operational inefficiencies and reputational damage. References 1. Roser M 2022. The brief history of artificial intelligence: the world has changed fast – what might be next? OurWorldinData.org. 2. Fui-Hoon Nah F, Zheng R, Cai J, et al., 2023. Generative AI and ChatGPT: applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304. 3. Hajkowicz SA, 2024. Artificial intelligence foundation models: industry enablement, productivity growth, policy levers and sovereign capability considerations for Australia. CSIRO. 4. Hajkowicz S, Sanderson C, Karimi S, et al., 2023 Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: a bibliometric analysis of research publications from 1960–2021. Technology in Society, 74:102260. 5. Thesmar D, Sraer D, Pinheiro L, et al., 2019. Combining the power of artificial intelligence with the richness of healthcare claims data: opportunities and challenges. Pharmacoeconomics, 37(6):745–752. 6. Holland C & Kavuri A, 2021. Artificial intelligence and digital transformation of insurance markets. Journal of Financial Transformation, 54: 104–115. 7. Adeoye OB, Okoye CC, Ofodile OC, et al., 2024. Integrating artificial intelligence in personalized insurance products: a pathway to enhanced customer engagement. International Journal of Management & Entrepreneurship Research, 6(3):502–511. 8. Kumar R, 2019. Insurance telematics: risk assessment of connected vehicles. In: Proceedings of the 2019 IEEE Transportation Electrification Conference (ITEC-India). IEEE, p.101–110. 9. Eggert M, 2019. Understanding the acceptance of smart home-based insurances. In: Proceedings of the 27th European Conference on Information Systems (ECIS), paper 159. 10. Royal Commission, 2023. Royal Commission into the Robodebt Scheme: report. Australian Government. 11. ICA, 2024. ABC’s of general insurance. Insurance Council of Australia. 12. ICA, 2024. ICA annual report 2023. Insurance Council of Australia. 13. ICA, 2024. Australia’s insurance industry snapshot: July 2024. Insurance Council of Australia. 14. Eling M, Nuessle D & Staubli J, 2022. The impact of artificial intelligence along the insurance value chain and on the insurability of risks. The Geneva Papers on Risk and Insurance – Issues and Practice, 47(2):205–241. 15. Cappiello A, 2020. The technological disruption of insurance industry: a review. International Journal of Business and Social Science, 11(1):1–10. 16. Moneta A, 2014. The digital insurer: the customer-centric insurer in the digital era. Accenture. 17. Kumar M & Aggarwal A, 2022. Determinants of technology adaption within the framework of TOE: an insurance sector perspective. ECS Transactions, 107(1):3417. 18. Eling M & Lehmann M, 2018. The impact of digitalization on the insurance value chain and the insurability of risks. The Geneva Papers on Risk and Insurance – Issues and Practice, 43(3):359–396. 19. Eckert C & Osterrieder K, 2020. How digitalization affects insurance companies: overview and use cases of digital technologies. Zeitschrift für die gesamte Versicherungswissenschaft, 109(5):333–360. 20. Gartner, 2024. Gartner 2024 hype cycle for emerging technologies highlights developer productivity, total experience, AI and security. Gartner Newsroom, 21 August. 21. Herrmann H & Masawi B, 2022. Three and a half decades of artificial intelligence in banking, financial services, and insurance: a systematic evolutionary review. Strategic Change, 31(6):549–569. 22. IAIS, 2024. 2024 global insurance market report (GIMAR). International Association of Insurance Supervisors. 23. ASIC, 2024. Beware the gap: governance arrangements in the face of AI innovation. Report 798. Australian Securities and Investment Commission. 24. Sezzle, 2022. 2022 annual report. Sezzle Inc. 25. QBE, QBE 2020-02-16 Annual Report. 2020, QBE insurance Group. 26. Cohen J, 2023. How AI is transforming insurance. Taylor Fry. 27. SCOR, 2018. The impact of artificial intelligence on the (re) insurance sector. SCOR. 28. EY, 2014. Case study: how a Nordic insurance company automated claims processing. Ernst & Young. 29. EIOPA, 2021. Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector. European Insurance and Occupational Pensions Authority. 30. Taylor Fry, 2024. RADAR FY2024: our class-by-class insights for insurers. Taylor Fry. 31. Lukic V, Close K, Grebe M, et al., 2023 Scaling AI pays off, no matter the investment. Boston Consulting Group. 32. Mason C, Chen H & Evans D, 2024. AI adoption and firm demand for workers and skills: insights from online job postings. Qeios. 33. Ryseff j, De Bruhl BF & Newberry SJ, 2024. The root causes of failure for artificial intelligence projects and how they can succeed: avoiding the anti-patterns of AI. RAND Corporation. 34. Gray D & Shellshear E, 2024. Why data science projects fail: the harsh realities of implementing AI and analytics, without the hype. CRC Press. 35. Nicholas J & Barrett J, 2024. Why insurance premiums are squeezing Australians and fuelling inflation. The Guardian, 27 February. 36. Roy Morgan, 2023. Australians are increasingly approaching other companies before renewing their household insurance. Roy Morgan. 37. Barone Gibbs B, Kline CE, Huber KA, et al., 2021. COVID-19 shelter-at-home and work, lifestyle and well-being in desk workers. Occupational Medicine 71(2):86–94. 38. Naseeb H & Metwally A, 2022. Outsourcing insurance in the time of COVID-19: the cyber risk dilemma. Journal of Risk Management in Financial Institutions, 15(2):155–160. 39. ABS, 2025. Consumer Price Index, Australia, December quarter 2024. Australian Bureau of Statistics. 40. Saxena S & Kumar R, 2022. The impact on supply and demand due to recent transformation in the insurance industry. Materials Today: Proceedings, 56:3402–3408. 41. Loi M, Hauser C & Christen M, 2022. Highway to (digital) surveillance: when are clients coerced to share their data with insurers? Journal of Business Ethics, 175(1):7–19. 42. Blakesley IR & Yallop AC, 2019. What do you know about me? Digital privacy and online data sharing in the UK insurance sector. Journal of Information, Communication and Ethics in Society, 18:281–303. 43. Lindholm M, Richman R, Tsanakas A, et al., 2022. Discrimination-free insurance pricing. ASTIN Bulletin: The Journal of the IAA, 52(1):55–89. 44. Lockey S, Gillespie N & Curtis C, 2020. Trust in artificial intelligence: Australian insights 2020. KPMG & University of Queensland. 45. Cave S, Coughlan K & Dihal K, 2019. “Scary robots”: examining public responses to AI. In: Proceedings of the 2019 AAAI/ ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, p.331–337. 46. Edelman, 2024. Edelman Trust Barometer: innovation in peril. Edelman. 47. Essential Research, 2024. Essential report topic: AI. Essential Research. 48. Roy Morgan, 2023. Majority of Australians believe artificial intelligence (AI) creates more problems than it solves. Roy Morgan. 49. ACSC, 2024. Annual cyber threat report 2023–24. Australian Cyber Security Centre, Australian Signals Directorate. 50. ACTU, 2024. Insurance companies’ price gouging harming workers and consumers. Australian Council of Trade Unions. 51. Jackson B, 2024. ‘Evidence is clear’: banks and insurers profits grow 46% since March 2021. ABC News, 5 September. 52. ACS, 2024. ACS digital pulse 2024. Australian Computer Society. 53. AIDC & University of Sydney Business School, 2019. Driving innovation: the Boardroom Gap 2019 Innovation Study. Australian Institute of Company Directors. 54. ICA, 2024. The insurance industry talent roadmap – becoming an industry of choice for a rewarding career. Insurance Council of Australia. 55. Gradient AI, 2022. White Paper: how AI can help address the P&C insurance talent gap. Gradient AI. 56. InsuranceNews, 2023. Insurers should shift from loss coverage to risk prevention: Bain. InsuranceNews.com.au, 16 February. 57. Afshar Ali M, Alam K & Taylor B, 2020. Do social exclusion and remoteness explain the digital divide in Australia? Evidence from a panel data estimation approach. Economics of Innovation and New Technology, 29(6):643–659. 58. Alam K & Imran S, 2015. The digital divide and social inclusion among refugee migrants: a case in regional Australia. Information Technology & People, 28:344–365. 59. ABS, 2024. Insights into output of building construction prices. Australian Bureau of Statistics. 60. Muroi M, 2024. Cost of building a home in Australia jumps $100,000 in four years. Sydney Morning Herald, 24 November. 61. CAIF, 2024. Fraud stats: impact. Coalition Against Insurance Fraud. 62. Johnson A, 2024. The key to surfacing and stopping insurance fraud is decision intelligence. Quantexa. 63. Higgins T, Howden M, Lawrence K, et al. 2024. Submission: impact of climate risk on insurance premiums and availability. ANU Institute for Climate, Energy & Disaster Solutions, Australian National University. 64. Sherwood D & Sullivan KB, 2021. Building a more sustainable insurance industry: how carriers can empower CSOs to tackle climate risk, diversity and inclusion, and governance transformation. Deloitte Insights. 65. Khondaker MMH & Momen M, 2024. Improving hurricane intensity and streamflow forecasts in coupled hydrometeorological simulations by analyzing precipitation and boundary layer schemes. Journal of Hydrometeorology, 25(8):1237–1258. 66. Azhar A, 2024. Leveraging AI for precision in hurricane forecasting. HPCwire, 26 November. 67. Lam R, Sanchez-Gonzalez A, Willson M, et al., 2023. Learning skillful medium-range global weather forecasting. Science, 382(6677):1416–1421. 68. Minister for Emergency Management, 2024. Testing AI’s potential for early bushfire detection. Australian Government. 69. AFAC, 2024. Fire prediction simulators. Australian and New Zealand Council for Fire and Emergency services. 70. CSIRO, 2021. New version of Spark to be used nation-wide to model and predict bushfires. CSIRO. 71. Fox-Hughes P, Bridge C, Faggian N, et al., 2024. An evaluation of wildland fire simulators used operationally in Australia. International Journal of Wildland Fire, 33(4):WF23028. 72. Sun J, Bathgate K & Zhang Z, 2024. Bayesian network-based resilience assessment of interdependent infrastructure systems under optimal resource allocation strategies. Resilient Cities and Structures, 3(2): 46–56. 73. Sriram LMK, Ulak MB, Ozguven EE, et al., 2019. Multi-network vulnerability causal model for infrastructure co-resilience. IEEE Access, 7:35344–35358. 74. NHRA, 2023. Modelling impacts of natural hazards on interconnected infrastructure networks. National Hazards Research Australia. 75. Al-Rawas G,. Nikoo MR, Al-Wardy M, et al., 2024. A critical review of emerging technologies for flash flood prediction: examining artificial intelligence, machine learning, internet of things, cloud computing, and robotics techniques. Water, 16(14):2069. 76. Mosavi A, Ozturk P & Chau K-W, 2018. Flood prediction using machine learning models: literature review. Water, 10(11):1536. 77. Smith A, 2024. AI enhances flood warnings but cannot erase risk of disaster. Reuters, 17 October. 78. Spring J, 2024. Just how much can we trust A.I. to predict extreme weather? Smithsonian Magazine, 23 September. 79. Leffer L, 2024. AI weather forecasting can’t replace humans – yet. Scientific American, 9 January. 80. Scattaglia S, Adesman-Navon I & Henderson J, 2024. How AI could transform the insurance industry. KPMG. 81. ICA, 2022. Cyber insurance: protecting our way of life, in a digital world. Insurance Council of Australia. 82. AXA XL, 2024. AXA XL unveils new cyber insurance extending coverage to help businesses manage emerging Gen AI risks. AXA XL. 83. Insurance Asia, 2024. Cyberattacks cost Australian businesses $71,600 on average. Insurance Asia. 84. Ernst & Young, 2023. Valuing nature for a resilient future. Insurance Council of Australia. 85. Root, 2022. 2021 annual report. Root, Inc. 86. eHealth, 2023. Quarterly report for quarter ending September 30, 2023. eHealth, Inc. 87. First American, 2024. First American Financial reports first quarter 2024 results. Financial Corp, First American Financial Corporation, 24 April. 88. Employers Holdings, 2021. 2020 annual report. Employers Holdings, Inc. 89. Bednarz Z & Manwaring K, 2021. Keeping the (good) faith: implications of emerging technologies for consumer insurance Contracts. Sydney Law Review, 43(4):455–487. 90. SANS Institute, 2024. SANS Institute releases 2024 Detection & Response Survey highlighting AI, automation, and cloud security challenges. SANS Institute. 91. Patto J, Ruhal K & Sasse K, 2024. Slow and steady: 2024 Privacy Act Reform Bill released. PwC Australia. 92. Fitzgerald L Cheung K-C & Martin M, 2024. Australian privacy alert: Parliament passes major and meaningful privacy law reform. Norton Rose Fulbright.. 93. Ren S, Deng Y, He K, et al., 2019.. Generating natural language adversarial examples through probability weighted word saliency. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, p.1085–1097. 94. Gong X, Wang Q, Chen Y, et al., 2020. Model extraction attacks and defenses on cloud-based machine learning models. IEEE Communications Magazine, 58(12):83–89. 95. IAIS, 2024. What we do. International Association of Insurance Supervisors. 96. IAIS, 2019. Insurance Core Principles and common framework for supervision of internationally active insurance groups. International Association of Insurance Supervisors . 97. Monetary and Capital Markets Department, IMF, 2012. Australia: insurance core principles – detailed assessment of observance. Country Report No. 12/312. International Monetary Fund. 98. Monetary and Capital Markets Department, IMF, 2019. Australia: Financial Sector Assessment Program – technical note – insurance sector: regulation and supervision. Country Report No. 19/49. International Monetary Fund. 99. NAIC, 2020. National Association of Insurance Commissioners (NAIC) Principles on Artificial Intelligence (AI). National Association of Insurance Commissioners. 100. NAIC, 2023. NAIC Model Bulletin: use of artificial intelligence systems by insurers. National Association of Insurance Commissioners. 101. NAIC, 2024. Implementation of NAIC Model Bulletin: use of artificial intelligence systems by Insurers. National Association of Insurance Commissioners. 102. EIOPA, 2024. About. European Insurance Occupational Pensions Authority 103. EUR-Lex, 2024. General Data Protection Regulation (GDPR). EUR‑Lex, European Union. 104. EDPS, 2002. ePrivacy Directive. European Data Protection Supervisor, European Union. 105. Department of Industry, Science and Resources, 2019. Australia’s AI Ethics Principles. Australian Government. 106. ISO, 2023. ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System. International Organization for Standardization. 107. Congress, 2024 S.3312 – Artificial Intelligence Research, Innovation, and Accountability Act of 2024. Library of Congress, USA. 108. NIST, 2024. Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology, US Department of Commerce. 109. Innovation, Science and Economic Development Canada, 2023. Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Government of Canada. 110. National Artificial Intelligence Centre, 2024. Voluntary AI Safety Standard: guiding safe and responsible use of artificial intelligence in Australia. Australian Government Department of Industry, Science and Resources. 111. National Artificial Intelligence Centre, 2024. Safe and responsible AI in Australia: proposal paper for introducing mandatory guardrails for AI in high-risk settings. Australian Government Department of Industry, Science and Resources. 112. Attorney-General’s Department, 2024. Use of automated decision-making by government. Consultation paper, November 2024. 2024, Australian Government. 113. Parliament of Australia, 2024. Privacy and Other Legislation Amendment Bill 2024. Parliament of Australia. 114. Department of Home Affairs, 2024. Cyber Security Act. Australian Government. 115. The Treasury, 2024. Ensuring access to quality and affordable financial advice. Australian Government. 116. ICA, 2024. Insurance Council welcomes second tranche of advice reforms. Insurance Council of Australia. 117. AHRC & Actuaries Institute, 2022. Guidance resource: artificial intelligence and discrimination in insurance pricing and underwriting. Australian Human Rights Commission & Actuaries Institute. 118. ASIC, 2023. ASIC corporate plan 2023–27: focus 2023–24. Australian Securities and Investment Commission. 119. IAIS, 2022. IAIS report on FinTech developments in the insurance sector. International Association of Insurance Supervisors. 120. Noordhoek D, 2023. Regulation of artificial intelligence in insurance: balancing consumer protection and innovation. The Geneva Association. 121. ASIC, 2024. ASIC warns governance gap could emerge in first report on AI adoption by licensees. Australian Securities and Investments Commission. 122. Ricard P, Li L & Flint A, 2023. Keeping up with generative AI: part 1 – the opportunity for insurers. OliverWyman. 123. ICA, 2024. Insurers take new approach to use of expert reports. Insurance Council of Australia. 124. Zaeem RN & Barber KS, 2020. The effect of the GDPR on privacy policies: recent progress and future promise. ACM Transactions on Management Information Systems, 1(1):1–20. 125. Abadi M, Chu A, Goodfellow I, et al., 2016. Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. Association for Computing Machinery, p.308–318. 126. Alneyadi S, Sithirasenan E & Muthukkumarasamy V, 2016. A survey on data leakage prevention systems. Journal of Network and Computer Applications, 62:137–152. 127. OAIC, 2024. Chapter 7: Civil penalties – serious or repeated interference with privacy and other penalty provisions. 2024, Office of the Australian Information Commissioner. 128. Salem A, Wen R, Backes M, et al., 2022. Dynamic backdoor attacks against machine learning models. In: 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P). IEEE, p.703–718. 129. Huang WR, Geiping J, Fowl L, et al., 2020. MetaPoison: practical general-purpose clean-label data poisoning. Advances in Neural Information Processing Systems, 33:12080–12091. 130. Sanchez DT & Sartagoda RS, 2024. Security threats and security testing for chatbots. In: Rawat R, et al. (eds), Conversational artificial intelligence. Wiley-Scrivener, p.303–318. 131. Shibli AM, Pritom MMA & Gupta M, 2024. AbuseGPT: abuse of generative AI chatbots to create smishing campaigns. In: 12th International Symposium on Digital Forensics and Security (ISDFS). IEEE, p.1–6. 132. Du M, Yang F, Zou N, et al., 2020. Fairness in deep learning: a computational perspective. IEEE Intelligent Systems, 36(4):25–34. 133. OAIC, 2022. Read the Australian Privacy Principles. Office of the Australian Information Commissioner. 134. OAIC, 2022. Retention and deletion of personal information collected during COVID-19. Office of the Australian Information Commissioner. 135. Chellappa RK & Sin RG, 2005. Personalization versus privacy: an empirical examination of the online consumer’s dilemma. Information Technology and Management, 6(2):181–202. 136. Andreotta AJ, Kirkham N & Rizzi M, 2022. AI, big data, and the future of consent. AI & Society, 37(4):1715–1728. 137. Samek W, Montavon G, Vedaldi A, et al. (eds), 2019. Explainable AI: interpreting, explaining and visualizing deep learning. Springer. 138. Zhang J, Gu Z, Jant J, et al., 2018. Protecting intellectual property of deep neural networks with watermarking. In: Proceedings of the 2018 on Asia Conference on Computer and Communications Security. Association for Computing Machinery, p.159–172. 139. Cappelli DM, Moore AP & Trzeciak RF, 2012. The CERT guide to insider threats: how to prevent, detect, and respond to information technology crimes (theft, sabotage, fraud). Addison-Wesley. 140. Beach PM, Mailloux LO, Langhals BT, et al., Analysis of systems security engineering design principles for the development of secure and resilient systems. IEEE Access, 7:101741–101757. 141. Juefei-Xu F, Wang R, Huang Y, et al., 2022. Countering malicious deepfakes: survey, battleground, and horizon. International Journal of Computer Vision, 130(7):1678–1734. 142. Xu W, Agrawal S, Briakou E, et al., 2023. Understanding and detecting hallucinations in neural machine translation via model introspection. Transactions of the Association for Computational Linguistics, 11:546–564. For further information CSIRO Data61 Dr Alexandra Bratanova alexandra.bratanova@data61.csiro.au csiro.au/data61