How insurers can mitigate AI-associated risks

Reliance on data by Canada's P&C insurance industry highlights the need to reduce risk exposures associated with artificial intelligence

By Jason Contant | October 28, 2025 | Last updated on October 28, 2025 | 2 min read

Photo by iStock/sankai

Canada's property and casualty insurance industry relies on data, and the requirement for accurate data cannot be overstated. This is why the industry needs to take a close look at exposure risks associated with artificial intelligence (AI), particularly biased, inaccurate, or ill-suited AI outputs, cyber experts tell Canadian Underwriter.

AI, like humans, is influenced by the information it consumes. It can incorporate biases into its models or even develop new biases over time.

"For example, for decisions related to things like lending, hiring or healthcare, the company can be held liable," says Leanne Taylor, senior manager of technology, cyber and professional lines at Sovereign Insurance.

In some cases, technology companies used AI to vet job candidates and inadvertently incorporated gender biases into their pre-vetting process, preventing qualified candidates from reaching the human interview stage, adds Sam Chapman, CFC's Canada technology team leader.

"Ultimately, the quality of the data that goes into these models is the limiting factor," he says.

In one high-profile case, Samsung software developers pasted source code into ChatGPT in 2023, effectively leaking corporate secrets and potentially exposing the material for use by competitors. Others have turned to ChatGPT or other large language models for legal advice.

"There is also the risk of undermining professional obligations," says Jaime Cardy, senior associate at Dentons Canada LLP.
"For example, when insurance professionals rely on AI-generated recommendations without fully understanding how those outputs are produced, they risk breaching their duty to provide informed and transparent advice."

Since AI systems rely on large data sets, breaches or privacy violations are potential exposures. "They can lead to very costly claims as a result of regulatory fines, lawsuits and reputational harm," Taylor says.

Can insurers even underwrite AI?

Yes, but not in the same way they underwrite traditional cyber risks. Insurers need to look at how the AI was trained, what data sources were used, and whether controls were in place to prevent bias and errors, Taylor says.

They also need to consider a company's governance structure. "Who is responsible for monitoring the AI and how quickly can a company respond if something [goes] wrong?" she asks. "In addition, what role does the AI play in the business? How is it deployed? Is it internal, or is it external facing? Is it used for real-time controls of the business with customers, or is it just used for internal processes and controls?"

For example, an AI system could be breached and manipulated to provide incorrect advice or information to a client. "What is the worst-case scenario?" Taylor asks. "Traditional underwriting might focus mainly on network security, but with AI, you need to have a wider lens that will also consider things like ethics and compliance.
And underwriters really need to ensure they're asking the right questions to properly underwrite the risk."

Jason Contant
Jason has been an award-winning journalist with Canadian Underwriter for more than a decade, including the past three years as associate editor and, before that, as digital editor for seven years.