Underwriters using AI face ethical dilemmas

Automated underwriting solutions can raise questions around transparency and accountability

By Jason Contant | September 15, 2025 | Last updated on September 15, 2025

Image: iStock.com/CHOLTICHA KRANJUMNONG

Canadian tech companies are increasingly developing and using artificial intelligence (AI), bringing new risks, exposures and ethical considerations for underwriters.

For example, automated underwriting solutions can raise questions around transparency and accountability, says Sam Chapman, CFC's Canada technology team leader.

"As a classic example, especially when we're looking at direct-to-consumer solutions, we've got AI that is effectively making a decision as to whether someone can be insured or not insured, and often non-renewing accounts without any human interaction," he says in an interview. "And you think of the implications on that policyholder — that individual may not be able to get, let's say, auto insurance. They may not be able to get to work, for example."

Or, from a home insurance perspective, AI's decision could affect a person's ability to get mortgages and loans.

There's also a concern that humans could "hide behind that AI technology and…just pass the burden onto the AI," Chapman says. "So, I think accountability, when we're looking at these solutions and who the end user is, and whether that end user is potentially a vulnerable end user, is really important."
AI is starting to pop up in every industry vertical, Chapman says. For example, agri-technology solutions can be retrofitted onto the front of a tractor to identify and pick ripe apples, or to identify weeds across fields. Large utilities may use the technology for pipeline integrity checks or predictive maintenance in oil and gas fields. From a NatCat perspective, cameras strapped onto helicopters can be flown along power lines during ice storms to identify fallen trees and other issues without having to send crews out.

Privacy concerns

AI is even being used for HR functions, where large language models can vet huge numbers of resumes and extract key pieces of data. However, that can lead to fears of AI replacing human talent, as well as privacy concerns. It's important that AI models adhere to privacy regulations like PIPEDA, Canada's Personal Information Protection and Electronic Documents Act, Chapman says.

Take wearable tech that analyzes heart rate and blood pressure as an example. "Are they utilizing it in the format that they said that they would be?" Chapman asks. "Are they sharing that with third parties?

"We'll typically see that they've collected that data, and they can generate huge amounts more revenue by selling that data…," he says. "When we're looking at technology that's collecting data en masse, are they doing it within local regulations? And most technology rarely resides in a single country, so we've also got to think of considerations, whether that's south of the border…[or] more of a global scale."

From an insurance standpoint, it's important to have affirmative AI cover, meaning the policy explicitly states that the risk is covered.
If coverage is silent, it can be difficult to understand what's covered or excluded.

"Has it been excluded when you're looking at combined policies?" Chapman asks. "Has it been excluded from the cyber element of their coverage, or is there any kind of restrictions in cover, or limitations in cover, through that E&O insuring clause?

"For CFC at the moment, building that affirmative language certainly is the direction to go as opposed to having specific policies that kind of just pick up AI exposure…"

It's also important to avoid tarring all AI with the same brush, classifying it as a single activity with uniform exposures, Chapman says.

"Not all AI risks are created equal…," he says. "What AI technology is being utilized [and] who's utilizing it really allows us to segment those exposures and ensure that we are not applying broad rates to the whole class…"

Jason Contant
Jason has been an award-winning journalist with Canadian Underwriter for more than a decade, including the past three years as associate editor and, before that, seven years as digital editor.