How commercial clients can install AI guardrails

By David Gambrill | March 26, 2025 | Last updated on March 26, 2025
2 min read

When underwriters assess a commercial client’s cyber risk, they look for guardrails or controls that show insured clients are trying to prevent system infiltrations that can lead to losses.

For example, to reduce a client’s exposure to cyberattacks, cyber underwriters insist on a combination of controls, ranging from multifactor authentication to endpoint detection, before agreeing to cover an organization’s cyber risks.

So, before offering cover for AI-related losses, what controls will insurers insist on seeing?

Ruby Rai, cyber practice leader (Canada) at Marsh McLennan, says it’s too early to tell what the guardrails will look like for the ‘third wave’ of AI, in which machines act as agents that advise businesses on actions. In the current state, well-drafted cyber insurance products are broad and contemplate emerging or developing technology risks.

Reliance on any technology is part of the exposure, and so guardrails such as governance frameworks will be an important part of risk management efforts for any organization, she says.

Adding AI

As AI adoption permeates business processes, the goalposts for privacy and security controls will also shift. Mature organizations will be able to demonstrate risk reduction by adopting appropriate governance frameworks and staying current on developing AI regulations globally.

“Even looking into the third wave of AI, I would say there is definitely some level of confidence that the existing [cyber] products can address [this exposure], perhaps with some tweaking for specific organizations that are actually creating these agents and using it [more often] than most other organizations,” she says.

“For some organizations, [cyber policies] should be customized. But for most, if they are going to be consumers of that product, demonstrating continued improvements in cyber security controls, and adopting appropriate governance culture within the organization would help.”

A corporate governance framework is important, she adds.

“It’s like the privacy or security framework. I would say the interesting part with AI is you have many more stakeholders than just legal and IT to make sure that framework is [comprehensive]. It comes to governance. You’re looking at the C-suite, you’re looking at board [to develop AI oversight policies].”

This article is excerpted from one that appeared in the April-May 2025 print edition of Canadian Underwriter. Feature image by iStock/eugenesergeev

David Gambrill