What’s on Dec? | Episode 23 | The shift to underwriting workbenches

Will Harnett, Head of Product Strategy at Send Technology, discusses the shift from policy admin systems (PAS) to underwriting workbenches, as well as agentic AI and intelligent quoting. September 16, 2025. Stream this episode and others in our series on Spotify!

Featuring: Will Harnett, Head of Product Strategy, Send

Underwriting workbenches, which offer a single pane of glass dashboard for underwriters, are building out an adjacency to policy admin systems (PAS). This evolution of the underwriting process allows workbenches to add insights and augment an underwriter’s day-to-day work, explains What’s on Dec? guest Will Harnett, Head of Product Strategy at Send Technology. In our latest podcast, Will describes the move from an unstructured data process to a structured one, and shares how workbenches bring everything together to provide underwriters a full picture of the data. He also dives into agentic AI, which uses machine learning models to mimic human decision-making with limited supervision. Will discusses where he sees the biggest opportunities for this type of AI in underwriting. He rounds out the discussion talking about speed to quote and intelligent quoting from broker and carrier perspectives.

Audio transcript

Intro | Jason Contant: Hi, I’m Jason Contant, associate editor at Canadian Underwriter and host of our podcast series, “What’s On Dec?”. This episode features my conversation with Will Harnett from Send Technology about underwriting workbenches. We explored the shift from PAS, or policy admin systems, to these workbenches, various aspects of agentic AI, and the status of speed to quote. This episode is sponsored by AM Best.
Jason Contant: Today we’re gonna be talking about various aspects of underwriting systems, from modernization to agentic AI and intelligent quoting, and everything in between. We’re also gonna talk about how the share of the IT wallet is changing, how things are shifting in terms of policy admin systems and underwriting workbenches. So on that note, maybe, Will, you can start us off by giving us a brief overview of what these underwriting workbenches are, and how they’re different from the traditional policy admin systems used by carriers. Will Harnett: How would I describe the workbench? Probably the best way is to look at the problems it’s trying to solve for. If you look at traditional policy admin systems, a lot of the processing and data capture that takes place on the PAS side happens very late in the process. Maybe it’s at the quoting phase. Maybe it’s beyond quoting, at the bound-risk phase. But even at the quoting phase, it’s probably capturing account broker information, basic insured information. Sometimes rating is done in a PAS, sometimes it’s done off to the side in an Excel rater, so rating may not even be done there; it could be just the price being entered into the policy admin system. But if you look at those steps, that’s very far down the journey of the underwriting cycle, the underwriting process, the risk lifecycle. So that doesn’t massively help the underwriters in their day-to-day job. Putting aside any prospecting they may do and any CRM-type activity, if you take a transactional view of the world, their day starts when the submission comes in: data is extracted, and they gather risk data. Risk data may be different to exposure data.
Exposure data may be the data you’d use as part of your rating to drive the technical and final price, but as part of the underwriting process, you’ll look holistically at the insured and the relevant data. That world pre-quote, up to quote and beyond, but especially pre-quote, is very much the area where the workbench is bringing help to the underwriting process. Now, does it stop at quote? Some do. Others, the Send workbench for instance, don’t, though there probably aren’t many out there that go the full end-to-end lifecycle. So the underwriter remains in that single pane of glass, so to speak, for the underwriting risk journey, through to quote, through to bound. And then if midterm adjustments come in, renewal processing, et cetera, it’ll sit around the policy admin system so it can support all the various API feeds into the backend PAS. So that’s how I’d probably describe the difference: they live together, side by side. What I would say is, some of the capabilities traditionally completed in the PAS have been extracted out a little bit. Forms management, doc generation, the policy wording side of things: people are looking at forms-as-a-service capabilities. You’re looking at specialized raters. They’re coming out of the policy admin system. So the PAS going forward, as it is today, it’s fine. But there are opportunities to skinny down some of its capabilities as more of a service-oriented architecture takes hold and people choose best-of-breed capabilities to solve whatever their needs are. And a final note: where that is the case, the workbench is the enabler that plugs all these systems together, so, again, underwriters aren’t jumping out from one system to the other. Jason Contant: Yeah, okay.
So you kind of did answer this question, you painted a picture of it, but I’m trying to get a sense of the evolution of this underwriting process, especially in the world of complex commercial insurance. Will Harnett: Yeah. Jason Contant: Where do you see tools like these playing a part in this? Will Harnett: Well, in a way, the world where the technology estate was really focused on the PAS points to what tools were there for the underwriter. There weren’t many. So what did they do? They’d still get email, they’d still get Word docs, they’d still get Excels, et cetera. So they’d be operating at their desk, on their desktop, underwriting using this data in a very siloed, unstructured way. It’s still digital data, but it’s not sitting in enterprise data stores; it’s not sitting in enterprise systems. So there’s very little you can actually do with that data, except the underwriter will copy and paste, will write up Word doc reports, will swivel and re-enter that data into a rater, then swivel again and enter it a third time into a policy admin system. That has been the world of complex insurance, and for good reason, because the risks aren’t homogeneous: there’s a lot of variety, a lot of complexity in the risk, it goes with the name. But in terms of where that’s evolving to: it evolves from unstructured process, which is what I just described, to structured process, which is saying, how do I bring order to the day from end to end, so I can track what’s going on, create tasks, and manage the completion of tasks? And from structured process, the next step is structured data and what you do with that structured data.
So that’s the evolution: from unstructured process to structured process, where we’re bringing in good disciplines around consistent, standardized processes across the board. You’re bringing in process measurement and all those good things, then expanding the process to look not at a widget going across the factory floor, so to speak, but at the data associated with that risk, adding insights to it and augmenting the underwriter’s day-to-day work. So that’s a little evolution of where things have come from and are going to. Jason Contant: Okay. Yeah, and you mentioned the unstructured data aspect, and double and triple entry, as we all know, is a big thing in the industry, right? So what kind of data is being used for these systems, and how do you confirm and validate the data? Will Harnett: Yeah, classically it’ll start with the submission. But a submission application form could be as skinny or as broad as the line of business and complexity of risk demand. So again, traditionally, how much structured data was captured? A very skinny amount. But now, going into more of a structured-data operating state, the more data you can pull out of your submission document or application into a structured format in your systems, suddenly there’s a lot more you can do. So the first question is, how do you extract the data? Well, there’s a set of data extraction capabilities that live around that space, a little cottage industry in itself. Then your next question: is that data correct, and is that data complete?
So you have an opportunity: now that you’ve landed it and you know who the insured is, you can go out to third-party data services and start augmenting and validating, enriching whatever level of data you received in the submission application to build out a bigger picture of the insured and the underlying exposure. Jason Contant: Yep. So many people have heard of AI, but not everyone understands agentic AI. How do you define it in underwriting? Will Harnett: To define agentic, maybe I’ll first define AI agents, to give a little colour in the context of the underwriting process, and then use that to describe the agentic AI world. AI agents tend to be standalone capability solutions that solve discrete problems. So I guess I’ll answer by giving you examples. If you look at areas like data extraction, you could have an AI agent to support a data extraction process. These are all obviously phenomenal use cases for AI, in case that’s one of your questions later on. Or take summarization of documents: a big challenge for underwriters in the complex space is they’re given volumes of engineering reports and loss run reports, et cetera. So, an agent to run summarization. Another classic one: so much underwriting is written off industry class codes. You’ll ask, what’s the insured’s class of business, and be able to go out there and actually determine a North American NAICS code, to say, okay, it’s this six-digit code. But really, is that the level of underwriting that you need? Sometimes you’ve got to look beyond that. So these are standalone agents that solve discrete problems.
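The standalone agents Will describes can each be pictured as a small, single-purpose unit of code. Here is a minimal sketch in Python; the function names and prompts are hypothetical, and the `llm` callable stands in for whichever model client a carrier actually uses:

```python
from typing import Callable

# Each "AI agent" is a small, single-purpose unit: it takes one input,
# calls a model, and returns one result for the underwriting process.

def summarization_agent(llm: Callable[[str], str], document: str) -> str:
    """Condense a long engineering or loss-run report into a short brief."""
    prompt = f"Summarize the key underwriting facts in this report:\n{document}"
    return llm(prompt)

def class_code_agent(llm: Callable[[str], str], business_description: str) -> str:
    """Map a free-text description of the insured's business to a NAICS code."""
    prompt = (
        "Return only the six-digit NAICS code that best matches this "
        f"business description:\n{business_description}"
    )
    return llm(prompt).strip()
```

Each function stands alone and solves one discrete problem; the agentic framework Will turns to next is the layer that decides when each one runs.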
Now, an agentic AI framework is where you can start tying these AI agents together in a dynamic way. Suddenly you’re creating a very powerful operating environment: where one AI agent ends, the framework you have in place, the agentic component, can trigger the next AI agent to run and complete what it’s doing. So it really is a powerful tool to create that process framework and drive even more efficiency in how you run your world. Jason Contant: So I’m thinking of where you see the biggest opportunities for this now, for agentic AI in underwriting. Would it be at the triage level, or quoting, or decision-making, for example? Will Harnett: Yeah, I think it’s gonna start with discrete pain points. What do the large language models do really well at the moment? Data extraction, summarization, and being able to collate the right level of information from vast domains of data sources. So what does that look and sound like? That sounds like the front end of the process. We get submission documents in, and we can run various agents over the documents coming in. We can then use the information we’ve pulled down, go out to the web, pull in and augment further information, and then summarize it into the right level of data. So the way I’d be thinking about agents is to look at discrete problems. If you look across the entire value chain of the risk lifecycle, you could pick out opportunities along the way. If you go all the way to the end, from a renewal-process perspective, how would you look at renewals? A renewal is essentially new business, except you’ve got a load of information on the insured already. But maybe your underwriting rules have changed, or you essentially have to re-underwrite that risk. So what will you do?
You will go out and assess again any litigation, or any additional properties added to the insured, or whatever; there’s an opportunity to rerun the new-business rules over that renewal that comes through. So suddenly there’s a renewal agent. If you look at every aspect of the process, there are going to be discrete opportunities to deploy an agent. And I guess where we see it, it’s essentially going to be a library of agents that you leverage at different points in the process. And that’s where the agentic component comes in, to say, how can we dynamically string those agents together to produce a streamlined process? Jason Contant: Yeah. Well, I’m thinking too, if you think about ChatGPT and tools like that, a lot of the time they can give you bad or incorrect information, right? So what happens, in the case of agentic AI, if it gives you bad information? Who’s responsible in that case? Will Harnett: Well, I think, ultimately, the underwriter has to stand over whatever augmented data they’re adding to the mix. Think about the classic process as it is today: if there’s an inaccuracy in the application, who’s at risk? That’s probably the broker’s E&O at risk there. So we’ve got the broker data coming in from the insured; if the insured misrepresented, that’s a different story altogether. But where the underwriter is harvesting and enriching with third-party data, I think the responsibility lies on the carrier side: if they’ve augmented the data they received from the broker, and the broker data is accurate, then it would reside with the underwriter and the carrier. Now, are there tools and ways you can improve your confidence level in the data?
There absolutely are. You talk about ChatGPT, and the response it gave you wasn’t quite accurate. But what happens if you took that same question and hit it against three other LLMs, and they gave you answers back, and you started looking at the deviations, running a view over how they’ve differed or where they’re similar? Then suddenly you can start triangulating, and have a higher probability or confidence level in what you get back. So there are ways and means of trying to mitigate some of those hallucinations, some of those challenges. But ultimately, while there’s definite progress and opportunity in that space, the responsibility will lie with the underwriters. Jason Contant: Shifting gears here, but in terms of intelligent quoting and speed to quote, where are we at now, and what do you think still needs to be done? Will Harnett: Well, naturally enough, if there’s a lag between a submission coming in, it maybe going offshore for a data extraction process, landing with an underwriting assistant who reviews whether it’s within appetite, and, if it is, maybe going over to the underwriter, who then starts opening up the files, et cetera, you can see the lag time in turnaround of quotes. If you look at it through a broker lens, quote turnaround time is a very critical metric: they want to get submissions out, get quotes back, and get them turned around fast. That is the broker incentive. The carrier, obviously, wants to see as many submissions as possible, but risk assessment is pretty important from their side of things. We actually conducted a recent survey of north of 60 senior insurance people on the carrier side, the broker side, and the MGA side.
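The triangulation idea described above, asking the same question of several models and comparing the answers, can be sketched in a few lines. Everything here is hypothetical: the models are passed in as plain callables, and the agreement threshold is arbitrary:

```python
from collections import Counter
from typing import Callable, Iterable, Optional, Tuple

def triangulate(models: Iterable[Callable[[str], str]], question: str,
                min_agreement: float = 0.5) -> Tuple[Optional[str], float]:
    """Ask the same question of several LLMs and compare the answers.

    Returns the majority answer and its agreement ratio, or (None, ratio)
    when no answer clears the threshold, flagging the item for human
    (underwriter) review instead of being auto-accepted.
    """
    # Normalize so trivially different phrasings ("Yes" vs "yes ") agree.
    answers = [m(question).strip().lower() for m in models]
    top, count = Counter(answers).most_common(1)[0]
    ratio = count / len(answers)
    return (top, ratio) if ratio >= min_agreement else (None, ratio)
```

The design choice matches the point about responsibility: when the models disagree, the sketch returns nothing rather than a guess, so the final call stays with the underwriter.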
And the feedback was really interesting: something like 79% of brokers said carriers need to better define their appetite and capacity, and another very high percentage pointed to inefficiencies around triaging. So the brokers are up in the 70-to-80% range of dissatisfaction around definition of appetite and turnaround time. On the carrier side, up to 70% of carriers expect more complete risk data, or improved submission quality, from the brokers. So there’s a bit of a chicken-and-egg there. What comes first? Better data from the brokers will help deploy faster turnaround times, but the carriers and the underwriters need to better define their appetite. So I think that’s where we’re at. Where we need to get to, or where we’re getting to, is a bit of an inflection point: those that adopt and those that don’t. Those that adopt move to a more data-driven underwriting strategy, adopting structured data, digitized at source. If you can digitize the data early in the process, there’s an awful lot you can do, whether through basic automation and process, leveraging APIs and third-party data augmentation, or leveraging AI agents. You can absolutely reduce the cycle time of “is it within appetite?”, aggregate all the information, and speed up the quoting process. But what’s also important for brokers is the quick decline. Even if you take in a raft of submissions and say, out of appetite: decline, decline, decline, the broker is actually happy with that. They don’t have to waste time chasing you to ask what you think of it, et cetera.
So when they say carriers better defining their appetite, that will help them get faster declines, which will help them move on to other markets where they can actually get a quote back. I think moving to a data-driven world is absolutely going to speed up that quote turnaround time. It’s an important data point we look at with our customers, and it’s going to be an ever-increasing area of focus: where brokers define SLAs, the carriers that can beat their competitors in getting quotes back will see more of the better submissions, and those that don’t will see the not-so-good submissions, so we’ll see a bit of a divergence there. Jason Contant: Yep. It reminds me a little of AI in general. A brokerage CEO once said to me, you’re gonna be left behind if you don’t get in the game early, and it makes me think of this now: the ones that don’t get in the game will be left behind, and there’ll be another divergence that way, right? Will Harnett: No, 100%. And if you go back to your first question about what’s a workbench: a workbench is essentially a data orchestration capability. And regardless of whether you buy the Send technology solution or go out there and try to do something differently, my view is that every carrier will need an orchestration capability, a data orchestration layer, to manage structured data, bring in unstructured data, get it changed to structured and drive the process, especially in this AI-driven world. So I couldn’t agree more: there will be that divergence between the haves and the have-nots in this space. Jason Contant: Yeah, and as it stands now, is the data pretty much still siloed, or can you offer insight across the portfolio? Will Harnett: Yeah, it’s an interesting one.
Back in the day, I spent 20 years with a carrier on both sides of the pond, in the London market and the U.S.; I spent 10 years in New York. And if you look at some of the portfolio assessments and product reviews, they tended to run almost a quarter in arrears, and at a level of granularity that was not very detailed. You know why? Back to your opening question: in a policy admin system, which tended to be the source of structured data, the granularity of the data was not very fine-grained. It wasn’t really the underwriting data, and the underwriting data is where you can start correlating loss to interesting events, to what’s driving certain behaviour, and things like that. So the more you move to a structured-data operating environment with an orchestration capability, you can feed that data downstream into your actuarial teams, your risk teams, your CUO teams, to do your portfolio analysis, but at a finer grain and in a more real-time fashion, which is what we call here at Send the underwriting cycle. Then the CUO can, in a faster timeframe, look at the shape of the portfolio. Does it align to their underwriting strategy, in terms of attachment points and limit profiles and class concentrations? They can get a better view of the shape of their portfolio. And if they want to enact change on that portfolio, they can feed that back to the line underwriter: we’ve tweaked the underwriting rules in the system, you can only go to a certain level for a certain concentration of class of business, or we’d like to bring our limit profile down; we no longer want to offer five-mil limits, we’ll offer three-mil limits, et cetera.
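That feedback loop, portfolio review tightening the rules the line underwriter works under, can be pictured as a small rule table applied risk by risk. The rule names and thresholds below are entirely hypothetical:

```python
# Portfolio review updates the rules; transactional underwriting reads them.
# A deliberately tiny sketch of that loop, with made-up thresholds.

RULES = {
    "max_limit": 5_000_000,           # per-risk limit currently allowed
    "max_class_concentration": 0.25,  # share of the book in any one class
}

def check_risk(limit: float, class_share: float, rules: dict) -> list:
    """Return the rule breaches (if any) for a single risk at quote time."""
    breaches = []
    if limit > rules["max_limit"]:
        breaches.append("limit above allowed maximum")
    if class_share > rules["max_class_concentration"]:
        breaches.append("class concentration above allowed maximum")
    return breaches

# Portfolio review decides to pull the limit profile down from 5m to 3m;
# every subsequent risk is checked against the tightened rule.
RULES["max_limit"] = 3_000_000
```

The point of the sketch is only the direction of data flow: portfolio analysis edits the shared rules, and each individual risk is then underwritten against the updated table.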
So it starts at that individual, risk-by-risk underwriting with more information on the underlying risk, feeding into your portfolio assessment and analysis, which then feeds back into your transactional underwriting and creates that cycle of the underwriting process. Outro | Jason Contant: That wraps up today’s episode sponsored by AM Best. We hope you enjoyed the discussion. Thanks for tuning in. We’ll see you next time on “What’s On Dec?”
What’s on Dec? | Episode 23 | The shift to underwriting workbenches Will Harnett, Head of Product Strategy at Send Technology, discusses the shift from policy admin systems (PAS) to underwriting workbenches, as well as agentic AI and intelligent quoting. September 16, 2025 Stream this episode and others in our series on Spotify! Featuring: Will HarnettHead of Product Strategy, Send Underwriting workbenches, which offer a single pane of glass dashboard for underwriters, are building out an adjacency to policy admin systems (PAS). This evolution of the underwriting process allows workbenches to add insights and augment an underwriter’s day-to-day work, explains What’s on Dec? guest Will Harnett, Head of Product Strategy at Send Technology. In our latest podcast, Will describes the move from an unstructured data process to a structured one, and shares how workbenches bring everything together to provide underwriters a full picture of the data. He also dives into agentic AI, which uses machine learning models to mimic human decision-making with limited supervision. Will discusses where he sees the biggest opportunities for this type of AI in underwriting. He rounds out the discussion talking about speed to quote and intelligent quoting from broker and carrier perspectives. Audio transcript Intro | Jason Contant: Hi, I’m Jason Contant, associate editor at Canadian Underwriter and host of our podcast series, “What’s On Dec?”. This episode features my conversation with Will Harnett from Send Technology about underwriting workbenches. We explored the shift from PAS, or policy admin systems, to these workbenches, various aspects of agentic AI, and the status of speed to quote. This episode is sponsored by AM Best. Jason Contant: Today we’re gonna be talking about various aspects of underwriting systems from modernization to agentic AI and intelligent quoting, and everything in between. 
We’re also gonna talk about, I guess, how the share of the IT wallet is changing, you know, how things are shifting in terms of policy admin systems and underwriting workbenches. So on that note, maybe, Will, you can start us off by giving us a bit of a brief overview of what these underwriting workbenches are, and how they’re different than traditional policy admin systems used by carriers. Will Harnett: How would I describe the workbench? Probably the best way to describe it is a little bit, what are the problems it’s trying to solve for? If you look at the traditional policy admin systems, a lot of the processing and data capture that takes part as part of the PAS side of things, is it’s very late in the process. Maybe it’s at the quoting phase. Maybe it’s at beyond quoting, and it’s just bound-risk phase. But even at the quoting phase, it’s probably capturing, you know, account broker information, basic insured information. Sometimes it’s the case rating is done in a PAS, sometimes rating is done off piece in an Excel rater. So rating may not even be done. So it could be the price just being entered into the policy admin system. But if you look at those steps, that’s very much, down the journey of kind of the underwriting cycle or the underwriting process or the risk lifecycle journey. So that doesn’t really massively help the underwriters in terms of their day-to-day job. So if you think about their day starts when the submission comes in, or putting aside any prospecting they may do and any kind of CRM-type activity. But if you look at a transactional view of the world, their life starts, submission comes in, data extracted, you know, they gather risk data. Risk data may be different to exposure data. Exposure data may be the data that you’d use as part of your rating to drive the technical and final price. But as part of the underwriting process, you’ll look holistically at the insured and the relevant data. 
So that world, pre-quote up to quote and beyond, but that world pre-quote, is very much the area that the workbench and the problems or the help that it’s bringing to the underwriting process. Now, does it stop at quote? No, some do, say, for instance, the Send workbench. Probably there aren’t many out there that go the full end-to-end lifecycle. So, the underwriter remains in that single kind of pane of glass, so to speak, for the underwriting risk journey, through to quote, through to bound. And then if midterm adjustments come in, renewal processing, et cetera, it’ll sit around the policy admin system so it can support kind of all the various API feeds into the backend PAS. So that’s how I probably describe the difference. They live together, side by side. What I would say is, traditionally, some of the capabilities that are completed in the PAS, they’ve been kind of extracted out a little bit. So even forms management, doc generation, or looking at the policy wording side of things, people are looking at forms as a service capabilities. You’re looking at specialized raters. They’re coming out of the policy admin system. So if you look at kind of the PAS going forward, as it is today, it’s fine. That’s not to say that it won’t kind of go as it does. But there’s opportunities to kind of skinny down some of the capabilities as more of a service-orientated architecture exists, and people choose best-of-breed capabilities to solve whatever their needs are. And just a final note, where that is the case, that workbench is the enabler to kind of plug all these systems together. So, again, they’re not jumping out kind of from one system to the other and so forth. Jason Contant: Yeah, okay. So you kind of did answer this question in a sense, you kind of painted a picture of it, but I’m trying to get a sense of sort of the evolution of this underwriting process in the world of complex commercial insurance especially. Will Harnett: Yeah. 
Jason Contant: Where do you see tools like these playing a part in this? Will Harnett: Yeah, well, in a way, it kind of, the world where the technology estate was really focused in on that PAS, kind of points to what tools were there, you know, for the underwriter. There weren’t many. So what did they do? They ended up kind of, they’ll still get email, they’ll still get Word docs, they’ll still get Excels, et cetera, et cetera. So they’ll be operating kind of at their desk, in their desktop, underwriting using this data in a very kind of siloed, unstructured way. It’s still digital data, but it’s not sitting in enterprise data. It’s not sitting in enterprise systems. So there’s very little that you can actually do with that data, except for the underwriter will copy and paste, will write up Word doc reports, will, swivel and reenter duplicate data entry into a rater. They’ll swivel again and they’ll triplicate data enter into a policy admin system. So, that has been the world of complex insurance, and for good reason because the risks aren’t homogeneous. Like there’s a lot of variety, there’s a lot of, you know, complexity in the risk, it goes with the name. But in terms of where is that evolving to, the more… It evolves from unstructured process, is what I just described to you there, moving to structured process, which is kind of saying, how do I bring order to the day from end to end so I can track what’s going on, I can kind of create tasks, I can manage the completion of tasks? So that goes from structured process to then the next step is structured data and what you do with that structured data. So, that’s the evolution of unstructured process to structured process where we’re bringing in good disciplines about consistent standardized processes across the board. 
You're bringing in process measurement and all those good things, then expanding the process to look not at a widget going across the factory floor, so to speak, but at the data associated with that risk, adding insights to it and augmenting the underwriter's day-to-day work. So that's a little evolution of where things have come from and where they're going. Jason Contant: Okay. Yeah, and you mentioned the unstructured data aspect, and double and triple entry, as we all know, is a big thing in the industry, right? So what kind of data is being used for these systems, and how do you confirm and validate that data? Will Harnett: Yeah, classically it'll start with the submission. But a submission application form could be as skinny or as broad as the line of business and complexity of risk demand. Traditionally, how much structured data was captured? A very skinny amount. But moving into more of a structured-data operating state, the more data you can pull out of your submission document or application into a structured format in your systems, suddenly there's a lot more you can do. So the first question is, how do you extract the data? There's a set of data extraction capabilities that live around that space; it's a little cottage industry in itself. The next question is, is that data correct, and is it complete? Once you've landed it and you know who the insured is, you can go out to third-party data services and start augmenting and validating, enriching whatever level of data you received in the submission app, to build out a bigger picture of the insured and the underlying exposure.
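The extract-then-enrich flow Will describes can be sketched in a few lines of Python. This is a toy illustration, not a real vendor API: the field names, the key-value parser, and the stub "third-party" record are all assumptions standing in for OCR/LLM extraction and commercial data services.

```python
# Minimal sketch of extract-then-enrich, with made-up fields and a
# stubbed third-party lookup. Real systems use OCR/LLM extraction
# and commercial data services here.

def extract_submission(raw_text: str) -> dict:
    """Pull a skinny set of structured fields out of a submission.
    Here we simply parse 'Key: Value' lines."""
    fields = {}
    for line in raw_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def enrich(fields: dict, third_party: dict) -> dict:
    """Augment and validate extracted data against third-party records.
    Conflicting values are flagged rather than silently overwritten."""
    merged = dict(fields)
    conflicts = []
    for key, value in third_party.items():
        if key not in merged:
            merged[key] = value          # fill a gap in the skinny data
        elif merged[key] != value:
            conflicts.append(key)        # underwriter must reconcile
    merged["_conflicts"] = conflicts
    return merged

raw = "Insured: Acme Manufacturing\nState: OH"
extracted = extract_submission(raw)
record = enrich(extracted, {"naics": "332710", "state": "OH", "employees": "120"})
print(record["naics"], record["_conflicts"])
```

The design choice worth noting is that enrichment flags conflicts instead of overwriting, which mirrors Will's point that the underwriter ultimately has to stand over the augmented data.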
Jason Contant: Yep, so many people have heard of AI, but not everyone understands agentic AI. So how do you define it in underwriting? Will Harnett: To define agentic, maybe I'll first define AI agents, to give a little colour in the context of the underwriting process, and then use that to describe an agentic AI world. AI agents tend to be standalone capability solutions that solve discrete problems. I guess I'll answer by giving you examples. If you look at areas like data extraction, you could have an AI agent to support a data extraction process. These are all phenomenal use cases for AI, in case that's one of your questions later on. Or summarization of documents: a big challenge for underwriters in the complex space is they're given volumes of engineering reports and loss run reports, et cetera. So, an agent to run summarization. Another classic one: so much underwriting is written off industry class codes. What's the insured's class of business? Being able to go out there and actually determine a North American NAICS code, to say, okay, it's these six digits. But really, is that the level of underwriting that you need? Sometimes you've got to look beyond that. So these are standalone agents solving discrete problems. Now, an agentic AI framework is where you can start tying these AI agents together in a dynamic way. Suddenly you're creating a very powerful operating environment where one AI agent ends and, through the framework you have in place, through the agentic component, you can trigger the next AI agent to run and complete what it's doing. So it really is a powerful tool to create that process framework and drive even more efficiencies in how you run your world.
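The chaining Will describes, where discrete agents are strung together by an orchestrating layer, can be sketched as a simple pipeline. The three "agents" below (extraction, NAICS classification, summarization) are hard-coded stubs standing in for LLM or service calls; only the orchestration pattern is the point.

```python
# Toy version of an agentic framework: discrete agents chained so
# that each one's output triggers the next. Agents are stubs here;
# production agents would wrap LLM or data-service calls.

def extraction_agent(ctx: dict) -> dict:
    ctx["insured"] = "Acme Manufacturing"   # stub for document extraction
    return ctx

def class_code_agent(ctx: dict) -> dict:
    # Stub for a NAICS lookup on the extracted insured name.
    lookup = {"Acme Manufacturing": "332710"}
    ctx["naics"] = lookup.get(ctx["insured"], "unknown")
    return ctx

def summary_agent(ctx: dict) -> dict:
    ctx["summary"] = f"{ctx['insured']} (NAICS {ctx['naics']})"
    return ctx

def run_pipeline(agents, ctx=None):
    """The 'agentic' layer: when one agent ends, trigger the next,
    passing a shared context along the chain."""
    ctx = ctx or {}
    for agent in agents:
        ctx = agent(ctx)
    return ctx

result = run_pipeline([extraction_agent, class_code_agent, summary_agent])
print(result["summary"])
```

Because each agent only reads and writes the shared context, agents can be reordered or swapped in from a library, which is the flexibility Will points to later in the conversation.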
Jason Contant: So I'm thinking of where you see the biggest opportunities for this now, for agentic AI in underwriting. Would it be at the triage level, or quoting, or decision making, for example? Will Harnett: Yeah, I think it's going to start with discrete pain points. What do the large language models do really well at the moment? Data extraction, summarization, and being able to collate the right level of information from vast domains of data sources. So what does that look and sound like? That sounds like the front end of the process. We get submission documents in; we can run various agents over the documents coming in. We can then use the information we've pulled down, go out to the web, pull in and augment further information, and summarize it to the right level of data. So the way I'd be thinking about agents is to look at discrete problems. If you look across the entire value chain of the risk lifecycle, you could pick out opportunities along the way. If you go all the way to the end, from a renewal process perspective: how would you look at renewals? A renewal is essentially new business, except you've got a load of information on the insured already. But maybe your underwriting rules have changed, or maybe you essentially have to re-underwrite that risk. So what will you do? You'll go out and assess again any litigation, or any additional properties added to the insured, or whatever; there's an opportunity to rerun the new business rules over that renewal as it comes through. So suddenly there's a renewal agent. If you look at every aspect of the process, there are going to be discrete opportunities to deploy an agent.
And I guess where we see it, it's essentially going to be a library of agents that you're going to leverage at different points in the process. That's where the agentic component comes in: how can we dynamically string those agents together to produce a streamlined process? Jason Contant: Yeah. Well, I'm thinking too, if you think about ChatGPT and tools like that, a lot of times they can give you bad or incorrect information, right? So what happens in the case of agentic AI if it gives you bad information? Who's really responsible in that case? Will Harnett: Well, I think, ultimately, the underwriter has to stand over whatever augmented data they are adding to the mix. Think about the classic process as it is today: if there's an inaccuracy in the application, who's at risk? Well, that's probably the broker's E&O at risk, you know? So we've got the broker data coming in from the insured. If the insured misrepresented, that's a different story altogether. But where the underwriter is harvesting and enriching with third-party data, I think the responsibility lies on the carrier side: if they've augmented the data they received from the broker, and the broker data is accurate, then it resides with the underwriter and the carrier. Are there tools and ways you can improve your confidence level in the data? There absolutely are. You talk about ChatGPT, and, you know, the response it gave you wasn't quite accurate.
But what happens if you took that same question and hit it against three other LLMs, and you started looking at how their answers deviate or where they're similar? Suddenly you can start triangulating and have a higher probability or confidence level in what you get back. So there are ways and means of trying to mitigate some of those hallucinations, some of those challenges. There's definite progress and opportunity you can make in that space, but ultimately the responsibility will lie with the underwriters. Jason Contant: Shifting gears here, but in terms of intelligent quoting and speed to quote, where are we at now and what do you think still needs to be done? Will Harnett: Well, naturally enough, if there's a lag time between a submission coming in, it maybe going offshore for a data extraction process, landing with an underwriting assistant who reviews whether it's within appetite, and, if it is, maybe going over to the underwriter, who then starts opening up the files, et cetera, you see the lag time in turnaround of quotes. If you look at it through a broker lens, quote turnaround time is a very critical metric: they want to get submissions out, get quotes back, and get them turned around fast. That is the broker incentive. The carrier, obviously, wants to see as many submissions as possible, but risk assessment is pretty important from their side. We actually conducted a recent survey of north of 60 senior insurance people on the carrier side, the broker side, and the MGA side. And the feedback was really interesting: something like 79% of brokers said carriers need to better define their appetite and capacity.
Another very high percentage flagged efficiencies around triaging. So the brokers are up there in the 70s and 80% range of dissatisfaction around definition of appetite and turnaround time. And on the carrier side, up to 70% of carriers expect more complete risk data from the brokers, or improved submission quality. So there's a bit of a chicken and egg there. What comes first? Better data from the brokers will help deploy faster turnaround times, but the carriers and the underwriters need to better define their appetite. So I think that's where we're at. Where we need to get to, or where we're getting to, is a bit of an inflection point: those that adopt and those that don't. Those that adopt move to a more data-driven underwriting strategy where they're capturing structured data, digitizing at source. If you can digitize the data as early in the process as possible, there's an awful lot you can do, whether through basic automation and process, leveraging APIs and third-party data augmentation, or leveraging AI agents. You can absolutely reduce the cycle time of: is it within appetite? Can you aggregate all the information and speed up the quoting process? But what's also important for brokers is a quick decline. Even if you take in a raft of submissions and you say out of appetite, decline, decline, decline, the broker is actually happy with that. They don't have to waste time chasing you to ask what you think on that one, et cetera. So when they say carriers better defining their appetite, that will help them get faster declines, which helps them chase down other markets where they actually can get a quote back.
So I think moving to a data-driven world is absolutely going to speed up that quote turnaround time. It's an important data point we look at with our customers, and I think it's going to be an ever-increasing area of focus: where brokers define SLAs, those carriers that can beat their competitors to get quotes back will see more and better submissions, and those that don't will see maybe the not-so-good submissions, so we'll see a bit of a divergence there. Jason Contant: Yep. It reminds me a little of AI in general. A brokerage CEO once said to me that you're going to be left behind basically if you don't get in the game early, and it makes me think of this now: the ones that don't get in the game will be a little bit left behind, and there'll be another divergence that way, right? Will Harnett: No, 100%. And if you go back to your first question about what's a workbench: a workbench is essentially a data orchestration capability. Regardless of whether you buy the Send technology solution or go out there and try to do something differently, my view is that every carrier will need an orchestration capability, a data orchestration layer, to manage structured data, bring in unstructured data, get it changed to structured, and drive the process, especially in this AI-driven world. So I couldn't agree more. I think there will be that divergence in terms of the haves and the have-nots in this space. Jason Contant: Yeah, and as it stands now, is the data pretty much still siloed, or can you offer insight across the portfolio? Will Harnett: Yeah, it's an interesting one. Back in my day, I was 20 years with a carrier on both sides of the pond, in the London market and the U.S. I spent 10 years in New York.
And you look at some of the portfolio assessments and the product reviews: they tended to be almost a quarter in arrears, and at a level of granularity that was not very detailed. You know why? Back to your opening question: in a policy admin system, which tended to be the source of structured data, the granularity of the data was not very fine-grained. It wasn't really the underwriting data. And the underwriting data is where you can start correlating loss to interesting events, or what is driving certain behaviour, and things like that. So the more you drive into a structured-data operating environment with an orchestration capability, you can feed that data downstream into your actuarial teams and your risk teams, your CEO teams, whatever, to do your portfolio analysis, but at a finer grain and in a more real-time fashion, which is what we call here at Send the underwriting cycle. So the CEO can, in a faster timeframe, look at the shape of the portfolio. Does it align to their underwriting strategy, in terms of attachment points and limit profiles and class concentrations and things like that? They can get a better view of the shape of their portfolio. And if they want to enact change on that portfolio, they can feed that back to the line underwriter: we've tweaked the underwriting rules in the system, you can only go to a certain level for a certain concentration of class of business, or we'd like to bring our limit profile down; we no longer want to offer five mil, we'll offer three mil limits, et cetera. So it starts at that individual risk-by-risk underwriting with more information on the underlying risk, feeding into your portfolio assessment and analysis, which then feeds back into your transactional underwriting and creates that cycle of the underwriting process.
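The portfolio-to-rules feedback loop Will outlines can be sketched as a simple review pass: aggregate bound risks, compare class concentrations and limit profiles against strategy thresholds, and emit rule changes for the line underwriters. The risks, thresholds, and rule wording below are all invented for illustration.

```python
# Sketch of a portfolio review feeding back into underwriting rules.
# The book of risks and the strategy thresholds are made-up examples.

risks = [
    {"class": "habitational", "limit": 5_000_000},
    {"class": "habitational", "limit": 5_000_000},
    {"class": "manufacturing", "limit": 3_000_000},
    {"class": "retail", "limit": 2_000_000},
]

MAX_CLASS_SHARE = 0.40      # strategy: no class above 40% of the book
MAX_LIMIT = 3_000_000       # strategy: bring the limit profile down to 3 mil

def portfolio_review(risks):
    """Check class concentration and limit profile against strategy,
    returning the rule changes to push back to line underwriters."""
    total = len(risks)
    shares = {}
    for r in risks:
        shares[r["class"]] = shares.get(r["class"], 0) + 1
    actions = []
    for cls, n in shares.items():
        if n / total > MAX_CLASS_SHARE:
            actions.append(f"cap new business in {cls}")
    if any(r["limit"] > MAX_LIMIT for r in risks):
        actions.append(f"reduce offered limits to {MAX_LIMIT:,}")
    return actions

print(portfolio_review(risks))
```

The point of the sketch is the cycle itself: the finer-grained, more real-time the underwriting data feeding this review, the faster the resulting rule tweaks land back in transactional underwriting.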
Outro | Jason Contant: That wraps up today’s episode sponsored by AM Best. We hope you enjoyed the discussion. Thanks for tuning in. We’ll see you next time on “What’s On Dec?”