PLUS Podcast

Demystifying AI Episode 2

PLUS Season 1 Episode 2

In this episode of the Demystifying AI series, we dive deeper into how London market underwriters are integrating Artificial Intelligence into their businesses and underwriting practices. Building on our previous discussion of Generative AI, we explore the intended and unintended effects of AI adoption in the insurance industry. What new products are being developed to manage the risks associated with AI? And how can AI transform the underwriting process itself? Tune in to hear insights from industry experts on the evolving role of AI in the London market.

PLUS Staff: [00:00:00] Welcome to this PLUS Podcast, Demystifying AI Episode Two. Before we get started, we would like to remind everyone that the information and opinions expressed by our speakers today are their own and do not necessarily represent the views of their employers or of PLUS. The contents of these materials may not be relied upon as legal advice. With the housekeeping announcements out of the way, I'm pleased to turn it over to our host, Sarah Coutts.

Sarah Coutts: Thanks, Tyla. Thanks to everyone who is joining this podcast, the second in our Demystifying AI series. I'm Sarah Coutts from Marsh in the UK. Today we are again joined by two great speakers, Alexandra Matthews, Underwriter of AI Risks in the Insure AI team at Munich Re, and Kenneth Carmichael, Head of the UK Technology Underwriting team at CFC.

So, as you may recall, our first podcast in this series covered the basics of artificial intelligence, and more [00:01:00] specifically, generative AI. This podcast is now exploring how London market underwriters are approaching AI, both internally within their businesses and in their underwriting.

What do they see as the intended and unintended effects of utilizing AI? What products are being designed to mitigate those risks? What role can AI play in the underwriting process? Kenny, to kick us off, how do you see the adoption of AI impacting underwriting?

Kenny Carmichael: Sure. The classic question most people ask is whether they're going to be replaced by a robot, right?

In 2013, Carl Frey, a director at Oxford University, completed a famous study in this area, in which he predicted with 98.9 percent probability that insurance underwriters would be replaced by machines within the next 10 to 20 years. However, it's now 2024 and the insurance underwriter is not quite a creature of the past just yet. Despite this, recent research released by hyperexponential [00:02:00] has revealed that 69 percent of underwriters are still concerned about being replaced by AI within the next five years.

Personally, I don't see AI replacing underwriters or any other insurance professionals en masse anytime soon. I think that AI brings with it an exciting opportunity to make everyone's job more interesting. I believe we will see AI predominantly augmenting and enhancing jobs, freeing up humans to do what we do best, which is engaging with other humans, whether that is to solve an actuarial problem or to sell an insurance policy.

You have to remember that people were also concerned that email would eliminate jobs and cause the postal system to collapse. And whilst we certainly saw the nature of jobs fundamentally change, the efficiencies created through faster communication methods ultimately generated job creation.

Here at CFC, one of the most exciting areas of the business where we are utilizing AI is within our proactive protection team. This team is [00:03:00] responsible for turning an insurance policy from a promise to pay into a promise to protect. They use AI and machine learning to help identify customer assets, to prioritize which emerging vulnerabilities are at the highest risk, and to identify anomalous traffic on the wider internet to discover new threat actor behavior.

With over 100,000 customers that we're trying to defend, we have to use every possible type of technology to automate processes to the scale CFC security teams operate at, saving the human element only for the most in-depth tasks. Coming back to the underwriter, there is no reason for well-paid insurance professionals to be performing basic data entry tasks.

I do believe the nature of the role will fundamentally change. Wouldn't it be better for underwriters to be spending more time building better products, explaining the benefits of their products to their customers, and reaching and protecting more customers, as opposed to generating a quote which could potentially be done by machine?

Sarah Coutts: Thanks, Kenny. And there will no doubt be some sighs of [00:04:00] relief that those wondrous predictions haven't come to fruition. However, those stats do also align with the Institute for Public Policy Research's findings released earlier this year, in which they estimated that up to 8 million jobs in the UK alone are at risk as a result of the adoption of AI.

Not great news, but as you say, these stark figures possibly fail to address what new jobs or improved roles may emerge as a result of AI. Trying to stick to the positives in this discussion, Alex, from your perspective, what are the key benefits of adopting AI in underwriting, and indeed in any other business?

Alexandra Matthews: I think AI is generally a pretty good decision maker in the sense that it can identify patterns or correlations maybe faster or better than a human might be able to do. It's also quite good at dealing with uncertainties or incomplete information, because when it gives an answer, it gives that answer with a certain confidence level of which output or which decision might be the most likely, as opposed to a certain [00:05:00] answer. I think that really sums up a lot of the decisions that we actually make in insurance across the whole value chain, from risk assessment, reviewing wordings and checking against guidelines, for example, to pricing automation and claims adjudication.

From the perspective of what we at Munich Re see in terms of corporates in other industries adopting AI, I think highly regulated industries have good potential for AI usage and for getting a lot out of their AI investments. These industries might be financial services, healthcare, legal services. This is because these industries typically make very important decisions that have a meaningful impact on the people touched by them; for example, a bank might use AI to determine someone's eligibility for a home loan or a credit card, or a hospital might use AI to make [00:06:00] basic diagnostic decisions. These types of AI based decisions really matter to the people affected by them, and I think using a better decision maker to support that function would bring a lot of benefits to those people.

Sarah Coutts: Absolutely. As you say, generative AI is certainly being utilized widely in those sectors already.

While, of course, the adoption of AI does present tremendous opportunities, it also introduces risks, such as reliance on the third parties who have built and trained the AI, the integrity of the data sets that have been used, and of course the security of the information that's fed into the AI model.

Alex, as an underwriter, how do you approach these risks?

Alexandra Matthews: When we are underwriting the risk of AI underperformance and we do a risk assessment, we generally look at many different factors across a company's AI process to see how this risk is identified and how it's managed. This can range [00:07:00] from the risk engineering pipeline, to how external factors beyond the control of the AI can be measured by the policyholder or excluded, to the representativeness of their training data, and then to performance monitoring and management, so things like quality management and evaluating your operational risk. These are all the things that we look at to see how this kind of risk of AI underperformance over time is managed. Whether the AI is built in house or licensed from a third party is largely irrelevant if we truly understand these particular elements for a given insured.

For us, what's important is the consistency of performance of an AI model, rather than the initial level of performance or the initial quality, I guess you could say, of a given tool.

Sarah Coutts: Okay, interesting. Thanks [00:08:00] Alex. Kenny, your view on that?

Kenny Carmichael: Firstly, I think it's important to appreciate that right now, for most businesses, AI is a tool, and it will be one of many tools used by a business. Businesses may not always be relying on a third party to develop their AI system. They may build one or many AI systems themselves, as well as adopting third party AI products. Businesses have always used tools, and whether they are tools they have built or tools they have purchased, tools will always introduce a certain level of risk.

However, I do think we need to be careful not to fall into the easy trap of assuming that a new tool or technology inherently presents a greater risk if we don't have the data to support that assumption. When it comes to AI, right now there is an equally valid argument to be made that over the long term AI will actually improve the risk profile of most businesses, as the risk of human error may decrease as the reliance [00:09:00] upon advanced AI increases.

When underwriting business, particularly in the volume SME space, we aren't always going to be able to get into the nuts and bolts of every single AI tool that a business uses. Therefore, as with many other types of risk, we need to be confident that our insureds are risk aware and that they have implemented relevant frameworks or policies to manage the new risks.

In my sector, where we are underwriting providers of technology solutions, we do apply additional due diligence to companies that are specifically selling AI products as opposed to incorporating AI within another technology solution. We will look more closely at what industry sectors these companies serve, what AI training techniques are used, and what type of applications the AI will be used for.

Sarah Coutts: Okay, many different elements there that you're looking at when underwriting users of AI. But what specific AI products are needed for insureds, those businesses who are utilizing AI?

Kenny Carmichael: As mentioned [00:10:00] earlier, I focus primarily on providers of technology solutions, and that will include providers of AI solutions. We can provide comprehensive cover to all technology companies, which includes affirmative coverage for the failure of the product to perform its intended function. This is critical to any provider of any software application, including AI applications. We also cover breach of contract, negligence, cyber events and intellectual property disputes. We are able to provide really robust coverage to those technology providers. We are aware that such broad coverage is less readily available right now in the market to large corporate customers, so they may have more need to seek specific AI solutions.

For businesses that adopt AI, it's also really important to consider the potential intellectual property risks that they may be exposing themselves to by relying on AI tools which have been trained on proprietary data sets that belong to others. [00:11:00] We're able to offer standalone intellectual property solutions to businesses, and we're focused on making it really easy for customers to buy intellectual property protection from us.

This used to be an area that was highly complex, time consuming and expensive. We can now provide an indication with just a business name, website, and revenue.

Sarah Coutts: Super efficient there, Kenny, and I'm sure really helpful for insureds, as you said, just to get that indication nice and efficiently. I suppose the other element is that it's really useful to actually flag the intellectual property risk associated with using AI, as both the use of copyrighted material in training data sets and the resulting output from generative AI models could lead to a business infringing on another's IP.

Overall for the tech industry, particularly in the SME space that you mentioned, it does sound like there are several insurance solutions already available that will respond to errors and omissions resulting from AI.

Alex, what does Munich Re have on offer in that regard? [00:12:00]

Alexandra Matthews: The Insure AI team at Munich Re has a whole product suite for AI caused risks. You could broadly think about it in two buckets: products that could be taken up by AI providers or vendors, and products for AI users.

With this first bucket for providers, affirmative coverage for AI risks can essentially cover contractual liabilities relating to the underperformance of that vendor's AI, so this is for providers of a tool or a service who want to de-risk their customers' operations when those customers use that third party AI. We then enable these providers to essentially guarantee a level of performance of their AI solution and say, “This is the KPI that we promise to meet when you're using our AI solution, and if we don't meet this KPI and you suffer a loss as a result, we will [00:13:00] indemnify you for that loss and we'll basically cover it.” That guarantee is then backed by the insurance. This is our flagship product that we started with six or seven years ago.

We've noticed in the past couple of years that AI use cases have started to shift and more corporates have started to use AI, so not just traditional tech companies with data science teams who are very involved in the AI process, but essentially ordinary companies providing some other type of service that's being supplemented by AI. These companies then have many different touch points with third parties who are affected by somebody else's use of AI. We've now broadened our offering to cover users of AI, which is corporate users, and the use cases that they might have, either in house, so first party losses from developing [00:14:00] AI models, as well as third party liabilities if an AI tool that you use causes some damage to your customer.

Corporates of any size would benefit from this product where the harms caused by their AI are really meaningful to them. Their reputation and their relationship with their customers is also really critical, and if their AI tool made a mistake that negatively impacted those customers, this would also be a loss that the company would have to bear.

Those, broadly, are the two types of products that we offer, but this coverage can be modularized and tailored depending on the specific use case.

Sarah Coutts: That's really helpful to know that there are these two products that have been developed specifically to address the risk of underperformance in AI models, both for the provider of AI and for those who then utilize the AI, and [00:15:00] offering, of course, as you mentioned, cover for first- and third-party losses as well.

So in your view, Alex, is there also coverage available under traditional E&O policies for AI related risks?

Alexandra Matthews: I think it depends on the type of coverage an insured has and in what market, because we've seen quite a range in terms of how comprehensive an E&O policy might be.

I definitely think that there's some silent AI exposure in tech E&O wordings. For example, if the use of AI results in algorithmic bias that might discriminate against a particular group of people, or in some sort of system failure, then an errors and omissions policy could be triggered to cover third party claims. [As a result] there might be coverage there for AI related risks. I think it's also pretty unclear whether contractual liabilities are covered under these policies. First party damages or expenses seem to [00:16:00] be typically less often covered by E&O policies. For those examples, I think affirmative coverage for these particular risks is very valuable when the coverage scope is not clear.

We've received inquiries from media companies who are worried that their PI policies don't cover generative AI created content. When we think about what that might look like, or what policies that might trigger, typically what we've seen is that a failure of a particular technology is what forces the policy to respond. If the AI is still operating at full decision throughput capacity and just has an increased error rate in its decision making process, such a performance degradation might not be big enough to be considered a failure, which would then be the trigger of [00:17:00] the E&O policy.

I think actually answering whether there is coverage under E&O is not as clear as it might initially seem. Even if you think you have system failure or product failure coverage, it still might not be triggered if your product is still operational but just not as good as you were expecting it to be.

Sarah Coutts: Understood. [With] the E&O policy, there might be trigger issues around it, particularly, as you said, when the AI model is still functioning but just underperforming, so to speak. Kenny, from your perspective, what do you think about the coverages available under traditional E&O policies for AI related risks?

Kenny Carmichael: I think it depends on what you consider to be a traditional E&O policy. In my world, for providers of technology solutions, the answer is absolutely yes. If an AI model doesn't perform as expected, then it follows that the technology service has failed to [00:18:00] perform its intended function. As I mentioned earlier, that's something that our technology product here at CFC provides cover for. However, to the point Alex made earlier, some more traditional policies in other sectors may not affirmatively grant this cover. Where the technology functions, but perhaps not quite as efficiently as intended, there could be more of a question as to whether or not the technology provider has actually been negligent in their delivery.

It's also important to remember that AI may present enhanced regulatory risk as well. Therefore, it's important to ensure that any policy also provides some regulatory cover.

Sarah Coutts: Absolutely. We've seen that regulators across the globe are still considering how to regulate AI. Of course, we have also seen some of their regulatory approaches, which vary quite widely across different jurisdictions. As you've rightly mentioned, regulatory actions are certainly an additional risk for insureds in this sort of AI context.

[00:19:00] We've also got the civil claims risk as well. Some commentators believe that the increasing use of AI could trigger claims across many lines of business. Others do not envisage such a significant impact on claims activity. Now, on the understanding that there has not yet been a surge of AI related claims in the market, Alex, can you tell us what claims you have seen to date that relate to AI?

Alexandra Matthews: I think it will probably come as no surprise that most claims that we've seen have been in the US and typically relate to third party litigation against AI users. For example, [we see them] for discrimination in the health care and financial services industries, where banks using AI to adjudicate home loan applications might have a higher rate of discrimination against particular ethnic groups. That's where we've seen quite a lot of claims that specifically mention AI and some sort of algorithmic error.

I think discrimination is a [00:20:00] cause of action that's one to watch, and we would probably start to see more of these types of claims come through in other jurisdictions. There are also some cases going through the courts, I think also in the UK, regarding AI based facial recognition software and potential discrimination arising there. That's not necessarily against protected groups, per se, but I think it's still within the broader theme of some sort of bias.

It's quite early to say if these court cases will translate to insurance claims, and I think that's for two reasons. Firstly, the coverage options for insureds are possibly unclear, so which policies they might look to for responsiveness is not really well fleshed out yet. Secondly, because AI is still quite new, there might already be some claims coming through to insurers' claims teams that have some sort of AI issue, but we haven't recognized it [00:21:00] yet as a specific AI issue, because often it's quite unclear what kind of AI tools companies are using. This is generally private or proprietary information. It might actually be that there are already some losses that have been caused by AI, and we just haven't identified them as AI-caused losses as opposed to another type of loss yet.

I think another area where claims could be expected to rise is IP litigation. At the moment, what we've seen so far is rather limited to whether particular works of creators can be used to train AI, so not much about the AI causing the loss itself. There were recently some court cases in Europe, I think in the Czech Republic and Germany, which centered a little more on this issue of AI causing a loss, rather than on whether AI output is subject to copyright protection in and of itself. [00:22:00] I think general concern about IP and the relationship between IP and claims is reflected in the AI exclusions for generative AI content in IP wordings that we've already started to see in the market.

Sarah Coutts: Yes, thanks, Alex. The AI exclusion, I've only seen one come across my desk so far, so that's certainly something else to watch out for. Sticking with claims for now, Kenny, what AI related claims have you seen so far?

Kenny Carmichael: As with all technology solutions, AI will go wrong. We've already seen many examples in the media, such as AI suggested home recipes that may end up poisoning you, or customers getting more chicken nuggets than they bargained for at the McDonald's AI controlled drive thru. However, I believe the key areas to be really mindful of include the risk of inadvertently introducing bias in decision making tools and the widespread use of generative [00:23:00] AI, as we've discussed, in particular how this will impact content creators.

AI does also present an increased risk for companies providing financial technology solutions. We have a dedicated team here at CFC that underwrites FinTech risks, and they are seeing an increased reliance upon AI, for example, in systems providing AI driven financial advice or automated trading systems. Companies in the financial sector, or providing AI services to the financial sector, will be classified as high risk, and they'll face greater regulation and scrutiny. I'd certainly expect to see claims in this area.

Finally, the risk of AI washing is also one to be aware of. This is particularly relevant to lines of business such as management liability or transactional liability. This is where a business makes overinflated claims about their use of AI with a view to inflating the valuation of their [00:24:00] business.

Sarah Coutts: Yeah, good point. The US has certainly seen a number of AI related securities claims, and I believe the SEC itself has now cautioned companies against so called AI washing. It appears that some are overexcitedly promoting their AI capabilities when often their usage is nowhere near as sophisticated or extensive. I suppose this could really be grounds for the next surge of securities class actions in the US and beyond, of course.

I suppose it also goes back to that school of thought that AI is not a new risk, but simply a variation on claims we're already seeing; one to watch. With that in mind, Kenny, in your view, is this surge in generative AI usage a new risk, and if so, what are you doing to address it?

Kenny Carmichael: I think it's really important that we remember that AI, at the end of the day, is just software. And software isn't new. Software is constantly evolving, and it [00:25:00] has been for decades.

It's easy to conjure up visions of an evil neural network or AI controlled robots going rogue, but I don't believe that AI today is inherently malicious. Yes, we will see risks from generative AI, but already we're seeing moves to ensure more responsible development practices for generative AI models. Providers of technology solutions will be striving to ensure that their tools are without error and risk free, because at the end of the day, that's better for business.

Over the last two decades, we have already seen technology trends capturing headlines, including the rise of social media, the adoption of cloud computing, the Internet of Things, virtual reality, or distributed ledger and crypto risks. All have their own nuances, but at the end of the day the claims are still predominantly for things that we are very familiar with, such as financial loss, bodily injury or property damage. We need to ensure that we adopt data [00:26:00] driven approaches to risk appetite and that we're not led by hype or speculation about a new type of technology.

Sarah Coutts: Fair enough. Good point, Kenny, thank you. That certainly makes sense to me. Alex, I suppose we can't get away from the fact that generative AI is being utilized at an alarmingly fast rate, and actually this in itself poses a risk. What's your view on the risk landscape?

Alexandra Matthews: I think Kenny's absolutely right, AI is just software. With that in mind, even the best trained, state of the art AI models will always make mistakes. That we know with certainty because of a couple of features inherent to AI: firstly, its probabilistic nature, and secondly, the systemic element to its predictions.

What we mean by this is that probabilistic AI doesn't guarantee a specific answer, but instead calculates the likelihood of various possible [00:27:00] answers. Any error that an AI makes tends to be systemic in nature, meaning that once it learns a particular pattern or starts making a decision that it thinks is the right or the most optimal outcome, it will continue to make that mistake and make that decision until it's identified and corrected, often externally by someone who's reviewing the model.
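
To make that concrete, here is a minimal sketch of both features, purely illustrative rather than drawn from either speaker's actual systems: a probabilistic classifier returns a distribution over possible answers rather than a guaranteed one, and a fixed decision rule on top of it will repeat the same call, right or wrong, whenever similar inputs appear. The labels, scores and threshold are all assumptions.

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Convert raw model scores into a probability distribution."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Hypothetical raw scores from a claims-triage model for one submission.
labels = ["approve", "refer_to_human", "decline"]
scores = np.array([2.1, 1.4, 0.3])

probs = softmax(scores)
for label, p in zip(labels, probs):
    print(f"{label}: {p:.1%}")  # approve ~60%, refer_to_human ~30%, decline ~10%

# The model only ranks likelihoods; it never guarantees an answer. A fixed
# rule like this one makes the same (possibly wrong) call every time similar
# inputs appear -- the systemic element described above.
decision = labels[int(np.argmax(probs))] if probs.max() >= 0.6 else "refer_to_human"
print("decision:", decision)
```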

Knowing that those types of risks exist, that you don't always get a certain answer and that if it does go wrong it will probably continue to go wrong until it's corrected, means that we have these residual risks of AI. Knowing that residual risk exists, and that no matter how well you train a model or how effective the AI governance you've implemented is, you can't fully eliminate that risk, is really important for any user or provider of AI to understand. Even if you're managing your AI [00:28:00] risks exceptionally well prior to deployment, I think insurers could play a role in transferring that residual risk from AI usage, spreading it across the market to ensure that corporates can continue to adopt and implement AI with confidence.

In terms of what insurers are really doing to address that risk, we've talked about it a little bit already: we've seen some specific exclusions for generative AI created content in the market, and we've also seen some policies start to provide affirmative AI coverage for these types of risks. ISO, the Insurance Services Office, is also, I think, expected to come out with some clarification language or exclusions on AI pretty soon. It really seems like the momentum is picking up across the insurance market to respond in some way to these new risks that can't be fully managed in any other way.

Sarah Coutts: Absolutely, thank you, that was really helpful. In that context, then, [00:29:00] from the insured's perspective, what should businesses be doing in terms of risk management and governance if adopting AI into their businesses?

Alexandra Matthews: I think the first step is having some sort of process, so that could be creating criteria for model updates, documenting it and following it. The business, I think, would benefit from creating a set of performance metrics that evaluate the model comprehensively, and from making sure that model updates pass that performance benchmark, because the AI can begin to deteriorate over time as the environment that it's operating in changes. I think that's just the nature of these models.
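
As a minimal sketch of that first step, and assuming entirely hypothetical metric names, thresholds and candidate scores, a documented update process might gate each new model version behind a fixed benchmark like this:

```python
# Documented KPI thresholds a candidate model update must clear before
# deployment. All names and values here are illustrative assumptions.
REQUIRED = {
    "accuracy": 0.92,             # minimum acceptable
    "recall": 0.85,               # minimum acceptable
    "false_positive_rate": 0.05,  # maximum acceptable
}

def passes_benchmark(metrics: dict) -> bool:
    """Approve a model update only if it clears every documented KPI."""
    return (metrics["accuracy"] >= REQUIRED["accuracy"]
            and metrics["recall"] >= REQUIRED["recall"]
            and metrics["false_positive_rate"] <= REQUIRED["false_positive_rate"])

# Scores for a hypothetical candidate update, measured on a held-out test set.
candidate = {"accuracy": 0.94, "recall": 0.83, "false_positive_rate": 0.04}
print("deploy" if passes_benchmark(candidate) else "reject")  # -> reject (recall too low)
```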

And so safeguarding against recurring issues can really depend on what the desired KPI of performance is. When you're measuring the performance of your AI model using some sort of KPI, you need a ground truth. [00:30:00] These ground truths can be one of two things. You either have a KPI with a ground truth that is not subject to human judgment; for example, “Is this part on my manufacturing conveyor belt faulty or not faulty?” There you can really see the truth of that hypothesis. The second type of ground truth is one that is subject to human judgment. [This] might be bias prevention, which you're measuring against some sort of fairness metric, in terms of giving people equalized odds or demographic parity in hiring opportunities. What is considered fair there might change with the local jurisdiction or the local culture.
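
For that second, judgment-dependent kind of ground truth, here is a toy demographic parity check. The groups, counts and the four-fifths tolerance (a threshold used in some US employment-selection guidance) are assumptions for the sketch, and, as Alex notes, what counts as fair varies by jurisdiction:

```python
# Hypothetical hiring outcomes for two applicant groups.
outcomes = {
    "group_a": {"applied": 200, "selected": 60},
    "group_b": {"applied": 150, "selected": 30},
}

# Selection rate per group: group_a = 0.30, group_b = 0.20.
rates = {g: v["selected"] / v["applied"] for g, v in outcomes.items()}

# Demographic parity compares selection rates across groups.
ratio = min(rates.values()) / max(rates.values())  # 0.20 / 0.30 ~= 0.67
print(f"parity ratio: {ratio:.2f}")
print("within tolerance" if ratio >= 0.80 else "flag for review")  # -> flag for review
```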

With that in mind, it's important to understand what your desired KPI is, and how you are actually going to check whether that KPI is being met over [00:31:00] time and fulfilled over time. Here, I think we'd also recommend that the data science team who's actually updating the software interacts closely with what you might call the policy monitoring team, to continuously adapt the model monitoring and evaluation process so that these KPIs are met.

Sarah Coutts: Okay, thanks. A lot of continuous auditing and performance monitoring is suggested there, which in itself, of course, is a huge investment of time and resource for a business. Kenny, will all businesses be able to do this?

Kenny Carmichael: Most businesses in the world are small to medium in size. As such, I think they will need to adopt practical steps that allow them to continue to trade whilst also attempting to mitigate risk.

One of the most effective steps that a business can introduce is to ensure that there is a human in the loop, or human oversight, of any AI tools that they adopt or sell. As more businesses adopt AI, I think we will see many [00:32:00] looking to either seek advice on, or entirely outsource, the responsibility for developing AI risk management tools and corporate governance policies, and that's okay.

The main thing is that AI is not going away, so it's not something that businesses can ignore. As insurers, all we are looking for is to see that the business has taken steps to consider the risk level of any AI that they have chosen to adopt and try to mitigate the risks appropriately.

Sarah Coutts: Absolutely, and AI, and more specifically generative AI, is certainly here to stay. I agree that the human element is absolutely key, staying within that process rather than just leaving it up to the AI 100 percent.

Anyway, I'd be happy to delve even further into this topic, but our time is almost up already. Before we go, we do have a little bit of extra time for some horizon scanning. Alex, you first. What are your thoughts on the next big developments in AI for insurers [00:33:00] or insureds?

Alexandra Matthews: In many industries, I think AI will become more ingrained as part of existing processes, rather than being a separate feature of a company's operations. As it becomes more common to use AI in critical operational functions, AI becomes less of a differentiating factor for companies. I think it'll be important for corporates to really understand whether and how that changes the risk profile of their company and whether that exposes them to new types of risks, and for insurers to then understand if that exposes them to new types of claims or a change in claim frequency.

I would say insurers should probably equip themselves with the best possible understanding of how AI can actually change their policyholders' exposure to particular loss types across their business functions, and the broader implications of widespread AI usage across an insurance portfolio. [00:34:00] For example, if the use cases of individual insureds rely on the same foundation models, this might give rise to an accumulation risk if there is a change in performance of that underlying model. This might require changes to the scope or the price of coverage being given in the market.

I think the next big development probably doesn't come from the technology itself, but really comes from understanding that this is essentially a change in the kind of software that companies are using, and being really reflective on how that changes how we underwrite particular risks.

Sarah Coutts: Thanks, Alex. Kenny, any final thoughts on what's next for insurers?

Kenny Carmichael: I think we need to keep an eye on the regulatory landscape. We've just seen the EU AI Act come into force on the 1st of August this year, and it will be introduced in phases over the coming years. Whilst this is not directly applicable to businesses here in the [00:35:00] UK, it does, or will, still apply to any business that conducts trading in Europe. In time we will see greater scrutiny applied, particularly to providers of general-purpose AI models.

As I said earlier, AI is not going away. Legislators and regulators are trying to ensure that AI is being developed and utilized responsibly. Regulators and litigators will take action, and insurers need to think carefully about the type of customers that they're targeting.

Sarah Coutts: Thanks, Kenny, and thanks to you as well, Alex. I wanted to thank you both for some really useful insights into AI from an underwriter's perspective.

You've certainly given us plenty to take away: Does AI pose more or less risk? Where's the coverage in our E&O policies? [What] new products are out there providing cover for AI providers and users? [What] exclusions are emerging in the market, as well as affirmative [00:36:00] coverage? And of course, overall, how can businesses monitor and mitigate these new risks, or the risks that they are seeing from using AI.

Overall, thanks. As I said, great insights. For our next podcast, we'll then be moving on to how insureds are utilizing generative AI, how they're implementing it, managing the risks, and ensuring that their own clients are content with it being used within the business. Until then, it's a thank you to PLUS and a thank you to all of our listeners.

PLUS Staff: Thank you for listening to this PLUS podcast. If you have ideas for a future podcast, please complete the content idea form on the PLUS website.