This is the Data Science Conversations podcast with Damian Deighan and Dr. Philip Diesinger. We feature cutting-edge data science and AI research from the world's leading academic minds and industry practitioners so you can expand your knowledge and grow your career. This podcast is sponsored by Data Science Talent, the Data Science Recruitment Experts.
Welcome to the Data Science Conversations podcast. My name is Damian Deighan, and I'm here with my co-host, Philip Diesinger. How's it going, Philip?
Good. Thank you, Damian. Great. So today we are very happy to have Peter Bärnreuther from Munich Re to talk about AI risk management. By way of background, Peter is an AI risk expert and underwriter for Insure AI at Munich Re in Germany, where he structures risk transfer solutions for AI, including performance shortfall, gen AI output, and IP infringement risks. Peter is the former chief underwriting officer of Chainproof Inc., the first regulated insurance company for crypto-related risks, and earlier in his career he worked at Accenture as a risk management consultant for banks and insurance firms. Peter holds a PhD in physics and a master's in economics.
Welcome to the podcast, Peter. Thank you. Great to be here. So if we just start, Peter, with a little bit of information about your background: could you take us through your journey, how you went from studying physics at university into risk management, and now into the very specialist area of AI risk? – So when you study physics, there is not one typical job that you then take. In a way, you can do everything and nothing. So when I decided to quit academia and join the corporate world, the question was a little bit: where is your edge, where can you do better than the typical economics student?
So risk management was a choice; it has a lot to do with numbers. I joined Accenture's risk management department, and then after one year joined Munich Re. And once you're in a large corporation, you have many options; Munich Re especially gives you a lot of options. And organically and not organically, you basically move around. I was very interested in developing new products, so we started a crypto initiative, and I was able to start my own company, developing new coverages for crypto-related risks. When I came back to Munich Re, I wanted to stay in the larger field of product development, and decided to join the team of Michael Berger, which is developing coverages for AI-related risks.
Why do you think insurance for AI risk is now becoming such a hot topic?
Well, AI is a hot topic. I don't know if AI insurance is already a hot topic, but basically every company is using AI, or is at least exploring how it can support their business by using AI. And while AI can reduce risk in some areas, or even mitigate or eliminate it, new risks arise in other corners. And this is a very fundamental issue of AI: AI is, as we call it, probabilistic and not deterministic.
So what is the difference?
Deterministic is really like a machine, like code. You can basically see where it makes an error, if it makes an error. But with AI, it's probabilistic. So you can say the answer is 99.5% right, but you can never say it's 100% right. So this is basically the price you have to pay for this flexibility. And this is where Insure AI then comes in: we would then insure this last bit, as a kind of risk management strategy.
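To make the distinction concrete, here is a minimal Python sketch (not from the episode): a hand-written rule whose errors can be traced line by line, next to a toy classifier that can only ever return a confidence. The example subjects and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Deterministic path: a hand-written rule. If it errs, you can trace exactly why.
def rule_based_is_phishing(subject: str) -> bool:
    return "verify your account" in subject.lower()

# Probabilistic path: a toy classifier trained on invented example subjects.
subjects = ["verify your account now", "team lunch on friday",
            "password reset required", "your invoice is attached"]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate (illustrative labels)

vectorizer = CountVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(subjects), labels)

query = "urgent: verify your account"
p = clf.predict_proba(vectorizer.transform([query]))[0, 1]
print(f"rule says:  {rule_based_is_phishing(query)}")
print(f"model says: P(phishing) = {p:.3f}")  # high, but never exactly 1.0
```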
You're talking about different types of risks, Peter. Can you give some examples? If we open up that bucket of AI risks, what are typical examples of risks that you are facing? – Yeah, here I would differentiate between two kinds of AI risks. One is the machine learning AI risk. So this is a little bit the pre-ChatGPT world: it gives you a number as output, typically like zero or one.
Is it a cat or a dog? For example, you categorize an email as a phishing email or not a phishing email. Or you try to estimate the price of a car by feeding in a lot of input variables like brand and others. And this is basically the old world, as we say, but it's AI as well. And then there's the new world, where you have this gen AI, which really generates content.
Yeah. And this can be text or music or pictures or code as well. And here, for example, a risk would then be IP infringement. So the output, and I think we have all heard about it, can simply infringe on the IP of somebody else, whether with text or with pictures. And then there are more complicated risks as well. I'm also looking at the legal tech landscape; for example, generating legal texts with AI is an interesting use case as well. Especially in legal text, every word might be very important and has to be chosen carefully.
So AI will make, or can make, errors as well. And you mentioned when we talked before that litigation of cases where AI plays a major role is increasing exponentially at the moment. Can you give us some examples of AI incidents that have been in the headlines recently? – So there is this airline, I think most of you heard about it, whose chatbot promised a discount. A guest of the airline wanted to buy a ticket and got a discount from the chatbot. When he then wanted to have the discount applied to the ticket, the airline said: this is not in line with our guidelines, we cannot pay out. The customer went to court and got the roughly 500 euro discount on the ticket. You could say the reputational damage is of course much higher, but if a chatbot error like this really goes through and the airline has perhaps 100,000 customers, then of course the loss could be substantial as well, even if it's only 500 euros per ticket. We have as well seen, I think, these lawyers citing fake cases in legal hearings; we have seen this several times. And there was one big loss with Zillow, this was already like three or four years ago. They estimated the value of houses by taking input variables like how big the house is, how old it is. And the AI didn't realize that there was basically a drop in prices and was still overvaluing these houses. The company lost over 300 million because the AI couldn't adjust in time and the prices were inflated.
Other buckets of risk are things like discrimination: discrimination, for example, in mortgage underwriting, in Medicare, discrimination in the hiring or firing process. And we already talked briefly about IP infringement, which is one big bucket too.

I would like to take a brief moment to tell you about our quarterly industry magazine called The Data Scientist and how you can get a complimentary subscription. My co-host on the podcast, Philip Diesinger, is a regular contributor, and the magazine is packed full of features from some of the industry's leading data and AI practitioners. We have articles spanning deep technical topics from across the data science and machine learning spectrum. Plus, there's careers advice and industry case studies from many of the world's leading companies. So go to datasciencetalent.co.uk/media to get your complimentary magazine subscription. And now we head back to the conversation.

We already talked about the need for insurance because the systems are intrinsically probabilistic by nature. I think another potential area where insurance could actually help is ensuring that adoption of AI pilots actually takes place. We have seen publications recently indicating that most corporate or enterprise solutions stay in the pilot phase and never get scaled up, because of those potential risks and the lack of a way to mitigate them. Is this something that you also encounter in your work: that coming in with an insurance product actually facilitates a company's decision to move from a pilot into a fully scaled system?
Exactly, this would be the idea. We have two products, I think we will talk about them later, but one is for larger corporates, where scaling might be a trust issue. The companies explore use cases, they develop use cases, but as you said, they are not making it into production.
And one reason probably is that these companies are risk-averse: there is risk, and especially with AI, I think, a lot of potential reputational damage as well. And then insurance might help to push these internal use cases forward and create value for the company. But of course, here it's really a case-by-case basis you have to look at. – Yeah, so the insurance that you're offering basically covers, again, the probabilistic risk of having a certain incident. But it's not only based on external events like environmental hazards or something like that, but also on internal factors like the quality of the model, for instance: the parameters, the available data, the skill that went into building the AI model, and so on. Can you talk a little bit about how insuring AI products differs from offering insurance in other areas? – Yeah, so there are two answers. One is: it's very similar. And the other: there are nuances.
So a lot of these risks, like discrimination, IP infringement, system failure, data privacy: these are all risks which come with AI, but they are not new risks. We insured them before; for discrimination in hiring there are policies for it. But it was more for humans and not for AIs.
So this is not completely new. What is new is really how to assess it, because before you had to assess humans, and now you have to assess a technical system. And the really new feature is basically this missing explainability and transparency; I would say that is new. With an AI model it is, let's say, harder, and depending on the model perhaps even impossible, to explain why a decision was taken. There are models which are better at explaining why they gave a certain output and models which are less so. But this is really, I would say, a key structural difference.
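A small sketch of that explainability spectrum, with invented data: a linear model's decision can be read off its coefficients, while a boosted ensemble offers at best global feature importances. This illustrates the general point, not any model Munich Re has assessed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # invented features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # invented ground truth

# A linear model: each feature's weight is directly inspectable, so you can
# say why a given decision was taken.
linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_.round(2))

# A boosted ensemble: more flexible, but individual decisions are opaque;
# global feature importances are about as much explanation as you get for free.
ensemble = GradientBoostingClassifier().fit(X, y)
print("ensemble importances:", ensemble.feature_importances_.round(2))
```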
So we're probably getting a little bit into your product already, and the whole process of offering an insurance policy. Could you talk us through that? What exactly is the product that you have at Munich Re? And how do you offer it to clients: what's the process, from the first conversations with them to, in the end, offering them some sort of insurance policy?
The process is quite straightforward. Just write us a message; we have some kind of form with initial questions: what is your use case, which model are you using, how do you measure performance? Then we have, and I think this is a specialty of our team, real data scientists who can then talk to the data scientists or model developers of the client. So they really speak the same language, and that way we really understand the use case and the risks, and how the client is measuring them. And it's again on a case-by-case basis; we are not yet in a real production mode. We are looking at each case very carefully and trying to really understand the risks associated with each case. – And how do you design the guarantees so that they make sense both for the company applying AI and also for you? – Perhaps it makes sense to talk about the products we have. We have these two different products I mentioned already. One is for corporates: if a corporate has its own AI, or buys an AI from a third party, then, as well because of the EU AI Act, they basically have to have an overview of all their AIs and whether they are implemented in critical business processes. It always makes sense to talk about risk management strategies for AI which is implemented in critical business processes.
And then one of the options which you have is insurance. Where we are at the moment seeing more business, though, is from AI providers, so from vendors who basically want to approach a corporate to sell their AI. These AI vendors are most of the time startups. They are basically missing this reputation and trust, in most cases not because their product is bad, but just because they have not been that long on the market, and time on the market is what builds up this trust. And here is where we come into play as Munich Re, one of the trusted and reputable companies: we basically give this stamp of approval. In Germany there is TÜV SÜD, which does this for things like construction projects: it gives this stamp of approval and says, yeah, this tool was vetted by a third party and is good. And we are not only giving this stamp of approval, we put our money where our mouth is and insure it as well. So a third party can then be very confident that the model works; otherwise the insurance would pay out. Those are basically the two key products. For the AI vendors, it's really about trust, scaling, getting new clients. For the corporates, it's more about risk mitigation.
Makes a lot of sense. What are the two products called?
They're both under the name aiSure, but one is for corporates and one is for AI vendors. – And what are the exact shortfalls or problems that you insure against? Is it that you're guaranteeing the performance of an AI system as it is, or is it specific topics within that realm: insuring against hallucinations or IP infringement, like you said, things like that? How would that work? – So as an example, take IT security, let's assume credit card fraud. When you buy at Amazon or at Otto and pay with your credit card, there's always some software which checks: is this credit card fraud? And this is often AI software. None of these products, of course, works 100%, but they get pretty close to it. And we would then insure it: if the software is from an AI vendor, we are insuring that the software works in 99.9% of all cases, so that if you have 1,000 fraudulent credit card transactions, basically only one of them comes through and then really leads to a loss.
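As a back-of-the-envelope version of that 99.9% example (the round numbers from the episode, with an assumed loss per missed fraud; none of this reflects actual policy terms):

```python
# Figures are the episode's round numbers plus an assumed loss per miss.
detection_rate = 0.999        # guaranteed: 99.9% of fraud attempts caught
fraud_attempts = 1_000        # fraudulent transactions in the period
avg_loss_per_miss = 500.0     # assumed EUR loss per fraud that slips through

expected_misses = fraud_attempts * (1 - detection_rate)
expected_insured_loss = expected_misses * avg_loss_per_miss

print(f"expected misses: {expected_misses:.1f}")                  # ~1 in 1,000
print(f"expected insured loss: EUR {expected_insured_loss:,.2f}")
```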
Similarly, you can do it for IP infringement as well. We would test the model, and we would then have an estimation: what is the probability that the model infringes on some copyrighted material? And if the model then infringes on copyrighted material, we would pay out the loss and the lawyers' fees. – Peter, you brought also two other use cases. I think the first one was PV efficiency, which sounds really interesting. Solar energy seems to be a heavily optimized sector already. What were the challenges that made the insurance valuable in that case?
Exactly. I brought a use case from Raikun. You can find this and many more of our case studies on our website. Raikun is an AI-driven solution which enables technology adopters to detect faults, assess inefficiencies, and address errors autonomously in PV modules. And they basically offer a 100% fault detection and zero false alarms guarantee. They were a software developer; they guaranteed it to their clients.
Now, the issue is that they had a lot of risk on their balance sheet. So Munich Re came in and took basically all these contractual liabilities onto its own balance sheet. And through this trust, there was an increased sales volume at Raikun.
This is always our hope with the startups we work with: that they can then ultimately sell more. The automated fault detection system also increased the energy output by 6% while lowering the operational cost by 25%. So basically the clients won in every area, the startup got new clients, and we earned our insurance fee as well. So this was basically a triple win. – Makes sense, yeah. And it seems really ambitious, right? 100% fault detection and zero false alarms. Is there something, when you hear that, that scares you at the beginning, or that is too ambitious? Or is it a typical situation where you can come in and see value for yourself and your product?
Yeah, typically, of course, these startups often have a lot of data, which makes pricing very easy in the end. In AI, as well as in credit card fraud, you have thousands of credit card frauds each day, so you have really good databases, and here it's the same: they had quite a lot of data, so we were very confident. And then, depending on where you set the level, at 100% or at 99%, of course the price of the insurance changes as well. – In the case where the AI misses a fault here, what would be the chain of responsibility between Raikun, the client, and of course Munich Re? – So the client has a guarantee from Raikun, so the client has to report it: there was an error in the AI. He of course has to provide some proof; what counts as proof then depends on the use case. So the proof is created, and then Raikun will pay out the amount, and Raikun will go to us: look, you insured me, here's a proof of loss, please pay out to Raikun.
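One hedged sketch of the pricing point above: with a lot of historical test data you can form a posterior over the model's true miss rate, and the probability of breaching the guarantee, which drives the price, depends strongly on whether the level is set at 99%, 99.9%, or 100%. The Beta-posterior approach and all figures here are illustrative assumptions, not Munich Re's actual pricing method.

```python
from scipy import stats

# Hypothetical historical test results for the insured model:
misses, trials = 3, 10_000
posterior = stats.beta(1 + misses, 1 + trials - misses)  # Beta(4, 9998)

for guaranteed_miss_rate in (0.01, 0.001, 0.0):   # 99%, 99.9%, 100% guarantees
    p_breach = posterior.sf(guaranteed_miss_rate)  # P(true miss rate exceeds it)
    print(f"guarantee: miss rate <= {guaranteed_miss_rate:.3%} "
          f"-> P(breach) = {p_breach:.4f}")
# A 100% guarantee is always breached in this model, so it prices highest.
```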
Yeah, so is this in general a good sector for you guys, energy and renewables? Is it just a typical sector, or what would be the industries where you apply your insurance products the most? – Yeah, so the biggest winner by far is really IT security; this is in general where we have had most of our cases. I would say renewable energy is second. We have some performance covers there for batteries in cars, as well as for home batteries; we have seen several cases there. But the nice thing is that this AI insurance is basically vertical-agnostic. So we can look into biotech, into legal AI, into infrastructure; we can basically work everywhere. And this is as well what makes this work so interesting, because you are learning about so many different areas, which you usually don't have the possibility to do. – You mentioned batteries as another use case. What would be the case that you insure there? – Yeah, we work together with a startup called Twaice. They offer an AI-based solution that supplies relevant information about batteries, basically the state of health of the battery. And everybody who has an EV now, or wants to sell one, knows that the health of the battery is one of the main issues. And the nice thing is that they did it without running physical tests. They could do it with data only: you put these data into an AI and you get a prediction of the state of health. And this was then the guarantee: Munich Re basically insured this prediction from Twaice. And if the state of health was incorrect by more than 2%, customers of Twaice would then be indemnified with eight times as much as they paid Twaice for the relevant battery. So yeah, that's basically the insurance.
And here as well, for Twaice, they could now bridge this trust gap. They had a third party who vetted that this software works, and said: if it doesn't work, we even pay you a lot of money. And this increased trust led to increased sales for Twaice.
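As stated in the episode, the Twaice guarantee reduces to a simple payout rule. A minimal sketch, assuming the 2% tolerance means percentage points of state of health; the fee and SoH values are invented.

```python
def battery_guarantee_payout(predicted_soh: float, actual_soh: float,
                             fee_paid: float) -> float:
    """Pay 8x the fee if the SoH prediction is off by more than 2 points."""
    if abs(predicted_soh - actual_soh) > 2.0:
        return 8.0 * fee_paid
    return 0.0

# Invented example: prediction off by 3.5 points, so the 8x indemnity applies.
print(battery_guarantee_payout(predicted_soh=94.0, actual_soh=90.5,
                               fee_paid=100.0))  # -> 800.0
```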
– So that's very interesting, yeah. With your AI insurance products, what are typical challenges or uncertainties that you're facing? When does it become a challenge to insure something or to offer a product? – Perhaps let's stay again with an example; let's stay with this credit card fraud, because it's just perfect for explaining a lot of these issues. The first thing which is always difficult is: can you price it, do you have enough data? And sometimes it's very difficult to have a lot of data. Take this IP risk: for an IP infringement risk, you would need a lot of data on when a model infringes copyrighted material, and that's sometimes hard to obtain. But if you stay with the credit card fraud example, there it's very easy. As already mentioned, there are thousands of credit card fraud attempts every day, so you really can test the model, and then you have a really good estimation for the price today. And that's then a little bit the issue: you can say the model is very, very good today and estimate a price, but there's something we call input drift and concept drift. And this is that the model changes, or the environment changes, over time. So, for example, in credit card fraud, attackers might use different methods than they have before. There might be different attack routes, and it's not clear how good the models are against these newly developed attacks. And this is more the uncertainty.
How does the world change? I brought as well a very nice example, because we had a loss in this area. Some years ago, we insured a company which was basically predicting these kinds of credit card fraud, and the model had picked up internally that short-term hotel bookings are a very good indication of fraud. And then COVID came, and the lockdowns, and all of us were booking these short-term stays. So this basically destroyed the inner workings of the model: the false positives and false negatives went up, and the team wasn't able to retrain the model as fast as would have been necessary. So we had several small losses there. But this is something which can happen: the company has a great model, but then something changes and the model doesn't work anymore. Yeah.
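A sketch of the kind of input-drift monitor that might have flagged the booking shift earlier: compare a feature's recent distribution against its training baseline and alert when the divergence crosses a threshold. The Population Stability Index used here is one common choice, not necessarily what the insured company used; the data are simulated.

```python
import numpy as np

def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b, _ = np.histogram(baseline, bins=edges)
    r, _ = np.histogram(recent, bins=edges)
    b = np.clip(b / b.sum(), 1e-6, None)  # avoid log(0) in empty buckets
    r = np.clip(r / r.sum(), 1e-6, None)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(0)
pre_covid = rng.poisson(lam=1.0, size=5_000).astype(float)  # short-stay bookings
lockdown = rng.poisson(lam=4.0, size=5_000).astype(float)   # everyone books them

score = psi(pre_covid, lockdown)
print(f"PSI = {score:.2f}")  # > 0.25 is a common 'retrain now' threshold
```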
Do you have a certain way of dealing with scenarios like that, where you have an accumulation of cases, scenarios where many clients all fail at once?
Yeah, that's always a difficult question for an insurer. What can you do with earthquake risk when all your clients are in San Francisco? The thing you have to do is basically think about these accumulation scenarios and then set yourself a limit.
And for us, these accumulation scenarios are the typical ones you would think of: basically, most AI models use one of the large foundation models, like OpenAI, Anthropic, and so on. So there are not so many. And if there is an update with a performance degradation, say the performance in general gets better, but in one part, for example in IP infringement, it gets worse: then suddenly all your IP infringement policies are at risk, or at least at higher risk than you assumed when you were pricing them. And the same if you're insuring several fraud models or IT security models.
If there's a new attack vector, all of the models will be tested at the same time. And if it's really a weakness of these models, or they don't recognize it because they just don't know it, they haven't seen it in the wild before, then basically they will underperform at the exact same time.
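A minimal sketch of the accumulation bookkeeping Peter describes: group policy limits by shared foundation model and flag any group whose total exceeds a self-imposed aggregate cap. The portfolio, model names, and cap are invented.

```python
from collections import defaultdict

policies = [
    {"insured": "legal-ai-a", "foundation_model": "model-x", "limit": 2_000_000},
    {"insured": "legal-ai-b", "foundation_model": "model-x", "limit": 3_000_000},
    {"insured": "fraud-ai-c", "foundation_model": "model-y", "limit": 4_000_000},
]
AGGREGATE_CAP_PER_MODEL = 4_000_000  # self-imposed limit per foundation model

# Sum exposure per shared foundation model across the whole portfolio.
exposure = defaultdict(int)
for p in policies:
    exposure[p["foundation_model"]] += p["limit"]

for model, total in exposure.items():
    flag = "OVER CAP" if total > AGGREGATE_CAP_PER_MODEL else "ok"
    print(f"{model}: {total:,} ({flag})")
```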
So what can you do? Not too much, except set some limits: for example, in fraud models we insure only up to a certain number on an aggregate level. And if we insure IP infringement, we try to have a mix of different foundation models, so that not everybody is using, say, the Anthropic model. – Yeah, maybe coming back: you said that you have experts evaluate the client's or vendor's model. There are obviously also some challenges with these models that can only arise over longer periods of time, like concept drift or bias, that are not immediately apparent.
How do you deal with that? Do you limit the time interval that you offer the insurance for, or what kind of measures or ways do you have to deal with that?
Yeah, basically all our insurance policies would be one year. Here as well, we would evaluate the team and the velocity with which they can change the model. Let's assume COVID happens again. Is there anything they can do? Can they retrain the model pretty fast and update the model the next day? How are they monitoring the model? How are they developing it? How are they deploying it?
This we call the whole MLOps. This would be very important for us, as well as getting a feeling for the team, trusting the team and the experts and what they are doing. – So you mentioned already that the sector where you see the most traction at the moment is IT security. Are there any other fast-growing sectors that you can see already at the moment? – It's a hard one, yeah. So in the usual machine learning, as you already see, this IT security and fraud detection field is really established. In gen AI, I'm hoping for IP infringement and legal AI.
But I think the space is still young. So I'm convinced that if we sit here together in two or three years, there will be other use cases that we don't think are possible today. It depends as well on how these foundation models develop. – And then, I think at the very beginning you already mentioned that the current legal frameworks are also adapting. So there could also be some form of silent coverage in existing contracts or existing laws and so on.
Is this something that you also have touchpoints with?
Yeah, exactly. When we talk to corporates about covering these AI risks, the first question is: are we already covered or not? And often neither the client nor we nor the broker knows, because when the policy was first established 10 or 20 years ago, there was no AI. So there's no language which excludes AI or includes it. And so we call it silent AI, because it is silently covered or not covered. And here we definitely think this will change in the next years.
And this is what happened with cyber as well. Cyber was silent as well; people didn't know whether it was covered or not, until it went on to be excluded everywhere and to become its own line of business. And we see very strong pushes, especially from the US, to make the wording more transparent and clearer. And this is in the interest of everybody, from the client to the broker to the insurer: let's just make clear what is covered and what is not. And we now see a lot of policies where AI is either explicitly excluded or affirmatively covered. So both are possible.
Do you think in five years from now AI insurance will be a standard part of enterprise AI contracts?
That's a tough one as well. I think both ways are possible. One is that AI is excluded from standard products and becomes its own line of business; I can see this happening. This would be the cyber way, similar to how the cyber policies developed in the past. Or you can as well argue that these AI policies get integrated into traditional policies, since AI is everywhere anyway, so it doesn't make sense to separate it. It will as well depend on whether we see large losses from AI; then it's more, okay, let's separate it. But if the losses are small and there's a soft market, then clients and brokers will push to include these risks in the policies.
– And what are currently your biggest markets? Is it the U.S. or Europe or Asia?
– The focus markets of our team are the U.S. and Europe. Saying that, if somebody from Asia or South America is calling us, of course we will insure it. But basically, where we go to conferences, where we try to get a lot of visibility, is the EU and the US. – Yeah, makes sense.
Do the AI laws, regulations, legislation and so on play a role between those two markets? Do you see a difference there between the US and Europe?
– No, because our product is usually not regulation-driven. In most cases, it's startups who want that coverage to bridge this trust gap, to get vetted. This is basically the main motivation. – Yeah.
What advice would you give AI startups when they want to be insurable?
– Yeah, just feel free to give us a call. Just write us on LinkedIn.
Always reach out. We are always happy to talk with people from the ecosystem, even if you think you're not there yet. Otherwise, focus on your use case and your performance; that is anyway the most important thing. What we then need, if your use case is flying, is just a little transparency about your error rate: how do you measure performance, what model are you using? But that's usually not the critical part here. The main issue really is the market demand for your model and for insurance. Everything else we will work out. – And where can our listeners learn more? – Our team has its own website. If you put "Insure AI" and "Munich Re" into Google, you will find it. We have a lot of case studies; we talked about two cases here, but we have infrastructure cases, IT security cases, so I think it makes a lot of sense to look around there. We have some white papers as well, describing silent AI and where we see the AI insurance market. And our risk assessors and data scientists, when they have some time to work on academic topics, write papers as well, on the relationship of AI and insurance, which you can find on arXiv. Otherwise, again, the offer stands: feel free to reach out on LinkedIn and follow the team for updates. – Sounds good.
So that concludes today's episode. Peter, thank you so much for joining us today. It was an absolute pleasure talking to you. Thank you also to my co-host, Philip Diesinger, and of course to you for listening. And before we do leave you, I just want to quickly mention our industry publication, the Data and AI magazine. It's packed full of insight into what's happening in the world of enterprise data and AI. And there is actually a feature in the current issue, the September 2025 issue, around a brilliant three-way collaboration involving Munich Re and GoTrial. It's a very, very interesting article, on the front cover, and you can subscribe to the magazine for free at datasciencetalent.co.uk/media.
Do check out our other episodes at datascienceconversations.com, and we'll look forward to having you on the next show.