
Episode 1: Cyber Pulse - The Evolving Threat Landscape: Is AI Reshaping Cyber Insurance?
You can also listen to this episode on Spotify, Apple Podcasts, Amazon Music, or YouTube.
Host: Rianna Mistry, SEO & Content Specialist at CyberCube
Guests: William Altman, CyberCube’s Cyber Threat Intelligence Principal & Doug Fullam, Principal Actuary at CyberCube.
Transcript
00:12
Rianna Mistry: Welcome to the CyberCube podcast. You're listening to our Cyber Pulse series, where we help the insurance industry stay on the pulse of cyber security trends and the threat landscape. I'm your host, Rianna Mistry, Content Specialist at CyberCube, and this episode is all about AI. Now, artificial intelligence continues to be the hot topic across industries, and for good reason, but at CyberCube we've been looking at how AI will impact the cyber insurance industry specifically.
00:42
To discuss this, I'm joined by William Altman, CyberCube’s Cyber Threat Intelligence Principal and Doug Fullam, Principal Actuary at CyberCube. Thank you both for joining us. Could you introduce yourselves and let the listeners know more about your role at CyberCube? Let's start with you, William.
William Altman: Yeah, thanks for having us today. It's great to be on the podcast, getting ready to talk about artificial intelligence, the threat landscape, so a lot of exciting stuff coming.
01:08
I'm a Cyber Threat Intelligence Principal here at CyberCube. So I look after our threat intelligence service called Concierge and then help our clients and anyone interested in our products and services understand what we do from a cyber technical perspective. And I've been with CyberCube now for almost five years. So I come from a background mostly in cyber security and not insurance, but
01:31
you know, as I'm here at the company for longer and longer, I feel like I can't distance myself from that insurance label too much anymore. Even though I've never worked at a big carrier or a big reinsurer myself, I've spent the last five years supporting those clients with our Threat Intel services. So really happy to be here and excited to talk to Doug today.
Doug Fullam: Thanks. I'm excited to talk to you as well. As Rianna said, my name is Doug Fullam, Principal Actuary here. I focus mainly on building the models themselves.
01:59
I've been in the cat modeling space for over a decade at this point and came from the nat cat side, but I'm now focused on the cyber side. So I have a complementary experience relative to William here. I've spent a lot of time building models in the insurance space and things of that nature. And over the last few years that I've been at CyberCube, I've expanded my cyber domain knowledge and cyber skill set, building out models in that space to understand the risk and the impact that it might have.
02:24
Rianna Mistry: Great. Thank you. As you both touched on, CyberCube provides modeling and analytics for the cyber insurance industry. Our listeners will want to know more about how AI is affecting the threat landscape. But before we get into that, Doug, could you start us off and talk about how AI is currently impacting modeling?
Doug Fullam: Yeah, it's an interesting question on a lot of different levels, and it's having an impact in a few different ways. But the one thing I'd like to frame, especially as we think about AI, is that it wasn't like a light switch flipped a couple of years ago when ChatGPT came online and
02:53
all of a sudden the whole world was different. Realistically speaking, AI has been around for many years now, decades at this point. We've been using it. You've probably been using it in your own daily life without even thinking about it. You open the weather app on your phone; there's probably some AI model built to help you understand what the weather is going to be. Now, in terms of the impact here in the cyber space, it's been an evolving process. There have been tools out there.
03:23
Tools are getting more sophisticated. The way that attackers and defenders use those tools is changing over time. And so what we're really trying to understand is that evolutionary process. As we go through it, what does that mean? How does that change the landscape? And does it give an added advantage to the attacker side, or maybe an added advantage to the defender side? And obviously the answer isn't that simple. There are a lot of moving parts to this, and it does change pretty regularly.
03:50
And so we need to think about that process in the coming period. Now, a lot of times when we think about insurance in the P&C space, we're only looking out a year, two years, three years, and not necessarily way down the road. That being said, we do need to prepare for how that process happens, and we do need to prepare for the impact. So what I want insurers and the rest of the industry to take away is: how do we think about that process? How does it maybe change trends or patterns out there? And is it necessarily going to drive one thing up or one thing down?
04:19
And I think the real thing is we need to think about the oscillation that can happen. So I'd love for insurers to think about it as not just one thing. It's not all good or all bad. It is many things with many moving parts, and maybe some of them balance each other out in the end, but there are probably things that we need to think about and track. So we are tracking the evolution of how these tools get used, and we're thinking about that in the models. I do also want to leave people with the fact that the models inherently capture some level of AI risk, in the sense that they're capturing how
04:48
often things happen and how severe they are, and we're modeling those. We're not explicitly modeling AI. We're not designing things around that framework explicitly, but as attackers or defenders use these tools, they inherently impact claim frequency and severity, and they do make it into the analysis and the results.
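(A minimal sketch of what Doug describes here, the collective-risk idea where AI effects show up through recalibrated frequency and severity rather than an explicit AI term. The Poisson/lognormal setup and every parameter below are illustrative assumptions, not CyberCube's model.)

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_losses(freq_mean, sev_mu, sev_sigma, n_years=20_000):
    """Collective-risk sketch: Poisson claim counts, lognormal severities."""
    counts = rng.poisson(freq_mean, size=n_years)  # simulated claims per year
    return np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])

# Illustrative, made-up book: ~2 claims/year, ~$300k median claim.
baseline = simulate_annual_losses(freq_mean=2.0, sev_mu=12.6, sev_sigma=1.2)

# If AI-assisted targeting lifts observed claim frequency by, say, 15%,
# recalibrating the frequency parameter alone carries the effect into
# modeled losses -- no explicit "AI module" required.
with_ai = simulate_annual_losses(freq_mean=2.0 * 1.15, sev_mu=12.6, sev_sigma=1.2)

print(f"mean annual loss: {baseline.mean():,.0f} -> {with_ai.mean():,.0f}")
print(f"99th percentile:  {np.percentile(baseline, 99):,.0f} -> {np.percentile(with_ai, 99):,.0f}")
```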
William Altman: So Doug, what I heard you talk a little bit about there were considerations that you're taking into account when thinking about
05:13
how AI is going to impact cat modeling going forward. One of them is the threat landscape. I mean, it's quite literally what threat actors are using AI for, and whether or not at some point they'll have a distinct advantage over defenders. Would you agree that's something you're following today?
Doug Fullam: Yeah, I would definitely say we're following that. I do agree, and if you think about it, what side of the fence do you want to be on? Attackers need one avenue, and defenders need to defend against all avenues.
05:42
It does make sense from that perspective as well. Yeah, it's a good way to think about it.
William Altman: Yeah, it is a really interesting point. You know, we have to think today about how these threat actors are using AI, and whether we're looking at just efficiency gains in their attack capabilities or a total revolution in terms of new kill chains and new systemic events that we haven't considered. You know, we're not there yet.
06:07
I think the general consensus, at least that I've heard amongst clients and folks I talk to, is that 2025 is not the year that AI supercharges and revolutionizes cyber attacks against companies. Meaning that we're not likely to see defenders go on the back foot in a meaningful way for a significant period of time due to AI imbalances between defenders and adversaries. And we're also not likely to see threat actors
06:37
perpetrate totally unforeseen cat events, or even the worst cat events that we can model, in 2025 because of AI. But we're still likely to see some efficiency gains along that road to revolution, if we ever get there, where threat actors are already today gradually improving their existing capabilities rather than completely replacing them.
07:01
It's more about efficiency when it comes to target selection. I think this is a really big role for AI today: when threat actors want to scan the internet for credentials that have been leaked and login portals that match those credentials, or when they want to find vulnerable assets exposed on the internet at scale and be able to target those with exploits and malware at scale.
07:27
These are the types of things that are now being made increasingly possible with AI. It's really important to note that bad guys have been good at cyber attacks for a long time without AI. They don't really need AI to be successful at this. However, AI is going to create a new class of threat actor that's not as technically savvy, but still capable of perpetrating meaningful, significant cyber attacks that cause insurance losses.
07:57
So that group of threat actors is likely to expand, and that could have some impact on the frequency and severity of attacks moving forward. But I don't think that happens in the near future; we still need to see the AI systems get more capable, more autonomous, and cheaper, basically more accessible.
Rianna Mistry: And in what ways are cyber threat actors using AI in their attacks?
08:22
William Altman: So really it's about automated target selection today, and AI in social engineering and deepfakes. Those are the primary ways that threat actors are using the technology. I think in the future we'll see more AI-powered exploits, polymorphic and adaptive malware, and whole new kill chains developed as a result of AI. But this is the long-term future of the technology.
Rianna Mistry: Okay. So that's a long way away. What about in the near future? What advancements do you anticipate?
08:51
William Altman: I think it's important for us to discuss AGI. When we talk about AGI, what we mean is artificial general intelligence. This is the day when AI models are able to perform intellectual tasks faster, better, and more efficiently than human beings can. And I don't know if we'll ever get there, but it's important to understand where we've come from on that road. Current AI systems are highly specialized: GPT models,
09:20
self-driving cars. We've made a lot of progress in these narrow AI fields. This is essential before transitioning to general intelligence, but it isn't a guarantee that we'll actually get to AGI. I'm curious, Doug, what are your thoughts? Are we going to get to a point where artificial intelligence is just so much better and more powerful than human beings that it's really AGI out there protecting us and defending us against other AGI? Or is that more of a science fiction novel?
09:50
Doug Fullam: I think the answer is maybe. There are obviously going to be advancements in many different ways across the spectrum. It also depends on what we mean exactly by AGI, or even ASI when we talk about superhuman intelligence, and we want to make distinctions there. We should start with the premise that even understanding human intelligence is an evolving field, and we are still trying to figure out what that means exactly,
10:14
let alone what computer intelligence is and its ability to outpace us. Obviously computers even today are much better at certain things than we are. We're using them for many things. The systems will continue to evolve, and they will continue to be, in a lot of ways, more and more integrated into our overall processes. The question is, can they surpass us? They're going to continue to surpass us in specific areas, maybe getting to a point where there's some sort of parity between
10:41
the human and the computer, or even reaching that ASI, i.e., better than the human in general. There could still be plenty of avenues, even in that spectrum, where humans just do better as a general statement: the AGI or ASI is better than us at a thousand tasks and we're still better at a handful of tasks, or however you want to break it down between the two. But our systems will get more and more capable. And I think there's also a component of integration with other technology over time, whether that comes from,
11:11
you know, quantum computing, or potentially human-to-computer connections. We see what's being done at certain companies where there are more direct connections there. Even today we are starting to integrate technology within people, for tracking, or pacemakers, or whatever the case may be, and that will likely continue to evolve. There'll be a blend, and where the human stops and the system begins will be a bit of a question in the future. But maybe…
11:41
William Altman: Right, right.
Doug Fullam: Because, you know, as much as I'm happy and excited about building models, and we do build various AI models here internally for different purposes, at the same time it does worry me. Maybe I've spent too much time reading all the scary dystopian futures, but at the same time there are great things that can come from it and from leaning into that space.
William Altman: Yeah, absolutely. I think you really nailed it: the trajectory of the technology would suggest that we will get to AGI eventually, but that timeline is still very uncertain.
12:13
The form AI will take in that final era is very uncertain. You mentioned, you know, integrating with humans, integrating with quantum machines. There are also a bunch of issues and challenges before we get to AGI. You've got computation and scaling issues.
12:32
Even today's most state-of-the-art models require vast resources but still fall short of AGI. There are also issues around cognition and human reasoning. We humans are very good at intuition, common sense, emotional intelligence. No AI can master that yet today, and I don't think we're that close to actually getting there. And finally, I think the big challenge here is alignment and control.
12:56
Even if AGI is developed, will it be safe? I think this is where your concern comes in: I want it, but maybe I don't want it. You know, I've seen the science fiction movies. We just don't know what that final form will be, and we don't know if there's going to be any meaningful regulation to control it in the future at all. So I think that does hold some future implications for insurance and for companies using these technologies. My hope is that the
13:22
good guys have just as good, if not better, technology than the bad guys. That is kind of my only real hope for the development of the tool, because I think it's going to happen whether we want it or not. I think it's coming, and I think it's best to prepare for it. So insurance companies should start thinking about this stuff now. Reinsurers and cat modelers don't need to upend models today. I think we've said this: 2025 is probably not the year that
13:49
AI revolutionizes cyber attacks or cat modeling in the space, but there's a lot to consider on the road to AGI and on the road to revolution that shouldn't be ignored by cat modelers either.
Doug Fullam: Well, I think it's also worth highlighting that point about the trajectory of where it's going. You know, we are talking about 10, 20, 30, 40, 50, a hundred years into the future on these kinds of concepts, and obviously there's a lot that can go left or right on that, and things that in my head could be disrupting…
14:17
If you look at how those models have evolved over time and the amount of user data involved, or just general data acquired, most of the firms at the top end of the AI space are effectively running out of data. And there are different things they're trying to do, whether it's creating synthetic data or using other methods to improve the technology. We've even seen how people are redesigning the models themselves, maybe to use less data, so that you can improve in that space.
14:45
But, you know, do we hit a roadblock? That's one of the big questions in my head about AGI and ASI: do we hit a roadblock where there is just no more information to train the system? And that intuition you're talking about, which the human has been better at overall than a system, is it that we just don't have any more information to get past it? Theoretically, if there were unlimited data, we could, but maybe it's going to be hard, or it's going to slow down the pace of development in that space.
15:12
I don't know. That's where my head goes on the biggest challenge to get through. And the other big challenge is one we both touched on: what does it actually mean to have human intelligence, and where can we merge those two things to get the best of both worlds, in the sense that we can lean into what the human is fantastic at and lean into what the system is fantastic at, and try to get the biggest bang for our buck in that process. And I think that's what we really need to think about in the next 10, 20, 30 years: those kinds of impacts.
15:42
Rianna Mistry: And what about for insurers?
Doug Fullam: In the insurance space, I mean, there are greater and greater models, greater and greater use. On the other side, from a modeling standpoint, there are lots of things we can use to understand what that might mean from a loss standpoint, and lots of things we can do to build better models using the tools that are out there in the AI space. So, not saying everything is all rosy, but there's a lot we can do, and that excites me from a
16:09
risk assessment standpoint and from a model-building standpoint, because of that evolution. And, you know, in the modeling space, for example, we can start thinking: okay, what happens if certain things start to be used? Say we start getting better targeting by attackers, and they start using AI to identify weak points and vulnerabilities at companies and exploit them a little more efficiently. We can start using our models today
16:39
to create sensitivities around what that might mean and understand the targeting in that space. From your perspective, because you've spent a lot of time talking to individuals in this space, the people worried about it from the ERM standpoint or the experts in the room, how do they think about those maybe moderate changes in the future and what that might hold for their book of business, or the way they think about pricing things, or what have you? It would be good to understand that from your side.
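(A minimal sketch of the kind of sensitivity run Doug describes: scale the assumed attack frequency and severity and see how modeled losses move. The book parameters and multipliers are hypothetical, chosen only to illustrate the mechanic.)

```python
import numpy as np

rng = np.random.default_rng(7)

def annual_losses(freq, mu, sigma, years=20_000):
    # Poisson frequency, lognormal severity: a standard aggregate-loss sketch.
    return np.array([rng.lognormal(mu, sigma, n).sum()
                     for n in rng.poisson(freq, size=years)])

# Hypothetical baseline book: 1.5 claims/year, lognormal severity.
BASE = dict(freq=1.5, mu=12.0, sigma=1.4)

# Sensitivity sweep: the frequency multipliers stand in for "AI-assisted
# targeting makes attacks land more often"; the mu shift (+0.1, roughly +10%
# median severity) stands in for costlier claims. All values are assumptions.
for f_mult, mu_shift in [(1.0, 0.0), (1.1, 0.0), (1.25, 0.0), (1.1, 0.1)]:
    losses = annual_losses(BASE["freq"] * f_mult, BASE["mu"] + mu_shift, BASE["sigma"])
    print(f"freq x{f_mult:.2f}, sev +{mu_shift:.1f}: "
          f"mean={losses.mean():,.0f}  p99={np.percentile(losses, 99):,.0f}")
```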
17:05
William Altman: Yeah, I think a lot of what people are concerned about is dynamism in the threat landscape. Everyone says this: the threat landscape moves fast. Threat actors are changing tactics from one attack to the next, trying to get better. That's something people aren't as experienced with in nat cat. We say this at CyberCube: the hurricane doesn't look back and say, okay, how can I do that better next time? Threat actors do. So that dynamism creates some
17:32
cause for concern amongst modelers, who also have to maintain stability in those models so that they can maintain stability in the markets the models underlie. And so I think there's a fundamental contradiction there, at least on the surface. And AI seems to induce greater fear about that dynamism and uncertainty. People sort of think, well, it's simply going to make things faster and easier for the threat actors,
17:59
who will change more quickly, so our models will be outdated faster. I actually disagree with that. I think threat actors are by and large doing a lot of the same things they've always done to break into networks. Every so often, technologies are introduced that create efficiencies and make things work better for them. Internet scanning technologies like Censys and Shodan and others didn't always exist, and threat actors were still able to locate
18:27
vulnerable systems on the internet. Now they can do it at scale using systems that are not primarily AI-driven. I think in the future we'll start to see some AI-driven scanning, which is concerning in the sense that threat actors could target hundreds or thousands of companies at the same time, perhaps more effectively than they can now. These are the types of things we should follow, and we should start to look at how our models can adapt to them, but they don't
18:56
totally throw our models out. I think a lot of what we model at CyberCube is still reflective of what threat actors are going to be doing in the next five years, even with AI. They might just do it in a little bit faster way with a little bit more success. So tweaking some things here and there is important. Staying on top of the risk is important, but
19:17
panicking and throwing things out for a wholesale redesign is not necessary as a result of AI. So those are some of the conversations I'm having around this technology and this topic today amongst our clients and, like you said, various experts in the room.
Doug Fullam: Yeah, I think especially the efficiency gains that you talk about are interesting in my head. Like, you know, there are still economics playing out here, in the sense of: okay, I want to attack a place, I want to do it reasonably efficiently. Can I
19:47
scope out where I need to attack, and where's the vulnerability? I don't want to waste time, and AI might lean exactly into that: how can I waste less time, how can I determine what's a more valuable target to attack, or what's easier to penetrate, or where I should focus my efforts? It doesn't necessarily change who gets attacked and how often they get attacked; it maybe just reduces my overhead of doing that process if I'm the bad guy. So that's also an interesting thing: it doesn't necessarily change your losses, it just changes the cost structure for the bad guy…
20:17
[William Altman: It does. Yeah.]
Doug Fullam: Which I guess, in the long run, you know, all costs are variable if you're talking to economists. So that changes a little bit, but it's not an immediate change in the risk profile. It's also an interesting point.
William Altman: Yeah, I would agree with that. I think today the name of the game for us on the threat intel side is to calm down the hyperbole and the hype around AI and its impact on the threat landscape,
20:45
to recognize it for what it is, to identify where it's actually making a meaningful impact in the kill chain across most attacks, if any, and then to talk about it realistically and not give in to marketing hype. So that's what we're doing on the team here. I hope people can appreciate that and understand that's the way we're approaching it here at CyberCube.
Rianna Mistry: Yeah, that feels like a really sensible response. And I think it's important we understand the wider societal response to AI as well.
21:12
You mentioned regulations earlier. Could you expand on that a bit more?
William Altman: Regulation has a role to play. I think people tend to think of regulators as constantly behind the puck when it comes to new technologies. I wouldn't disagree for the most part, but there are some interesting things happening out there. There's the EU AI Act, which is probably the most comprehensive framework for categorizing
21:35
AI risk levels and restricting harmful uses. As with other regulations, it's only as good as its teeth and enforcement, but at least there are folks out there thinking about this and about how we could properly regulate and structure these systems so they don't have negative consequences and blowback. In the U.S., I think it's a lot more sector-specific and voluntary. There are a lot of executive orders that set guidelines, but no federal laws yet on AI regulation.
22:03
I think one of the big issues here is that people are afraid to overregulate it, stifle innovation, and potentially lose the geopolitical arms race taking place between the US and China when it comes to AI. In China, they're heavily regulating AI domestically, especially in content generation, in the news and things like that. In the US, there are still private sector players like OpenAI, Google, and Microsoft.
22:32
They very much call for AI safety measures, but they also lobby for regulations in their favor at the same time. So I think we've got some competing interests when it comes to regulation that may cause all of these regulations to backfire. And essentially we won't actually get any meaningful regulation and we'll just stifle innovation and potentially lose the arms race around AI.
22:57
I really hope that doesn't happen. It seems to me like the U.S. is potentially opening up a little bit more to less regulation around this stuff. We'll see.
Doug Fullam: Yeah, no, it's all challenges on the development side.
William Altman: Yeah. Definitely some challenges.
Doug Fullam: There's also the regulation of the insurance market. How do you regulate for this kind of stuff? How do you think about reserving? Because if we go into this space where we think AI may influence frequency or severity, and there's this ebb-and-flow process to it, that's different from traditional…
23:27
where insurance was born: insuring against fire damage to a home or, to be more broad spectrum, mortality risk and life insurance, which involves much longer time horizons. Traditional regulations there usually like to set in stone how people operate and how they think about setting premiums or reserves, all that kind of stuff. I think there's just an interesting question of whether you can have more dynamic regulations around cyber in particular, where there's more
23:57
potential oscillation in how much risk you can take on and how much risk you can cede away, and things of that nature. And because the landscape is ever evolving, pluses and minuses, is there a way we can even think about a dynamic system? We see dynamic pricing in the financial sector: you can go and buy a stock or some sort of options, puts, calls, et cetera,
24:21
and the price associated with them changes with the ebb and flow of the general information in the market and the risk being taken. So could we see that in the insurance space as well, where there's more openness in terms of allowing for those ebbs and flows in pricing or reserving or capital efficiency? I think that is at the very far end of where regulators like to go, or will even think about going. But it's definitely a conversation I've had a few times with people,
24:49
and it's definitely an interesting one, because this space is obviously very different from your traditional home insurance or your traditional auto insurance or what have you. So maybe regulators will open up a little bit more to allowing for that kind of stuff from a reserving, pricing, et cetera, standpoint.
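(A toy sketch of the dynamic-pricing idea Doug raises: a premium that re-rates as a threat index moves, the way option prices move with market information. The loadings, the index, and the whole rating scheme below are hypothetical illustrations, not any actual or regulator-approved rating plan.)

```python
from dataclasses import dataclass

@dataclass
class DynamicQuote:
    """Toy dynamic-pricing sketch: the premium re-rates as a threat index moves.

    Everything here is a made-up illustration of the "ebb and flow" idea,
    not a description of any actual or regulator-approved rating plan.
    """
    base_expected_loss: float      # annual expected loss when the index is 1.0
    expense_loading: float = 0.30  # assumed expense load
    risk_loading: float = 0.15     # assumed risk/uncertainty load

    def premium(self, threat_index: float) -> float:
        # threat_index > 1.0 means the landscape looks hotter than baseline,
        # e.g. a surge in AI-assisted credential-stuffing campaigns.
        expected_loss = self.base_expected_loss * threat_index
        return expected_loss * (1 + self.expense_loading + self.risk_loading)

quote = DynamicQuote(base_expected_loss=120_000)
for idx in (0.9, 1.0, 1.2):  # illustrative index values
    print(f"threat index {idx:.1f} -> annual premium {quote.premium(idx):,.0f}")
```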
William Altman: Yeah, very interesting. It comes back to that idea of dynamism and how you keep up with a dynamic space. Dynamic pricing would be the next best thing. It
25:14
gets into the idea of AI not only impacting potential losses, but impacting actual workflows in the insurance space as well. Obviously there's a lot of opportunity there across every industry, and insurance is really no exception.
Rianna Mistry: Thank you both for a really insightful conversation about the impact of AI and what cyber insurers need to be thinking about going forward. Before we finish, do either of you have any closing remarks for our listeners?
William Altman: Yeah, sure. Certainly. There are
25:44
a couple of top-level points I want to drive home to our audience. I know we also mentioned this idea of quantum computing integrating with AI; Doug talked about it, and I mentioned it. I just want to point out that's an extremely far-off topic, kind of a fun one to think about and opine on. We are far from having large-scale, fault-tolerant quantum machines that can
26:07
integrate with and supercharge AI model training or drug discovery, or have an impact on cryptography security or new AI architectures. We're just far from it. It's an interesting topic, though. I've always thought about that day when we might have a machine with the processing power of a quantum machine and the ability to learn of something like an AGI. You know, what does that actually do for us? I don't know
26:35
if we can actually answer that today, but it seems like the pinnacle of our modern computing technologies. Something we're thinking about over the next decade or more, to be honest. That being said, I think far more relevant for us today is this idea of AI creating efficiency gains for threat actors, especially in the early stages of the kill chain: reconnaissance, targeting, credential roundups, getting initial access by exploiting
27:05
credentials through stuffing or brute-forcing attacks. These are the types of things threat actors have done for a while, but they're getting better at them, more efficient. The techniques and tools are open to a wider array of folks who can now do this nefarious activity with ease. So it's worth tracking this year, but again, this is not the year we're likely to see total revolution across kill chains and cybersecurity, or in systemic cat modeling. So that being said, just
27:34
get in touch with our team if you're interested in how cyber threat intelligence can make an impact on your underwriting, reinsurance, or even broking strategies. We'd be more than happy to talk to you. We've got really great clientele today making use of this information, so we know it's valuable for the reinsurance community to get in front of threats and see how they can adapt accordingly to avoid losses. And that's what we're here to help them do.
Doug Fullam: So in this space, you should be mindful that AI is not new, machine learning tools are not new.
28:04
They've been used for decades at this point; you use them in your everyday life. And what's happening more and more is that they are creating efficiency gains, maybe for threat actors or, alternatively, for defenders, who are getting more tools to understand their risks, understand what they can do to mitigate them or where their vulnerabilities are, and close them down. That information is all making it into what we see in the actual end result, on the frequency side or the severity side of
28:33
the event, and it will obviously continue to make its way into that, and we'll grow and evolve the models to account for those risks in aggregate. That being said, we are still observing to see if we need to think about this more explicitly, and we're constantly thinking through those lenses. It helps that the team builds machine learning models on a regular basis for different purposes, so the things we're learning about, we're thinking about how they could be used. We're also using them in our space for other purposes, hopefully on the good side of the coin,
29:03
helping you understand that risk. So we do have people here who are not only thinking about the risk, but also building models in that space, so they have direct experience with it. If you are interested, like William said, if you want to have a longer discussion on that, we're happy to have that conversation and to think things through as we observe what's going on. Every week, every month, we look at the new threats that are happening, think about them, and think about how they might influence the model results in
29:32
the next version update and what have you. And it is something that we want to work with our clients directly on, to see where they need to understand the risks, but also to see how they're dealing with it, how they're thinking about the pricing or reserving or capital management process. Because we're trying to design tools that let them make decisions as efficiently as possible, and from that, hopefully make a better market overall for the broader community. We're happy to continue these conversations and look forward to talking with people.
30:03
Rianna Mistry: Thank you for listening to this episode of the CyberCube podcast where we break down cyber risk, insurance and everything in between. If you enjoyed today's conversation, make sure to subscribe on your preferred listening platform and tune in for the next episode. And don't forget to check out www.cybcube.com or find us on LinkedIn or X for extra content and updates. Until next time, stay informed, stay resilient and keep shaping the future of cyber insurance.