Join Earl Duby in another episode of Big Reports in 5 Minutes, where he dissects the ‘State of AI Cybersecurity’ Darktrace report from 2024.
In this brief but detailed video, Earl highlights the key findings in the report. Discover why 96% of security experts believe AI is crucial in combating AI-driven threats. If you’re looking to grasp the current trends and future directions in AI cybersecurity without reading the full report, this video provides the essential takeaways.
Want to learn more about Artificial Intelligence and how it can impact information security? Join our webinar with eSentire on July 24th at 11am.
Transcript:
(0:04) Hey, hello again. This is Earl Duby, back with another episode of Big Reports in 5 Minutes, give or take. (0:13) So today I want to talk about this pretty cool report that I found from Darktrace.
(0:18) This is called The State of AI Cybersecurity from 2024, so it’s still fresh. (0:25) Read through this thing, a lot of good stuff in here, so I just want to go over a few quick points with you (0:30) so that if you don’t want to read the report, I’ll give you the high points. (0:35) So first of all, this report is an amalgamation of surveys that went out to 1,800 security executives (0:43) and security practitioners, so a wealth of data in here, a lot of feedback, (0:49) and there’s some pretty interesting numbers that come out of this report.
(0:53) So just out of the executive summary, there’s three key statistics that I thought were interesting. (0:59) So one of them kind of strikes me as odd: 96% of all these people that were surveyed (1:08) believe that AI is needed to combat AI-type threats, so 96%. (1:17) So there’s actually 4% of people out there who think that we don’t need AI to combat AI.
(1:24) So we’re going to have to go find those 4%, give them a little education, (1:28) because to me there is no way that we can combat AI-centered threats without having AI-centered defenses (1:36) to augment the human blue team that is fighting those threats, so 96% out of this survey. (1:44) Another interesting statistic was, you know, we always have this kind of interesting conflict (1:56) between what practitioners think about things and then what their bosses think about things, (2:01) or the executives think. So in this one, 79% of executives believe that organizations (2:07) have taken proper steps to reduce their AI threat, so this is the threat of AI-propagated cyber attacks, (2:16) so 79% of executives. Meanwhile, only 54% of practitioners, the people that are actually deploying the defenses (2:23) against these AI threats, believe that organizations have taken sufficient measures (2:33) to combat this AI threat. So again, you know, it’s a pretty scary gap when there’s almost a 25-point difference (2:45) between executives and practitioners in terms of what they think they’ve implemented (2:50) to combat the emerging threats that we’re facing.
(2:54) So if you’re a practitioner out there, you know, we’re going to have to find a better way (2:59) of impressing upon our management that we need to do more to combat this rising threat coming from AI. (3:10) And then the last statistic out of the executive summary that I want to talk about is that (3:14) 88% of organizations prefer a platform over a kind of bucket of point solutions. (3:23) So this is kind of interesting to me, that we’ve made this evolution over time, (3:26) because I can remember not too long ago that CISOs prided themselves on the number of, (3:33) you know, startup companies or little point solutions that they had deployed in their environment. (3:40) Now the pendulum has kind of swung the other way, saying, hey, we really want platforms (3:45) because it’s really hard to manage all these point solutions. So, interesting statistics.
(3:53) So as you get into the report, a couple of things stood out, (3:58) and actually I only have, you know, nine more points to get to, (4:01) so hopefully the clock isn’t running out on me just yet. (4:05) So one of the graphics in here talks about the different types of threats posed by AI, (4:11) and I just want to pick out two of these. (4:13) So one of them is deep fakes and building images that people will trust.
(4:19) So whether this is, you know, politicians, which I would never trust a politician, (4:24) or actors, which I would never trust an actor either, (4:27) but as you start to develop these different deep fakes and have them say the things you want them to say, (4:34) it’s going to be very hard for people to discern truth from fiction (4:39) coming from these different types of AI-driven threats. (4:43) And the other thing it talks about is phishing campaigns (4:47) and just how AI is going to drive these phishing campaigns. (4:50) So just kind of keep that, you know, at the forefront (4:54) as we talk through the rest of this report, because these are the things (4:58) we’re talking about when we say AI-driven threats.
(5:03) So 90% of the surveyed people think that AI-powered cyber threats (5:08) will impact their organization in the next one to two years. (5:11) So at least they got the time frame correct, (5:15) because if it’s not affecting you now, which it most likely is, (5:20) for sure it will be impacting you over the next one to two years. (5:24) So 90% of people are on that page, so that’s good.
(5:28) There’s also a very good diagram in here about how AI-powered threats (5:33) could augment phases of the attack chain. (5:37) So there’s a really nice graphic that has the attack chain in there (5:40) and where AI would fall in there. (5:42) Really good for a presentation if you’re trying to put one together (5:45) to convince your management that you need more AI defenses (5:49) to fight these AI-driven threats.
(5:53) There’s also a good diagram in there of the different components of AI. (5:59) And I think this is interesting because we’re in kind of the same spot (6:02) with AI that we were with cloud, you know, maybe a decade ago, (6:05) where people just threw out this nebulous term, cloud, (6:10) and you were trying to figure out, like, what does that mean? (6:13) And in reality, it was just a bunch of servers (6:15) running in someone else’s data center that you were sharing time on. (6:21) When you look at AI, it can also be broken down into very clear components.
(6:27) You have just three components to that, and it’s in this diagram, (6:32) and it’s right here over my shoulder so you’ll be able to see it. (6:36) But you can use that to help educate your team in terms of what AI means. (6:44) And the key piece of that is to protect the algorithms (6:47) because, you know, aside from the data, (6:49) which we’re already protecting anyways or should be, (6:53) the algorithms are really where the magic is at.
(6:56) And, you know, protecting those and making sure that those don’t get modified, (7:01) especially if you’re a big company like Google, you know, (7:05) OpenAI, things like that, protect those algorithms. (7:10) Then it talks a little bit about the inhibitors (7:12) to defending against AI-powered threats, (7:16) and the top three of these things are all related to people and skills, (7:20) which is pretty ironic because those are the top three inhibitors (7:23) to almost every defense that we have out there: (7:26) do we have the people to run the tools, (7:28) and do they have enough knowledge to actually run the tools (7:31) and configure them correctly to combat the threat? (7:35) So same thing here.
(7:36) Again, AI starts to sound like, you know, (7:39) other things that we’ve been combating over the last several years. (7:44) And then 31% of security professionals are familiar with supervised machine learning, (7:51) which is one of the types of AI that this report talks about. (7:55) So you take that 31% who are familiar with one of the types of AI, (8:00) and it’s actually lower for the other types of AI, (8:03) and then you pair that with another statistic that’s in here: only 26% (8:08) of security professionals fully understand (8:11) how AI is used in the tools that they’re using.
(8:15) So this starts to become that barrier to our defenses. (8:21) We have people that don’t understand all of what AI is, (8:27) and then we don’t understand how our tools are implementing AI (8:33) to combat those threats. (8:34) So this is where that learning curve really has to start to change. (8:38) And hopefully, you know, just as we saw with cloud, (8:42) where you had, like, the CSA come out with some frameworks for training (8:46) and some really good resources available for helping people defend (8:52) against cloud-based threats, (8:53) I think we’re going to have to have something come along with AI (8:57) that does the same thing that the Cloud Security Alliance is doing (9:00) in helping bridge this gap between what people know (9:04) about artificial intelligence, what they know about their tools, (9:07) and then how they can use that to combat the threat. (9:11) And then, you know, there are other really good graphics in here. (9:17) I think this report is really good for having visuals that you could, (9:21) you know, hopefully borrow and give proper credit for, since they (9:27) came out of this report.
(9:28) But there’s really some good things in here that you can use (9:31) to put in presentations. (9:32) So with all that being said, I love this report. (9:35) I think you should take time to read it, but if you don’t, (9:39) hopefully this five minutes, plus or minus, (9:42) will help you get what you can out of the State of AI Cybersecurity in 2024.
(9:49) And with that, I just want to remind you that we will be having a webinar (9:54) coming up in July, July 24th, I believe, (9:58) where we’re going to be talking about AI security (10:01) with some good dialogue around different types of threats (10:06) and the different types of defenses that we can put together. (10:09) So please follow the link, register for the webinar, (10:14) and I hope to see you there. (10:15) In the meantime, stay safe out there.
(10:19) Thanks.