AI and the Future of Security with John Dwyer
John Dwyer · February 2, 2024 · 1:04:27
All right, welcome to the Security Cocktail Hour. I'm Joe Patti. I'm Adam Roth. Adam, how are you doing on this fine day?
I'm incredible. It's actually very nice and hot out. No, it isn't. It's like some major storm here. We're like getting in the northeast. They call it the bomb cyclone, right?
The bomb cyclone. Yeah, well, hopefully the bomb cyclone won't take out our internet while we're recording this, because we have a guest today. We have someone coming on: John Dwyer. John, welcome to the show. Good to have you on.
Thanks for having me, guys. I'm excited to be here. I don't know what kind of weather you guys have, but it's cold and rainy here in Pittsburgh, which is shockingly on brand for Pittsburgh.
Yeah, I haven't been there for a while. I went to grad school in Pittsburgh, and I gotta be honest, I haven't been there in 30 years. I've just had no reason to go back.
Where did you go to school? Wait, did we talk about this before? Yeah, I think we did. I went to Pitt. Oh, okay. Yeah. Yeah.
I went to grad school there. And it's funny, because the few people I'd still talk to from there have pretty much all moved out, but I hear it's a tech hub now and there's a lot coming back, you know?
Yeah. Yeah. But I, I mean, like I went to Carnegie Mellon, so I was, you know, running around those dirty streets of Oakland as well.
Well, aren't you special? You went to Carnegie Mellon, not like us second-tier peasants who went to Pitt. So, okay.
It's like an administrative error or something where they accepted my application.
Well, I used to bike through their campus. It was really nice, I gotta say, but never got in the door.
I was in Pittsburgh once. Let me tell you how long ago I was there. I slept in Three Rivers Stadium Park.
Yeah, so that got torn down a long time ago, right? I moved here in 2018, but yeah, Pittsburgh's not bad. It's really starting to grow on me. There's no traffic. I think that's my favorite thing about living in Pittsburgh is that rush hour is from five to six and then it's over. It's not like being in the Northeast where rush hour starts at two and ends at eight o'clock at night.
Yeah. Well, that's it. I lived in Oakland, and one time I had to go downtown for something and I'm like, how do you get downtown? And people are like, drive. And I'm like, well, where do you park? And they're just like, well, just park. I'm thinking it's like Manhattan or something. Yeah, street parking. Like, it isn't crazy.
Yeah. It's completely different. But I actually kind of like that. Maybe that's a sign that I'm getting older, is that it's quiet and I don't need all the hustle and bustle.
Oh, that's cool. So besides enjoying Pittsburgh, tell us a little bit about what you do. It's very cool. It's like, this is one of the cool things.
Yeah, well, it is pretty cool. I'm very, very fortunate. So I work for an organization within IBM Security called X-Force.
Which is well known in the security field.
I would like to think so, yes. If people are hearing about it, that means I'm doing a good job. So we are the consulting arm of IBM Security, and we operate in three main pillars: offensive security, threat intelligence, and incident response. Basically we have a client base that we try to prepare for, prevent, and then respond to incidents around the globe. My particular job is that I head up research and development. So a lot of my work is focused on: okay, we have all of these consulting operations going on. What can we learn from them? What are we commonly seeing from a gaps point of view across our client base? What is a common attack path for the adversaries? What can we learn about the threat landscape by looking at the behaviors of the attackers? And then can we build some tools to make our consultants more efficient and effective?
Yeah, that's very cool. Because I know it's not only fun for you and your people, especially people who are into it. But when you're a customer, you know, I've talked about it before: when you're responsible for protecting one organization, or you're in one organization, you get into it and do all this stuff, but you have a narrow view. You guys see a lot broader stuff. When you're in just the one organization and you get hit with something, sometimes it's hard to tell if it's targeted, if it's just you getting this, or if it's really widespread.
Yeah, I would say it's going to be really hard for me to get out of this kind of work. If you think about it, consulting firms for security have a very unique point of view, especially with incident response. If you get to work in incident response or offensive security, you get to routinely see the attacker be successful, so there's no question about what they were trying to do. In other areas of security, there's a vested interest in stopping them from completing their goals and objectives, so everything that would happen after you stopped them is speculative. When you work in incident response, you come in at the end and then you build that story backwards. So you can say, regardless of the geolocation or the business vertical the company is in, this is what attackers are generally interested in. It keeps you with one foot, or at least one finger, on the pulse of reality in terms of the threat landscape, and that is so interesting to me. I don't think I could ever get out of this game. I don't ever want to be so far away from it that I can't look at real-world data anymore.
Well, you know, Joe and I know a lot of people in incident response and in threat intelligence. But incident response requires a very dedicated individual who's constantly going to travel. Some people have the tolerance for dealing with that traveling; some have families and they really want to be with their families. And some families understand that for the individuals who are traveling, it's part of their life, their culture. So a family has to really be invested in the individual if they're traveling that often.
Yeah. For some people, the best thing for their family is for them to be out of the house from Monday to Friday. I've heard of those cases too, you know.
But to your point, Adam: I came up through incident response, so I'll always have a soft spot in my heart for responders out there. We did this study a couple of years ago about the mental health of security professionals, specifically incident responders, right at the end of COVID. And I just don't think people realize how much these men and women are sacrificing to do IR around the globe. Everyone on these teams has missed birthday parties and weddings and family events, and they've given up their weekends and nights to make sure that companies aren't burning to the ground if they're having an incident, so that we're not going to have supply chain issues. I don't think people really realize the heroic measures these response people are taking. There are IRs happening all the time, and there are a lot of men and women out there really sacrificing their time to work around the clock to make sure things aren't crumbling.
So we've been fortunate enough to have two previous guests, one that delved into IR more than the other: Chris Roberts and David Warshawski. And Chris Roberts literally pours his heart out all the time about mental health, not just around incident response but in general. We spoke about incident response with both those individuals, and it's a stress beyond comprehension when dealing with those threat actors, negotiating and trying to work with them, and trying to help the organization stop bleeding and hemorrhaging and losing intellectual property. So it's very important.
Oh, yeah, I say it all the time. When you're not consulting doing the IR, but you're the person in the company, a defender, when you're the CISO, the head guy especially, it's like the sword of Damocles over you, I say. You know the whole thing with the king? That's what it's like. There's this sword on a string and you never know when it's going to break. And I've experienced it when something happens, and I've seen the moment and the look on someone's face. Like they say, you can see where the soul leaves the body. I remember someone saying once, my life is over, because they got hit with something, or it just popped up, or whatever. It's tough, and in those times... that's why most of us drink.
That's what they call a segue in the business.
They call that a segue. Trying to get better at the segues. I've got to figure out how to throw in the like-and-subscribe segue too. It's pretty ham-fisted. So what do we got here? It's guest's choice, as always. John, call this particular cocktail.
Yeah. So this is my Sunday cocktail of choice when I watch football or whatever we're watching: a bourbon and ginger. It's a bourbon of your choice; I'm using Angel's Envy. I don't know what you guys have.
That's good. I've got, I still got this bottle of bootlegger I've talked about in the past.
Yeah. I saw that on one of the previous episodes.
Yes, that's something some good friends gave to me, who are absolutely fabulous guys. Last time, they caught me saying I don't know who gave me this, so I have to correct that. But.
Yeah, it's a nice, it's a nice easy sip, right? Well, let's see, I've never had this before. I've got to be honest. Cheers.
Oh, it is. I took a little one because I've been drinking a little bit too much lately.
Oh, that's, oh, this is dangerous, man. This is, this is dangerous. That, that totally.
This is the first time you've ever had bourbon and ginger before? It is. Yes.
That really cuts the bourbon and whoa. Wow. Okay. Especially I can see on a hot day or something. Wow. Yeah.
If you mix a little lemonade in there in the summer, like absolutely aces.
So is this what we're serving at the bar when we open our bar?
Yeah.
You know what, if we open a bar, I guess we've got to have like a special menu of like, you know, stuff we've had on the show, like guest choice and all that kind of thing.
Yeah.
No, but you don't order the drink by the drink name, you order it by the episode. You're like, oh, can I have an Episode Two on the rocks?
That might get a little complicated. Hard to remember. I don't know. But whatever.
Well, you know, why not complicate things?
All right. Well, why not complicate things? That's good.
The world's complicated enough. But speaking of complicated, I have to believe that incident response is probably one of the most complicated aspects of what people do in cyber. And correct me if I'm wrong, I've never been an IR person, but there's to-go kits, then there's stuff that the client has to do in advance before you arrive, then there's the forensics aspect once you, I guess, close the IR event, and then during the IR there might be negotiations if it's ransomware. So it's a very dynamic situation.
Yeah, so the thing about incident response, especially modern incident response, that is so interesting is that it's probably one of the most human-centered technical professions you can have. As a responder, I would say the majority of your actions are managing the people, either on the client side or on the response team side, more than the technical aspect of it. You could give our secret sauce away, our IR playbooks, our IR technology, and give them to someone else, but you're not going to get the same level of response. That's because it's about the people, right? The people who know how to manage people, the nerves of steel. So it's such an interesting profession because you're dealing with the human aspect. And with modern incident response, what's so wild is that we have now transitioned into this new era of security where the cyber and kinetic line is so blurred, especially with incident response and things like ransomware. If something goes down, then we're not going to have toilet paper for four weeks. I mean, if you look at the kitty litter shortage or the Clorox shortage that's happening right now, those are the result of a cyber attack. So there are real-world implications that happen downstream in the physical world because of a digital action.
All right, so let me ask you then, John. We've talked a lot, especially with David and others, about the whole thing of, hey, you've got to be a little bit of a psychiatrist, a psychologist, whatever, because one second you're reading logs and disassembling code, and the next second you're walking into a room with executives who are all, you know, pooping their pants. But the kinetic side of it is interesting. Have you really been seeing that more lately in the engagements that you're doing? You are, really, for real, huh?
Yeah, that was a transition that happened over the COVID lockdown. If you think about the threat landscape by and large, let's focus on the cybercrime aspect, because that's something that applies to everyone. The threat landscape, by and large, distills down to an extortion-based attack, either causing a business disruption or disrupting your brand reputation by stealing data. What ransomware highlighted is how much pressure you can create by taking a system offline. We do this thing every year called the Threat Intelligence Index, which is basically our year in review, and since the ransomware boom, manufacturing took first place as the most impacted industry. My entire career prior to this, it was always finance, right? Because finance had the money. That's where the money is. That's right. What ransomware has highlighted is that finance may have the money, but manufacturing has the pressure. Because there's not a line manager on earth who doesn't know, down to the second, how much money they're going to lose as soon as that line stops.
With the whole supply chain and the just in time stuff. I don't know much about manufacturing.
We learned that over COVID: we are operating on these tiny, tiny supply chain margins. We have become so logistically advanced that as soon as something comes off the shelf in Austin, another one is built in Milwaukee, and then it's all shipping, great. And if you mess with that just a little bit, then you have a six-month delay on everything. If I remember correctly, that's kind of what it's called.
Just-in-time manufacturing, I think. So I'll take it to the gloom and doom thing. It's funny, I was just having a conversation with my wife, I'm looking to go for a doctorate, and we were talking with some other people recently about the whole idea behind cyber conflict and global unrest, and how kinetic war will not go away but will get replaced by cyber attacks. And then the real-life scenarios: somebody hitting a SCADA system and taking that SCADA system down, maybe for a power plant. And then that power plant goes offline for maybe three or four weeks. And in those three or four weeks when the power plant is down, we start getting a little bit of a famine because the food is spoiling. Then we get into sanitation issues, then health issues. People start getting sick. People can't get medicine. Medicine can't be manufactured; medicine can't be delivered. There's no more heating or cooling, or oxygen being provided to people in a room by a machine. So it goes into a real-life situation: if we ever get to that next step, not so much cybercrime but war that takes place purely in cyber attacks rather than necessarily kinetic ones, we have real-life issues. So while with IR there is that issue of manufacturing and losing stuff in the supply chain, it's even worse when people start getting affected by it a hundred percent. The only other thing I wanted to add: I've never been an IR responder, but I've been an EMT. As an EMT, you respond to people and you have to tell them that the person is dead, that unfortunately they've passed on, or you have to tell somebody that they're experiencing a heart attack and we have to rush them to the hospital, and you can see the grief in their face.
Now, while I understand IR in cyber is a little bit different, you know, somebody saying, wow, if we don't recover, we're not making payroll for 10,000 people or something like that. That's got to be a real-world impact, not only on the organization experiencing the incident, but on people like you, who have to say, oh my god, if I don't take care of this, there's a real-life situation where this company might go out of business, and tens if not hundreds if not thousands of families are going to have issues paying bills.
Yeah, so a couple of things there. From a conflict point of view, we have kind of transitioned into that area where cyber warfare interacts with traditional warfare. We saw that with Russia's invasion of Ukraine: prior to and then during the invasion, along with the kinetic attack, there were also cyber attacks, with all the wiper malware that went out. And I'm pretty sure with what's going on in Israel, prior to the first physical action there were cyber attacks that led up to it. So they truly are going hand in hand now, and that is because our society is so intertwined with the digital world that you can carry out what historically would have had to be a physical op through digital means. Your second point, though, is extraordinarily accurate. Ransomware had been around before the boom in like 2017, 2018, but usually it spread through drive-by downloads, or before that it was EternalBlue, just sprayed and prayed out there. Then we saw the modern, hands-on-keyboard ransomware attack. I can remember the incident where it happened; I'll never forget it. I was like, what is this? What the heck is going on here? And after that, it was a major incident every three days for two years. Through that time, before we could get our arms around it, we were unfortunately on those phone calls where you have companies saying, I can't do business. All of my systems are encrypted. My backups are encrypted. My backups of my backups are encrypted. I am completely dead in the water. And what do you say to someone like that?
They're asking you for help, but there's literally nothing you can do, because everything is encrypted. It's not like the olden days, when the ransomware was kind of crappy and you might be able to decrypt it. They've buttoned things up since then, and there's no way. What do you say to someone like that? So yeah, it's still probably not at the level of an EMT, but the stress certainly is there.
I would beg to say, even though I'm talking about lives, the level and complexity and the emotions are probably the same. While it might be one person or a multiple-casualty incident, I might be dealing with three or four people. And let me be upfront: I haven't done EMS as a practitioner in a long time, even though I'm still an EMT. But dealing with some pretty traumatic things in life as an EMT, that's bad. Knowing that a hospital has had its whole entire file system encrypted and people's medical records are there, and somebody might be in the middle of an operation with a robotic machine that's being operated by somebody six countries away, which I've heard has happened before, it's got to be extremely stressful. It's got to be.
Hospitals are like a big target now too.
Yeah, we were talking to someone earlier. They used to be off limits, but not anymore. They're going for anything. It's really twisted.
It's very interesting to see how there used to be that honor-among-thieves kind of thing. Yeah. They said, we won't attack schools or hospitals, until the revenues started going down. That's right. And then suddenly it's like, well, yeah, of course we're going to attack.
Yeah, we need more money.
Yeah.
What about the cannabis manufacturers now? I haven't heard anything about that.
I haven't heard anything about that either.
So, speaking of things we may have heard about: IR is there, it keeps evolving, and ransomware has definitely upped it. It used to be a lot more relaxing before ransomware. So, talking about AI, we did something on it maybe, God, it's over six months ago now. Now it's been just over a year since the whole ChatGPT generative AI thing came out. What kinds of things are you seeing there? In particular, are you seeing any of that leaking into incidents? There's all this horror, and people are still trying to figure out what it all means, but have you seen anything leaking into incidents, or getting into incidents involving it? We've heard that the bad guys are using it to help with spam and stuff, and to make themselves more productive. But what are you seeing?
Yeah, so that's, it's such an interesting question. And what I always go back to is like, How would you, how would we know? Like, how would you know, unless you were like sitting next to them while they were doing it, or you had access to their infrastructure and you're actually watching them.
Knowing what tools they're using.
Yeah. Like, how are you really going to know? Instead, the way that I've been approaching this is to say: maybe they are, maybe they aren't, but what really matters is, are their goals and objectives changing as a result of them using some new technology? We haven't seen any evidence that their goals and objectives are changing. From a cybercrime point of view, the objectives remain the same: encrypt systems, steal data, extort you for cryptocurrency. Now let's say they're using AI and large language models for phishing. Does that change our security strategy significantly? No, because a phish is still a phish. Someone has to click, and we may not do an awesome job of that collectively worldwide, but we still know what we're supposed to be doing. So then the question is one of scale. Are we going to have more incidents because they're going to be able to operationalize generative AI to take over some functions that used to be done by humans, just as we would use it on the legitimate side? We want to become more efficient, so we offload some things from humans. But for phishing, we're already in the age of scale. If you zoom out over the last 20 years, the amount of phishing email sent out there has been going almost straight up.
Could we possibly get any more? And just so listeners who are not in security realize it: you may not know it, but what's it up to? It's in the high 90s, the percentage of email that's spam. Most of the email that's sent to you is actually filtered out long before you ever see it. So you're right, the idea of getting more doesn't make any sense. At least from a practical perspective, it doesn't matter.
People ask me that all the time: what are we seeing? How are we seeing criminals use AI? We have not seen any definitive evidence that they are, but they probably are. We ran an experiment to see how good a large language model was at writing phishing emails compared to our professional social engineers. Pretty good. It's not better than humans, but it's pretty close. So why wouldn't they be doing that?
I don't know.
But let's just assume they are. That's a good assumption to make. How does that change what we're doing on a day-to-day basis, or from a strategy point of view in security? It doesn't. Everything still remains the same in terms of how we detect and respond and prevent. It doesn't change even if the Terminator is writing phishing emails.
You know what it makes me think, just quickly, when you mentioned that you have your social engineers trying it? I'm like, jeez, with an LLM, are they able to lay off some of their social engineers?
Oh, I'm sure there's social engineers out there that are like professional phishing writers. Yeah, I'm sure.
Now they can shrink that staff, but sorry.
I've got one thing to touch on with AI, where without a doubt people are using AI and it's known, but it's not traditional. That's when people send a message to somebody saying, my daughter's been kidnapped, and they use her voice, and then the parents find out their daughter's been home the whole time. And they're like, oh my god, it's so scary, there are parts of the voice that only my daughter has been known to do, the screams or the screeches. And then there's the other part with AI, and I know nothing about this, so I'm probably touching on something I'm not good at, but AI-created photos and doctored images. I know it's been used in certain wars, and people know without a doubt that the material was created via AI when they go back and research it. They might not know immediately, but they might know historically.
Yeah, so I think it's fair to say yes, I would agree. Think about 2024: for the first time ever, we have a major US election, major elections in the European Union, the Paris Olympics, the UEFA championships, and two ongoing major conflicts in the world. Along with that, we also have ubiquitous access to generative AI. For the first time ever, we're going to experience this as a thing, and it is very possible to create increasingly convincing deepfakes of text, image, audio, and video. There's this great quote, and I can't remember the book where I read it, but it basically says that for the first time, your reality is not based on your experiences, it's based on what you pay attention to. So you can live in the same physical world as someone and have a completely different reality from them. I think we all experienced that over COVID to a large extent; there were a lot of people living in the same house who had two different realities of what was going on. What's scary to me is that generative AI is going to take that up a notch, because you're going to be able to create this digital reality of video, audio, text, and image to push a narrative across the board. So that's actually a great segue, Adam. When people ask me, are they using it for phishing, are they using it to write malware, I would say: are we asking the right question? Because if they're using it to write malware, to do phishing, to bypass security, that doesn't change cybersecurity for us. Our strategy remains the same. They bypass; we already deal with bypasses, we already deal with phishing, we already deal with all this stuff. But if they use these advanced technologies to create an attack that we aren't considering yet, that's really scary.
Because if they're able to use these advanced technologies to do something we don't have a strategy for... For example, let's say I'm a criminal and an incredibly smart AI person, and I come up with an incredibly charming AI model. My AI model is so charming that I can get your AI model to do things on my behalf, just like you would be social engineered as a person. Wow. My model could social engineer your model.
I wonder if that's already here, to be honest.
We're very close. Those are the questions we really need to be asking. Tracking things from an adversary point of view, we need to see whenever they're shifting their behaviors to a place we haven't considered yet.
You know, that's really interesting, because I haven't thought about it in those terms. We've been so terrified, but yeah, there is an element of, okay, a lot of these things will just let them do what they've been doing faster, cheaper, maybe at a greater scale. But the idea of them coming up with totally new things that we don't know how to deal with, that's interesting.
Well, they've already said, and this is not exactly what John is saying, but kind of sort of, that AI models have been manipulated to have cultural biases and prejudice. It's not exactly the same, yeah.
You need so much data to train these models, right? And so they use the internet. Now, I don't know if you guys know this, but the internet is full of bias. So when you train- Don't say that, that's not true.
It's full of bias and flat out bullshit.
Right, exactly. So you're training these models on stuff that by default has bias, or is full of lies, or all that kind of thing. It's crap in, crap out at the end of the day. So when you're thinking about how AI is going to evolve in the future, curating proper data sets and making sure that your data is explainable and secure is going to be paramount to its success. But we have models out there right now training on data that we know is biased, and they're going to end up biased at the end. And you can turbocharge that by knowing how to manipulate an AI model.
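[Editor's note: the "crap in, crap out" point can be pictured with a deliberately silly toy sketch. This is a made-up majority-vote "model", nothing like a real LLM's training, but it shows the mechanism: a model trained on skewed data faithfully reproduces the skew without being "biased on its own".]

```python
from collections import Counter

# Toy illustration of "crap in, crap out": a trivial "model" that just
# predicts the most common label seen in training. If the training data
# is skewed, the model reproduces that skew -- it only reflects what it
# was fed.

def train_majority_model(examples):
    """Return a predictor that always outputs the majority training label."""
    counts = Counter(label for _, label in examples)
    majority_label, _ = counts.most_common(1)[0]
    return lambda _text: majority_label

# A deliberately skewed data set: 90% of examples carry one label.
biased_data = (
    [("sample text", "claim_is_true")] * 90
    + [("sample text", "claim_is_false")] * 10
)

model = train_majority_model(biased_data)
print(model("is the sky green?"))  # -> claim_is_true, because the data said so
```

The point of the sketch: the bias lives in the data, not in the algorithm, which is exactly why curating the training set matters.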
How it's normalized, too. If you keep on producing normalized data for something, that AI model learns that bias. In itself it's not biased; it's only going based on what it's been provided.
And how would you know? How do I know that the answer I got from ChatGPT is biased? I just go in there and ask it the question. Oh, it told me, so it's got to be accurate, right? Because it's ChatGPT, and I can't explain how it came to that answer, how the whole algorithm works. So that's the thing when I think about generative AI going into the future: how do we pull the reins back a bit and say, okay, before we really start to use this in enterprises, how do we do things like explainability and reliability, and make sure we can detect when biases are shifting out of the norms? I'm not a data scientist; I don't know the answers to that. I just know that we should be using this technology. We definitely should. But we need to know, whenever you give it an input, what you can expect the output to be. Because otherwise, you know what it reminds me of? Back in the day, people would just Google something, see the headline they like, and say, boom, this is my proof. Of course. My opinion is great. I don't read the article; I just get the headline I want without reading the actual content. And this is that times a billion, because I can ask my model, oh, tell me such and such, and if it agrees with me, I'm like, there you go. It's a fact.
It's a fact. I've taken some time to really study AI, beyond just reading about it and doing some prompt engineering. And I suggest everyone try it, dip into it, like take a real course or look into it, because you start to realize that it really is reacting to you. What you ask it has so much of an impact. Yes, it's based on everything that it knows. Or how you ask it. How you ask it and what you ask it have a huge, huge impact. And if you do a little experiment, try asking the same question, or more or less the same question, a couple of different ways. Unless it's something really trivial. Well, actually, it's interesting. If you ask something trivial, then it will always spit out the same answer, because the evidence is so overwhelming statistically that this is the answer. But if you ask it something that's not so clear-cut and just ask it a few different ways, you will get radically different answers. And it really is reacting to you to a certain extent.
So this goes back to how governments get manipulated for elections. And I don't want to get into the politics of it; I'm just talking about it in general. Nation states produce data, put articles out there, do things, and eventually that data becomes normalized because so many people have listened to it and read it and re-sent it and republished it, and that becomes normalized. So if I started publishing things that say the sky is green, and then four other, I don't know, broadcasting companies or journalists wrote stories, and then other people researched it, eventually it becomes normalized, and you can work together to crowdsource and change a reality, at least in my mind.
No, that I agree with. That scares me a lot. It scared me so much that in 2016 I stopped consuming all news. Like, I don't watch the news anymore, and you know why, guys? Because I would read the news and I would find myself getting physically upset. And I was like, why? Like, I'm going to start the day just angry because of something that I read online that I'm probably not going to do anything about. It's like, why am I doing that to myself? So then I just unplugged. I got off of Facebook and then I stopped reading the news, and I was such a happier person. And I was like, if I just focus on what's immediately around me, the stuff that I can control, then I'll be happier. Because, to add to your point, you can be manipulated so easily by content online.
So here's a similar thing, right? Many, many, many years ago, and I'm sure you want to hear my medical history, I had kidney stones, back when I didn't know what kidney stones were.
I want to hear about that.
I don't care.
Again, by the way, I've heard about this enough times.
I'm interested to see how kidney stones is going to tie this together with me.
So when you start researching... I didn't know what kidney stones were. I'd never had kidney stones. Many years ago, I started researching the symptoms. I became a web doctor. And I found out I had kidney failure. But I didn't.
Oh, you mean like you.
So when you start researching and you start looking at things, a lot of people have that issue: they use Google to diagnose. Some people have had very positive results, where they found out something that 10 doctors told them they didn't have, and they were able to find it out. But the average person, when they start researching something, and I'm talking about serious things, not things like how to make a banana cake, I made that up, right? You can find 10 different ways to do that, and it's not gonna really impact your life. It might impact your cuisine. But when you start finding out, like, oh, I have these symptoms, then your mind goes astray. You start saying, I have a horrible disease, or I have a horrible medical issue. And that's the same thing that would happen with AI, I bet you. Like, if you start going into ChatGPT, I've never done this, and start trying to diagnose something, ChatGPT might tell you something that you don't want to hear, but it's not true. So the point I'm making is you've got to be careful how you solicit information or ask for help.
Well, what I found with it, and using ChatGPT, I mean, I happen to use ChatGPT, is it is kind of similar to the way you use Google or the other search engines. And I think, Adam, you said the right thing. It's like you go to research and you're doing research. It's going to help you find information that will help you figure stuff out. But you need to look at it, you know, with a critical eye, realize it may not be correct, and hopefully actively look for the differences of opinion. And that's a really good thing to do with the generative AI too: ask it, what are the prevalent schools of thought on this? You can even ask it, what are some of the outliers? If you just ask it a general question, it generally won't tell you that. But you can kind of ask it to give the broader view. And realize you're doing research. Unless it's something trivial, it's not a good idea to just go to it and say, what's the answer? Especially if it's something that's a little deeper. But plenty of people do. Well, that's the problem.
Yeah, it's about responsible use of it. Joe, I think you were trying to get to the point: it is a great technology, which is why it's so important that everyone should understand just a little bit, you don't have to be an expert, about how it works, so that you can properly consume the output. It's going to be a great technology and everyone should be using it, but using it in the right way, which is to say, oh, I understand that things like bias and conflicts, and the fact that you can make these things hallucinate, are all reality. So I need to take this information and then verify it afterwards. But yeah, it is. It's great. And I think, from a positive angle, I mean, everyone talks doom and gloom about AI all the time, so a little bit of a positive spin from a security standpoint: we are getting better at security. Everyone always says we're getting worse at it, and I think we're actually getting better at it. Because the thing is, before, no one ever cared. We're getting better at security because people care. There are more people that care about security than ever before. And if you think about when e-commerce launched in the nineties, were there congressional hearings about that technology before we let people do financial transactions with it? No, it was just yeeted out there. And then it was like, oh crap, wait, there's a threat here, there's risks, so we need to start regulating it. For the first time in my life, there is a disruptive technology out there for which governments and companies are coming together to develop some sort of governance or policies before it becomes ingrained in our society. And I think that's a real good indicator that, as a people, we are becoming more security conscious.
Well, that is kind of a double-edged sword, because, you know, I had one of the things down here in the notes: they passed that act in Europe, and the Biden administration put out, I think, an executive order about how the government's got to look into AI and all this stuff. And on the one hand, yeah, that's great that they're actually being proactive and everything. On the other hand, wasn't it Reagan who said the scariest words are, "I'm from the government and I'm here to help"? I just hope they don't make an ungodly mess out of the whole thing, or get hijacked, because the other thing is that a lot of the corporations that say they welcome legislation and regulation are, well, the ones who have all the lobbyists who want to write the laws themselves, you know? So we've got to watch out for that.
Keep in mind also, AI is not just on a computer. I mean, I know we know this, but I'm saying it for our audience: it's on a lot of IoT devices, whether it's a camera that can differentiate gender, or something that can identify animals in stock when herding them. It's used for a lot of different IoT devices. And when you're using it more for quicker analysis, it's not so bad, right? Because you're not making determinations where people's lives are on the line. You're able to use it to analyze large, large data sets in shorter, quicker periods of time.
Yeah, that's the real difference. I think that's going to be the real difference maker from a security point of view, especially with large language models: the thing that they are exceedingly good at is data summarization and data distillation. Because for 20-something years, we've been talking about things like alert fatigue, and there's too much data, there's too much noise, our analysts can't keep up, we can't hire enough SOC people to triage all this stuff. And we finally have a technology that is going to say, oh, well, I can go through a billion alerts a day and tell you the story behind them. We're at a tipping point in security. You know, for the most part, especially with ransomware, the successes hide in the noise. We did this study back in 2021, and it was: how fast does a ransomware attack happen from beginning to end, from the moment they go hands-on-keyboard to the time ransomware is deployed, in days, hours, or whatever, compared to 2018, 2019? And it got incredibly faster, down to a little over four days in 2021. And then the newest thing, hey, quick plug, in the 2024 Threat Intelligence Index you'll get the updated stats, but it's like 3.8 days. What was really interesting as a side data point in that analysis is that, year over year, our analysts, our responders, were finding more evidence of the attackers in existing security tools prior to the ransomware being deployed. So what we can take from that is that people were buying stuff, and that stuff was generally working. It was waving its arms, ringing alarm bells, but we were missing that detection and response piece. And largely, probably, because that was one alert in a series of 500,000 alerts that got kicked off that day.
So what we can look at from a technology point of view is this game changer, which is these large language models, which are phenomenally good at taking disparate data sets and saying, give me the TLDR of that. You can do it with documents all the time.
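[Editor's note] The distillation idea John describes, pre-aggregating a flood of alerts into a short "tell me the story" prompt for an LLM, can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual pipeline; the alert fields, rule names, and hosts are all made up:

```python
from collections import Counter

def build_triage_prompt(alerts, top_n=3):
    """Condense a pile of alerts into a short prompt an LLM could summarize.

    Each alert is a dict like {"host": ..., "rule": ..., "severity": ...}.
    Instead of handing the model a billion raw events, we pre-aggregate
    and ask it for the story behind the noisiest patterns.
    """
    rule_counts = Counter(a["rule"] for a in alerts)
    host_counts = Counter(a["host"] for a in alerts)
    lines = [f"Summarize the likely attack story behind {len(alerts)} alerts."]
    lines.append("Top alert rules:")
    for rule, n in rule_counts.most_common(top_n):
        lines.append(f"- {rule}: {n} hits")
    lines.append("Noisiest hosts:")
    for host, n in host_counts.most_common(top_n):
        lines.append(f"- {host}: {n} alerts")
    return "\n".join(lines)

alerts = [
    {"host": "ws-042", "rule": "new-scheduled-task", "severity": "high"},
    {"host": "ws-042", "rule": "psexec-lateral-movement", "severity": "high"},
    {"host": "dc-01", "rule": "new-scheduled-task", "severity": "high"},
    {"host": "ws-017", "rule": "failed-logon-burst", "severity": "low"},
]
print(build_triage_prompt(alerts))
```

The point of the sketch is that the aggregation is cheap and deterministic; the LLM is only asked to narrate the distilled counts, which is the summarization task these models are genuinely good at.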
You know, that's really interesting, John, because I'll tell you, that just clicked in my head. I hadn't thought about this at all. One of the things that I've done in the past occasionally, you know, I've been running a security group, was to do a compromise assessment. And for everyone, a compromise assessment is basically when you bring in some consultants and they go through everything that you have. They take the place apart. They look at all your logs and they try to see: is there something you missed? Because, you know, we say break-ins are stealthy and all this stuff, but it's not magic. If someone's in there, there is some indication somewhere that they were there, of what they did. And it was always the most terrifying thing imaginable to me, because I'm like, shit, if I find out we missed something a year ago and someone's been in, you know, I'm going to have some explaining to do. But yeah, automating that with AI and being able to do that so much more quickly, so much more often, it's like... Joe, I had no idea you were in the Compromise Assessment game.
That was the first thing I did at IBM, was build out the Compromise Assessment service.
Well, I can tell you, I'm sure it's a lot more fun to perform the Compromise Assessment than to buy the Compromise Assessment and have it done to you.
But there's a great story in that. When you do a compromise assessment, one of the major things you do is an audit of persistence mechanisms. For the listeners out there who don't know what I mean by that: generally, attackers and malware, whenever they get access to a system, want to keep access to that system in case it reboots. So they want to "persist," quote unquote, on there using various techniques. The number one persistence technique for, I don't even know, 15 years now has been scheduled tasks. And you can find very fancy malware, that definitely cost a lot of money for someone to write and then deploy, persisting as a scheduled task. And if you just had the ability to collect and enumerate all scheduled tasks in your environment, there's a good chance that you're gonna find how someone is persisting. It doesn't matter that they wrote this fancy malware that bypasses all their things and does everything and blinds everything; if it persists as a scheduled task, that scheduled task will be anomalous in your environment, and you can identify it.
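[Editor's note] The audit John describes boils down to a prevalence check: a scheduled task that exists fleet-wide is almost certainly legitimate, while one that exists on a single box is worth a look. A minimal Python sketch of that idea, with made-up host and task names (the suspicious name here is invented for illustration):

```python
from collections import Counter

def flag_anomalous_tasks(tasks_by_host, max_hosts=1):
    """Flag scheduled tasks that appear on only a handful of hosts.

    tasks_by_host maps hostname -> set of scheduled-task names collected
    from that machine. Rare tasks are the anomalies worth triaging.
    """
    prevalence = Counter()
    for tasks in tasks_by_host.values():
        prevalence.update(set(tasks))
    return sorted(
        (task, count) for task, count in prevalence.items() if count <= max_hosts
    )

fleet = {
    "ws-001": {"GoogleUpdateTaskMachine", "OneDrive Standalone Update"},
    "ws-002": {"GoogleUpdateTaskMachine", "OneDrive Standalone Update"},
    "ws-003": {"GoogleUpdateTaskMachine", "OneDrive Standalone Update",
               "WindowsDefenderHelper"},  # exists on exactly one host
}
print(flag_anomalous_tasks(fleet))  # → [('WindowsDefenderHelper', 1)]
```

In a real environment the collection step would come from something like the Windows Task Scheduler event log or an EDR inventory, but the anomaly logic itself really is this simple.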
Yeah, I think we just figured out a product here. Because you could take all those logs, say, with your tools, collect all the scheduled tasks, and even if you don't know what you're looking for, you could ask the LLM...
Anything dodgy here? That was my segue. I was actually going to ask you. So what do you think, John, about the future of SIEM, or security information and event management, for those who don't know what that is, and XDR, as it relates to AI? Because everything is related now. With SIEMs, they have these AI models and AI apps that you can run. And XDR is that next level of having those advanced... I think it's better if you explain; you'll probably do a better job.
Yeah, so, I mean, basically there are always gonna be these tools with data. So I don't think that we're gonna get rid of SIEMs or XDRs or any of these tools. What I imagine the future is gonna be is that there will be an AI platform that sits over top of that, which will collect and process data from them and present an analyst with contextual awareness of a data point, to say: oh, I got this alert. And then, because I saw that it was associated with a Windows machine, I collected the Windows event logs from the XDR data lake. And then I checked to see if there were any alerts in the SIEM. And then I saw that it was related to a new scheduled task being created. So then I went to the endpoint and I grabbed a copy of the binary that was associated with it. And this is what I found. You tell me what to do. I think that's going to be the future of security, where you're using AI to tie together all of these tools that have different data, where the context in which that data exists matters for different things. But I don't think it's going to eliminate any of those things. I really see it being an overarching platform that's going to be able to pivot through different data points and present them to the analyst. There's still nothing better than the human brain at making an informed decision or identifying patterns. Like, we're not there from an AI point of view. So we're going to maximize our humans by saying: you know what computers are really good at? Collecting and processing big chunks of data and distilling it down into a story. And you know what humans are really good at? Making the decision based on a story. And so we're just gonna reassess and redistribute workloads to the right locations going forward, I think.
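[Editor's note] That pivot sequence, alert comes in, pull XDR events, check the SIEM, grab the binary, hand the analyst a story, can be sketched as a small orchestration function. This is a toy under invented assumptions: the data sources are plain dicts standing in for an XDR data lake and a SIEM, and `grab_binary` is a stubbed-out fetcher, not a real product API:

```python
def gather_context(alert, xdr_logs, siem_alerts, grab_binary):
    """Sketch of the 'overarching platform' idea: given one alert, pivot
    through existing tools and hand the analyst a narrative, not raw data.

    xdr_logs: host -> list of event strings (stand-in for an XDR data lake)
    siem_alerts: host -> list of correlated SIEM alert names
    grab_binary: callable fetching the binary tied to an alert (stubbed)
    """
    host = alert["host"]
    story = [f"Alert '{alert['rule']}' fired on {host}."]
    events = xdr_logs.get(host, [])
    story.append(f"Pulled {len(events)} Windows events from the XDR data lake.")
    related = siem_alerts.get(host, [])
    if related:
        story.append(f"SIEM shows related alerts: {', '.join(related)}.")
    if any("scheduled task" in e.lower() for e in events):
        binary = grab_binary(host, alert["rule"])
        story.append(f"New scheduled task found; grabbed binary {binary}.")
    story.append("Analyst: you tell me what to do.")
    return " ".join(story)

xdr = {"ws-042": ["4698: A scheduled task was created", "4624: Logon"]}
siem = {"ws-042": ["psexec-lateral-movement"]}
print(gather_context(
    {"host": "ws-042", "rule": "new-scheduled-task"},
    xdr, siem,
    grab_binary=lambda host, rule: "C:\\Windows\\Temp\\updater.exe",
))
```

The design point mirrors John's: the existing tools stay where they are, and the layer on top only pivots between them and distills; the human still makes the call at the end.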
Wow, John, I think you've just done the impossible, which is you've made my old cynical self somewhat hopeful about the future of security. Because, you know, it's a thing in security where, like every other product, something new comes out and they say, oh, we're building this in, we're building this in. And now all the security vendors are saying, now with AI, now with AI. But you know what, in this case, it could actually be helpful. It could even be a game changer in a lot of ways.
So I got it, right? We just copyrighted it two seconds ago. It's A-I-D-R. Instead of MDR, XDR, it's A-I-D-R. A-I-D-R. Yeah.
I mean, AI is going to be the thing we're waiting for. For listeners, if you're not in this space: AI and machine learning are baked into a lot of different disparate tools already. Even LLMs are in there, right? But they're all individually baked into little pockets of technology. The weird thing about security, Joe, and you've bought security technology before in your career, I'm sure, is that security is the only industry in which integration costs are pushed to the consumer. So there'll be a startup that comes up that solves this really niche security problem, and they'll sell it to you, and then you end up with 75 different tools that are now your security tech stack. And then you go to your engineers and you're like, okay, make all these talk to each other, and try to get these into that quote-unquote single pane of glass that we've been chasing for 25 years.
And that has improved a tiny little bit in the past few years, but you're right, largely. And in fact, I've even had people say, you know, they say they're a SIEM company or a, you know, logging company. They're not; they're consulting. They make a little money off consulting doing exactly what you say.
Yeah. The product will solve world hunger if you buy enough professional services.
Exactly.
Wait, are you saying that's not true? I bought that product last week.
It could be. It could be. I've never been a consumer. I've never bought. That's the one thing in my career that I have not done: I've never been someone who's like, oh, I'm going to decide what we're going to invest in. I've either been an engineer that's been charged with integrating it after the fact or making it work as well as it should. But I have no experience in dealing with vendors, having to balance all these different relationships, like, okay, well, I just bought this thing, and it's going to cost so much money to rip and replace it, so I might as well renew, even though I know it's got these limitations, but I only have so much leash on my security chain here, and if I mess up... So, tangent time. I've been having conversations over the last couple of years as my role has kind of changed, and I talk to way more CISOs than I ever did before. And one of the things is that I've started to sympathize with the CISO: they're constantly playing this high-stakes game, a seesaw, where they're trying to balance making the organization as secure as possible without messing anything up. And every time they mess something up, even if they're trying to do the right thing, they have to take something away from security going forward. Because every time you mess up, they're going to blame security, and then you lose that political capital to do something in the future. But if something bad does happen, then they get all the blame. So it's a very difficult job, and I don't think I properly appreciated that job in its fullest until I started meeting more CISOs and talking about what they're dealing with on a day-to-day basis.
Joe has had the unique pleasure, when I worked for Joe, of having me vet vendors to see whether or not the product was possibly the right product for us. Needless to say, I think some vendors do not like me.
No, but you're right. And you know, Adam, we've had many very interesting conversations. I mean, John, you hit it right on the head, what CISOs have to do and why they have that dead look in their eyes so often. You know, I have someone like Adam, and I can't tell you how many times we'd look at a product, we'd evaluate it, we'd POC it, and they go, what do you think? I go, it's wonderful. It's great. It's fantastic. And they go, do you want to buy it? And I'm like, no. Because besides the fact that it's a big investment to deploy and all that, if I'm going to invest my resources in that, it's a trade-off. That means I can't do something else. That means I need to either not do something else that I'm doing now, give something up, or not have another thing that I'm planning. You know, and even if I say I'm going to go get more money for this, well, maybe I was going to go get more money for something else. What's the best mix for me? That's the really tough part.
And conversely, when we've told vendors... when they're like, oh, isn't my product the best thing since sliced bread? I'm like, yeah, OK. No, no, no, no. Tell me. And you sit there like, your product can't do this, can't do this, can't do that, can't do this, can't do that. And they're mortified, because they've actually found out that it's true. They didn't realize what they were selling was not really capable of doing what they expected. And I don't like ripping products apart, because we found some really tremendously good products out there that really did what they said. But no one wants to hear the real truth sometimes.
Yeah. And you're putting yourself on the line. You know, I say, look, especially with something expensive, if you say you want me to sign for this, or talk my boss into signing for it if it's really big, that's my credibility. I've got to know this is going to work. Not only work, but that it's the right thing for us. But anyway, on that happy note, I think we're kind of headed towards last call here. We're getting to the end. I know I'm out, and I might have to have another one of these. I'm really liking this.
Yeah, these are excellent. Dangerously good though.
Yes, dangerously good. That's right. You got that right.
Well, I mean, since we're on last call and we're talking about football and other sports, I think AI should replace all the coaches that run these teams. See how it goes for them.
Oh, you know what, dude, I've always thought it would be so interesting... I don't know if you guys follow the English Premier League, like English soccer, at all, and they have the divisions as you go up through, like the Championship up to the Premier League. But I always thought it would be so interesting if you took someone who was really good at Madden, and as they go up through, they get to control real people. Like, the tippy top of that would be: yes, you're so good at Madden, now we're going to give you a chance to call plays for a real-life team, to see how well you could do against a... Wow.
That's interesting. That could actually be a video game.
It's almost like a sim. Yeah. Oh, yeah.
And my son plays soccer, and we spoke about you. You played soccer. So it's very interesting. I don't follow soccer well, but that would be interesting. You become knowledgeable about how a player is because you actually live the player. You get almost immersed into that player and what they do and how they think. And then, instead of being the player, now you're running the team. That would be a unique experience.
Now that I think about it, I think there was a movie about that. There was like a movie where it was like Call of Duty in real life. Like you controlled a real person while they... Yeah, Gamer, was that it?
Yeah, it was- Yeah, you got to control someone.
Yeah, something like that.
Now I gotta go watch that.
Yeah, and like the guy, I think what happens is the guy doesn't take the instructions or something. He goes, why don't you listen to me? And it's like, because I was gonna die if you did that, some crazy stuff like that. All right. Well, fortunately, we're not quite at that level yet, at least. The AIs are not completely taking over, but, you know, we just got to learn to use them, right? Oh, sorry.
I get it. Think if you took the CISO, immersed them in an IR team, and then had the CISO call the plays.
Yeah. Yeah. You know, I think the future of these augmented realities that we're able to do right now, in terms of security, gamifying cybersecurity to get better, there's a real future in that: taking the tabletop and the cyber range to the next level, making it even more immersive, in which you are really in a network that is being compromised by whatever, and you make a game out of it so that you can practice doing incident response internally, like, every single day.
So there was a movie about that in the 80s. It was called War Games. Think about that.
Yeah.
I can tell you, having done a bunch of tabletops, I mean, they are basically gamification. The way to make it real and the way to make people take it seriously is something like, you know, bet your bonus on it or something. You've got to make it real for people. That's it. Give it some stakes.
Exactly. Immerse somebody really into a tabletop exercise, and when they make mistakes, there are real-life consequences. Like, you see a whole network go up, you lose all your money. Imagine a game like that.
You know what, not to plug too much, but we do have cyber ranges here at IBM where we do that. It's in Boston, right? Okay. Yeah.
There's a couple of plugs at the end of the show. We're cool. Absolutely. Go, go right ahead. Anything else?
Yes. We're launching... yeah, there's going to be one in DC. There's one in Boston, one in Ottawa, and one in Bangalore. But yeah, that's the idea. We're probably not going to make the one in India, but I always wanted to visit the one in Boston.
I didn't get a chance.
It's awesome. It really is awesome. We have a fake news studio there, which is my favorite part. Oh, really? I saw the video. That's Mission Impossible. You take the CEO of the company and you put them in the hot seat, and they have to answer questions from a news reporter about their decisions, how they handled the incident. And as they make a decision, you see the stock price go down or up, and you can see social media start trashing the company if they answer it incorrectly. They try to make it as real as possible.
I didn't know you did that. Yeah. That's intense.
Very cool.
That is very cool.
It's a great way to get people outside of the security teams to realize how involved they are in the incident response and the shared responsibility of security.
I like that. All right. Well, I'm going to have to check that out. That's got to be fun. But, you know, the security team must really like seeing their CEO seriously sweat. I hope they share that around. Yeah.
It's a fun event, but yeah, it's really impactful. Most people leave like, like exhausted and they're like, I didn't realize that it was going to be like this. And that's exactly kind of that impact that we want to have.
It's very intense from what I remember. Yeah.
Oh, very cool. All right. Well, we learned a lot of interesting stuff. So, hey, John, thanks so much for joining. Yeah. Thank you for having me. This was a lot of fun. This was a very cool discussion.
Yeah, we really appreciate you coming on. We really do. Yeah, that's right.
And, oh God, I forgot our plugs. I'm putting them at the end. Remember, please like, subscribe, follow, tell your friends, spread the word. And, you know, it's a cocktail, yes, so send feedback. Even though we don't encourage drinking, even though we kind of do, but whatever. All right. Thanks a lot, everyone.
All right. Bye-bye.
