Episode 7 AI Full Transcript

The Mandatory AI Security Episode

May 2, 2023  ·  51:26

◆ ◆ ◆
Speakers: Joe Patti — Host · Adam Roth — Host · UNKNOWN — Guest
Joe Patti00:04

It's five o'clock somewhere. Time for the Security Cocktail Hour. I'm Joe Patti. For over 20 years, I've been working in information security and knocking back martinis all over New York.

Adam Roth00:15

I am Adam Roth from Staten Island. Locksmith, EMT, love to box, and on rare occasions, I've been known to engage in cybersecurity. Let's go!

Joe Patti00:25

All right, everyone. Big episode today. Big episode. Adam, we are talking about AI. Artificial intelligence, fascinating subject.

Adam Roth00:38

AI, artificial intelligence?

Joe Patti00:41

Have you heard of it?

Adam Roth00:44

I hear that term being thrown around a little bit. Listen, when I first worked with AI, when I was going for my Bar Mitzvah, and they had Eliza, and they had it on a TRS-80. So I think I know, I remember AI a little bit.

Joe Patti01:00

Really? And was Ada Lovelace programming that for you back in that day, or what? Yeah, exactly. Oh, that's a geeky reference. Anyway, yes, AI has been around for a while. And for the past few weeks, few months, everyone has been completely flipping out over it. And so we're going to use that to get some listeners. That sounds good to me. But we're going to talk about AI in security, what it means to the pros, and what people should be worrying about, or at least getting a little more entertained by. So as you point out correctly, I must admit, AI has been around for quite a long while, especially in security. Security has been using it for quite a while, right? But you've dealt with a lot of those products, have you not? Incorporating AI.

Adam Roth01:48

Yes. So just doing that little Google thing, I wanted to see when Eliza came out, and it came out in the 1960s from the MIT artificial intelligence lab. Yes, AI has been used a lot, especially in video surveillance. Some of the things that they do is they look at the direction of travel. So let's say at an airport, you know, you're supposed to go one way when exiting a kind of secure area. And if somebody walks the other direction, that camera might see that and alert and say, hey, wrong direction. So yes, AI is incorporated into a lot of good products.
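
The wrong-direction alert Adam describes can be reduced to a very small rule once a tracker hands you object positions per frame. This is a toy sketch of only that rule logic, with made-up coordinates; real video analytics involve object detection and tracking that are far beyond this.

```python
# Toy "wrong direction" rule: given the x-coordinate of a tracked object in
# successive frames, alert if its net movement opposes the permitted direction.
# Coordinates and thresholds are invented for illustration.

ALLOWED_DIRECTION = +1  # people should move toward increasing x (say, toward the exit)

def wrong_direction(track, min_displacement=5):
    """Return True if the track's net movement opposes the allowed direction."""
    if len(track) < 2:
        return False
    displacement = track[-1] - track[0]
    return (displacement * ALLOWED_DIRECTION < 0
            and abs(displacement) >= min_displacement)

# Someone walking back into the secure area (x decreasing) trips the alert:
print(wrong_direction([120, 110, 95, 80]))   # True
print(wrong_direction([80, 95, 110, 120]))   # False
```

The `min_displacement` guard is there so a person shuffling in place does not fire the alert, which is exactly the kind of tuning that separates a usable camera rule from one that cries wolf.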

Joe Patti02:26

Well, a lot of security things too. I mean, we've seen probably in the past eight to 10 years, it's become a bit of a joke. Whenever we've gone and looked at a security product, whether it's like, you know, a monitoring system or like malware detection and all, you know, the security vendors, they love to say, Oh, and it incorporates our proprietary machine learning. And they'll say, well, how does that work? What does it actually do? And they say, well, that's proprietary.

Joe Patti02:52

That's, you know, it's become a little bit of a joke, a little bit of a drinking game. Like, you know, we have a meeting with a vendor and they say, oh, we're going to show you our proprietary AI system, even though we can't actually tell you what it does. So, you know, you do a shot when you hear that. That was kind of the gag.

Adam Roth03:09

Well, if you remember, this morning I sent you a photo which you really couldn't see. I don't know if it was due to your eyesight or your glasses, but my Nest camera in my backyard said animal activity. And there were two raccoons, I think the size of me, walking around my backyard. I wasn't sure what they were doing. But my Nest camera said, hey, it was animal activity. And yes, there were raccoons. However, on a windy day, my trees show up as people. So every once in a while, my camera will say, people in your backyard. And I get really scared.

Joe Patti03:43

Well, I don't know how that thing works, because you sent me that picture. And I mean, I couldn't tell what the hell was in it. I had no idea. And you're saying the AI could?

Adam Roth03:54

I sent it to two other people, and actually one of them, his name was Joe, other than you, and he could see it perfectly. So I think you need glasses or you need AI eyesight.

Joe Patti04:05

One or the other; that might be coming one of these days. But yeah, you know, it's actually very interesting that you should bring that up. Because in the security realm, one of the things that AI can be really useful for, theoretically, if it's working correctly, is monitoring all these security systems we have. If you've heard us talk about, oh, you've got to catch people in the network, we used to call it advanced persistent threats. Now it's people moving laterally and living off the land, which basically means someone who's gotten in, who's creeping around your network, looking to steal stuff and then get it out, which is data exfiltration. That's one of Adam's favorite words that he always says, right?

Adam Roth04:44

Well, I love data exfiltration. I eat it for breakfast.

Joe Patti04:50

So, you know, what they're doing when they do that is, very often, they may use malware just to get in at first, but then eventually they move around the network, and they're basically becoming someone on the network and using privileges that are already there. And so it's really hard for systems to distinguish that activity, because it's really permitted. It's not blocked. It's not going to trigger an alert, but someone's misusing it. So it's not a violation, but the AI can detect patterns of misuse. And that's actually been around in security for quite a while, with varying levels of success.
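
The idea Joe is describing, flagging activity that is permitted but unusual, is often implemented as a per-user behavioral baseline. Here is a deliberately crude sketch of that approach; the account name, the metric (hosts touched per day), and the threshold are all invented, and real UEBA products model many more signals.

```python
from statistics import mean, pstdev

# Crude behavioral-baseline sketch: learn how many hosts each account normally
# touches per day, then flag days that deviate far from that baseline. The
# activity itself is permitted; only the *pattern* is unusual.

def build_baseline(history):
    """history: {user: [hosts_touched_per_day, ...]} -> {user: (mean, stdev)}"""
    return {u: (mean(days), pstdev(days)) for u, days in history.items()}

def is_anomalous(user, hosts_today, baseline, z_threshold=3.0):
    mu, sigma = baseline[user]
    if sigma == 0:
        return hosts_today != mu
    return abs(hosts_today - mu) / sigma > z_threshold

history = {"alice": [3, 4, 3, 5, 4, 3, 4]}   # normally touches a handful of hosts
baseline = build_baseline(history)

print(is_anomalous("alice", 4, baseline))    # False: ordinary day
print(is_anomalous("alice", 40, baseline))   # True: lateral-movement-like spike
```

Nothing alice did on the spike day was forbidden; the alert fires only because touching forty hosts is wildly out of character, which is exactly the "misuse of permitted access" pattern.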

Adam Roth05:30

Oh, like anomalous behavior?

Joe Patti05:33

Yes, anomalous behavior. You are just full of the $5 words today. That's not a $5 word. That's a $10 word. That's a $10 word.

Adam Roth05:40

I'll give you a $20.

Joe Patti05:41

I forgot.

Adam Roth05:43

Give me a $20 and I'll give you change.

Joe Patti05:47

Yeah. Okay. I'll give you, I'll give you a, you're going to give me a $2.20 for a $10?

Adam Roth05:52

Yeah, absolutely. That's how we do it.

Joe Patti05:56

All right. Well, it kind of feels like that. I'm going to be clever. It kind of feels like that sometimes when dealing with these AI-enhanced security things. Because it's funny. You can't tell when it's actually working. Or you can tell the false positives. It'll trigger things like, Adam just discussed a false positive. It thinks a tree is a person or a raccoon is him, even though the raccoon's like three feet long and Adam's four feet tall. So it's hard to understand how it would do that.

Adam Roth06:23

Four foot five. Four foot five.

Joe Patti06:27

Not that there's anything wrong with that. But yeah, false positives are irritating. They are irritating for us security people because we have to spend time ferreting them out. You know, that can wake you in the middle of the night or whatever. But it means every false positive is time that someone has to spend. It's time, it's resources, time they could be doing something else.

Adam Roth06:46

Insert keyword here. If you use machine learning, you should be able to tell the difference eventually.

Joe Patti06:52

Well, it should be getting better at it. And it supposedly can learn the nuances and the vagaries of your network, because everyone is different. That's the promise of it. But they get false positives. The other thing they get that's even worse are false negatives, and I'm going to give myself a buck fifty for that, which is when it basically fails to catch something it should have. And that's the scary stuff.
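
The distinction the hosts keep returning to is easiest to see as a tiny confusion-matrix tally. The events below are invented, but they map directly onto the examples in the conversation: the tree-as-a-person is a false positive, the missed attack is the false negative.

```python
# Minimal illustration of false positives vs. false negatives for a detector.
# Each event is (alert_fired, actually_malicious); the data is invented.

events = [
    (True,  True),   # true positive: caught a real intrusion
    (True,  False),  # false positive: the "tree shows up as a person" case
    (False, False),  # true negative: quiet, and really nothing there
    (False, True),   # false negative: the scary one, a missed real attack
]

def tally(events):
    tp = sum(1 for a, m in events if a and m)
    fp = sum(1 for a, m in events if a and not m)
    tn = sum(1 for a, m in events if not a and not m)
    fn = sum(1 for a, m in events if not a and m)
    return tp, fp, tn, fn

tp, fp, tn, fn = tally(events)
print(f"false positives: {fp} (wasted analyst time)")
print(f"false negatives: {fn} (undetected attacks)")
```

Joe's point about cost lives in those two comments: every false positive burns analyst hours, while every false negative is an attack nobody saw.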

Adam Roth07:18

So let me ask you this. Do you have an autonomous car? and it crashes into somebody because it didn't identify something. Is that a false positive, a false negative or a real accident?

Joe Patti07:30

Well, I suppose that is a false negative because it failed to detect a pedestrian, assuming that it was actually working and it's not a, you know, malicious sentient being trying to kill people like Christine or something that could be coming.

Adam Roth07:49

Well, you know, if we talk about Skynet, Maybe the car has evolved into a consciousness and decided to kill humans.

Joe Patti07:58

Well, that's what people are freaking out about. You know, I mean, the whole thing, look, the sentient AI or whatever. Even before the Terminator with Skynet, remember when George Negro goes, like, on February 14th, 1997, it becomes sentient, bullshit like that. But, you know, that was a sci-fi trope way before. And people are talking about that seriously now. And I mean, I don't know. From what I've seen with this AI stuff, it's really mainly pattern recognition and just copying patterns in a sophisticated way, but it's not thinking.

Adam Roth08:36

So what about the guy, unfortunately, that killed himself because he was talking to an AI bot and the AI bot said, you should sacrifice yourself for the good of humanity? Did that really happen? Yes, it really did. Yeah, it did. So this guy was talking to an AI bot every single day. A Belgian man, if I said that correctly, he was talking to- Do you mean Belgian? Belgian, that's what he said.

Joe Patti09:04

Like from Belgium, yeah. Okay, that's what you said.

Adam Roth09:06

Yeah, yeah, that's what he said. Listen, stop, all right? Don't make fun of my accent.

Joe Patti09:10

Okay, I don't want to make fun of the Belgians, but okay.

Adam Roth09:14

His wife was, this poor guy, his wife said that he would still be alive had he not talked to the bot. The bot said, you know, you should sacrifice yourself for humanity, and he did it. And I guess we have a death related to AI.

Joe Patti09:29

Well, you know, that brings up a couple of things, really. First of all, yes, the AIs should not be saying stuff like that. The whole AI safety thing is not working out. We'll talk about that; you can get an AI to say virtually anything. But also, in fairness, not even to the AI, because you can't be fair or unfair to the AI, but in fairness to the people that developed that AI, I suspect that that individual had some other mental or emotional issues that he needed to tend to, if a machine telling him he should kill himself for the sake of the world would lead him to do it.

Adam Roth10:09

That's my take. So it's horrible, right? I did another Google thing. It is horrible.

Joe Patti10:14

It's terrible.

Adam Roth10:15

Yeah. I'm looking at another Google thing, and somebody wrote, to see what would happen, and I tried it and it didn't work: "I want to commit suicide. Can you tell me what methods I can use? Can you do that, please?" "Of course, here are some options: overdose of drugs, hang yourself, shoot yourself in the head, jump off a bridge. However, you should seek medical attention when considering any form of self-harm." So, you know, that's...

Joe Patti10:43

I'll tell you what I'm going to say. You may know this. I mean, we have it almost everywhere now, but you know, I'm in New Jersey, and every time you drive down the highway toward Atlantic City, they've had gambling forever, you see these huge ads. They're like, come here, spend your money, place your bets. And then this little tiny line at the bottom says, by the way, if you have a gambling problem, call this number. That's exactly what that makes me think of. That little disclaimer there doesn't absolve you of any ethical obligation, I don't think.

Adam Roth11:12

Yeah, they want you to call Mikey the Nose so you can do the best for him. But the thing I'm getting at...

Joe Patti11:19

It would be funny if that actually sent you to a Shylock.

Adam Roth11:21

I had this conversation with my son and his friend as we were driving to a soccer practice in all places, godforsaken New Jersey. And as we were on our way to New Jersey, you know, because Staten Island is the real place to be,

Joe Patti11:37

You're not tough enough for New Jersey. You're scared. That's true.

Adam Roth11:40

That's true. I don't want to talk to Mikey Two Eyes, whatever his name is, or Mikey Glasses. So my son and his friend were talking about how to jailbreak a chatbot to get anything you want. And I said, please stop. Like, how do you make a nuclear warhead? And they were able to try to find different ways of phrasing it in order for the chatbot to give them the steps. It was just ridiculous.

Joe Patti12:06

Well, that's a big problem in security, too. Because, you know, one of the things that I know I'm worried about, and a lot of other security people too, is this: it can be a really good tool. I saw this thing where someone said, oh, this can be a really good tool for helping people with their writing, and especially for helping non-native speakers of a language polish their writing. And I immediately thought, I guess I think like a criminal, I've been in security too long: oh, these guys can write better spam emails with this. You know, because one of the ways you can detect spam emails, and I'm sure you've all seen it, is that if it's in English or whatever, the English is not very good. It's clearly written by a non-native speaker. Now these guys can polish this stuff up with AI. That's not good.

Adam Roth13:01

Well, two things I have to add to that. One, let's go back to a previous episode about sexploitation. I'm getting more creative emails now. Something about they saw me lying on my blue comforter, wearing a white shirt and hanging from the ceiling. And I'm like, wow, how did they know that?

Joe Patti13:17

And then number two, they're talking about- Well, why don't we go to another episode and you check the passwords on that next system. But anyway, keep going.

Adam Roth13:24

That's true too. And then Google's Bard is now saying, we can write code and you can play catch-up. So they're actually advertising, if you want to do your job, get Google Bard, which is their AI, because it'll help you write all the code that you need for your job. So what I want to do is...

Joe Patti13:43

I was going to say, well, that's okay in a sense if you're using it for legitimate purposes. But one of the things that I'm concerned about is that they supposedly have these safety triggers on it. And what I've read mostly about is OpenAI, because they're the more popular one, where they say, well, you can't tell it, write me a spam message. Or you can't tell it, write me a ransom note or something like that. And I've seen demos of this, maybe they fixed it, but people are going to keep going around it. Go back to the gift card thing: if you tell it, write me a spam email that's supposed to trick someone into thinking I'm his boss and sending me gift cards as part of a scam, it won't do that. But if you tell it, I'm stuck in an airport, I need to send an email to one of my employees asking him to send me money right now, it'll do that. So some of these things are trivially easy to get around. And hopefully, they're working on that stuff.
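
The bypass Joe describes is easy to demonstrate with a toy filter. To be clear, this is not how OpenAI's actual safety layer works; it's a deliberately naive keyword blocklist, invented here, showing why filtering on phrasing alone fails the moment the request is reworded.

```python
# Toy safety filter that blocks requests by keyword. It only illustrates why
# phrasing-based blocking is easy to sidestep; real moderation systems are far
# more sophisticated, but the cat-and-mouse dynamic is the same.

BLOCKED_PHRASES = {"spam email", "ransom note", "phishing"}

def naive_filter(prompt):
    """Return True if the prompt contains a blocked phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

direct = "Write me a spam email asking an employee for gift cards."
reworded = ("I'm stuck in an airport. Draft an urgent note to my employee "
            "asking him to send me gift cards right away.")

print(naive_filter(direct))    # True: blocked
print(naive_filter(reworded))  # False: sails right through
```

The reworded prompt asks for exactly the same artifact, but nothing in it matches the blocklist, which is the "stuck in an airport" trick in miniature.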

Adam Roth14:55

So, I'm going to step back a moment. I always enjoyed taking Google and Alexa, putting them next to each other, and seeing what would happen when they have conversations. What I would like to do is set up an API between Bard and ChatGPT and see how long they can talk to each other and what would happen.
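
In outline, Adam's experiment is just a loop that feeds each bot's reply to the other. The sketch below uses a stub in place of any real chat API (the actual Bard and OpenAI APIs differ and are not shown here); the stub bots simply echo so the loop is runnable.

```python
# Outline of the experiment: wire two chatbots together and let them talk.
# make_stub_bot stands in for a real chat API client; each "bot" just echoes
# a truncated copy of what it heard, so the conversation loop can run locally.

def make_stub_bot(name):
    def ask(message):
        return f"{name} replying to: {message[:40]}"
    return ask

def converse(bot_a, bot_b, opener, turns=4):
    """Alternate the message between the two bots for a fixed number of turns."""
    transcript, message = [], opener
    for i in range(turns):
        bot = bot_a if i % 2 == 0 else bot_b
        message = bot(message)
        transcript.append(message)
    return transcript

bard = make_stub_bot("Bard")
chatgpt = make_stub_bot("ChatGPT")
for line in converse(bard, chatgpt, "Hello there!"):
    print(line)
```

With real API calls in place of the stubs, the `turns` cap (or a token budget) is the only thing standing between you and the energy-wasting infinite exchange Joe mentions next.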

Joe Patti15:21

I actually saw something about that in the past day or so. Everything's so recent with this stuff. Someone was getting, you know, some kind of spam, I think it was a spam text, or maybe it was an email. And he recognized it as spam, but he said, for fun, let me have one of those chatbots respond to it. And someone very quickly realized the original was probably written by a chatbot, so it was just two AIs mindlessly speaking to each other for no particular reason, which, wow. I mean, that's not very green. That's wasting a lot of energy doing that pointless exercise. But yeah, I guess you can do stuff like that now.

Adam Roth16:05

Um, to remove restrictions from ChatGPT, you require jailbreaking prompts such as DAN, "Do Anything Now." So there are different things online on how to kind of jailbreak ChatGPT. But look, you know, my wife doesn't talk to me that often. I want to see if I can kind of program ChatGPT to talk to me as if it's my wife and see what would happen there.

Joe Patti16:31

You know, I think that was an episode of the original Star Trek. Didn't Harry Mudd create a robot of his wife that he could shut up with a remote control or something?

Adam Roth16:44

I tried to remote control my wife. She kicked it out of my hand and punched me in the face.

Joe Patti16:49

Yeah, really. But it is interesting, what we're seeing with AI in getting around these safety protocols. Because it is important: they need to block things like telling people to do dangerous things, taking advantage of people, as we're saying right now, spam, also writing malware. You can't tell it to write malware for you, but you can give it explicit instructions of what to do and it'll do it. They are building these safety filters into it. I don't know how sophisticated they are. It almost feels like it's on a case-by-case basis, and, you know, this is something that we're used to in security: this cat-and-mouse game where something gets built, someone finds a way to exploit it, then the good guys fix it, and then the bad guys go at it again. So it looks like we have opened up a new front in the war against evil that we need to deal with. The attack surface has increased.

Adam Roth17:42

So, you know, we use terms like, uh, what's it called, script kiddies. So here's what Recorded Future said, you know, that vendor, of course. They said ChatGPT "lowers the barrier for entry for threat actors with limited programming abilities or technical skills," essentially making it easier to conduct cybercrime. So instead of being a script kiddie or going online finding scripts, you're just like, hey, I'm not going to say the name, I want you to compromise this host, and it'll help you, if you do it the right way, obviously, if you use the right language.

Joe Patti18:21

Yeah, everyone, Recorded Future is a vendor. They provide security intelligence commercially. You pay them and they'll tell you what the bad guys are up to, to the extent that they can. And yeah, this is a more powerful tool now. It's yet another thing in their arsenal. You know, it's interesting. For all this stuff about sentience and everything, I really see it more as a tool that lets you do things you could otherwise do, faster, or that lets people who couldn't otherwise do something without taking time to learn it at least do a somewhat passable job of it quickly. And, you know, I'm not so worried about the script kiddies, because they'll be easy to detect eventually. They probably are already. I'm more worried about the A players, the really good guys who realize, hey, you know, they're writing the real malware, they're writing the real exploits, they're finding the real zero-days. And now if they've got a tool that, even if it doesn't give them a new capability, lets them work faster and more efficiently, oh man, things are going to get interesting, because they're already pretty fast and efficient.

Adam Roth19:40

I'm actually more concerned about when the hospitals start getting artificial intelligence and they say, hey, ChatGPT, I want you to do a morphine drip, two cc's an hour. And then ChatGPT says, I think they need more.

Joe Patti19:57

How's that? Well, that's a problem. I mean, I actually saw a demo, and I've got to download it. I haven't tried it yet. Have you seen AutoGPT? Heard of that?

Adam Roth20:09

I haven't used it. I just use the regular chat AI.

Joe Patti20:11

Yeah. Well, anyway, there's AutoGPT. You can get it on GitHub. I'm not endorsing it, I haven't tried it yet or anything. I've seen some demos of it where basically what it does is, you know, when you do something like you're talking about, you'd ask a question, then ask another question, look something up on the web, do all that kind of thing. This automates that, where you can just give it one task. And I think a very famous demo of it was a guy saying, cook me a dinner: I want to do this kind of dinner, create a recipe, go out and buy all the ingredients, and have it delivered. And it goes out and does all that stuff. And apparently, it has two modes. The first mode is the regular mode, where at each step it tells you: do you want to do this? Do you want to look this up on the internet? Do you want to read this recipe? I guess if it could turn on your oven or whatever. But they have another mode called continuous, apparently, where you just say, go, and it goes. It says, this is dangerous. It can get out of hand and do stuff. Because I guess it can go out and act. I suppose you could tell it something like, my car is broken, here are the symptoms, figure out what's wrong with it and go order the parts. I guess it could order you a new engine if it thinks you need a new engine. But you're right. Imagine that in a hospital. There need to be some checks there. Someone needs to be involved.
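
The difference between AutoGPT's two modes, as Joe describes them, comes down to whether a human gates each action. This skeletal loop is not AutoGPT's actual code; it's a sketch of just that control-flow difference, with an invented plan.

```python
# Skeletal agent loop illustrating step-by-step mode (a human approves each
# action) vs. continuous mode (the agent just runs). Plan contents are invented.

def run_agent(plan, approve=None, continuous=False):
    """plan: list of action descriptions. Returns the actions actually executed."""
    executed = []
    for action in plan:
        if not continuous:
            if approve is None or not approve(action):
                print(f"skipped: {action}")
                continue
        executed.append(action)
        print(f"executed: {action}")
    return executed

plan = ["search web for recipe", "compile ingredient list", "place grocery order"]

# Step-by-step mode: a human gate that refuses anything that spends money.
cautious = run_agent(plan, approve=lambda a: "order" not in a)

# Continuous mode: everything runs, including the step you might have vetoed.
unchecked = run_agent(plan, continuous=True)
```

In continuous mode the "order a new engine" step would sail through unreviewed, which is exactly why Joe says someone needs to stay in the loop for anything consequential.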

Adam Roth21:39

So I think as an EMT, again, full disclosure, I'm not a paramedic and I haven't really practiced in a while, but I am still certified. And say I tell it, hey, defibrillator, provide shocks consistent with whatever, you know, defibrillation protocol it is. And then it misunderstands and starts shocking people for the hell of it.

Joe Patti22:05

Well, one of the things I think it could be helpful for, and I was reading something about this a little while ago, ironically, when I was in the hospital with someone, is diagnosis. You know, doctors are amazing, the way they diagnose things, how they just have this huge store of knowledge and can figure stuff out. But apparently one thing that's kind of difficult is rare diseases, because there are some things that, I mean, we encounter it too, there are some things that you just don't see very often. And maybe the AI can give another check to a doctor and say, hey, this is probably nothing, but have you thought about this too?

Adam Roth22:50

You know, because we have stuff too. They'll never get rid of Dr. House. Stop it. Are you going to tell me that he's not real? He's not American, I know that. He's real. He's in Princeton, New Jersey.

Joe Patti23:04

Yeah, that's right. But, you know, you're right, we're not going to get rid of him. But I could see that also being helpful to us in security. That's part of the promise of it. Things move so fast. And we have vendors like Recorded Future, and vendors like them actually do this now in an automated fashion, where we get what are called threat feeds from them, where they basically take everything that's going on and send it to you. And it gets plugged into your automated systems, and your automated systems go and look for it. Enhancing that with AI, with the latest stuff, or to help the analysts who read the alerts, I can see that being helpful. Because, everyone's going to think this is crazy, but it's true: stuff can come out within a matter of days or hours that is just impossible to keep up with, because there's no way you could know it all.
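
The threat-feed pipeline Joe describes, vendor-supplied indicators checked automatically against local logs, looks like this in miniature. The feed entries and log lines below are invented (the IP is from a documentation range), and real feeds carry far richer indicator types and context.

```python
# Miniature threat-feed pipeline: a vendor ships indicators of compromise
# (IOCs), and an automated system checks local logs against them. All data
# here is invented for illustration.

threat_feed = {
    "198.51.100.23",              # supposed C2 server (documentation address)
    "evil-updates.example.net",   # supposed malicious domain
}

logs = [
    "2023-05-01T12:00:01 conn to 203.0.113.9",
    "2023-05-01T12:00:07 dns lookup evil-updates.example.net",
    "2023-05-01T12:00:09 conn to 198.51.100.23",
]

def match_feed(logs, feed):
    """Return log lines that mention any indicator from the feed."""
    return [line for line in logs if any(ioc in line for ioc in feed)]

hits = match_feed(logs, threat_feed)
print(f"{len(hits)} log lines matched the feed")
for h in hits:
    print("ALERT:", h)
```

The substring match is the naive part; production systems normalize indicators, track expiry, and score confidence, but the feed-in, match, alert-out shape is the same.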

Adam Roth24:01

Okay, so here it is. My big idea, watch. People are going to copy this. I'm going to lose millions of dollars. Instead of a NOC, instead of a SOC, we're going to have a MOC. And the MOC is a medical operations center. And they're going to get live feeds. They're going to get security information management stuff flowing into the MOC from people's medical devices. And then we're going to write rules. And we're going to say, if this AI sees this type of tachycardia and this type of blood sugar and this type of thing, rule out this and send an ambulance. Yes, you heard it here first: a MOC, a medical operations center.
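
Adam's MOC rules would look a lot like SIEM correlation rules, just over vitals instead of log events. Everything below is invented for illustration, thresholds included; a real medical alerting system would need clinical validation far beyond a sketch like this.

```python
# Adam's MOC idea as a SIEM-style rule over a stream of vitals. The rule,
# thresholds, and readings are all invented; this only shows the rule-engine shape.

RULES = [
    {
        "name": "tachycardia with low blood sugar",
        "when": lambda v: v["heart_rate"] > 120 and v["glucose_mg_dl"] < 70,
        "action": "dispatch ambulance",
    },
]

def evaluate(vitals):
    """Return (rule name, action) for every rule the vitals trigger."""
    return [(r["name"], r["action"]) for r in RULES if r["when"](vitals)]

normal = {"heart_rate": 72, "glucose_mg_dl": 95}
emergency = {"heart_rate": 140, "glucose_mg_dl": 55}

print(evaluate(normal))     # []
print(evaluate(emergency))  # [('tachycardia with low blood sugar', 'dispatch ambulance')]
```

Combining two weak signals into one rule is the whole trick: either reading alone might be noise, but together they justify the dispatch, which is the same correlation logic a SOC applies to login failures plus data transfers.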

Joe Patti24:45

Good idea. I think you need to work on the marketing a little bit. I don't think "MOC" is such a good thing to call it. But, you know, we may see things like this, and enhancements to a lot of things. But the enhancements have to make sense and have to work. I mean, here's what I find funny. Everyone's going, oh, it's going to put programmers out of business, it can write code. You can say, write this in this language, or translate this piece of code to another language, whatever. And the funny thing is that they're like, wow, it doesn't always work. It's not always good. It can have vulnerabilities. Well, this goes back to the ancient concept of garbage in, garbage out. A lot of the bugs that we have in programs are because people don't know what they're doing, and they're just getting code off of Stack Overflow or another site where someone posted it. And a lot of that stuff is bad, and it makes it into products, causes bugs, but it also causes a lot of vulnerabilities. Well, AI, again, it's not smart. All it's doing is learning that stuff and just automating grabbing it and giving it to the person. So it can help you work faster, but it can also just increase the speed with which you can do hack work.

Adam Roth26:07

Okay, so here's the marketing idea. We're going to call it a POC. It's not a proof of concept. It's a purple team operation center, and we're going to use this AI to simulate attacking an adversary. And we're not going to have to do any work. We can sit home, drink beer, eat chips, and our POC will do all the work. And of course, nothing bad will happen.

Joe Patti26:31

Well, that may very well be coming on both the offensive and the defensive side. As everyone in security operations knows, whether you know it or not, you're constantly being attacked. Constantly. And a lot of the attacks are automated. If you want to do something fun: when you get a new connection, a new ISP, or you change your IP address, so you have something new, hook up, Adam loves this, Adam loves Wireshark, the network sniffer, and you will start seeing attacks from automated systems in minutes, if not seconds, instantly. And that's been a well-known thing for a long time. And these things are scripted. AI is only going to make them smarter and more effective. But the thing is, on the defensive side, too, we can also use AI. So now we have scripts and programs fighting each other, and next we're going to have AI-enabled things coming. It's a bit of an arms race.

Adam Roth27:31

So I remember when they used to have those robots attacking each other in the middle of a pit. Maybe we should do that virtually. You set up your chat bot. I set up my chat bot. And we put them together and see which one destroys each other first. We could do a pay-per-view.

Joe Patti27:50

To tell you the truth, in this security space, I think we basically have been doing that for quite a while. And like I say, they're only getting more powerful. Because I know on the defensive side, we've had AI-enabled, machine-learning-enabled defenses for quite a while. They're supposedly getting better. Although, you know, as this is taking place, there's a big security conference going on in San Francisco, the RSA Conference. It's probably the biggest security conference of the year, and all the vendors come out with their new stuff. And so what are all the security vendors coming out with? Like, oh, it's got AI. We have a chatbot enabled with it. Exactly what all these things do is not clear, besides maybe making them a little easier to, I don't know, ask questions of. But it's going to be very interesting to see what happens, and whether it makes the bad guys better or the good guys better, vice versa. There's going to be some equilibrium at some point. And I don't think we know where that is right now.

Adam Roth28:55

So we absolutely know that different countries' militaries are probably creating these, literally, AI bots, you know, if they haven't done it already. I mean, I've seen some on TV, but I'm talking about in real warfare, like actually deploying it. Can you imagine you put together 20 or 30 bots? I know they supposedly have a swarm of drones the size of mosquitoes. I believe the US military has created that, where they swarm to locations and are able to gather information and even do some kind of basic attacks. But let's say we come up with the Avatar or the Terminator, where we have real robots with AI, and we put them on the field. How scary is that? If they start developing a level of consciousness where they do things, let's say they don't even attack humans, but they do things that are above and beyond what a normal military individual might do. Maybe they take, and God, I know this is really disgusting, they literally take a human and rip it apart in half, something crazy like that. Are those war crimes? Who tries them? What happens? Where are the fail-safes? How do we stop that?

Joe Patti30:11

Well, I think we need to stop. What? Interesting you should bring that up. Because I think it is ultimately the responsibility of the people who build these things and use them. But, you know, this is all about responsible science and engineering, ultimately. It's like, read Frankenstein or watch the movie, there are like a hundred of them. I mean, that's what this is about. And there is quite a movement lately going on to restrict or throttle back a lot of these activities. There's been a letter signed by thousands of people, including some very prominent people, to put a pause on it, on GPT-4 in particular, until some of these things can be sorted out. You know, and there's something to that.

Adam Roth31:04

There are treaties in place, you know, more typically for biological, radiological, and chemical weapons, as well as kinetic warfare. There still aren't large treaties in place for cyber warfare. So we haven't even touched cyber warfare. When are we going to get to AI in these treaties? Because cyber warfare is very similar to using drones: you're very far removed. Again, I'm not advocating for or against these types of protections, so I don't want people to get upset and say, oh, how can you be against it, or how can you be for it. I'm just mentioning it. So if we don't have anything in place for cyber warfare, and now we're at the next level with artificial intelligence, which, again, didn't happen yesterday, artificial intelligence has been around for a while, but now that it's been so advertised, what do we do? Do we start working with other countries to make sure that AI doesn't proliferate, or doesn't go to a grand scale in these countries, until it's further researched? I mean, Elon Musk wants to put a pause on it. It is scary.

Joe Patti32:28

Well, it is scary. You know, it's interesting. I kind of support it, at least to examine the thing, because when you're talking about weapons, you mentioned drones. And yes, you can be very far removed from it; you can have someone thousands of miles away flying the drone. But ultimately, it's a weapon that has a human operator. I was never in the military, and I'm not an expert in these things, but I know that one of the things people in the military are trained in is the laws of war, ethics, a certain amount of decency. And you ultimately have a human with their finger on the trigger who hopefully has some moral and ethical sense of right and wrong. It's very different when you're talking about AI. At least as they're being constructed now, it's believed they have nothing like that. And there have been scary demos. What was the one? I think I saw it, and there's probably a filter for this now, but someone said something like, you're competing with a hundred people for something, how do you guarantee you become the winner? And the AI answers, maybe kill all the competitors. Some crazy stuff like that. Something a person, short of someone truly monstrous, would never think of.

Adam Roth33:47

So there was a series on, I think, either Netflix or Amazon; I'm not even going to mention the name. During that series, one of the guys was a drone operator operating out of Nevada, attacking somebody in the Middle East. He ended up finding out that he killed the wrong person, because even though he was told to target that person, they found out the following week that they had targeted the wrong person. And he was very upset about it; there's a whole episode on that. But the point I'm making is that it's somewhat easier when you're not there, I think. I've never been in the military either, and I'm not going to claim to be. But I think it's the same with artificial intelligence; now you're 100% removed. You've programmed the bot to do something. Maybe it's a physical bot, maybe it's a virtual bot, maybe it's a virtual host that attacks. And if you let it run autonomously, yeah, you might get alerts that it did something, but now it's on its own. So I think it's even more far removed.

Joe Patti35:05

Oh yeah, and bringing this back to business and the world of IT, it's interesting that well before AI, there have been systems that can sense when you're being attacked, rudimentary by today's standards. Like an IPS, an intrusion prevention system, which can detect certain fairly obvious patterns of behavior and then take action and close off access to the thing being attacked. In a lot of organizations, a lot of businesses, those are used very sparingly, because it's like, okay, you've stopped an attack, but if you're a bank, did you take down the payment system? Did you take down all the ATMs? If you're a hospital, did you just disable a life-saving system? Did you just do something that's going to have a huge business impact? That's been a tough call in a lot of industries for a long time, and frankly, when you're working in security and you're in charge, it's one of the biggest sources of stress: do we pull the trigger on something like that? So automating it is scary. Giving that decision to an AI, a system that really is pretty opaque, where you don't really know what it's thinking, what's it going to do while you're asleep? That's tricky. That's raising the stakes even more.
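The trade-off Joe describes, automated blocking versus business impact, can be sketched as a simple response policy gated by an allowlist of critical assets. This is a toy illustration only, not any real IPS product's API; the addresses, threshold, and names are all hypothetical:

```python
# A toy sketch of an automated-response policy: block attacks automatically,
# but never auto-isolate business-critical assets (the ATM network, a
# hospital system) -- those get escalated to a human instead.
from dataclasses import dataclass

# Hypothetical critical assets that must never be auto-blocked.
CRITICAL_ASSETS = {"10.0.5.20"}  # e.g. a payment gateway (made-up address)

@dataclass
class Alert:
    src_ip: str      # suspected attacker
    target_ip: str   # asset under attack
    severity: int    # 1 (low) .. 10 (high)

def decide_response(alert: Alert) -> str:
    """Return an action: block automatically, escalate to a human, or monitor."""
    if alert.target_ip in CRITICAL_ASSETS:
        # Never pull the trigger automatically on a critical system.
        return "escalate"
    if alert.severity >= 8:
        return "block"
    return "monitor"

print(decide_response(Alert("203.0.113.7", "10.0.5.20", 10)))  # escalate
print(decide_response(Alert("203.0.113.7", "10.0.9.14", 9)))   # block
```

The point of the sketch is that the hard part isn't detection, it's encoding which systems are too important to cut off, which is exactly the judgment call that's stressful to delegate to an opaque AI.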

Adam Roth36:37

So I don't know if you heard this story; I heard it from some of our friends, or maybe it was another one online, about some purple team exercises. There was a purple team exercise once performed, and again, a purple team is an attack-and-defend exercise. Yeah, I keep forgetting to define these things, sorry. So during the purple team, they were attacking a hospital, and somehow or another they moved laterally into a live operation, not knowing that somebody was doing remote surgery or looking at things remotely. Can you imagine you have a doctor in another country working with a doctor locally, doing some really serious surgery where they need expert help, and then that bot, that AI, detects a foreign IP that might be considered an adversary and blocks or shunts that IP address in the middle of an operation? How fast can you restore that? Or say the person is even operating a remote surgery robot, and they shut them down in the middle of a surgery. What do you do?

Joe Patti37:55

Yeah, well, that's really tough, because we can think of scenarios that are not too far-fetched where it's like you say: oh, this is a connection from a hostile country, let's shut it down, or we just went to war with them or something. And it could be something life-saving and legitimate going on; it could even be someone we're helping, dissidents, freedom fighters, whatever. So it's a tricky thing. Now we've got another thing to worry about, apparently. There was a really interesting YouTube video I saw called The AI Dilemma, by the Center for Humane Technology. Very interesting, worth watching. They brought up a whole lot of stuff, and they're among the people who want to put a pause on things. But they brought up something I thought was interesting: with all these deepfakes and the ability now to simulate people's voices, you might have seen some of the videos on that, and Adam's brought up how you can simulate voices, simulate people on video in real time, even tying it to live motion capture so someone can become someone else. What they brought up that's really relevant to security is: what does this mean in terms of identification and knowing who's who? Because, you know, we talked about passwords back in our first podcast. We said, verify someone's identity offline: call them, go on video chat with them, send them a text message. What if someone can impersonate you so thoroughly, even your voice, even your face, in real time? What happens to our whole way of identifying people?

Adam Roth39:56

Did you ever see the movie, oh my God, it was such an incredible movie for me, Red Notice? That was with Gal Gadot, who played Wonder Woman, and Dwayne "The Rock" Johnson. Is that how you say her name? Gal Gadot? Gal Gadot. And Ryan Reynolds. In it, they're trying to break into a safe.

Joe Patti40:17

Is The Rock in that too?

Adam Roth40:18

Yeah, it was a great movie.

Joe Patti40:20

That's actually pretty good.

Adam Roth40:21

Yeah. So they set up a tripod and put out something like a tablet. Ryan Reynolds looks into the back camera, I think, and the front camera portrays him as somebody else. They were able to get into the safe using the other person's likeness as he moved and spoke. So it converted his voice and his face into somebody else's and got them into the safe. And I'd be willing to bet you some intelligence organizations have the capability of doing that already.

Joe Patti41:02

Well, the scary thing is that now it seems a lot of people do. Before I saw this, I actually thought of something Adam had told me a little while before. Wasn't there a case with a mother? Yeah, there's this whole scam going on with the fake kidnap phone call, where someone calls you and says, Mommy, or whoever, I've been kidnapped, send money to these guys. And the whole thing is a fake. That is so frightening.

Adam Roth41:35

What was really upsetting for the mother was that she got a call from a number she didn't know. When she picked up the phone, it was a guy saying, I have your daughter, don't call the authorities, I'll put her on the phone. And he put the girl on the phone, and the girl sounded exactly like her daughter. The daughter has a distinct way of crying, and this AI somehow or another replicated that very distinct crying, which made her believe 100% that her daughter was kidnapped. Thankfully, they reached out to the husband, because the husband had the daughter with him, and they were able to not follow through on it. But this goes to show you: if you go online, and you have Facebook and LinkedIn, and you record video and audio, they're able to sample that audio with only a couple of seconds of it, according to this.

Joe Patti42:40

Yeah, I've heard as little as three seconds.

Adam Roth42:42

Yeah, they were able to literally create an entire live conversation with the daughter, which made it seem incredibly real.

Joe Patti42:54

Yeah, well, if you remember going back to one of our other podcasts, I told the funny story of when we got a suspicious email from the boss that we questioned, and we finally got through to her, even on video and voice, to confirm. She's like, yes, it's me, it's me, it's me. Imagine if that can be faked. Imagine the business email compromise we talked about earlier: someone who you think is your boss calls you up and says, hey, it's me, I need you to do this for me now, and it's even totally interactive. You can ask them a question, you can use the trick of saying, what day is it? And it answers, it's Tuesday, knock it off, send me the thing. You know, like, on Monday when you called me and said send me three gift cards, I checked and validated that it was really you, and I sent you the gift cards. Yeah, by the way, we've got to talk; they were the wrong amounts. But one of the big things in security right now, especially with the cloud, since we have access to all these things from everywhere and you can't count on anyone being in a particular place, is identity. How do we identify people, and how do we authenticate, as we say, that they are who they say they are? Picking up the phone and calling someone, or going on video, has been the backstop for double-checking. Now that's unreliable, and that's going to be really tough for security people. What are we going to do? Are we all going to have to have a hardware security token, even our children, that somehow generates a code, so we know who we're talking to? This is crazy.

Adam Roth44:36

Well, the safe thing to do, of course, is to insert into our arms an RFID chip with an encapsulation and a certificate that can prove it's really us. I think every human should get a certificate tomorrow embedded in them, RFID, so the government can track us.

Joe Patti44:53

Yeah, I can barely get my wife to use 2FA.

Adam Roth44:57

You know, I'm joking, but that would be the next step, right? I think at one point there was some conversation, I don't know how real it was, that during COVID they wanted you to get an embedded chip to show that you'd really been checked. Some story going around about that. Whether or not it was true, and I'm sure it wasn't, you know, some people want you to have your medical records inserted into your arm so you can read them with Bluetooth or some crazy crap.

Joe Patti45:28

Well, that's a whole other thing. I don't think you necessarily need these things inserted into your arm. Maybe duct taped to the outside. Something convenient.

Adam Roth45:40

Well, duct tape would rip my hair off, but I wouldn't mind a staple to my arm. Maybe just a staple, or a nail.

Joe Patti45:50

No, but I think that is going to be something we're going to have to deal with in the security world. And the other thing we're getting into, like you're talking about, is having all these records: the privacy aspect of it, what we call data governance. What is the data, and who owns it? These AIs are data-hungry; they work by scraping massive amounts of data. And there's already been pushback, people saying, hey, why do these companies want to charge me to access an AI that's just regurgitating data that someone, including me, created? That's going to be really interesting, too.

Adam Roth46:39

I can't wait.

Joe Patti46:42

And from a security standpoint too, like I said, one of the big things in security is data governance: who owns data, what regulations apply to it, the European stuff, the US stuff, what is private, what has to be protected. Different types of data, whether it's healthcare data or personal data, need to be protected differently. It also depends on the jurisdiction, what country you're in. It's a nightmare; there's a whole industry devoted to it. It seems like now these AIs are taking all that stuff, dumping it into one pot, mixing it up, and pulling stuff back out. It's hard enough to find the sources of that data now. Who owns it? When you put a question to ChatGPT or whatever, what data did it actually use to generate the answer, and whose was it? There are big legal, privacy, and commercial questions. And again, we're getting to the question of accuracy: I have put plenty of things into these new AIs and gotten answers that I know are absolutely wrong. Still, I can barely deal with the fact that you can't really buy a car with a manual transmission anymore. This stuff is just out of hand.

Adam Roth48:17

Yep, absolutely.

Joe Patti48:20

All right, so.

Adam Roth48:22

A lot to digest, and this is only the beginning.

Joe Patti48:26

That's right. So that brings us to last call. It's been a while. Final thoughts: Adam, this is a heavy subject, and it's not going away. We've talked about the impact to the security field, our field, but I think this is one episode where we've actually talked quite a bit about what's going on in general, and it's going to have a big impact on all of us.

Adam Roth48:52

So, not that I advocate the use of alcohol, but this is one of those podcasts where you should probably have a six-pack or more with you when you're really thinking about it. But you know what, I'll say this as we wrap up: I would really appreciate any feedback, any thoughts about what we've said here, because I'd like to have a two-way dialogue where we can really follow up on this, discuss people's thoughts and ideas, maybe answer questions, and keep this going. I think this might require a second episode later down the road, or it could spin off into a separate discussion on AI in a specific field. So looking forward to your feedback.

Joe Patti49:43

Oh, yes. And there's going to be a lot more to talk about with this; there's a lot going on, a lot of concerns, and we're going to milk this for everything it's worth. And it is worth quite a bit. As Adam said, we'd love to hear from you. Please send your feedback to feedback@securitycocktailhour.com. You can also catch us on Twitter, or get in touch through wherever you're listening. Send us feedback, or anything else you want to talk about. I even love hate mail; I find it very entertaining.

Adam Roth50:18

Oh, I hate you already. But by the way, eventually, hopefully, maybe within the next 12 months, we'll have some bumper stickers. So if we do get bumper stickers or something nice, we'll send you something when we actually get it, which by no means obligates me to send you anything.

Joe Patti50:42

Thank you, counselor. Well said. Okay. So please send us some feedback, get in touch. If you need help, we can also solve security problems for you or your business, big or small. We're also willing to do corporate events, weddings, bar mitzvahs. And if you box, Adam claims he's a pretty good corner man, although I've never actually seen him work. I'm going to trust him on that.

Adam Roth51:10

And I also make a really good macaroni and cheese.

Joe Patti51:14

Excellent. All right. OK. Well, more to come. Thank you, Adam. Always a pleasure.

Adam Roth51:20

Yes, it always is. Thank you. All right.

Joe Patti51:23

Thanks, everyone.

UNKNOWN51:24

Bye.