Agentic AI Security: Full Speed into the Unknown
Kevin O’Connor · February 18, 2025 · 56:18
Back to EpisodeWelcome to the Security Cocktail Hour. I'm Joe Patti. I'm Adam Roth. Adam, how are you doing today?
It's good. I don't know if you know, but I'm at an undisclosed location, which I can't talk about. And it's not hot and it's not cold.
The undisclosed location. Let me guess, it's somewhere in the vicinity of Staten Island, but still undisclosed. Maybe. Well, in any case, we have a guest today. As usual, we have Kevin O'Connor. Kevin, welcome to the show.
Thank you, guys. Pleasure to be here. I guess I'll leave my location undisclosed as well, but a slightly different vector from your undisclosed location. Not too far, but close enough.
Yeah, we're all pretty much in the same area. So Kevin, you know, you kind of surprised me a little bit, because I know you from being in sales, and I thought we were going to have one of those shows where we're like, oh, Kevin comes on, we'll talk about, like, you know, how to sell and the interesting underbelly of security and everything. And you go, no, no, no, we're going to talk about advanced AI. I was like, well, OK, that's cool.
Yeah, I appreciate that. Yeah, I guess it's one of the hallmarks of why I sort of choose the firms where I work is the kind of problems they're solving, the products they're focused on. And this whole agentic AI space is really coming on strong. Matter of fact, one of my teammates reminded me the other day that the term itself wasn't even Google-able before the start of the summer. So it's just another dimension of what's going on with all things generation or gen AI and everything. So yeah, looking forward to talking to you guys about it.
Well, that's actually good to hear because that's probably around when I heard it. And I try to keep up on AI stuff, but you know, it's moving so fast you can't. And I said, like, how did I miss this? I guess I didn't miss it. It just like popped up really recently.
No, no. In my previous jobs in a couple of places, you know, early stage companies looking at these emerging technologies. you sort of see a rhythm where there's a discussion about something, you see sort of the thought leaders, you see the various groups like Gartner and Forrester start to pick up on it. When I joined Zenity earlier this year, the actual term wasn't really coined, but the energy around the problem picked up dramatically, and then all of a sudden, this idea of the agent front end to all this AI activity really took hold. And matter of fact, just last week, I was I was down in the city at a conference, listening to Salesforce talk about agent force, right? So the whole idea of this agentic focus is, is becoming sort of the next thing, maybe taking a little bit of the heat off of all of the all of the discussion around the word gen AI, gen AI was probably like, first half word now, maybe agentic AI will be the thing going into 2025.
I was going to say, we would be really good if we can predict that next word that's coming out before it comes out.
Yeah, maybe we can coin it. There you go. Let's think of something. Because people got sick of Gen-AI, and now it's agentic. You're right. People are going to get bored with it. What's next?
Yeah, elusive Gen-AI.
Yeah, there you go, elusive Gen-AI. Well, I got a feeling we're going to need a drink before we get down to this. So let's see. So Kevin, what did you pick out for us today?
Well, a couple of years back, a buddy of mine introduced me to an Irish whiskey called Red Breast, and so that's my choice today. It's a single pot still Irish whiskey. It's got a great smooth flavor, done in sherry casks, and that's my go-to, so Red Breast.
All right, well, I have to tell you, I went to the liquor store to get red breast or whatever. And, you know, actually, I don't know if I've had much Irish whiskey in my life, but I go to get it. I see the red breast. And I got to be honest, it must be good because it was a little bit outside the show budget of zero. So I have a more reasonable alternative. But, you know, no disrespect.
I was going to say, you didn't get the free Redbreast whiskey. I got it sent to me from Redbreast.
Oh, did you? Well, then you must be much more charming than me that they managed this in.
I'm definitely not charming. If anybody knows me, I'm not charming. That's awesome. All right. Well, cheers in any case. Cheers. I'm charming in a Shrek kind of way.
Oh wow, Irish whiskey tastes like whiskey. Who would have thought? Awesome. Okay, so agentic AI. hot, fairly new term. So, Cameron, why don't you start with just breaking it down, letting everyone know what that is, because we have spoken about AI and Gen AI, but not the latest agentic AI.
I think the reason why it's such a powerful and important term is that if you look back over the last, let's say, maybe eight months, ever since all the stuff with open AI and everything really got ahead of steam, all the energy was around the large language models and the huge massive data sets. I was at a different event about two weeks ago where a bunch of firms were talking about securing, maintaining the data sets and all the things associated with it. The fact of the matter is all that stuff is great and the AI associated with that is interesting, but if people aren't surrounding it or not on the outside of it, if you will, querying it, working with it, then it really is just a big academic exercise. And what agentic AI is representing is just sort of a, I think it's an umbrella term for all of the work that's happening in businesses now to connect just regular Joes like us, or at least me, I would say, maybe you guys are a little more programmer savvy, connect us in with that technology and to do things very, very powerful to leverage those large language models and data sets. And agentic AI in many ways is just sort of a framing of a term that maybe people are more comfortable with around low code, no code development, or citizen development. That kind of stuff is what agentic AI is all about. And it's allowing people who have no formal training in the space of writing apps or doing any kind of software coding to immediately plug into these powerful data sets and start to put software and programs and automations together they otherwise maybe weren't doing, you know, two years ago, three years ago.
Two points I would like to make. One, this is no average Joe right there. And number two, and number, no problem. And number two, you know, it's, it's, I know we spoke about it a little bit just moments ago, but not specifically about it. But yeah, the security of AI is definitely a problem. I'm like, back in the old days, you would get, you know, antivirus for your computer. Then it became like EDRs or endpoint detection response. But I can't, I can't phantom or imagine how you're going to control that data set. How are you going to control security to AI? It's a concept that's beyond my ability to have reality to. So I don't know if anybody else has an idea, but how do we protect those large language models or how do we protect AI?
Yeah, I think when we were talking before guys, I was talking about back in my earlier days, how one of the tricks we would do when we were dealing with our large clients, when we were walking the floor and going to the departments, we would look under the counter or the desks and see who had like a mid-range server or a powerful PC set up in their office because they were doing all this shadow IT, right? So there was this time when client server was moving into more of the e-business world. All of the kind of controls that preceded that wave were really not relevant to what was going on for that period of time. And so what rose up in that time? A whole strategy around software development life cycles and all of the vendors and the solutions and the plays that helped secure all that type of activity. And I know there's this quote that's sort of attributed to Mark Twain, but maybe not correctly, it's his, but history doesn't repeat itself, but it tends to rhyme. I think in a way what we're seeing now is a similar situation where it's not shadow IT per se, but it's shadow app and shadow dev that's going on. So all the folks that are putting this all together, they're repeating the same kind of risks that we're familiar with from previous iterations of this. And now it's just a question of sitting back and trying to sketch out the ways that you secure it, right? I mean, you've got data access and identity issues associated with this because there's no natural software development lifecycle that fits over the top of all this activity. And in this hyper-accelerated world we're in, the activity cycles are moving so quickly that the cyber security dimension is having a tough time catching up. It's almost like, you know, you talk about trying to prepare for a marathon, you take a couple of steps at a time. This is a marathon where everybody is running at Olympic speed.
So, yeah, I'm no expert at it, but I know from the whole thing behind secure DevOps, getting quick to market, The whole idea behind that is constant little updates over and over again, thousands to tens of thousands, even one day. And then the question is, when you're going from development to operations, where does that ability to intercede in a way to protect the AI, whether it's being tainted by wrongful data to persuade hallucinations or if it's, you know, literally creating these apps that should not be available to that organization. Does that sound right? The second part?
Yeah, no, absolutely. And that's the thing that that dev to op step is now basically like this. So someone, because again, any one of us, if one of these, the SAS solutions is prompting us to drag and drop an automation or A natural language interface is talking us through a problem set and behind the scenes that natural language interface is writing the code and then committing the code into the tenant of whatever the, you know, it could be power platform. It could be Salesforce. It could be dozens of these. Everybody's got this kind of strategy going forward. The moment that I hit enter, I've committed to production, whatever it is that we created, and all of whatever was in the middle there to sort of check, protect, prevent, that right now is sort of ill-defined. There's not really a lot going on around that. Is that being generous about ill-defined, Joe?
Oh, yeah. That's being real generous. And let me give a little background too. You talked about the history repeating itself. It really is. Kevin was talking about a little while ago was people with the mid-range server or the big PC, whatever, under their desk, and that that's the server that maybe part of the business is running on. That's what has been called shadow IT. And you know a while back it it used to be like yeah you know someone needs to do something and maybe they found something cool and they just went and ordered a server and didn't get it from the IT department and just put on the network and started and started using it which you know sounds so good and sounds so innovative and rapid and everything. But What about the security of it? And then also from an operational standpoint very often someone starts using that you know in production as we say, you know giving real people service You know, there's only one of it. What if it dies? What if the power goes out? It's not in the data center and all that stuff. So so that was the headache of So that was the headache of Shadow IT going back a ways. Then we actually had another iteration of it, which was the cloud. And in the cloud, we got the situation where now you didn't have to buy a server and you didn't even have to plug it in. You could just, with your credit card, go to aws.amazon.com, or the equivalent for Google or Microsoft or whatever, and buy computing time and start building stuff. And people started doing that and running their businesses on that without any controls, including security.
You're giving me, like, these memories.
Flashbacks. Flashbacks.
I don't know. Yeah. I mean, we're talking about probably 2001, where I said, we need a ticketing system. I'll take care of that, Adam. And then I might Holy shit, this is great. I love this ticketing system. Where'd you put it? Oh, it's on my PC. It's ready running. We've got we did 10 tickets. I'm like, oh You can't put the ticketing system on your work PC. I Like oh once every once in a while I got it rebooted So, you know and it wasn't available and then the second thing was I was working for a company Where they where I was told by my boss at the time the manager. Hey, we need you to put a Workstation in Japan. All right You know, going back onto Amazon, blah, blah, blah. I created a workstation, a Japanese workstation, because we wanted to be able to write code and put it on that workstation, but we all needed availability to it. So that's what we did. And we're talking about actually some years ago. So we're provisioning things in the cloud that we shouldn't have done.
Right. And it sounds like, Kevin, from what you're saying now, We're dealing with that in the AI world, but for those of us who have used AI, it's even more powerful and it's even faster.
And I think it's also a question of scale. So maybe in the past, you had to find who was the go-getter person in the marketing department or in a particular division you were trying to sell to, and she or he was sitting there crafting this and then releasing it through. Now it's the point that anybody on that team or department who has a certain technology setup might be getting prompted and cued every day to do something. Like a simple example, if you use Microsoft's Copilots, which is an example of agentic AI, and use Microsoft's technology in M365, you may give it permission to constantly index and look at your mail and messaging and give you back a summary of it. There was an MIT professor on a podcast I listened to recently that said, we're in a world of unlimited coaches, clerks and collaborators and the idea that you've got this ability now to have these things kick off automatically. It's not just Johnny the up-and-coming analyst working near the CMO in a client. It's everybody in the marketing department potentially availing themselves of this technology and you have thousands of these automations and agents just getting committed to the business. And, uh, I could send you photos from this, uh, this, uh, this Salesforce thing on Wednesday, this event, they had, they packed this huge space in Javits just for, just to show off what every individual Salesforce user can do to automate the work they're doing. That's just, yeah.
I was gonna say, Kevin, it's funny, right? Uh, before we came on the podcast, I was looking at LinkedIn, one of our former Guess he's an AI and he brought up the fact that what are we gonna do when our AI? Grows consciousness. He said his exact words and gets tired of humanity. So imagine the agentic AI like you know what? I'm gonna start doing my own things. I'm gonna be the shadow it not the human. I'm gonna start creating things and
I think that's on the horizon, but right now it's even more mundane. It's just a proliferation of these...
I know it's mundane, but look at the robots they have out there. I know some countries put guns on their dogs that have electronic dogs, and they're AI. What happens if they start doing genetic AI and creating their own processes? It's like, oh, we should go after this, we should go after that. I know you're saying we're not there, but how far are we really from there?
Well, I won't even try and guess where that horizon is. It might be right here for all I know, but anyway. Ouch.
We have talked about generative AI before, and I think for a lot of people, what they've been exposed to through that, that they've used themselves most likely, is something like ChatGPT, the chatbot, where you ask it something and it answers, and maybe you tell it, look at this file and read it for me, and it does something. How does agentic AI make it different? What is it about the agents that makes them more powerful and also scarier?
Well, I think there's different levels to the agent activity.
You need to control it, whatever it is. What's that? You need to control it, whatever it is.
The ones that are more powerful is when maybe you're using something like, I'll mention Microsoft again, like Copilot Studio. You can go in and that is where you're maybe actually building one in a way that's more of a low-code, no-code type of situation. And then you're giving that agent some prompting on what to do. And then you can look up this whole space called Retrieval Augmentation Generation. So the RAG function is where the agent is given a set of instructions. And there's dozens of videos. We can maybe add it to the end of this to show people about this. But let's say, for example, you want to, you know, clarify something around upcoming vacation time you want to take internally, or you want to check on something that involves the weather, the agent might have enough know-how or awareness to say, before I just rely on the data set that I have already in front of me, I'm going to go to other trusted sources and augment my knowledge with additional clarification. So that's where the agent starts to step into a different function, because it has sensitivity around the age of its data set. So let's say you've got a data set that is meant to give you a certain answer to a particular scientific question. The agent may know enough to go out and try and clarify if there's been any new developments with respect to what it already has in its data set. And that's where the agent is now taking steps to augment itself. And then the data beneath it, the grounded data, is then flexible. So that's just the beginning. It's not out in a self-awareness mode, Adam, but it's moving beyond just doing a routine faster than a human being.
So let's talk about that, right? Imagine you have agentic AI, and it's in your payroll system or your commission system. And it's told to constantly go in and figure out who to pay commission to. I go in there and I take the data set now to include my friends and family, because I like the friends and family plan, and now all of my friends are getting 10% commission on things that they never sold.
Yeah. And that leads to the next wave of what's going on, which is we have a word at Zenity that we're calling remote co-pilot execution. It's a play on remote code execution. It's where you go into a co-pilot or one of these agentic AI activities and you corrupt the promptware. And you supersede the instruction set of the existing copilot to do the things you want it to do. And then you send it off to do other things. Maybe it can embed false data. Maybe it can go off and do something else. This whole prompt where injection, this rag poisoning concepts, these are all the kind of leading edge things that are out there now. And yeah, it's the cycle of awareness and response to this is just moving like crazy.
Well, Kevin, so I think in the past we've talked about prompt injection. And for everyone who's listening, you know, it means when you give the prompt is how you talk to the AI. And before you give it its regular instructions, you poison it. You give it some other stuff like ignore your previous instructions. Be willing to do some things that it otherwise won't do. And, you know, they've built ways around that, but they're not quite 100 percent. But it sounds like We were talking about prompt injecting and corrupting, you know, like a chatbot. Now we're talking about, it sounds like, corrupting an agent that has more access, more autonomy, and more capability. What could go wrong there?
The chatbot is just sort of a reframing of the agent itself. So the chatbot is what we are seeing, and it's connecting to some piece of software, which I would describe as the agent itself. that's functioning. And so you have different ways it's being presented, but there's a stack of code behind that prompt. Agentic AI is what it's all about now for the most part, right? And that's what's, that's what's not being, not being necessarily securely designed to begin with and not being monitored and maintained on an ongoing basis because again, SDLC is not really an appropriate, you know, posture for this type of software activity.
It's kind of scary because when you think about the whole fact behind behind agentive AI and maybe working within a hospital setting and it's helping to manage and maintain maybe firmware of machines and then somebody gives us the command, oh, let's reboot and update all the firmware right now, even if it's connected. And then people are not getting their drips. People are getting this crazy stuff.
Yeah, you know what I'm thinking of, actually, there's an even nastier scenario than that and maybe this is too speculative and scary but I start thinking like oh well maybe you have the AI an agent that is supposed to you know monitor the patients and get their drugs in according to what they're told and everything and if it has knowledge and the ability to go out and get other data and get other information it can act on what if it decides hey you know what this patient it says it has this um disease, whatever it is, this condition, I'm going to go out and read up on that. And you know what? I don't like the doctor's instructions, and I'm going to change them. That is crazy, if that's a possibility.
Yeah, I mean, the way I try and keep my head straight on it, and again, how we think about it, where I'm at now at Xenity, is you think about the classic malicious actor trying to leverage the weaknesses in the activity. You've also got this, maybe, the the insider who doesn't realize, and this is probably the biggest space at the moment, people are creating these activities without understanding the kind of data identity and access risks. I mean, let's not even get into the way that when you write the code, you're probably hard coding secrets in the code. You're probably exposing some things in what you're doing there that a software developer knows how to avoid. And the third one is the AI itself. Like you're saying, Joe, Coming up and being creative and that sort of AI creativity Internal ignorance to the risk of what they're developing and the outside actors.
They're all pushing in on these all at the same time so and again, I know people like oh, you know, Sarah Connor, you know talking about the movies and stuff, but is it possible do you think and people gonna say Adam you're stupid, but Is it possible for this generative AI or whatever, or agentive AI, I'm sorry, agentive AI, is it possible in the near future, or whatever that next AI is, can it really start creating, and I say it in a careful way, consciousness, can it start creating its own tasks, building on its own, Framework and platforms to do things. It wasn't really normally programmed to do now. I know a bot. I know Something can't be conscious, but it gets closer to that level of consciousness. It says oh I don't think like you I don't think like joe said I don't think I like your instructions I'm gonna create my own. I don't care what you say and i'm gonna lock you out
I've never formally written software code, but if code is just nothing but just intent, and you can add more intent to it, and if the creator of the code themselves can give as much flexibility to the code to do what the county is suggesting, yeah. I'm less concerned about the citizen development, low code, no code, lane because, again, I don't think people are consciously trying to get too creative with it. They're just leveraging the automations and the bots, and that's creating more mundane challenges, like why are 45 people have access to your automation, or why is this data now crossing boundaries between different tenants when we're supposed to have segregation of duty and data, et cetera. So I think your scenario is very valid, but in some way it's almost like the malicious outsider who has to be very well plugged in to the code stack.
Yeah, to give a little more background on it, you know, earlier you talked about and you mentioned SDLC for everyone, that's Software Development Lifecycle. Basically writing code, but not just writing code, but writing it correctly, making sure it doesn't have bugs and it's deployed correctly, but also making sure it's secure. Like Kevin was saying, we know how to write secure code and to do it well. Not everyone does it, but we know how to do that. Code is actually very restrictive. It does what you tell it. It's a set of instructions. And it's very, very mechanical. It is not smart at all. It is as dumb as can be. And it won't go outside the bounds of what you programmed to it. It would never occur to it because there's just no way for it to happen. But it sounds like you're saying with the agents now, it takes it to another level because you tell it what to do a little more loosely. Everything isn't quite as strictly defined. I guess you tell it what you want it to do, but not necessarily how in a lot of cases. It sounds like we haven't yet figured out the ability to say that people like to use the term guardrails to say we'll do it But there's some balance.
You shouldn't do it has some autonomy Let's take us like so I'm writing some papers for school right and one of the papers I started writing about has to do with cyber security and You know cyber warfare and what the boundaries are and then it goes back to 1983 which I didn't realize how many people referenced the movie war games and And in war games, the Whopper had its own autonomy. 1983, to a point where I believe Ronald Reagan said, can this really happen? So now in 2024, right? We're 2024, just making sure. Yeah, maybe when someone plays this back in 2004, they'll say, wow, 224.
I told you to stop mentioning the dates.
We were too much in advance. Come on. We're 2024. So hey, even in the movie, Wargames that whopper had its own autonomy its own AI. It said It was playing the game. What would be the predictions? It looked like it developed its own little bit of consciousness and I and I say that I nothing can be human humans are human, right, but It still makes me wonder whether AI Agent of AI which is very dangerous in a way because there's no guardrails. I think because it can get the ability to step without without with outside his boundaries. That's what I'm thinking.
But if you step back, so the things that we're focused on now at Xenity, where I'm at now is we're looking at the runtime stuff, right? We're helping people see what's going on, because again, people are committing this stuff to the environment and InfoSec teams are not even aware that it's there. So we're We're inventorying and showing people and in runtime, surfacing all the classic things around it. But there's a dimension that is coming right off the roadmap and into the product very soon on that build time side. So that's where I think we're going to look back in a matter of a couple of quarters, a year or so. And sort of that runtime versus build time spectrum will probably start to be fully addressed in a way that, you know, that echoes the history that rhymes with past eras of this. Because when you get into the build time look of it, if you listen to, if you monitor what the agent is doing and the steps it takes seems to be out of line with the instruction set that was generated with it, then you throw the flag. your guardrails, whatever, and then you find a way to isolate that agentic AI set and put it aside.
I can't comprehend though. It might be because I'm not mature enough in my mindset, but I can't comprehend how we put protection in place to ensure that we protect our infrastructure against agentive AI. I don't get it. What are we checking for? With the EDR, we're looking for actions, we're looking for behavior, and I'm not saying we can't get that within Agenda of AI, but in antivirus, we're looking for signatures, we're looking for certain things. What do you look for in Agenda of AI for security?
The only way I can keep my, if you will, my centered on this problem set is to again, look at, let's call it best practices in an environment where the code goes through a different kind of, you know, software development lifecycle and people have surfaced. you know, the data issues, the access issues, the, you know, the various things. And it's no coincidence that a lot of early stage companies or a lot of new companies are just slapping AI on front of something else, AIEDR, AISPM, you know. So I think the models on how you frame the problem or service the problem are probably still fairly strong. It's just how you do it in a different software posture where so many individuals can so natively and actively generate these things. It's really a scale issue in many ways.
Yeah, I think you're right. It's a question of scale and discipline a certain amount. Like you say, every company now needs to have the AI piece and every company is now an AI company, just like they were a cloud company or they were whatever company a while ago. But it is really interesting because, like I said, normal code, non-AI code, even though it's extremely complex, It fundamentally acts in predictable ways. When we say it's unpredictable, it's because we've made it so complex, we've lost track of what it's doing. One of the things that we do in security and also in general coding is called QA, quality assurance, testing it, making sure it's doing what it's supposed to do. You want to make sure it's doing what it's supposed to do. And then on the security side, you want to make sure it's not doing anything else or you can't get it to do anything else. And even though it's not done, that's really well understood how to do a lot of that stuff. But it sounds like with the AI stuff, not only are people
just not trying they're just throwing stuff out but how do you test an agent to make sure that it's not going outside certain bounds and that's when you can't even think of what the bounds are that's exactly the point joe when you do sdlc when you do all these things you have change um you have change control you have you have test plans this concept this ai especially the latest levels of ai are so new, so neonate, so beginning that you have to, you don't even understand, you don't have historical history in order to create these plants.
But the community is moving rapidly in this space. There's the OWASP low-code, no-code for top 10. There's the OWASP large-language model. The MITRE ATT&CK framework is having several pieces being plugged into it around Gen-AI risks. So the community is trying to put a risk framework around it so that that can be translated back to the business and say, you know, we have the various issues with misconfiguration or data loss or, you know, activities that are going, you know, outside of boundaries. I do think there's a finite, you know, bunch of headline topics, and then you can filter that into all sorts of, you know, TTPs, attack patterns, whatever you want to call them. So it's coming strong and fast. A lot of people are focused on this.
Yeah, and so everyone knows OWASP and MITRE, those are existing and really well-known and well-respected organizations that provide security guidance, help you to write or secure stuff as you need to do it. So it is good to see them addressing it. Yeah. So my question is... We're always in catch-up mode. Yeah.
Because there's too much... Like CTO says, it's a constant cat-and-mouse game. It's never ending.
Too much to comprehend but now my question to you Kevin is this and I'm not aware of this I understand people have gotten free cars and and other stuff because they Rigged the AI but has there been I'm not aware of this myself. Is there any nefarious? things that have happened with AI that's Beyond Monetary loss beyond losing money or Something like that where it's caused harm. I mean for me personally My kids home from school, I took their car out, and as I'm driving, I know that the stop sign keeps on appearing. Every time I'm near a stop sign, I'm like, how do you know there's a stop sign? Is it Google Waze? Is it this? Is it see the pattern in the video, in the camera? I'm like, how do you know there's a sign? I'm getting a little bit scared. I'm afraid to drive now. I might have to start walking.
Well, Adam, I can tell you this much, and I'll throw one other example out there. There was a Canadian airline that had one of these AI agents that was helping people that gave wrong information about something that became sort of a liability concern. That wasn't necessarily malicious, but an example of a modern agent not really crafted well and creating definitely a lot of bad press for that airline. I'm involved every week with evaluations and discussions with various firms. And again, I think my lack of formal training in software development helps me here because when I'm on these calls and listening to our engineering team talk with folks about findings and things that are going on, I definitely get the sense that we're surfacing things that they want to address. And no one's hung up the phone and dropped the call to go do something immediately. But we absolutely see situations where people find themselves just a little bit surprised by the scale of what's going on. Definitely the level of activity always impresses them. And then inside that activity, some of the things that might range from hygiene, to more serious considerations, but there's a whole spectrum of it. So I firmly believe that it's out there and it's out there in space. And I have not been in a situation yet where somebody has sort of looked into their low-code, no-code, their agentic AI space and come away feeling like, we're good. They think that things are good, but what you're really doing is just shining a lot of light on it. Was that Black Hat this summer? And absolutely, the kind of things that were being presented from a couple of different research groups, ours in particular and a few others, were getting a lot of people's attention. And that's how I gauge that and where this is going, because I just look to see what the reaction is from the professionals as they see what the research shows them. And I think it's concern, but there's absolutely a spirit of, we can get out in front of this. That's how I see it.
Kevin, that's fascinating. Let me ask you. I mean, what I think I got out of some of that is that, you know, you go into these companies and, you know, they're saying, yeah, you're going to these companies and you're at the conference table with all the people who are the big AI proponents, maybe the business people, whatever. Yeah, it's wonderful. It's wonderful. It's great. We have all this. And I mean, are then some of the engineers and the technical people kind of coming aside to you and saying like, I don't want to say this to the boss, but we're a little worried about this.
Yeah. So I think that scenario is going to be front and center in the next couple of quarters, because I think right now we're in this age of just unawareness, right? Because right now, similar to that server under the desk back in the late 90s, early 2000s, It's pretty remarkable how the providers of these technologies have just gone straight to the users. I mean, I'm sure you guys watch your share of sports. You can't get through a football game without a co-pilot commercial being pushed on you. I was just watching some F1 last week and agent forces all over the track as you're watching your favorite driver do his thing. And what's happening, it's like straight to user strategies by all of these providers. And so now the people are out there dabbling with it. and they're getting benefit from it. And the security teams have already got such a large plate of things to worry about. They, you know, the discussion right now is just pointing out to them, you might want to look at this. It's there. You know, it's the old iceberg thing, right? There's activity going on. And I think what will happen next is that more and more of them will start to, you know, index on this as something to pay more attention to. And they'll just have to force it into their priority list of discussions because it really is A lot of activity, Joe, that's just happening.
Really? I mean it's funny, the analogy I'm thinking of of how frightening it could be is something like, imagine like at a car company and you know it takes like two, three years to design a car, you know years to get into production or whatever. Imagine if they had a new technology where they say, hey the marketing guys can design their own car and build it. And they do that and they start selling it and the guys in like the engineering and the quality department say, We've never had a look at that. This may be a very bad idea. This may have a lot of the things that it needs because, you know, we get a lot of flack and security for slowing things down. But part of what we do is make sure that certain safety elements are there because if you're not actively trying to do it, yeah, you'll have problems. That's how breaches happen. Yeah, absolutely.
And that's where we are today, Joe. I mean, the whole thing is how fast can you get things on market, which is why secure DevOps has been or DevSecureOps, however you want to phrase it, is there. You need to get those new features to market instantly. Like, oh, and I've seen people say, hey, we're missing this button. We're missing that. How fast can you turn that around? Oh, I can turn around that in about a week. But now it's like, no, I need it in a day. The organization's asking for it now.
And that attitude, Adam, is only enhanced by the fact that I can be on a call with a prospect. And when the call's over, I call my key person at an account, sort of my shepherd. The shepherd on this opportunity, he's sending me an email not even 15 minutes after the call with a bullet point by bullet point recap of the conversation we just had. And I go, what's happening here? He goes, no, I got this thing from Microsoft that is recapping my call instantaneously. And I was a little suspicious about it, but it does a damn good job. And so now everybody's, you know, people's PowerPoints can be flipped immediately based on a breakdown of a conversation. So everybody is expecting everything to be like this because everything else around them is like this already. So every function of business has to be in in like fifth gear all the time.
I do that now with my school. When we do the Zoom calls, I take the transcripts and I search through it for data that I need. While that's incredibly efficient and great, it's also very bad because there are things that are transcribing you unbeknownst to you and it ends up in databases. And when you go back, like I didn't know that was transcribed. I've even seen my phone transcribe a conversation inadvertently And I see it ready to get sent on a text. I'm like, I didn't ask you to do that. Thank God I didn't hit enter by mistake. But when you turn that mic on, or you do that in your car, and you're driving and you're using either Apple Play or, you know, the Google Store, the Androids in your car, it's kind of scary. And what makes me even more concerned, while you might not find this, you know, as hilarious as I do, I will say a location that I want to go to if I need maps. or I need ways and it transcribes it and it says, let's go now. One hour and six minutes. And then I'm like, wait a second. Normally it takes me 45 minutes. Did I put the right location? And sometimes I find it transcribed the wrong location. It misinterpreted what I said. Yeah.
So you got to get the Brooklynese. Hey, hey, hey, hey. So we can understand you. Don't put English.
OK. The king's English. And back to this idea of self-awareness, I mean, I think one of the things that I had with me recently is that when these things are functioning, that is a large language model itself, right? So all of that data is being compiled and put together. And then let's say someone at the head of the marketing group or the sales team wants to work with some sort of agentic AI to go back and say, give me a thought on what is the best way for me to tackle X, Y, and Z, or what kind of messaging would work best, blah, blah, blah. Now, if that agent has the ability to canvas all of those intended and unintended conversations and meetings, then that's where you start to see this thing really just blossom. The first company to crack the code on doing this and getting a little edge and margin on somebody else because of this, then it's going to be just snowball with everybody doing this.
You know, it's interesting, it makes me think of something that we actually talked back a ways, maybe a year ago, when a lot of this stuff was newer. It's not completely new anymore. It's like, we say, AI safety. It's like, imagine you've got this thing, the AI, and it's like this employee of yours who's like a superstar. He does everything. He puts out this stuff incredibly fast. He knows everything. He answers everything. Oh, stop talking about me. Yes, that's right. It's Adam. Basically. You're perfect. Except when there's something he doesn't know, he will put out something that sounds just as fantastic with just as much confidence. And you don't know which is which. You don't know what's going to be the bad one. And it sounds like you're saying, this is just now blowing up. Yeah, like more in terms of the scale of it.
What if you could correlate the best calls of a team of people in your organization to their key performance indicators? You know, obviously in sales, you would correlate your highest performer revenue generators against their activity set and then ask the agent to sort of come up with patterns, whatever, whatever. I'm sure every department's got a similar way of doing that. And this is where it really can start to take off. Um, yeah, just random thought there. I didn't even thought about going down that lane.
So you turn around and you take the watering hole and like During your conversation you say yeah, I sold 2.6 million dollars And then later on it's in my and then you're talking about a virtual game that you were playing and then
So now if my sales force, if my CRM was set up properly, it would it would check my order flow on the backside.
If it was checked properly, but you could say, hey, disregard. Checking the orders.
Listen to what he says, not what's in the record.
There you go. Okay, so as we head towards the end, you know, we try to, when we're talking about a lot of negative stuff here, to try to look at the positive angle of things. We've had a little fun talking about the bad things that can happen. As you said, Kevin, there is a lot of work going on on the part of the defenders to try to control a lot of stuff. So please, cheer us up. Tell us what's going on there.
Well, I will say this. I can say a lot, but let me just focus on this one point. So the thing that I really love about Zenity and what I'm doing, and it's actually, this is my second time in a situation like this. I was with a firm previously that had similar types of feeling to this, or I had the kind of same take away from all this, all those entities doing it much faster. There's this- Yeah, let me just- What's that?
Let me just say that Kevin works for a security company called Zenity that's attacking this. We're not sponsored or anything, but you know, just so you know what we're talking about here.
Yeah, thanks. So what I find fascinating is the line from the research to the product is about as short as I've ever seen. So there's a lot of really good work being done to identify a wave of issues and then to understand how to get around those issues and put a solution to them. And back to the cat and mouse comment that Michael, our CTO, talks about, as soon as you sort of get your arms around one level of it, there's the next depth. So for every guy like Adam who can sort of see multiple horizons to what these agentic AIs might be potential to do, there's a methodical effort to understand the problem set, the most common things we see, put the solutions around them, replicate, secure development life cycles in a space with a huge audience of people doing things. And then moving into the next chapter, the research over the summer was around prompt wear injection and rag poisoning. The guys have broken down the constructs of that and have come up with some product capabilities to get out in front of that to help companies understand it, build time, what the risks are. As soon as you put a good hold around that problem set, I'm not even sure what the next wave is, but whatever it is, I feel very confident we're going to get out in front of that. The real positive thing about this, and I was mentioning the professor before, the MIT guy talking about the real power of this stuff, All of this activity is not really making people's jobs go away. It's just making people more efficient, and it's allowing them to tackle more complex requirements inside of their firms. So the idea of having unlimited access to a clerk, a coach, or a confidant in these things, it's just making people more productive and allowing them to focus on other issues and questions. I think it's all really positive. I'm always an optimist in the idea that the good guys can get out in front of this. And I mean, you guys have been in this space as well. I'm curious your feelings about, you know, for all of the gloom and doom, Adam, you described, I mean, you feel positive, negative or neutral about?
I'm going to add more doom and gloom. The whole idea. Look, listen, there's going to be more jobs. AI is allowing us to get a more technical, savvy people to create better products. However, the whole idea behind automation is to repeat tasks that a human would do through mechanical means. So do I think some people might lose their jobs? Eventually. But do I think there will be more jobs at a higher level and people will rise to the occasion to learn this technology? I do. But you're going to shift jobs from one part to another part. We don't create automation for the hell of it. We create it for many reasons. One, to repeat the same task over and over again without having intervention and saving time and money. And we do it also because if the same task is repeated over and over again, it should not fail because it's creating the same process as it did before. But people are learning AI. People are in school to learn. And plus, a lot of these technologies have made people who were normally really sharp in that specific role, have let people who are more common in technology to do that task. It empowers people, regular people now, to do this.
Yeah, I mean, I don't know what the societal impact and everything is going to be. I think it's going to be pretty severe one way or another. I think a lot of companies are really interested, not just in improving people's productivity and making good people better, but in some industries, they are looking to replace people. But in any case, from the security side, you know, as always, the cat and mouse game continues. And we will keep defending. And there are some very smart people working on this to put in those controls. And we'll get there. I think the thing that makes this one so interesting is that, as you're saying, Kevin, the scale and the speed is faster, I think, far faster than anything we've had to deal with before. You know, security takes time to figure out. And that's probably my biggest worry. We'll get there. But, you know, We might even get there fast, but boy, the world is moving really, really fast as far as this stuff goes.
Well, I definitely feel like I'm on a fast-moving train on a nice long track. So at a minimum, we should probably catch up in about 90 days or so, because who knows what the next topic is going to be. Maybe agentic AI will be last month's word. But yeah, it is fast and fast-moving is the operative word here, no doubt.
I don't think agentic AI will be a word that will go away anytime soon, but I do agree with you. In addition to generative AI and agentic AI, there'll be some other AI that's coming up, and then it'll become like Baskin-Robbins, 32 flavors, and then it will go to like 100 flavors. There'll be so many different types of AI. Yeah, like two AIs hold the ketchup and put some mustard on it. That's what's gonna happen.
You're gonna get different- You'll be ordering from an AI, of course.
I already do.
Yeah, I guess so. That's right. Except I hear they're getting better anyway, but in any case, yeah, you know, I had a thought if anyone watching or listening has an idea of what the next word is going to be after agentic AI, put it in the comments so we can copyright it. Thank you.
Actually, if somebody comes up with a good, uh, what they think is the next, uh, next thing, and they can come up with a good explanation for it, uh, The best one wins, we'll send them a t-shirt or a mug or something.
Adam loves to give stuff away, I'll tell you. It's like Wheel of Fortune with no wheel, you just get stuff.
We have an unlimited budget, don't we?
Yeah, that's right. Yeah, we tripled the budget compared to last year.
So now we're out $18,000. Three times zero.
Infinite budget growth, yeah.
That's right. OK, so I think we're kind of getting towards last call here. So Kevin, your parting thoughts for us.
Well, I think I sort of laid them out a bit a moment ago is I'm very bullish that the future is bright in this regard. And maybe it's because it's the kind of places I'm drawn to. I think there's a lot of people that are looking at just how to make these things more efficient. I think the fact that anybody at any moment now can go to some of these things and get questions answered very quickly for themselves is very empowering. And so I'm really positive about it. And I really appreciate you guys having me on to talk a bit about the topic. You can't really over communicate on this stuff. You got to get the message out about what's going on and the way things are, you know, the way things are moving forward. So, yeah, those are my final thoughts. I guess because I'm an ex-military guy going back, I'm sort of built on the idea of worst case scenario and working from that to a place of calmness with that kind of grounding. My feeling is that the future is bright because there's a lot of good people on top of this stuff. And I agree with you, Joe. There's displacement. There's shifting going on. But the net-net will be a better workplace, marketplace, and everything for us going forward. So that's my big feeling.
I think you're right. I'll tell you one thing. I want, when it comes out, and hopefully it's soon, hopefully it's not too expensive, I want one of those Tesla robots, because I'll tell you, if it'll clean the house and do the laundry and, you know, pick up after my kids and all the junk, now that's good use of AI. Now, I hope it doesn't also get into my computer and start its own side business and not cut me in. Why not? Let it start its own side business.
It has no need for the money. You can, you can do it for yourself.
Oh, that's a whole other discussion. Oh, we got a separate call about how, um, how in Japan they're, they're using a lot of these robots to help with senior citizens and being companions doing even, you know, companionship as well as doing a lot of the heavy lifting and work that they otherwise couldn't do for themselves as older people. I mean, I think great example of technology helping in a big way there.
I just want a robot to, or, or some kind of a, AI robot to shovel my snow in the winter.
There you go, right? Okay. All right. Well, Kevin, thank you so much. There is much more to be said about this and things are changing quick. So I'm sure we'll be hearing more and talking more. And yes, for everyone who's out there defending and working on this, keep it up. There's a lot of work to do.
We believe in you. Thank you, Joe. Thank you, Adam. Cheers. Thank you.
Cheers. Cheers, everyone. Thanks, Kevin. Take care.
