Modern Cyber with Jeremy Snyder - Episode 44

Kelvin Green of CyberSec And I

In this episode of Modern Cyber, Jeremy chats with Kelvin Green of CyberSec And I. The discussion centers on artificial intelligence (AI)—its opportunities, risks, and ethical considerations in cybersecurity.

Podcast Transcript

Alright. Welcome back to another episode of Modern Cyber. We've got another great guest lined up for today to have a conversation about the topic. I mean, really the topic of the year probably, and I think it's safe to say it's going to be the topic of 2025 as well. We are going to be talking about AI.

We're gonna be talking about risks. We're gonna be talking about ethics, experiences, opinions, all kinds of things with today's guest, Kelvin Green. Kelvin Green is a chief cybersecurity adviser who has won multiple awards, including President's Club as a sales engineer. Along with his wife, Kelvin started CyberSec And I, an advisory company focused on utilizing user and entity behavioral analytics, UEBA. I'm sure you're all familiar with that acronym.

And AI, with the goal of helping organizations be more secure in an ever-evolving digital world. Kelvin is focused on health care and government. He's given in-person and remote presentations and training on behalf of multiple organizations, and he has about 20 years of IT and cybersecurity expertise with some really interesting experiences, such as infrastructure and operations lead for the Kentucky Health Benefits Exchange, lead messaging engineer for a behavioral health organization, some work with the Navy, some work in Hawaii, some work as a solutions architect for DHS, IT support for SMBs in Hawaii, and even game testing for a Tetris release. And if we have some time, I'd love to hear more about that Tetris experience. Kelvin, thank you so much for taking the time to join us today on Modern Cyber.

Yep. Thank you, Jeremy. Well, AI is the topic. AI is probably the number one thing that everybody in IT and cyber is talking about, and has been talking about for the last year and a half, 2 years now. And I guess the question I'd really like to start today's conversation with is: what do you fear most about artificial intelligence?

So what do I fear most? I'll be honest with you and say: meeting myself. Okay. You know, at the end of the day, meeting myself. Well, you always see yourself as you, and you're always in you, meaning you're in your head. But what happens when you have a system that doesn't have your morals, your controls, and it's able to mimic you?

Meaning, it can do you, but it doesn't sleep. It doesn't rest. It doesn't have all your same morals. That has to be a scary piece, because everything about yourself can be exposed to the world, and it can do that. So, yeah, you may meet yourself.

And so when you think about that, one thing that comes to mind is, well, that can be a scary thing, or that could be a very empowering thing. If you think about yourself in a positive light, and the work that you do is, let's say, for the benefit of your customers or your organization, great. Well, now I don't have to sleep. I don't have to take time off. I don't have to take vacation.

I can just keep doing the work that I do for the positive, 24/7. Isn't that a good thing? Absolutely. And that brings up one of the things that AI doesn't have. AI doesn't have morals yet.

That's one of the reasons why we have to put guardrails around it. When I go operate, I operate by something called the golden rule: do unto others as you would have them do unto you. So I'm thinking that what I do with myself is always gonna be positive, for the good. Okay.

However, if you strip that away, what will happen? All your capabilities could be used for bad. So it's like, hey, is electricity good or bad? It depends what it was used for. Is a gun good or bad?

It depends what it was used for. So your morals are what make it right. That's what my fear would be, because AI today doesn't have the ability to have morals. Yeah. Well, and I guess, kind of just thinking along those lines, if you think about an AI that's trained on you, meaning it's trained on, let's say, the documents that you've written, the emails, the correspondence that you have. To your point, it doesn't know the motivation that's in your mind for the way that you, let's say, address a certain topic or answer a certain question, or why you did a certain task. It just knows your history of doing those things.

And so if there's kind of a situational change, maybe the context is lacking, and there's a little bit of a lack of understanding about how your motivations might change in response. Is that kind of what you mean? Absolutely. Absolutely. You hit it on the head.

Absolutely. Interesting. And what's been kind of your experience to date with, let's say, some of the AI experimentation that you've done? I guess, actually, just starting from the first perspective: have you trained an AI to be Kelvin? Absolutely not, and I never intend to do so.

With that being said, in my experience so far, it's mostly just been in the realm of achieving the goal of getting high-fidelity alerts. And what I mean by that is the goal is to train it with as much contextual data as possible, so that it can operate as I would as a human being. So one of the flaws that we see today in security is that we're operating without a complete picture. Right? Yeah.

And so the goal would be to get AI to bring in all that different data to build context, so it can have a more complete picture to achieve those goals. And I think this is one of the things, especially in the field of UEBA, where having that context is so important. Right? You always hear this example, especially in UEBA. And I'm sure a lot of our audience has spent time looking at the space or is at least familiar with the topics.

You hear about these challenges with UEBA, things like the impossible travel. Jeremy's credentials are used to log in from Virginia right now. And then 15 minutes later, they're used to log in from, I don't know, Spain. Well, like, how did that happen? The most likely answer is that Jeremy's credentials are compromised.

But similarly, Jeremy logging in from Spain for the first time may not actually be bad. I might actually be there on a business trip or on vacation or something. But an AI may not have that context. And so I think that one of the things that this field has suffered from historically is a lack of contextual awareness. So how do you think about applying AI to providing contextual awareness, or to minimizing alerts without that context?
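
To make the detection concrete: a minimal Python sketch of the impossible-travel check Jeremy describes, with the kind of context suppression Kelvin gets into below. The `hr_on_leave` feed, the field names, and the 900 km/h threshold are illustrative assumptions, not anything from a specific product.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    # Haversine great-circle distance between the two login locations.
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    # Flag when the implied speed between logins exceeds a plausible airliner.
    hours = (curr.when - prev.when).total_seconds() / 3600
    if hours <= 0:
        return True
    return distance_km(prev, curr) / hours > max_kmh

def should_alert(prev: Login, curr: Login, hr_on_leave: set[str]) -> bool:
    # Context suppression: if HR says the user is traveling, don't page anyone.
    return impossible_travel(prev, curr) and curr.user not in hr_on_leave
```

Without the `hr_on_leave` lookup, every legitimate trip abroad fires the same alert as a credential theft, which is exactly the false-positive problem being described.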

So, without that context, that's where it gets more complex. So you wanna give it that context. You wanna say, hey.

Let's bring in HR data and say, hey, we know that you're on vacation. Because we know this behavior about you, and we do know that you're overseas, it's okay. You're out.

You're on vacation. Well, then, you've given it that context. Now, how does the AI help with that? Well, simply put, you mentioned that you logged in for the first time. Right?

You're like, great. Yep. Did you show unusual behavior when you logged in? What if AI was tracking that, to say, hey, I know how you normally use the system.

I know what you normally access on this day. Are you doing something different? I mean, if you start dumping out data all of a sudden, it's going to trigger too. Why? Because now you're overseas and you're dumping out data, and whether you got hacked or you're doing it yourself doesn't matter.

We don't want that to happen. So it's the more data in the threat chain, and then having the right amount of context to, you know, bring it all together. Well, I guess, just as background, when you think about training AI for UEBA, we talked about HR systems. We talked about, you know, the activity performed once I logged in. What are some of the key data sources that go into training AI to deal with the flood of data?

Because, like, if I think about UEBA as everything connected to me and my account, that is so much data. Right? There is just, like, way too much. I email and I Slack and I text and I do all these things. Like, what are the key data sources, and how do you think about, I guess, kind of ingesting them and then maybe filtering noise from the training data?

Yep. So key data sources would be, well, clearly, EDR and XDR for your endpoint. Next, we wanna get your network traffic. The big thing is we wanna make sure that when data comes over to us, from wherever it originated, it's secured. So the network would be next.

Okay. My third would be authentication. We wanna say, hey, is it multifactor authentication? We wanna make sure that we cover, you know, the triple A: authentication, authorization, and accounting.

Right? We're making sure that we understand how you logged in, what you have access to, and what you did. Right? You wanna check all those, and then clear the firewall at the end for the network. After that, you know, it's tough to go into, well, what are you doing?

Right? So, like, what are you doing in the system? Clearly, you know, if you're logged into a server, we have the XDR there. But are you logged into a database? What are you asking for? Because if we don't have understanding at a granular level of what you're doing, you know, think of the black box theory.

If I only know what you did on the computer, versus each transaction in the system, I could only tell you that something bad happened to the computer, not what happened at the transactional level. So I would say, you know, that's where it gets into the area where I can't give you much more than this: you have your EDR, XDR, and you have a SIEM. Yep. You would wanna have a firewall.

But after that, again, you know, it depends on your business and what you're doing. What's your data? At the end of the day, we gotta secure your data within your environment. Yeah. This is one of those areas that I think is, like, particularly challenging for people, because to your point about securing your data for your environment, I need to understand a little bit about you as an organization, about how you operate. And that ties into something you said earlier.

You know, what are the normal things that Jeremy does when he logs in? Jeremy logs in on Monday morning. The first things he's gonna do are check his email and go into HubSpot and look for updates on, you know, customer pipeline or customer support tickets or whatever. Right? And so you can kind of learn that context.

But one of the challenges I see around this is that, like, ultimately, with the data sources that you talked about, I mean, these are massive data sets. And they're data sets that really, like, grow with the scale of the organization. And, you know, you can correct me if I'm wrong, but it's probably, like, a little bit nonlinear growth. Meaning, like, the more systems that you add, it's not that each system adds, let's say, 10% more data. I add 2 systems, and now, instead of 20% more data, I've got, like, 40% more data, because those systems might talk to each other, and they might be integrated into other systems that I'm using.

So, like, what are some of the things that you've learned over time in terms of, like, how do customers manage that data best? Because it could be a flood. Right? It could just be way too much. So, that one, I would say, comes in 2 forms.

1, prioritizing the data sources, and then it comes down to prioritizing the events in the data sources. Like, if you capture all the network traffic, yeah, it's gonna scale massively. Same thing if you had a high-transaction database: it'll scale massively. But if you said, hey, I only wanna know this subset of 10 events.

And you thought a bit about it, then you would acquire a solution that would help you filter that down, and you would log that and say, hey, I'm getting the information I need, so everything else, just discard it now. Am I saying that everybody makes the right decision in terms of what they discard? You know?

Live and learn. Yeah. Yeah. Yeah. Well, that's great.

And I think, like, you know, that's almost like 2 layers of filtering there. Like, I've got my initial layer of filtering on the data as it comes in raw, and I, let's say, only look for certain types of events or certain types of data signals that I want to then store. And then once I've got that stream of data coming in, I can apply the second filter, meaning, like, the actual detection on top of that stream, and look for a particular, potentially bad thing. That makes a ton of sense.
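
A minimal sketch of those two layers, assuming simple dict-shaped events: layer 1 keeps only a prioritized subset of events per source (the source names echo Kelvin's EDR/XDR, authentication, and network list), and layer 2 runs detections on what survives. Every event type and the exfiltration threshold here are illustrative assumptions, not any particular product's schema.

```python
from typing import Callable, Iterable, Iterator

# Layer 1: per-source allowlists -- "I only wanna know this subset of events."
KEEP: dict[str, set[str]] = {
    "edr":  {"process_injection", "credential_dump", "new_service"},
    "auth": {"login_failure", "mfa_denied", "privilege_grant"},
    "net":  {"large_outbound_transfer", "new_external_destination"},
}

def ingest_filter(events: Iterable[dict]) -> Iterator[dict]:
    for e in events:
        if e.get("type") in KEEP.get(e.get("source", ""), set()):
            yield e  # everything else is discarded at the edge

# Layer 2: detections run only over the reduced stream.
def exfil_rule(e: dict) -> bool:
    # Flag outbound transfers over roughly 1 GB.
    return e["type"] == "large_outbound_transfer" and e.get("bytes", 0) > 10**9

def run_detections(events: Iterable[dict], rules: list[Callable[[dict], bool]]) -> list[dict]:
    return [e for e in events if any(rule(e) for rule in rules)]

alerts = run_detections(ingest_filter([
    {"source": "net", "type": "large_outbound_transfer", "bytes": 5 * 10**9},
    {"source": "edr", "type": "screensaver_change"},  # dropped at layer 1
]), [exfil_rule])
```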

Coming back to AI for a second, you know, you mentioned something at the beginning that I wanna kind of come back to, and that's that AI has no morals. And one of the questions that comes up for me around that is, like, what's the role of regulation, in your opinion, in kind of thinking about that problem? You know, should regulators, like Congress, like lawmakers, be stepping in to do more relative to, you know, assigning morals to AI? Absolutely. And that's a big topic that, you know, I love to go down to consumer commerce and interact with different organizations there and talk about, because, yes, right now, it's per industry: they're each implementing their own guardrails.

They're saying, hey, we wanna decide what we're gonna do. But let's go with the concept of hallucination. Right? That's really dangerous.

I mean, it can just make up something. What if somebody never drove in the United States, and they came over here, and they say, hey, I want to learn to drive? And the LLM that they used had Grand Theft Auto, the game, as a data source. And it says, hey.

You can drive on any side of the road. You can do whatever you want, and as long as the police officer doesn't see you take the car, you can take the car. Okay. Well, let me go. How do I go do this?

I can pop the window. Well, it's because it hallucinated. Right? It just made something up. Well, in this case, the source was real, but it hallucinated.

That's dangerous. Well, think about that in health care. It's one thing to say, hey, you should be able to get a refund on this ticket or that bag fee you paid to the airline. But what about in health care?

What if it says, oh, I think that you have cancer, so I'm gonna just say you've got cancer? Yeah. Yeah. At any one time, there can be 10 different things that are signs and indicators of having cancer.

But just the fact that it hallucinated and told you that one time can destroy a person's life. So, yeah, do we need regulation to control that? Absolutely. If we don't, we own that risk.

And, I mean, I'm not gonna say there's a perfect answer, because some will argue that you don't wanna have as much regulation because you wanna make progress. But I'm a firm believer that we absolutely need more regulation around that. Yeah. It's really interesting. I mean, I think, like, if you zoom out for a second and you think of, kind of, let's say, the broader Internet.

The Internet is a domain without a ton of regulation, but it definitely has some. Right? So for instance, can you sell alcohol on the Internet? Yes. Can you sell it to people under 21?

No. Right? So there are levels of regulation. Same thing with, you know, guns, firearms. There are any number of things where there is, like, the overall ability to have a capability, but then there are limits on that capability.

And right now, it feels very much like the Wild, Wild West in a lot of the kind of AI use cases. And at best, to your point about hallucination, you have this kind of, like, tiny little disclaimer at the bottom of an AI response that says answers may not be correct. Well, you know, I think we've all been in technology long enough to know that, a, nobody reads the fine print, and, b, like, there's a lot of kind of sampling bias. You try something once or twice.

It looks good. It works out. You're like, okay. Boom. Let's go.

Right? And you just kind of implement it. With customers that you've been dealing with, how do you counsel them around, like, not moving too quickly? So the easiest method there is to say, hey, what is the risk of the unknown?

And what I mean by that is, we all love to say AI is going to solve all of our problems. And I say, you're giving control of all of your data to a solution that can think faster than you. That's the goal of artificial intelligence. Right? You want it to do what you can do, faster than you.

And I say, well, you gotta understand, since you can't keep up with it, you gotta understand that once you give it that capability, it has it, and it's not going to stop. And so then they will, in response, ask themselves, well, what is the risk that I'm going to have from that decision? There are some things where you're like, hey, if it goes as fast as it can without my control, fine. Let's go with filling orders for a delivery service. Yeah.

Fill these orders. Right? You're like, I can track my inventory. I know I got this, so keep going. But think about this in terms of, you know, I want to give people information about themselves, whatever information I wanna give out.

You know, you come get your bank information, Social Security, whatever it may be. Bank information. Right? And it says, hey, I'm gonna give out this information properly.

And it's like, hey, I'm thinking, and I'm doing this way faster than you, and you can't monitor it. So whenever a leak happens, it's going to flood. What if it started thinking that all the Johns were the same, and it gave out different Johns' information to you?

It gave you a different John's payment information, a different John's Social Security number, because it just thought you were both the same person, because you could be John Johnson. Mhmm. The fact that you can't stop it, because it's going so fast, is just a scary thing. So, you know, it depends on the risk of the data that's going out. And people start to understand when they connect it with their own information.

So, yeah. Yeah. And when you think about, like, guardrails, which you've brought up a couple of times, what are the key kinds of guardrails that you're talking to people about putting in place right now? Is it on the side of what you're asking the AI? Or is it on the side of what you expect back from the AI?

Or is it on the handling of the interaction? Or all of the above? How do you think about that right now?

The data and how you train it. So Okay. The biggest component, to me, is the data that you give it and how you train it. And, like I said, the hallucination level depends on how you train it. So I guess you can look at it as human interaction in terms of training.

Meaning, it trains based off of real data and the reaction that it got back from a human being. I say yes. I say no. Good, bad, needed, unneeded.

And then it's learning from that. I'm not a proponent right now of the speculatory nature of AI. So I would say, in those cases, speculatory information should go to a human being to make a decision. I mean, they will rationalize out whatever it is. Meaning, should this be considered real or not? So that's where I sit right now in terms of recommendations for AI.
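
A minimal sketch of that human-in-the-loop pattern. One simplifying assumption: "speculatory" is approximated here by a model-reported confidence score, and the human's yes/no verdicts are collected as labels for retraining, as Kelvin describes.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str
    confidence: float  # 0.0-1.0, however the model scores itself

@dataclass
class ReviewQueue:
    threshold: float = 0.8
    labels: list[tuple[Finding, bool]] = field(default_factory=list)

    def handle(self, finding: Finding, auto_action) -> None:
        if finding.confidence >= self.threshold:
            auto_action(finding)  # confident: act automatically
        else:
            verdict = self.ask_human(finding)  # speculative: a person decides
            self.labels.append((finding, verdict))  # feedback for retraining

    def ask_human(self, finding: Finding) -> bool:
        # "Should this be considered real or not?"
        return input(f"Real finding? [y/n] {finding.summary}: ").strip().lower() == "y"
```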

Gotcha. Gotcha. And along those lines, where do you think the appetite with health care organizations is right now? Because I think, you know, over the course of my career, health care, right or wrong, has gotten the reputation of being maybe the most conservative industry, or maybe tied with government.

You know, maybe those 2 are generally seen as being, like, the last adopters of new technologies. Where do you think, though, health care is with AI adoption today? Is it, like, the very, very beginning? Is it, you know, talking about it, haven't really started anything? Or is it, like, hey, we've got production use cases coming, like, in the next 3 months?

So it's burning hot right now, actually. We got a lot of traction going on there. So let's look at it clinically and operationally. Clinically, they're diagnosing breast cancer. They're tracking things down, you know, to a really fine level. They're saying, hey.

These capabilities allow us to look at large datasets and understand them in a lower amount of time. Good. And it's serving its purpose. Operationally, I can't say the same. I was actually just attending an event earlier on that topic.

But I would say, operationally, they're gonna be careful, because the main risk is when we talk about the hallucination, where, yeah, if it gives out health care information, that's dangerous. Like, do you really wanna get some information where you're going on at the end there and saying, hey, is this medication good for me?

Right? Yeah. Am I coughing? If I have a cough and I have a fever, am I okay? You're going in there, and it's like, hey.

You have hay fever right here. You have this thing. You're like, I'm gonna die, but you really just got COVID. Right? So, in terms of health care, clinically, I do see more traction. Operationally, in terms of giving out advice and information,

I see them being more conservative in that realm right now. Gotcha. Gotcha. And along those lines, do you think that the organizations you're working with have a good understanding and awareness of the risks? Or is it really the case that, you know, as you get into these conversations with them, you're having to do kind of the same educational process every time?

So I'd say, you know, I treat it like people are good at what they do. Right? So what I like to do is make them understand it from their own perspective. I don't expect them to know it all. You know, you don't expect somebody who's focused on understanding your heart, your lungs, to understand cybersecurity. But if you say, hey.

This data that you get from there, who knows how truthful it is? If you make them understand it from that perspective, then they catch it. So it's really just, you know, making it relatable. So do they understand it? Yeah.

If you give it to them in the way that they understand it. Got it. Got it. That's really interesting. There's a question that I've got here that I'd love to hear your thoughts on, kind of coming back to the ethics side of it and navigating the complex interactions.

And that question is, like: okay, AI, AI everywhere. But does AI know that there is AI everywhere? And what are some of the implications, and how might it affect us in some of these use cases going forward? Yeah. That's a really dangerous one, because AI doesn't.

No. Right now, we integrate AI, whether we just slap it on top or build it into the system, but we're not telling the components or the other solutions that we're interacting with about our AI and how it's thinking. What happens if one AI is hallucinating like crazy, and it's the entry source to another AI? Right.

The information that it will get, how will it know if it's reliable? And you just told it to listen to whatever comes from that data source. Right? And flip it around. What if, you know, say, hey.

The secondary source had AI, and it was hallucinating like crazy? And it says, hey. You know, like earlier, when I was talking about the driving, with the car, the GTA. It said, hey. You know what?

You told me that you need to get a car. You can go take anybody's car that you want on the side of the road, and as long as no one is around, you can give it a go. Whoo. Yeah. That's dangerous.

Right? So it's like, because AI is going that way, we're not really building AI out to be aware of AI, and to operate off of, well, hey, you wanna check to make sure that's the AI it says it is.

Because it's just like with human beings. Yeah. Yeah. And I think, at the end of the day, to your point, I mean, one of the things that I saw recently that I thought was really compelling on the one hand, but where I didn't really think about this kind of aspect of it, was a company that had built a multi-agent solution for a particular problem that they had. And they're a very large organization, tens of thousands of employees worldwide, and they've got all kinds of corporate complexity issues.

You know, you can imagine an organization that size. They've always got questions around payroll, benefits, time off, leave, you know, changes in employment status: somebody going from contractor to full-time employee, somebody going from single to married and the benefits that they might get, and, you know, from, let's say, married to having a kid, and all of these things. And they found that, you know, they were just having to spend so much time managing internal HR issues that they put together a solution that handles each of those different things: benefits, employment status, payroll, all those things around it. And they found that actually different AIs were better for different domains. One was much better for talking about benefits.

One was much better for talking about accounting and payroll and tax, and another one around something else, which I don't really remember off the top of my head. So what they put together was this multi-agent solution where employees would ask one question, like, hey, my wife and I are having a child. What do we need to do? And what they might actually need to do is go change their health care plan, change the number of dependents on their payroll tax withholding, and also, let's say, notify somebody and request time off for maternity leave or whatever it is. Right? And that goes through an initial AI, and then that AI farms the question out to the specific topic AIs.

And then the answers come back, and it all gets synthesized into, like, one response to the employee. But to your point, none of those systems knows that the question is coming from an AI at the beginning, or that the response is going back to an AI before it goes back to the employee. And so, you know, if one particular link in that chain gets it wrong, there's no quality control process, really. There's certainly no kind of ethics check along the way.
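
A toy sketch of that multi-agent flow, with one addition the chain above lacks: a provenance trail, so every hop, and whoever reads the final answer, can see that AI touched the message. The agents, fields, and canned responses are hypothetical stand-ins, not a real framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    from_ai: bool = False
    hops: list[str] = field(default_factory=list)  # provenance trail

def benefits_agent(m: Message) -> Message:
    return Message("Update your health plan within 30 days.", from_ai=True,
                   hops=m.hops + ["benefits_agent"])

def payroll_agent(m: Message) -> Message:
    return Message("Adjust the dependents on your withholding.", from_ai=True,
                   hops=m.hops + ["payroll_agent"])

AGENTS = {"benefits": benefits_agent, "payroll": payroll_agent}

def route(question: Message, topics: list[str]) -> Message:
    # The router fans the question out, then synthesizes one response,
    # carrying forward every hop instead of hiding the AI-to-AI chain.
    answers = [AGENTS[t](question) for t in topics]
    combined = " ".join(a.text for a in answers)
    trail = ["router"] + [h for a in answers for h in a.hops]
    return Message(combined, from_ai=True, hops=trail)

reply = route(Message("We're having a child. What do we need to do?"),
              ["benefits", "payroll"])
print(reply.hops)  # ['router', 'benefits_agent', 'payroll_agent'] -- nothing hidden
```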

And this is, you know, internal stuff. This is maybe not something that's going directly into, you know, let's say, a medical diagnosis or whatnot. And probably the employee reading it would look at it and be like, well, that doesn't make sense, if there is a complete, you know, weird hallucination response. The stakes are lower. Plus, it's not, like, directly impacting somebody's medical outcome or something like that. So, low risk, but exactly to your point, like, there's no ethical consideration anywhere in that chain.

So how would you recommend organizations think about that differently, or think about, like, managing the fact that AI doesn't know it's interacting with another AI? So that's the complex part. And I will say, that's where we need to take technology to another level, and we need to be aware. Or, you know what? Step back.

Let's go classical and just say, hey, checkbox: you're talking to AI. I mean, if we just literally told it, hey, be aware that if this AI says anything outside the normal, you gotta accept that it may be a hallucination.

How do you double-check that? What would be the guardrails to do that? I don't know the answer, but, you know, I would say, at a basic level, at least let it know. Right? Yeah.

Because, you know, it's blind trust. If you, right now, trusted somebody and said, hey, whenever somebody says something to me, I might have a blindfold on, and I'm gonna go in whatever direction they tell me to go in, and you did it right now. Yeah. How easy would it be for you to run into a wall if you don't know who's telling you that, and what their intents are when they're telling you that?

So Yeah. Again, you trust a human being. You trust them. You build that trust organically, but you gotta have that same level of trust with the system, and that's where it comes from. Right?

I mean, Yeah. At the end of the day, you trust the integrity. You gotta trust the integrity of that AI. Yeah. But, you know, to your point, I mean, maybe, like, disclosure at the beginning. And when you think about it: is it a blind trust, or is it trust but verify?

Is it trust with disclosure? I mean, that is a real question that I think, you know, organizations are gonna have to navigate, depending on the use case, depending on the implementation, and maybe the level of risk around what could go wrong if the answer really is completely off. Right? So, yeah. I mean, and that's gonna come from that UEBA side of the house.

Right? So, I mean, that comes from, hey, we are tracking all the events, because we gotta find that unknown. So we gotta be able to say, this is unusual. We gotta be able to say that, in that case, it stood out. Because the only way you can really go back and backtrack and verify is if you know what the normal is.

Right? Because when you go back and you look, you gotta say, this is how far outside the normal it is. How that's determined, you know, I'll happily leave that up to the AI experts, but those would be my thoughts on that.
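
One simple way to put a number on "how far outside the normal" is a per-user z-score against a learned baseline. The feature (megabytes moved per day) and the 3-sigma threshold below are illustrative choices, not a standard.

```python
from statistics import mean, stdev

def deviation(history: list[float], today: float) -> float:
    # How many standard deviations today sits from this user's baseline.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if today == mu else float("inf")
    return abs(today - mu) / sigma

mb_per_day = [120.0, 95.0, 140.0, 110.0, 130.0]  # this user's normal
if deviation(mb_per_day, 5000.0) > 3.0:  # 3-sigma rule of thumb
    print("flag: far outside this user's normal")
```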

Got it. Got it. I wanna change gears for a little bit, because there are a couple things in your experience and your background that I'd love to hear more about. You know, you've spent time in different parts of the US, away from both of our home base in the DC area. It looks like you worked on some stuff with Kentucky, worked on some stuff in Hawaii. And I know you mentioned a couple other places as we were getting ready to record today's episode. I guess one question that I have for you is, like, do you see big gaps in the level of technology adoption and sophistication in different parts of the country?

Oh, absolutely. Absolutely. I'll sum that up by saying: the larger the city, I will say, the higher the technology adoption. Okay. The smaller the city, the smaller the state, the less.

So Got it. Yeah. And do you find that across the different industries that you've interacted with? I know you spend a lot of time in health care. But in the time that you've spent outside of health care, do you see a big difference in the levels there?

Yes. So I will say that the slowest is government. Okay. Government is the slowest, and that's just because, in my opinion, of regulation. Right?

Yeah. They wanna test everything. So, you know, the strict regulations are there. So government's a slow adopter. I will say that retail, manufacturing, they are blazing the way.

Finance is, you know, getting up there. Yeah. Health care is, again, like I said, clinically making leaps and bounds. Operationally, not as fast. Yeah.

Yeah. Yeah. And what did you learn testing Tetris? So, I will say, I love the game, and I loved competing against my fellow testers. However, it's a whole different story when you have to complete a certain move or a certain sequence a certain number of times, in a certain time frame, because that's part of your job requirements.

You stop and look at Tetris differently. You're like, is this fun, or am I working? I have to achieve this goal. So every time you don't get it, you're like, crap. I gotta redo it.

So Yeah. You know, we always had fun at the end of the day, because we would always get on there and play and just compete against each other. But it got tough at times, you know. Yeah. I'm supposed to play on high levels and achieve tasks.

Yeah. Well, after that, 2 questions. First, do you still play Tetris, or was it, like, so much overload that you're like, nope, done with that game?

Right now, for my wife and me, it's Doctor Mario. So, although we play chess, we have a Nintendo emulator, and we play Doctor Mario every night. So, Doctor Mario. There you go. It's so funny, man. I don't know that I have found any more enjoyable games.

There are games that are arguably, like, at the level of art; the level of, let's say, you know, graphics and physics engines and everything is, like, off-the-charts amazing. They are without a doubt better technical achievements than some of these early Nintendo games, but I don't know that they're more fun. For me, Super Mario 64 and, I can't remember exactly which Mario, Super Mario Brothers on Wii. I think it was, like, New Super Mario Bros on the Nintendo Wii, that one.

Those 2 are, like, the most fun. Yeah. Is there a better game than Super Mario 64? Like, I'm not sure.

Hard to say. This past weekend, my wife and I played Super Mario 1. So that's what I'm saying. Then from there, I will say that I do play, so it depends on the game style. Like I was saying, when I was younger, I played World of Warcraft.

I'm like, oh, I was addicted to it. And, you know, I try to play with my son. That is fun. Right now, I do play some games. Like, my wife and I, we like playing co-op games, so we play games like Yep. Yep.

Moving Out and Overcooked, where you're just doing these tasks? Oh, don't forget. Like, you're moving people's furniture, or you're cooking food. So it's Yeah. Yeah.

Yeah. Yeah. I'm with you. The best games? No, I just Yeah.

But this is my point. Like, I think co-op games, fun, you know, fun for the sake of fun, is, I think, my vibe as well. But I was gonna ask, on the Tetris side: are you better than everybody else that you know at Tetris? Yeah. I mean, what kind of question is that? We were praising me up to this point.

Now you're questioning me. Awesome. Sorry. I don't know that I've laughed this hard on Modern Cyber before now. I appreciate that, Kelvin.

Hey, Kelvin, just to wrap up today's conversation. First, thanks again for taking the time. I've really enjoyed today's conversation.

Are there any closing thoughts that you'd like to leave us with, whether it's around AI, whether it's around some of your other experiences, and any links you'd like to share with the audience? Yeah. So, a few things. 1, when you're dealing with AI, think about your information sources. I know this may come from a corporate perspective, but think about it from your home computer.

Just think about it: if you had a shared computer with your family, would you really want your son or your daughter to see your profile? Come on, man. Well, you might not want AI to see that data either. 2, you wanna make sure you know where your data is going.

Do you use any SaaS cloud? Where's your information going? You're telling it everything about you and how you think. Where is it going? So always check to see where it's going.

And 3, check the default hallucination levels. You wanna make sure that your AI is the same as you are.
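
One concrete way to act on "check the default hallucination levels" is to pin the sampling temperature down for factual tasks, since many LLM APIs default to a fairly creative setting. A minimal sketch, assuming an OpenAI-style Python client; the model name and prompt are just placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.1,  # low temperature: fewer creative leaps on factual questions
    messages=[{"role": "user", "content": "Summarize our password policy."}],
)
print(resp.choices[0].message.content)
```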

Awesome. Awesome. And for anybody who's looking to learn more about you and your organization, what's the best place for them to look? CyberSecAndI.com, or you can go to cscsai.pro, same website. Yep. Alright. Well, Kelvin Green, thank you so much for taking the time to join us on this episode of Modern Cyber, where we talked about AI, and I inadvertently somehow questioned Kelvin's Tetris skills, with no offense intended. And I will definitely back away from that.

It's really funny. I've never been good at Tetris. I'm terrible at Tetris. But it is always consistently rated one of the top two most addictive games of all time: it's Tetris and Civilization.

Civilization is a game that I had to give up and walk away from, and I'm sure there are other people in the audience who are nodding their heads right now about either one of those games. But, Kelvin, thanks again for taking the time to join us on Modern Cyber. To our audience, you know what to do: rate, review, share with a friend.

If you'd like to come on the podcast, or if you know somebody who should come on, please do reach out to us, just podcast at firetail.io. Otherwise, we'll have links in the show notes to CyberSec And I, and we will talk to you next time on the next episode of Modern Cyber. Bye bye. Thanks.
