Modern Cyber with Jeremy Snyder - Episode 15

Alexey Sapozhnikov of Andeavour

In this episode of Modern Cyber, Jeremy talks to Alexey Sapozhnikov, CEO of Andeavour, about AI's role in cybersecurity. They discuss how AI can reduce workload in security operations, challenges of AI compliance with regulations like the European AI Act, and the future impact of AGI.


Podcast Transcript

Jeremy at FireTail (00:04.59)
All right, welcome to another episode of the Modern Cyber Podcast. I'm your host Jeremy, and Modern Cyber is brought to you by FireTail. We've got a really interesting topic for today's discussion. And I know we're going to get into some areas that are top of mind for everybody in 2024. You know, I kind of make the joke that we are legally obligated to talk about AI. And in fact, AI is going to be a big part of the focus of today's conversation. And I'm joined by somebody who has actually a ton of experience in the space.

even before AI became the buzzword that it is today. I am joined today by Alexey Sapozhnikov. I hope I got that name right, Alexey.

Alexey (00:41.825)
Absolutely. Absolutely right.

Jeremy at FireTail (00:43.374)
Awesome. Awesome. Well, Alexey is the founder and CEO of the Andeavour Group. Andeavour is a pioneer in organizational intelligence that brings zero-integration AI to the world of organizational business units, especially in information security and compliance. Alexey is a serial entrepreneur with two exits. He's raised over $35 million in his career, with more than 20 years of launching and directing cutting-edge, large-scale technology production practices.

He is the author and creator of the very first Israeli generative AI enterprise-grade product, adopted by several Fortune 1000 global customers. Previously, he was the CTO and co-founder of prooV Incorporated, which was acquired by a UK-based PE firm and is now a leader in PoC-as-a-service in AI. So deep, deep experience in AI. He was also formerly a technical executive

and VP of R&D at SAP Labs Israel, and a senior team leader at MindCTI and other prominent software companies in his background as well. Alexey is a Frost & Sullivan 2018 Innovation Award winner, a member of the AWS Founders Club, and an Entrepreneurs' Organization (EO) alumnus. I don't know where you find the time to do it all, Alexey, but thanks so much for taking the time to join us today.

Alexey (01:53.597)
Thanks to you Jeremy for having me here.

Jeremy at FireTail (01:58.158)
All right, my pleasure. Well, let's get straight in, because like I said, AI is top of mind for most organizations right now. I hear so many different things about it. You know, we use it in our own product building, but I hear: AI is better for attackers. No, it's better for defenders. It creates so many risks. No, those risks are not real. Where do you start the thinking, and what's your overall sense of the state of AI right now in your own mind?

Alexey (02:29.185)
So first of all, great question. And I would like to share my view on this in the following way.

You know, when ChatGPT was released by OpenAI, for many people, and many people in cybersecurity, it was like an iPhone moment. Okay? It was like the moment of truth. You know, some amazing capabilities, something incredible and unbelievable. However,

for many people that actually used to work with neural networks, with knowledge transfer across various models, long before ChatGPT was released, it was like, okay, so it's the next step, but it wasn't a wow moment.

So, like, for a majority of people, it was like a shock. And this shock inevitably led to, you know, multiple extrapolations in both directions: it will be a super weapon for the attackers, and it should be a magic pill for the AI practitioners and vendors. So I believe there is a change. There is a change on both sides, for bad actors and for the security practitioners and the vendors. However, it is...

Alexey (04:05.075)
is a little bit overhyped currently. And I think we need to speak carefully about the real change that's going on on the bad side and the real change that is going on on the good side.

If we have time, we will probably speak today about AGI, which, yes, will be a critical change if it happens. However, today's AI is a slightly less powerful thing, and you should be really careful and analyze it on both sides: where are the real use cases?

Jeremy at FireTail (04:32.206)
Okay. Okay.

Jeremy at FireTail (04:48.302)
Well, let's talk about those real use cases. I mean, obviously chat and chatbots and, you know, kind of natural human interaction is where a lot of use cases are right now. And we've seen so many instances of, let's say, customer support agents that are chatbots and so on. And some of them are helpful, and we're starting to see in Google search, you know, if you haven't turned that feature off, you'll see AI-recommended results right at the top. You know, but what are some of the use cases that you think are actually

Alexey (05:14.017)
Yeah, for sure.

Jeremy at FireTail (05:17.518)
concrete and real today.

Alexey (05:20.449)
That's a great question. So let me start from the following angle to answer all that. So let's start from the security risks and the real use cases that should be solved from the perspective of your products and your features. So if I'm looking

at the pure side of using AI today, I would say that the biggest problem it will probably create is with security information exposures.

It's actually the problem that data that wasn't so sensitive and wasn't so important, and before could maybe be shared with some third party, today has become more dangerous to share, because on the bad side, they have the capability of doing reverse engineering, of making some advanced attacks based on AI. They also have this weapon. So this is probably the most dangerous part.

Jeremy at FireTail (06:39.854)
Yep. Yep.

Alexey (06:40.897)
Now, the overhyped moment that we have in the area of security products and all that is to create a special product that will secure only the LLM used by a company, or the usage of the LLM. The truth is that those things should be addressed, and they are addressed, by application security and by

Jeremy at FireTail (06:58.67)
Yeah.

Alexey (07:10.803)
the enterprise browser. So if your company, if your organization, is using advanced application security, you know, and definitely API security and EDR, and, you know, the standard things that have already existed here for many years, and it's also using an enterprise browser, you should feel, you know, safe from the problems of inappropriate usage of LLMs. And if you are not using these,

then your problems are much bigger than just the LLM exposure. So if your organization is not using an enterprise browser, or even an extension from an enterprise browser, if it's not using standard application security, then it's not just about the LLM, it's about multiple things. So we can read today that...

From one perspective, we see that many venture capitalists are already starting to say that companies that are dealing with LLM security and all this stuff are a little bit overhyped, like the rest of the AI companies. We are starting to see that companies that raised a seed round and still haven't come out of stealth are already saying that they are doing a pivot,

which is kind of a strange situation.

Jeremy at FireTail (08:44.654)
Yeah, yeah.

Alexey (08:44.865)
And the reality is that, again, the big players in the field that are providing the standard tool set of protection, like I said, application security, enterprise browser, API security, and all the rest, are deeply into the protection of LLMs in this area. Now, access control, for example, is an interesting place, but again, it's not clear now how much of

the access control will be provided by the AI providers, and whether or not it will be secure enough; that's not clear yet. So I will say that we do have real use cases. Things like poisoning of a model, trying to reverse engineer the data that trained the model, all those things exist.

But it's protected today, protected by the standard stack. The big story is to understand what the bad guys can do using AI power on their side, because that can change what you should protect. That's what changes, yes.

Jeremy at FireTail (10:04.974)
Yeah. Yeah.

Yeah. And it's interesting. You raise something there that we see in our labs and our own testing all the time, right? We focus on API security, but you could focus on anything. As soon as you put something online for testing purposes, it starts to get traffic. And, you know, typically we see it within like three to five minutes, right? And we're like a tiny little company, middle of nowhere. And we put something online, no DNS name, just an IP address. So there's nothing linking to it, right? But...

Alexey (10:12.641)
Yeah. Yeah. Yeah. Yeah.

Alexey (10:23.809)
Yes.

Jeremy at FireTail (10:37.518)
Everything gets traffic all the time. And you brought up something that we actually witness in real life when we look at the logs: anything you put online is going to get traffic, but that traffic is getting increasingly smart. Meaning that it's not just a random HTTP request; it's an HTTP request, and if there's a response, well, that's followed by 25, 30, 50 more requests to try to discover what's running at this IP address.

And is there something running with a known vulnerability? So I'm curious, from that perspective, as a cyber defender and a former practitioner, what that says is: okay, number one priority, eliminate vulnerabilities, right? Because what they're looking for primarily is known software packages, known server fingerprints, that kind of thing, or application fingerprints, to leverage an exploit against that environment. Is that the right way to think about it? Or is that just one aspect?
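To make the pattern Jeremy describes concrete, here is a minimal sketch of how a defender might spot that discovery behavior in ordinary access logs. Everything here is illustrative: the log format, the 20-distinct-paths threshold, and the five-minute window are assumptions, not anything either company ships.

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Matches the common/combined access-log format: source IP, timestamp, request line.
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def find_scanners(log_lines, max_paths=20, window=timedelta(minutes=5)):
    """Flag source IPs that probe many distinct paths shortly after their
    first request -- the 'response, then dozens of discovery requests' pattern."""
    hits = defaultdict(list)  # ip -> [(timestamp, path), ...]
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, ts, _method, path = m.groups()
        when = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
        hits[ip].append((when, path))

    scanners = {}
    for ip, events in hits.items():
        events.sort()
        first_seen = events[0][0]
        distinct = {p for t, p in events if t - first_seen <= window}
        if len(distinct) >= max_paths:
            scanners[ip] = len(distinct)  # ip -> number of distinct paths probed
    return scanners
```

The real signal is the burst of distinct paths, not the volume of requests; a legitimate client hits few endpoints many times, a fingerprinting script hits many endpoints once each.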

Alexey (11:38.081)
So I would say that, you know,

If we are talking about RBVM and, you know, all those approaches of basically addressing your vulnerabilities based on risk, and this is a neighboring area to what we are doing with our cyber product, I would say it doesn't necessarily need to be changed. Okay? It doesn't necessarily need to get an additional priority, an additional increased risk score or something like that.

However, what is really important is to understand...

which parts of your exposure became really risky for reverse engineering and for creating more sophisticated attacks. Because you said that if you are putting something in the cloud with just this IP address... we saw it many times in my previous company, when we actually had a huge production environment, paying Amazon colossal sums. So we had, like, I don't know, maybe 400,000...

Jeremy at FireTail (12:22.862)
Yeah.

Jeremy at FireTail (12:33.422)
Yeah. Yeah.

Jeremy at FireTail (12:43.31)
Yeah, yeah.

Alexey (12:47.571)
machines in the good times. So, you know, I saw many times that once you put up a machine, you immediately get traffic, because there are bad actors that know the IP ranges of Amazon, and they start brute-forcing, all this stuff, and you should never expose anything, you know, before you have some protection. But the issue is, let's take this example with the brute force, or just, you know, some attempts to

hack you: it's not just that they are analyzing the responses. They are enhancing the responses in real time with some external knowledge, with some external things, and they can compare, create more sophisticated probes, enriched by external information, which was very, very problematic to do before. Now, another issue is that with the pieces of information that are available out-

Jeremy at FireTail (13:40.622)
Yeah.

Alexey (13:47.283)
side, using those AI techniques, the bad actors can actually create more sophisticated ways to get inside and to, you know, get past your defenses, which

Jeremy at FireTail (14:00.334)
Yeah.

Alexey (14:01.057)
wasn't so simple to do before. So, for example, they can say, okay, based on this history, this is a former employee of this company, and we see that he is active on GitHub or something like that. Let's quickly run a script that will look for credentials that he may have left on GitHub, something like that. And then it's probably

Jeremy at FireTail (14:03.438)
Yep. Yeah.

Alexey (14:30.963)
going to create multiple new attack vectors which, before, took the bad actors much more effort to create. And we are still talking about the current state of AI. We are not talking about some new quality level that AI might jump to, and we are definitely still not talking about AGI.
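The defensive flip side of the attack Alexey describes is to scan your own repositories for leaked credentials before an attacker's script does. A minimal sketch, assuming a locally cloned repository; the three regexes cover only a few common token shapes, and purpose-built scanners like gitleaks or trufflehog ship far larger rule sets.

```python
import re
from pathlib import Path

# A few common credential shapes; real scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token":   re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(repo_dir):
    """Walk a cloned repository and report (file, line, rule) for each hit."""
    findings = []
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    # Placeholder path; point this at a repo you control.
    for path, lineno, rule in scan_repo("./my-cloned-repo"):
        print(f"{path}:{lineno}  {rule}")
```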

Jeremy at FireTail (14:57.966)
Yeah, yeah. I mean, this is just with the stuff that we have today, right? And to your point, AI right now really lowers the cost and it lowers the time to make those, let's say, more informed attacks, or more informed targeted attacks, etc. So I'm curious about something else. You know, that's from an attacker perspective. So if we think about it from a defense perspective, or

Alexey (15:01.376)
Yeah, yeah.

Jeremy at FireTail (15:24.334)
You know, and I know security and compliance are not the same thing, and that's a longer discussion that I don't want to get into today. But how can we think about using AI from an organizational perspective to either check our security state or, you know, improve our security posture?

Alexey (15:44.001)
So that is a super point. That's why we created our, you know, I don't want any promotion here, but that's why we created our security product, which is doing exactly that. But you know, what I think we probably need to understand as security

officers is that AI is not a magic pill for defending, either. So you can create something that will do one task very well out of multiple sources.

You can create something that will do some limited number of tasks out of some specific data, where you probably know the structure of the data, or you can hint at the structure of the data. What you cannot do is build something which will do everything for everything. Now I see a lot of, you know, a lot

of attempts to come to security officers and promise this, which is, like, to discover all unknown threats and all this story, which I personally don't believe. But what can be done is to take one concrete, hard problem, with some huge volume of information coming out of your SOC, and try to

Jeremy at FireTail (17:00.654)
Yeah, yeah, yeah.

Alexey (17:20.115)
solve this, okay? What you can do is actually automate the mitigation, and you can do it in a much more powerful way. You can predict things in a much more powerful way. But the essence in all this is that you do something that you know you need to do, and

Jeremy at FireTail (17:20.334)
Yeah. Yeah.

Alexey (17:44.641)
you understand from which sources you are getting the information. Now, several of my friends have PhD degrees in data science and are working for big security vendors. It is actually their, you know, day-to-day job to explain to security officers, to security researchers, that AI is not a magic pill. It's not a magic pill. It's something where you need to define what you want to find, what

Jeremy at FireTail (17:54.414)
Yeah, yeah.

Alexey (18:14.547)
you want to automate, what you want to enhance, and then you can do it. But you cannot overpromise and come with some utopian vision of something that will do everything for everything.
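As a concrete reading of "define what you want to find, then automate it": a deliberately narrow triage model for SOC alerts. This is a minimal sketch; the alerts, labels, and model choice are all placeholders, and the point is the narrow scope, one task over one known data source, rather than "everything for everything."

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: alert text -> analyst disposition.
# In practice this would come from your SOC's historical, labeled alerts.
alerts = [
    "multiple failed ssh logins from single external ip",
    "outbound dns queries to newly registered domain",
    "scheduled backup job completed successfully",
    "antivirus signature update finished",
]
labels = ["escalate", "escalate", "benign", "benign"]

# One narrow task (triage), one known data source (alert text).
triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
triage.fit(alerts, labels)

new_alert = "failed login burst followed by successful login from same ip"
print(triage.predict([new_alert])[0])           # predicted disposition
print(triage.predict_proba([new_alert]).max())  # confidence, for routing to a human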

Jeremy at FireTail (18:29.486)
Yeah, I think that's such a great point, right? Like you said, you kind of have to know what you're targeting, right? What is the use case or the analysis that you're trying to do? And then, you know, use the AI for what it's good at, which is taking huge volumes of data, turning that into something meaningful, and then probably correlating it to something else, right?

Alexey (18:48.833)
Yeah, classify, yes, classification, yes. Yeah, so that is a good point. You know, for example, let's take this challenge of actually finding the attacks out of

Jeremy at FireTail (18:50.638)
Yeah, exactly. Exactly.

Alexey (19:08.705)
log data, out of some, you know, metrics data, telemetry and all of this kind. Now, I can tell you that the most cutting-edge solutions that are available today, what they are able to do is to create, you know, some kind of model understanding, a knowledge-transfer understanding of which kinds of attacks they should find, and then try to, you know, find those patterns in

Jeremy at FireTail (19:10.382)
Yeah, yeah.

Alexey (19:38.659)
the data itself, classifying the data by those patterns. So this is the real

power of today's level of AI models, of transformer models and all this stuff. And definitely, every classification like that has its own level of false positives, its own level of what percentage of the data can be explained and actually solved correctly by the model, its own level of whether the model is good or bad, suitable to the task or not suitable

to the task. And so that's the point. And we still didn't speak about the very interesting point of AI compliance, which is coming these days, you know, to Europe, where we have the European AI Act.

Europe is just starting this journey, and we will have it in the States soon as well. And that actually requires you to understand, as deeply as you can, what's going on inside your models.
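Both points, that every classifier carries its own false-positive level and that regulation pushes you to understand your models, start with measurement on a held-out, analyst-labeled set. A minimal sketch with illustrative numbers, continuing the hypothetical triage example above:

```python
from sklearn.metrics import classification_report, confusion_matrix

# y_true: analyst ground truth on a held-out set; y_pred: model output.
y_true = ["escalate", "benign", "benign", "escalate", "benign", "benign"]
y_pred = ["escalate", "benign", "escalate", "escalate", "benign", "benign"]

# Per-class precision/recall: precision on "escalate" tells you what share
# of the model's escalations were actually worth an analyst's time.
print(classification_report(y_true, y_pred))

# The confusion matrix makes the false-positive count explicit
# (rows = truth, columns = prediction; "benign" is the negative class).
tn, fp, fn, tp = confusion_matrix(
    y_true, y_pred, labels=["benign", "escalate"]
).ravel()
print(f"false positives: {fp}, false-positive rate: {fp / (fp + tn):.2f}")
```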

Jeremy at FireTail (20:52.27)
Yeah. Yeah. But I'm curious, and I mean, this topic has come up before in some conversations, and actually two things around it have come up. I'd be curious to get your take on the first one: there seems to be a really fast time between, let's say, mainstream adoption of AI and creating this set of guidelines from the European perspective. Much, much faster, for instance, than, let's say, GDPR.

Right, GDPR took probably 15, 20 years before they realized, actually all these companies are collecting so much data, we should think about regulating it. But then the second part of it, and this is the part that I'm actually most interested to kind of watch is, okay, from the EU perspective, you have GDPR, it covers the entire European Union, all of its citizens worldwide. Great. On the US side, and I say this as somebody who is both an American and an EU citizen,

we have 50 states, we have no federal standard, and out of the 50 states, maybe five or six have any level of kind of privacy or data, let's say disclosure requirements for what data is collected, et cetera. So how long will this be the case with AI as well that we have a European standard and then like 15 years later, we get a US standard and also will it be the case that we have a European standard but then we get 50 US standards for each state? I have...

I don't really know, but I've been thinking about this. I'm curious, like, I'm curious with the smile on your face, you must have thought about this some as well. So what's your, what's your thinking?

Alexey (22:31.393)
So what I'm thinking about this: this is exactly a great comparison between GDPR, or CCPA, and the time that it took. Here, again, we are talking about regulation for AI, not for AGI.

Jeremy at FireTail (22:42.222)
Yeah.

Alexey (22:53.025)
And I think that Europe is starting from a good point. And I think that having these controls installed in place will also remove many security risks and many application risks and many application misalignments, or something like that. However, unlike GDPR, where the price of implementing GDPR

was relatively not so dramatic, here the implementation of such a thing can require from the company not a simple cost injection and not a simple effort injection. Because it's not just documentation, it's not just thinking about what I can do to actually have some explainability about my models, it's also

It will, I think...

the part where, today, many companies are using knowledge transfer. It means they are taking the source, they are taking a trained model, and they are fine-tuning the model. So they will be required to open the model, which did not start from zero, which was taken from some stage, to open this, let's say, transformer model, and to explain everything that's going on inside of the model, to explain why.

Jeremy at FireTail (24:03.438)
Yeah.

Alexey (24:27.267)
Why is the model making this decision? Is this model an ethical AI?

Or is it just a model that was trained with some non-ethical data? All those questions: it's not just getting access to the needed data of a person in your database. It's much more. So I believe, and maybe this is coming also because of some fear of what AI can actually do

Jeremy at FireTail (24:51.662)
Yep.

Alexey (25:05.091)
with your services. So that's probably a part that will be much more costly, you know, than the GDPR.
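One way to picture the documentation burden Alexey describes for a model that did not start from zero: keep a provenance record alongside every fine-tuned model. This sketch loosely follows the model-card idea; every field and value is a placeholder, not a statement of what the AI Act formally requires.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelProvenance:
    """What you want on file when the model was taken from some stage,
    not trained from zero."""
    base_model: str                 # the pre-trained stage you started from
    base_model_license: str
    fine_tune_dataset: str          # description/URI of your tuning data
    training_data_caveats: list = field(default_factory=list)
    intended_use: str = ""
    known_limitations: list = field(default_factory=list)
    eval_results: dict = field(default_factory=dict)

# Every value below is a placeholder for illustration only.
record = ModelProvenance(
    base_model="some-open-transformer-7b",
    base_model_license="apache-2.0",
    fine_tune_dataset="internal ticket corpus, 2022-2024, PII scrubbed",
    training_data_caveats=["English-only", "skewed toward infra tickets"],
    intended_use="triage of internal security alerts",
    known_limitations=["untested on OT/ICS alerts"],
    eval_results={"escalate_precision": 0.91, "escalate_recall": 0.84},
)
print(json.dumps(asdict(record), indent=2))
```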

Jeremy at FireTail (25:10.126)
Yeah.

Jeremy at FireTail (25:16.526)
So the other side of this, besides the timing and the fractured nature of the US, the other thing that I worry about is...

Alexey (25:18.913)
Yeah. Yeah. Yeah. Yeah.

Jeremy at FireTail (25:28.302)
Okay, we as legitimate companies, we will have to comply with all of these regulations. The thing is that our adversaries don't.

And, you know, they're not signing up for this. They don't care. Right. And in fact, if you look at most of the LLMs that exist today, they have ethical guidelines. If you go tell ChatGPT, write me a phishing email, it's going to refuse. Right. And if you say, write me malware, it's going to refuse. But it's not the only model. There are going to be many, many more models. And in fact, there already are, I don't know, 20, 50, a hundred, a thousand. I don't know how many LLM models are out there. And, you know, most of them...

Yeah, many more. Yeah, fine. Tens of thousands, millions, who knows, right? But how should we, in the cybersecurity community, how should we think about that? Like, at what point do we say we're actually fighting with one arm tied behind our back, because we're under all this regulation? Or is it the case that, you know, we just have to accept that that's the status quo?

Alexey (26:31.585)
You know, so first of all, regarding your note

that ChatGPT has strong ethical guardrails, that's true. However, if you go to today's accessible LLMs, I don't want to say any names, but if you go to very well-known marketplaces of LLMs and you try to run some unethical prompt, trust me, you will succeed. So that's also...

We checked it, and many other researchers checked it. So that exists. I would say that, you know, I think that behaving

according to this compliance will help us over some period. Because one of the risks that we probably should take into account is: okay, I know what I'm doing with my AI models and I'm protected, I'm good, okay. But what about the vendor that I'm working with? What about his internal AI? He's telling me the story that he's not using some external pieces in his AI, that he's in full control. But how can I, how

Jeremy at FireTail (27:37.102)
Yeah. Yeah.

Alexey (27:50.387)
can I know about it? Where is the certification? I'm asking multiple vendors to bring me SOC 2 certifications or something like that, so here, how can I be sure that my data that I'm giving to him, because he's not in control of his LLM,

will not be, you know, misused? Maybe his model was, you know, trained to send some data outside, and he doesn't even know about it, because he actually took this model from some fourth-party provider. Well, that's a thought. I think that's the kind of risk this compliance will help with. But in general, definitely, Jeremy, you're right, because, you know, the large groups, you know, the government hackers and

those bad actors, they will use AI, they will use the ability to create a powerful transformer model, something like that, and, as you know, the architecture of models without any ethical training, and they definitely will use it for attacks. But the standard tooling, the standard

tool sets that we have as security practitioners should be ready for it. And the biggest question, again, is what kind of data, or what kind of endpoints, that were acceptable to expose in the past now cannot be exposed. This is the number one question that is really important in this circumstance.

Jeremy at FireTail (29:25.422)
Hmm

Jeremy at FireTail (29:31.31)
So that's really interesting. So in a way, what that says is it's almost like you need to rethink your architecture: what is exposed at the outer edge? What does a perimeter even mean at this point, right? From the standpoint of, you know, do you just put an API out there, and then you have a ton of controls behind this API, and you decouple the API from the app, and

you have additional access controls, whether they're network or logical or identity access controls, layered further behind? That's really interesting. I think that's a great thought exercise. I'm curious about a couple of other things. You know, you've mentioned AGI a couple of times in the conversation, and I've always heard the thing that, you know, I think there's this running thing of, like, the singularity and a prediction. Like, if it doesn't happen by such and such date, then it's never going to happen.
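A minimal sketch of the decoupling Jeremy outlines: the only thing at the edge is an API that enforces its own identity check before anything reaches the application behind it. The framework choice (FastAPI), the endpoint, and the token check are all illustrative assumptions.

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

def require_token(authorization: str = Header(default="")):
    """Identity check at the boundary, before any business logic runs."""
    # Placeholder check; in practice validate a signed token (JWT, mTLS, etc.).
    if authorization != "Bearer expected-token":
        raise HTTPException(status_code=401, detail="unauthorized")

@app.get("/reports/{report_id}", dependencies=[Depends(require_token)])
def get_report(report_id: int):
    # Object-level authorization would go here before proxying the request
    # to the internal application, which is never exposed at the edge itself.
    return {"report_id": report_id, "status": "ok"}
```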

Alexey (30:03.041)
Yeah.

Alexey (30:11.569)
Yeah. Yeah.

Jeremy at FireTail (30:28.238)
I don't know, like, I've heard of it. I've heard that there is a date that a few people have pointed to. How do you think about it, as somebody who's been in this space way longer than I have?

Alexey (30:38.817)
So, you know, I will answer this super important question immediately, just to make one remark first about the previous story. I think one of the biggest real advantages that we as security practitioners should take from the current state of AI is to reduce the amount of work that the SOC people are doing, that we as CISOs are doing, so that, you know,

Jeremy at FireTail (30:47.374)
Yeah, please.

Jeremy at FireTail (31:05.518)
Yeah.

Alexey (31:08.771)
like we are producing at Andeavour, and additional companies that are doing the same, that is a big advantage that we as practitioners should take out of this new step of things. Now, regarding AGI: first of all, I myself, and the data scientists that I

have big respect for, who are deeply in the space, can tell you this will be achieved. I don't know the timeline and I don't know the exact

period of time when it will appear. We can believe Mr. Musk, who says, I don't know, next year or something like that; or it may happen in 10 years or something like that. But the moment this happens, it will really change everything. You know, the job market, the security market, everything. Because at that moment, you actually have something that can learn how to connect

Jeremy at FireTail (31:54.446)
Yeah, yeah.

Alexey (32:17.251)
to everything. Connect to it. Do it fast. Do it with the speed of light,

or of sound, and even create its own clones that will, you know, do some additional jobs. That definitely changes everything, because the major limitation of today's AI is actually that AI can give you some answers, it can do very limited things, very limited actions that you pre-trained this AI to do. But it cannot, you know,

it cannot retrain itself. That cannot be done today. You can give it context, you can do the fine-tuning, but it's you, the owner of the AI, who can do that. You cannot create an AI that retrains itself and, you know, learns in real time how to take new actions from

any source of information. So that will definitely create a completely different reality. And this will be a real jump from the moment we are in now to the next level, that's for sure.

Jeremy at FireTail (33:38.83)
Interesting. And so like you said at the beginning, this will really be the iPhone moment that we're all kind of waiting for. You know, the current GPT is just an evolution and one step in the evolutionary process, but this could really be fundamentally revolutionary.

Alexey (33:44.033)
Yes. Yes.

Yeah.

Alexey (33:55.361)
You know, I can tell you that the biggest, you know, wow

with ChatGPT was that you are giving some context to the model and the model is answering the question based on this context. But that was known to people dealing with AI for a long, long time before ChatGPT was actually opened to the public. However, regarding AGI, the model can train itself to do everything. And you can imagine the situation when you are saying to the model,

Jeremy at FireTail (34:18.062)
Mmm.

Alexey (34:28.963)
you know what, here is access to my platform, do something very, you know, very advanced with this platform, bring me the result, and with those results go to the internet and do something else. Okay? So today you cannot do it with ChatGPT, but think about this scenario. So that's...

Jeremy at FireTail (34:47.694)
Mmm.

Jeremy at FireTail (34:54.286)
Yeah. Yeah.

Jeremy at FireTail (34:58.766)
And do you think that AGI also fundamentally changes the game in security?

Alexey (35:05.569)
Fundamentally? I don't know if it fundamentally changes it, because, you know, the standard use cases will still be there. But this, I think, yes, will require

Jeremy at FireTail (35:09.966)
or let... Okay.

Alexey (35:20.001)
a new class of protection solutions that will concentrate solely on AGI use cases and will probably create some protection specifically for this kind of attack, which today still doesn't exist, doesn't go in this direction. But yes, this will probably

Jeremy at FireTail (35:30.702)
Yeah.

Alexey (35:49.955)
require a much bigger quality jump, you know, than what we've had until now.

Jeremy at FireTail (35:57.838)
Awesome, awesome. Well, Alexey, we're coming close to the end of today's episode. We're getting up against time. Tell us briefly about Andeavour and share with the audience a little bit about what you guys do and where they can find you online.

Alexey (36:09.729)
Okay, so thank you very much for this opportunity. Andeavour.io.

Now, we are dealing with organizational intelligence. We actually come to various business units in the organization, and we figure out what kind of information, what critical information, is actually critical for this particular business unit but, because of multiple reasons, they cannot discover it, they cannot secure it, they cannot regulate it. And then we provide an end-to-end solution to it based on the same

AI patent-pending technologies that we developed. Now, we have a product for cyber, you know, where we have multiple paying customers across the States, Europe, and so on, which significantly reduces the amount of work for both vulnerability management and alert management. We have a great product for

sanctions, sanctions compliance, which is actually the first product in the world that goes after derivative sanctions, you know, not just finding whether you are a sanctioned organization, but also finding all your derivative affiliated entities and, you know, covering that as well. And we have a great product for HR people that analyzes the reasons for attrition in the company, why people are actually leaving the company, and whether this is a

Jeremy at FireTail (37:15.278)
Mmm.

Jeremy at FireTail (37:28.878)
Yep. Yep.

Jeremy at FireTail (37:37.966)
Mmm.

Alexey (37:42.467)
pattern of something bad, like quiet quitting, you know, and things like that. So that's what we are doing under this umbrella of organizational intelligence.

Jeremy at FireTail (37:52.814)
Awesome, awesome. And people can just find you online at Andeavour.io. That's like Endeavour, but spelled with an A at the beginning, correct?

Alexey (37:59.521)
Yeah, an A at the beginning, you know, Andeavour.io. We also have blogs there that we are running on a few interesting topics, about AI, about cyber compliance, all this stuff. And people that are interested can, you know, just ask for a demo directly from the website, you know, very simple. We'll be very glad to talk to anyone.

Jeremy at FireTail (38:23.31)
Awesome, awesome, awesome. Well, thank you to everybody who's joined us on today's episode. If you've enjoyed this episode, please do us a favor, hit that like, subscribe, follow whatever the button is on the platform that you're using. You can find us on Apple, Spotify, YouTube, really anywhere you get podcasts. We really do appreciate you taking the time to join us, listen to the content that we're putting out here.

All the great speakers, if you know somebody that you'd like to recommend to come on the show, please just have them reach out. They can find the Modern Cyber podcast on firetail.io, and ratings, reviews, and all of that are also greatly appreciated. Alexey Sapozhnikov, thank you so much for taking the time to join us today, sharing your thoughts on the current state of AI and some of the future state of AI as well.

Alexey (39:07.137)
Thank you, Jeremy. Thank you for this amazing opportunity and thank you to your audience.

Jeremy at FireTail (39:12.974)
Awesome, awesome. We'll talk to you next time on the next episode of Modern Cyber. Bye bye.

Discover all of your APIs today

If you can't see it, you can't secure it. Let FireTail find and inventory all of the APIs across your organization. Start a free trial now.