In this episode of Modern Cyber, Jeremy welcomes Simo Kohonen, founder and CEO of Defused, to delve into the fascinating world of cyber deception technology. Simo shares insights into how deception techniques have evolved beyond honeypots to encompass innovative methods for misleading attackers, including emulated decoys and synthetic data. They discuss AI's potential to accelerate both defensive strategies and attacker methodologies, emphasizing early detection and the creative use of deception to neutralize threats. Tune in to explore the past, present, and future of deception in cybersecurity.
Simo Kohonen is the founder and CEO of Defused, a cutting-edge cyber deception company that empowers organizations to outsmart attackers. With a background in computer science, Simo has transitioned from warehouse work to becoming a leader in deception technology. He is a guest lecturer at Cranfield University's Ministry of Defence Cyber Program and has been featured in media outlets such as The Wall Street Journal and Yahoo. Simo's expertise lies in deploying innovative deception strategies to enhance situational awareness and protect digital infrastructures.
Alright. Welcome back to another episode of Modern Cyber. I know we've been coming to you pretty regularly, but it's actually been a little while since we've recorded, and I'm delighted to be coming back at you with a new guest, some new topics, some new focus as we've seen 2024 play out and seen some of the developments in the cyber space. Today, I am delighted to be joined by Simo Kohonen.
Always a pleasure for me to welcome a fellow compatriot from my home country. Love to hear more about Simo, his career, what he's doing in deception technology. Simo, thank you so much for joining us. And for those who don't know you, just a few words of background on your bio. As far as I know, you're the founder and CEO of a cyber deception company called Defused, a guest lecturer on deception for Cranfield University, part of the UK's Ministry of Defence Cyber Program.
And Simo has done multiple media appearances in The Wall Street Journal, Yahoo, New York Post, and other outlets. Simo, thank you for taking the time. Oh, absolutely. My pleasure, Jeremy. Awesome.
Awesome. I'm really curious about that UK Ministry of Defence Cyber Program. Just before we kind of dive in, how did you get involved with an initiative like that? Oh, I guess it's like a lot of things in my life. It was pure luck by meeting the right people and being in the right place at the right time.
But it sounds a bit fancier than it is, right? So I only do, like, one yearly guest lecture on more technical matters of cyber deception. They basically run a program for their, like, kinetic military people who want to make a career transition to the cybersecurity space. And then they have a very wide spectrum of different things that they look at in the cybersecurity space as a whole, and then cyber deception is one of these. And then I have a single lecture within the subcomponent of the subcomponent.
But it's very good. You know, you get to be very deliberate, very thorough about cyber deception, something which maybe is not possible to do when you're doing sales calls with enterprise prospects. They expect more brevity and quicker answers. Yeah. Totally.
There's a saying that I heard years ago, which is that if you ever want to really learn something, learn a topic really thoroughly, you should start teaching that topic. Because as you prepare to start teaching it, you learn all the things that you didn't know before, and you actually increase your own expertise. And I know I've actually found that to be true in my own life. So I used to teach English to migrant farm workers as one of my jobs years ago. And to be honest, I learned a lot more about the English language than I ever thought I would during that experience.
So that's a great experience. I love hearing about that background. I guess that leads me into my next question, which is tell us a little bit about what your own career path has been and how you got involved in deception technology in the first place. Yeah. I mean, it's not terribly exciting.
So I basically jumped into the cybersecurity space straight out of university. My previous job before I was in the cybersecurity space was being a warehouse worker at a food warehouse. So the transition gap, if you wanna call it that, was fairly large. There was a lot of pain and learning. But I did study computer science at university.
So I did have a semi-decent background in computing and understanding the systems and things like that, which was a good way to move into the cybersecurity space. And I guess I'm a bit of a restless personality, so I figured, like, cybersecurity sounds very dynamic and cool, and you probably can rattle sabers with some evil people, which maybe didn't turn out to be entirely as true as I thought it was. But I'm still glad. It's a good discipline. Yeah.
And what made you pick... you know, cybersecurity is a very broad field as well. Yes. It's very dynamic. It's always changing year to year, and there's new technologies coming all the time. But within the cybersecurity domain, there's a gazillion things.
I used to work in Cloud Security. Now I work in API Security. I've also spent time looking at SIEMs and CNAPP platforms and all these different things. What specifically drew you towards this field of deception technology? And I guess was there, like, an original problem that you saw where you thought, oh, yeah, deception technology would be really useful for this kind of problem?
Yeah. I think it was much, much more anarchistic than that. I thought, well... so as a background, like, a lot of people who get into more operational, more technical cybersecurity matters, they tend to play around with things like honeypots, which are very good, like, entry-level tools. You set one up and, you know, you direct some traffic into one. And then you sort of get, like, an intuitive idea of, okay, this is what network traffic looks like. This is what logging looks like.
This is what to expect when someone tampers with your systems. And it was a bit of the same thing. I was very naturally interested in this area, which is a horrible way to start a company. You always wanna go for customer demand first instead of your own interests, right? But I disregarded that and decided to do what I perceived to be something that's very cool to work on.
But as it turns out, like, there are a lot of benefits in the cyber deception space specifically, which tend to be, like, also big enterprise problems for which it is a very good solution. So very early in the trajectory of the company, we went to a couple of these different cyber accelerators, things like that. Got some early exposure into the customer space, started better understanding, you know, how it is that they actually operate within these kinds of organizational cybersecurity units. It's very different, you know, when you read the theory from a textbook, you have all of these concepts like, you know, confidentiality, integrity, availability, all of that. And then you jump into the operational side, the enterprise security teams or the lone systems administrator in the small and midsize companies.
And they deal with these things, but it's a very different reality. I think, like, cybersecurity in theory and operational cybersecurity in organizations are almost two completely different disciplines. But what was your original point? I digressed there a bit. It turns out that there are some good answers cyber deception can provide to things a lot of these organizations struggle with, situational awareness being a huge one.
A lot of people who walk into security teams inherit large amounts of infrastructure where they have absolutely no clue what's there. There's no logging. They don't know what's happening. And, of course, in today's ever-evolving and dynamic world of cyber threats, you know, you have to be quite quick to spot things. Actors tend to execute their attacks fairly quickly end to end these days. It's not a matter of, you know, them taking three months anymore.
It's down to a few days or even a few hours in some cases. So there was a good match. And then from there, you know, it was a fairly standard startup story, you know, introduce some concepts and get battered by the market, and then, you know, you iterate on that and do a bit of different things. So even though we've been working in the cyber deception space for quite a long time, the product has gone through a number of very thorough iterations to get where it is. So yeah.
Well, let's talk about some of the details there, and then I wanna come back to that point that you raised around situational awareness, because I think I have an idea of what you might mean, but I wanna get into that in a little bit more detail. But before we do that, let's walk through this kind of step by step to make sure that I understand this. Because honestly, neither honeypots nor deception technology are something that I know a lot about. I've heard the names for both. But, you know, just as a starter, how are they different from each other?
I guess honeypots is something that I heard about earlier, so it's probably been around a lot longer. But, like, how are honeypots and modern deception technology different from each other? So the simple answer is that deception technology is... or we can start from deception. Like, what is deception? It is essentially deliberately inducing erroneous sensemaking in a target. On a top level, like, deception is a methodology of doing things.
It's not strictly a thing like a honeypot. I think a good analogy here is like a honeypot is a hammer and deception is the toolbox you keep it in. So a honeypot is one way to do deception, but then there's a ton of different things you can do in the deception domain so that you can fool the adversary and gain some advantage. And most of the time in the cybersecurity space, it tends to flow down into the detection domain. So you wanna place different types of deception elements in your infrastructure so that you can, with very high fidelity, pick up bad activity as early as possible, in the kind of broader attack chain.
Okay. And so then aside from honeypots, which I usually understand to be kind of, like, deliberately vulnerable targets that an attacker can kinda get to, but they're, like, separate from the actual, let's say, valuable data or the intellectual property that an organization might have. So the attacker somehow compromises your network, gets access to the network. They go towards this honeypot instead of going where you don't want them to go. Is that kind of a correct baseline understanding of what a honeypot is, or of the purpose of a honeypot?
I think that's a very good description. I mean, it is, like I mentioned in the description of deception, a lot about fooling and misdirecting the adversary, drawing them away from legitimate assets. These types of things. So definitely it's a good description.
And I've heard historically, and correct me if I'm wrong, but I've heard historically that this usually means some kind of dummy server or virtual machine that's running with some vulnerability or something like that that is easily discoverable on a network? Yeah. That is one kind of thing you can do in the deception domain. And that's kind of what the classical honeypot is and was, in that, you know, they tended to be network-level things, virtual servers or maybe an open SSH port or an FTP server, things like that. But, of course, if we go back to the top-level description of deception being that it's a methodology where you fool the adversary, then, of course, you can apply that to any part of your infrastructure.
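The classical network-level decoy described above can be sketched in a few lines of Python. This is a hedged illustration, not Defused's implementation: the banner string, port number, and alert fields are all invented for the example. The key property is that any connection to the decoy is suspicious by definition, so the alert needs no baseline tuning.

```python
import datetime
import socket

# Illustrative banner only; any string resembling a real service works.
SSH_BANNER = b"SSH-2.0-OpenSSH_8.9\r\n"

def log_touch(peer, service="ssh-decoy"):
    # Contact with a decoy is suspicious by definition, so the alert
    # record carries high severity and ready-made context up front.
    return {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,
        "peer": peer,
        "severity": "high",
    }

def serve_decoy(host="0.0.0.0", port=2222):
    # Listen on a fake SSH port, present the banner, and log every peer.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(SSH_BANNER)
                print(log_touch(f"{addr[0]}:{addr[1]}"))
```

In practice a real deployment would forward these records to a SIEM rather than print them, but the shape of the signal is the same.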
You can create fake users on Active Directory. You can create fake files, or honey files as some might call them, on Windows endpoints or Linux endpoints, for example. So if somebody tampers with those, you have a good indicator that, you know, somebody's touching things they shouldn't be touching. You can definitely also do deception in the API domain. Like, as you well know, a lot of companies have trillions and trillions of API endpoints, and it's sometimes tough to keep track of what type of activity is flowing into them.
Maybe not if they're using your tool, but as a general principle. And, of course, you could expose, like, fake API endpoints to pinpoint if any, like, scanning activity or anything similar might be going on. It gives you a very good signal because it comes sort of overlaid with this ready-made context, this ready-made proof that there is a fake thing that somebody's touching that they shouldn't be touching in any case. And that sort of wraps it into a highly trustworthy context if you look at it from another point of view. So you can already... Yep.
You already have this proof that, you know, this activity has this context of, you know, somebody tampering around with assets that aren't even real, which, from a situational awareness point of view, gives you a very elevated position when you start to triage activity on the network or on an endpoint or, you know, wherever you're applying the deception. Got it. And so I guess the point is that for any of these things, whether it's an API endpoint or a file or a user or whatever, you know, an open SSH port, the point is that you then set that up to alert the security team that there has been access or there's been usage of this identity or this file has been opened or whatever. And under normal operating conditions, that would not be the case, but these things, if they get opened, that's indicative that you've got some kind of intruder within your environment. That is so, at least in 99% of the cases.
Of course, there are sort of corner cases, like, you know, you might have a sysadmin who's scanning a network with a scanning tool and then hits everything. I think these are well-known cases, and especially with the customers we work with, we can weed them out upfront by filtering and sort of reduce the likelihood of that happening to close to zero. Gotcha. Gotcha.
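The upfront filtering mentioned here, weeding out a known internal scanner before it triggers decoy alerts, can be sketched as a simple allowlist check. The subnet below is a hypothetical example of where an organization's vulnerability scanner might live:

```python
import ipaddress

# Hypothetical allowlist: the internal vulnerability scanner's subnet
# is known in advance, so its decoy touches are filtered out upfront.
ALLOWLIST = [ipaddress.ip_network("10.20.0.0/24")]

def should_alert(src_ip):
    # Alert on every decoy touch except those from allowlisted ranges.
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in ALLOWLIST)
```

Everything outside the allowlist keeps the high-fidelity property: touching a decoy remains a strong signal on its own.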
And how has deception technology kind of changed with the advent of the cloud, for instance, and with the advent of, like, very dynamic environments that are changing all the time? Yeah. So I think anytime there's, say, a large, what do you wanna call it, paradigm change or, you know, pick your business jargon to describe it. But Yep. Yep.
What it usually does is it introduces new attack surface for the people operating it. And, of course, that opens up new questions about situational awareness. So if you have a large cloud infrastructure and, you know, it's fresh. You don't really have a whole lot of logging there. You don't necessarily even know how the cloud works.
You know, in many places, like, it's a bit difficult to figure out how that stuff works internally, how the networks are mapped, all that. So it introduces all of these problems that any other type of on-premise network might have, which is, you know, do you know what's happening in your assets or not? And, of course, as it is for on-premise environments, also for cloud environments, you know, deception again can provide the same answers there, the same sort of high-fidelity, as they call it, detection of bad activity. So, again, it's a methodology. You know, if you use the right tools, you can apply it anywhere.
The deception elements themselves will look different. You know, for something on-premise, you might stand up, like, fake deception elements for AD. And then for the cloud, they might be things like S3 buckets or Yep. Or any other cloud computing element. So that's kind of the difference. The technical details move, but the same deception principles can absolutely be applied on the cloud side.
Okay. Okay. I'm kinda curious. I know you've been working on this for a couple of years. I'm curious if you have any fun stories of weird or curious things that you've seen in customer environments that you can talk about.
I always find this to be one of the most fascinating elements. For instance, we talked about this a little bit in our State of API Security report that we released a few months back. But one of the craziest things that I've seen around APIs is that we're starting to see API payloads that include bash scripts, API payloads coming from nation-state-aligned actors. So we see traffic out of North Korea. We see traffic out of, you know... well, traffic that we have a high level of confidence that we believe comes out of North Korea or Iran and Russia, for instance.
We see traffic including malware, you know, server-side request forgery attempts to download malware to a local host and things like that within API payloads, which was super surprising to me. I always think of APIs as being these things that kind of transact data or maybe invoke a business function. You know, it's like creating this user profile, updating this user profile, triggering a payment transaction, and things like that. Not, like, actions: change directory, change permissions, go to this website, download this piece of malware. So that was something really surprising that really caught our eye as we were analyzing some of our customer data earlier this year.
But again, I'm curious what you've seen on the deception side. Well, that's a very good example. We can flow on from the same train of thought here, which is that, for instance, network-level deception things, honeypots, or, as we call them, decoys, to distinguish ourselves from the open source variants. But, of course, this same type of thing applies. You know, people tend to shoot all kinds of weird bash scripts into them that try to establish reverse shells and things like that.
Yep. And, of course, because a honeypot can essentially be, like, a fake computing asset, for instance, a printer or a router or anything like that. But then it also can be a web server, something more classical. Something that we do with the platform is we also, in some cases, attach, like, a sandbox to it. So if somebody shoots a script or tries to abuse a vulnerability, which embeds an attempt to execute code remotely, we can actually do that in a very safe way inside the sandbox and watch what the attackers do in real time.
And this is interesting because this jumps a bit into the threat intelligence domain, but there is this concept of, like, a pyramid in the threat intel domain, where at the bottom, you have these very basic indicators that, in themselves, are interesting, but they don't really tell you a story. And then it builds up more abstraction as it goes upwards. And at the top level, you have, like, the coolest thing you can derive from a threat intel perspective. And I may be butchering this because I'm not a threat intel expert, so take that with a grain of salt.
But at the very top, you have, like, the attacker's motivations and tools and techniques and these kinds of, like, higher-level concepts you want to understand about the attacker. And if you have a suitable deception environment with a good honeypot, you attach a sandbox to it, and you see an attacker coming in, and you replay their attack in the sandbox, you let them in, and you observe them. This is the type of stuff you can see. One thing that we talked about publicly and we saw was that an attacker tried to exploit, I forget which vulnerability, but we had a simulator for that.
So we were able to simulate the vulnerability exploit working. We let them into the sandbox. They had, like, a multistage attack where they downloaded first a kit of some kind, and then it executed a number of other things. It ended up being a crypto loader, which is not super exciting. But the really cool thing was that you could see all of these steps coming into the sort of, like, larger alerting panel, or at that stage, you know, we call multiple alerts a larger incident.
So you have this pretty cool story built up almost automatically, which established the full context of, okay, you have this adversary. They tried to exploit this specific vulnerability. When they got in, they did, you know, steps X, Y, Z. And it turns out their motivation was to exploit this server because they wanted to install a crypto miner and, you know, start abusing your computing resources for that. So that's one really cool thing you can do.
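The "multiple alerts become one incident" idea can be sketched as a small correlation step: group decoy alerts by source and order them in time, so a multistage attack reads as one story rather than isolated events. The alert field names here are illustrative assumptions, not any platform's actual schema.

```python
from collections import defaultdict

def correlate(alerts):
    # Group decoy alerts by source address and sort them by time, so a
    # multistage attack reads as one incident timeline per attacker.
    incidents = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        incidents[a["src"]].append((a["time"], a["action"]))
    return dict(incidents)
```

Fed a scan, an exploit attempt, and a payload download from the same source, this produces one ordered timeline, which is the "story built up almost automatically" described above.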
I think especially with the more advanced deception solutions, and it goes also back to this, you know, what distinguishes some of the open source honeypots from, like, larger deception platforms, is that you can build these larger capabilities into them for educating the defenders better about what's happening. And there's a lot of valuable lessons to learn there. If you stop them at the first alert, you know, you might never figure out, like, what the full motivation was. But if you have these deeper sandbox environments, you can build a lot of contextual knowledge about what they're trying to do and, you know, that will posture you better for future defenses. Interesting.
And so as you were watching this play out, you're literally seeing each command-line command being attempted and, I guess, successfully executed? Do you let the first stages of the attack kind of play out successfully? Yes. That's right. Because it's a fake environment.
You know, we've... Okay. We've sandboxed it very well. It terminates itself. So you have this period of time where you can sort of let them freely play around and see what they try to do, and then you cut them off. And depending on the defending organization, you know, they might have next steps from there.
They might quarantine systems or things like that. But there is this period where you sort of, and it's fully optional, let them execute the bits so you can actually learn the attacker's motivations. And do you think organizations worry about things like initial access brokers, and, you know, maybe they go through an exercise where they think they've compromised an organization based on the feedback of some decoy, and then they go on to a dark web forum and they say, hey, we've compromised company XYZ, who wants to buy access? You know, is that a risk or a threat to that organization? Or is it more like, you know, kind of laughing at the adversary, in a sense that, you know, you think you got something, and criminals buy from criminals, you get what you pay for?
Well, you know, from our perspective, it's one of the coolest things you can do. Like, you've preemptively, completely prevented any kind of attack happening. You know, the adversary exploited some edge devices, and they stole fake credentials. And, you know, what's better, they tried to sell them on the market. I think that's not the standard use case.
I think it's reserved for more, how do you say, more sophisticated security teams who have the time and resources to do these types of things. But I think it's an increasing trend. I think a lot more security teams are looking at this kind of thing almost like, I don't know what you wanna call it, like preemptive detection or preemptive intelligence, things like that. Edge devices are a hugely popular exploit vector right now. I think alongside phishing, they are the most popular initial access vector right now.
So it makes a lot of sense for security teams to also look at the perimeter and, you know, what they can do there from a deception standpoint. And it's actually very easy to set up. Yeah. Yeah. And how do you see AI today?
Because I can imagine, you know, in the API security space, we talk about how actually there are two sides to it. There are some positives that are coming for sure, but there's also a lot of negatives. You know, there's the fact that it's pretty easy to provide an AI with a piece of documentation, something like an API spec file, and ask it to craft queries for that. And with a little bit of prompting and a little bit of, let's say, prompt engineering, or providing a piece of malware to the prompt, you can get it to attempt a bunch of attacks pretty quickly against any API. And I can see that there would probably be both positives and negatives on this deception technology side as well.
But what are you seeing, and how do you think about this problem? I think from a deception standpoint, it's still kind of a nascent area, but I do have some interesting ideas, I think, at least. And one of the sort of, I think, guaranteed things is, you know, the attackers will get the same benefits from AI as the defenders, which probably means that they will be able to execute, like, rote attacks much faster than they did previously if they have the right tools, if they develop the right tools for that. And I think it's kind of an accepted truth that, you know, if you push aside the nation-state actors, the zero-days and, like, the really, really far-out stuff, which most organizations don't really see. But it's more like a lot of ransomware actors, for instance, they have fairly standard tools and methodologies that they apply in their attacks, and then they succeed because, you know, people don't pick up this stuff quickly enough.
And so if you apply the AI there, which for most, like, rote computing work has the capability of speeding things up quite a bit. Right. So I think that's sort of the big risk here. Like, I don't know, as of today, what the average ransomware execution time is. I know it's floated between, like, five days, three days, end to end.
So from initial access, you know, how long do they take to actually ransom everything? And I think if you think about scenarios like that, which are a big problem, you might see something like the execution time dropping from three days to, like, 12 hours or even, like, four hours as the adversary tools get better. So I think the big danger is there. And, of course, again, I'm super biased, I will admit it instantly, but, of course, I think deception has a very good answer here for two reasons.
One of the big perks of deception is early detection, because you can get these trustworthy signals of, you know, hey, there's some weird activity that's hitting all of our deception elements on the infrastructure. Like, we should immediately either investigate or maybe trigger a response from a SOAR platform or, you know, whatever is in the defender's tool stack. There's a definite benefit here if attackers become massively faster, which is kind of a logical conclusion of the AIs. And I think the second interesting thing, which we didn't talk too much about, is that if you have deception in place, it's effectively a bit like you get the ability to lie to the adversary.
And so if they are using an artificial brain like an LLM and you have the chance to feed them misinformation, you can use that to your advantage to actually point the very quick computing actions, for instance, in a completely wrong place. Like, you can tell them, you know... let's say, take a hypothetical. You have an LLM which is attached to Nmap, and it's trying to figure out, like, what kind of assets you have showing on the network. And then, without deception, you present it, you know, the lay of the land, and it's 100% correct. Then you apply deception, and you can put some, like, really super vulnerable-looking things in there to try to entice it to initially go in this direction.
And once it goes there, then you can quickly, again, depending a bit on what kind of posture you have, like, very quickly eliminate access to all of the other parts. So I think there's also that if artificial brains, like LLMs, become more prevalent in attacker toolsets, then you can very definitely get a lot of benefit from, you know, providing this misinformation to them in a neatly packaged way like deception can do. So I think those are the sort of two very interesting future things we are looking at from a deception standpoint. On that second point, I'm really curious. I mean, do I understand you right if I'm thinking about this?
Like, okay. So let's say we're a 100% Linux shop in terms of our server infrastructure. We're Linux and primarily open source technologies for, I don't know, our databases and things like that. But maybe from a deception perspective, we'd actually want to present decoys that indicate that we're a Microsoft shop, and that we use SQL Server and we use Windows Server and so on, in order to attract those types of attacks instead of attacks on our real assets? Or is that not the right way to think about it?
I don't think there's a boilerplate template to go towards. But I think, at least if you... again, I like to dip a lot into the ransomware attacks because they're kind of topical and highly disruptive. And if you look at what they do, they go a lot for the same type of popular targets, like very well-known vulnerabilities, RDP, things like that. So if you put out the juicy targets there which are popular, that's usually a very good strategy. But then you can derive different types of data also if you put out things that resemble your infrastructure.
So you can also, like, cover for visibility gaps if you have a lot of, like, networking gear, things like that, where typically defenders don't have any kind of, like, monitoring capabilities installed. It's usually also a good idea to replicate that a bit so you can pick up activity if there's an attacker who's, for instance, trying to avoid, like, endpoints and just trying to move laterally across networking devices, which is also, I think, one very prevalent trend, avoiding the highly secured areas which tend to be running EDR solutions. So it's a bit of a mixed bag. You wanna put the apparent low-hanging fruit there, but you also wanna do a bit of replication of what you have so you can cover visibility as much as possible. Got it.
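The mixed-bag strategy just described, popular juicy targets plus replicas of your own infrastructure, can be pictured as seeding the network's apparent service inventory. The addresses and service names below are purely illustrative; "vsftpd 2.3.4" is the banner of a famously backdoored FTP build, which is the kind of lure automated recon tends to chase first.

```python
# Real assets plus deliberately juicy decoys (all values are illustrative).
REAL_SERVICES = [("10.0.0.10", 443, "nginx")]
DECOY_SERVICES = [
    ("10.0.0.66", 3389, "ms-wbt-server"),  # fake RDP, a popular ransomware target
    ("10.0.0.67", 21, "vsftpd 2.3.4"),     # banner of a famously backdoored build
]

def apparent_inventory():
    # What a scanner sees: the real lay of the land, seeded with decoys
    # that both lure attackers and cover monitoring blind spots.
    return sorted(REAL_SERVICES + DECOY_SERVICES)
```

An LLM-driven scanner consuming this inventory has no way to tell which entries are real, which is exactly the misinformation advantage discussed earlier.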
Got it. And do you think coding copilots help to create new decoys faster and give, you know, defenders the ability to kind of very quickly spin up new decoys of different types based on, let's say, like, newly discovered exploits or newly discovered vulnerabilities? Oh, very much so. At least we use it a lot, and this depends a bit on how you do deception. Like, it's a whole rabbit hole we probably shouldn't go into, but you can, like, run full-blown real systems and try to record everything that happens in them, and that's sort of what you call a high-interaction decoy.
And then you can do emulations, which are much more lightweight. They have a slightly more narrow capability of picking up activity because, you know, they're emulations; they have predefined actions built into them that the attacker can go towards. But if it's not one of those, then, you know, they usually log some initial indicator and that's it. So especially if you go the emulation route, which is what we do because it reduces the risk of things going wrong dramatically, you can definitely use the copilots and ChatGPTs to help sort of speed up that process.
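The emulation idea, predefined actions with canned replies, plus a fallback indicator for everything else, can be sketched like this. The commands and responses are invented for illustration; a real emulated decoy would model a specific service far more thoroughly.

```python
# Canned replies for a few commonly probed commands; everything else is
# logged as an unknown indicator. Responses are illustrative only.
RESPONSES = {
    "uname -a": "Linux web01 5.15.0-generic x86_64 GNU/Linux",
    "whoami": "www-data",
    "id": "uid=33(www-data) gid=33(www-data) groups=33(www-data)",
}

def emulate(command, log):
    # Reply if the command is one of the predefined actions; otherwise
    # record the raw input and stay silent, as low-interaction decoys do.
    if command in RESPONSES:
        log.append(("emulated", command))
        return RESPONSES[command]
    log.append(("unknown-indicator", command))
    return ""
```

Extending the `RESPONSES` table for a newly disclosed vulnerability is exactly the kind of rote work a coding copilot speeds up, which is the point made above.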
Absolutely. Got it. And how do you see AI shaping this space over the next one, two, three years? Well, there are some really cool ideas to explore that I think AI makes very, very viable in the deception space. I think one of them... you know, it's very, very prevalent for breaches to involve some kind of data theft component.
And, of course, AIs are very good at creating synthetic data. And so with suitable modeling of the infrastructure and appropriate data to place as deception things, we'd like to imagine a scenario where an attacker exfiltrates a number of gigabytes of completely synthetic data. Even though they managed to reach into the organization and they managed to steal something, it ends up being a fully deception-based loss for them, so they don't win anything. So I think there's a lot of these areas where deception can contribute very, very meaningfully. You know, if there's any developers listening to your podcast, everyone knows that previously, this was done with the Python Faker library, but now you have a much more versatile set of options to generate synthetic data.
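A stdlib-only sketch of the synthetic-data idea follows; the field names and name lists are invented, and in practice the Python Faker library (or an LLM) produces far more convincing records than this.

```python
import random
import string

# Illustrative name pools; a real decoy dataset would be much richer.
FIRST = ["Anna", "Mikko", "Laura", "James", "Sofia", "Omar"]
LAST = ["Virtanen", "Smith", "Korhonen", "Garcia", "Lehto", "Khan"]

def fake_customer(rng):
    first, last = rng.choice(FIRST), rng.choice(LAST)
    account = "".join(rng.choices(string.digits, k=10))
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.com",
        "account": account,
    }

def synthetic_dump(n, seed=0):
    # A seeded RNG makes the fake dataset reproducible, which helps when
    # matching leaked records back to the decoy that served them.
    rng = random.Random(seed)
    return [fake_customer(rng) for _ in range(n)]
```

Gigabytes of records like these cost an attacker bandwidth and time while containing nothing real, which is the "deception-based loss" scenario described above.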
I think that's at least something that we're looking very, very closely towards right now. Okay. Very cool. Well, Simo Kohonen, it's been a pleasure having you on Modern Cyber today. Thank you so much for taking me through that topic and educating me on deception technology, kind of past, present, and future.
If people wanna learn more about you, about the work that you and your team are doing, what's the best place for them to visit? I think they can start at our website, defusedcyber.com. Okay. Feel free to connect with me on LinkedIn. I'm also on Twitter quite a bit, the kind of typical social media.
Very, very happy to always have conversations. And to your earlier point, an absolute pleasure to be a part of the podcast. Very much appreciated. Awesome. Awesome.
Awesome. Well, thanks again for taking the time to join us here on Modern Cyber. To our audience, stay tuned for the next episode, out in a week or so. There might be a gap of a week or two coming up here as we get into the holiday season. But for myself and for everybody here at the FireTail team that helps put the podcast together, thanks again for taking the time.
Please do us a favor. Rate, review, share, like, subscribe, all that good stuff. You know what to do on your podcast platform of choice, and we will talk to you next time. Bye bye.