In this episode of Modern Cyber, Jeremy meets with Noah McDonald from Google Cloud to talk about the intricacies and best practices of incident response in cloud environments. Noah shares valuable insights into identifying and mitigating cyber threats, the importance of understanding your environment's architecture, and the critical role of logging and threat modeling. The discussion covers the challenges of responding to breaches, the process of forensic analysis, and the importance of timely and transparent communication with clients. Filmed live at fwd:cloudsec 2024 in Arlington, Virginia, this is an episode you don't want to miss.
About Noah McDonald
Noah is an experienced security engineer at Google Cloud where he helps clients optimize and secure their cloud environments. He is also an Advisory Board Member at Fulton-Montgomery Community College, contributing to cybersecurity education. Previously, he held key roles at Palo Alto Networks Unit 42 and EY, where he provided advanced digital forensics and cybersecurity consulting services. With a strong background in both technical and advisory capacities, Noah is a respected professional in the cybersecurity industry.
Hello, welcome to another episode of the Modern Cyber Podcast. We are coming to you live from the sidelines of the fwd:cloudsec 2024 conference in Arlington, Virginia. And I've got a special treat. I've got somebody who is actually live at the event with us and going to talk to us today about a really interesting topic that I don't think we've covered before on Modern Cyber. Noah McDonald, thank you so much for taking the time to join us. Noah McDonald, security engineer over at Google Cloud. Been doing incident response for about three, four years now. Awesome. Prior to Google, I was over at Unit 42 by Palo Alto Networks. Yeah, it was a great time over there. OK, and just for clarity, for today's purposes, you're speaking as an individual, not on behalf of Google, correct? That's correct, yep. OK, awesome. Well, regardless of whether you're speaking on your own behalf or on behalf of Google, you've got a wealth of experience in this domain space, having gone through those companies that you've been with. So I want to start with just a general, kind of high-level question. So incident response. It can mean a lot of things, I think. Generally speaking, in a cyber scenario, it means something bad has happened, right? That's correct, yes. And so what are the first couple of things that happen when an incident response process kicks off? Yeah, so when a process actually gets kicked off, it's more about understanding from the client perspective what actually happened in their environment. It's understanding: is their service down? Is the company unable to serve its users anymore? Or are they running up a bill that they've noticed over time? There's kind of a question like, what happened here? You know, we have a small environment, and our bill is 20, 30 grand more than it should be, right? Right. Okay. So with that said, initial assessment: what's the current state? Exactly. Right. So are we down? Are we racking up a huge bill? Yep. But I noticed you didn't say in there, have we been breached? Yeah, yeah. So where does that come into question? When that comes into question, I mean, I think companies always have that underlying understanding like... we've probably been breached at this point. They get the bill, their service is down. Usually if their service is down, they go through and speak with their engineers, like, what have we pushed into production that might have brought our application or our services down? Change analysis. From a cost perspective, it's something similar. Have we deployed a database or something that is costing us a ton? And then after going through that thinking, they pivot over to a third party or the CSP themselves to get a better understanding of what is actually happening in my environment, and how can we understand, first, what's happening, and second, how can we remediate it and get back to operating as normal again. But there's an interesting question that pops up in my mind related to that, which is, going back to the CSP perspective, you've got the shared responsibility model. Sure. Right. And the cloud service provider is always going to say to you, well, we're responsible for the security of the cloud. You're responsible for your security in the cloud. Absolutely. And so to what extent can the CSP actually be helpful, versus they're kind of by design not able to see everything going on in your account? So what kind of questions can you ask them?
Or what kind of assistance can you get from them? Yeah, so there's a lot of security foundations that organizations can put into place so that Google specifically, right, can then look into what they're actually doing in the organization. And so in that instance, we have to actually ask for permission to either gain access into the environment so that we can get an understanding of what's going on, or just give a general baseline of questions to get an understanding of: when did this start happening, where did this start happening, how did you recognize it? What are the actual things being impacted? And then build the understanding from there, and then dive further into the analysis itself by actually doing the hands-on work, the log analysis, the image analysis, et cetera. But then how many customers? I mean, just kind of broadly speaking, not specific numbers, I just wonder about a couple of things in that regard. I mean, one, you're asking the customer organization to say, when did this start happening? And I can imagine that a lot of the time, it's like, I don't know. You know, where did this start happening? Again, I don't know. How did this start happening? I don't know. And then, by the way, do you have all the logs that you need to analyze this? There's a lot of things that in my mind could be uncertain and unclear. I don't know, am I thinking about this wrong? No, no, it's absolutely correct. A lot of times when we start doing these scoping calls, right, what we get back from the client in the questions and answers is a lot of them saying, we don't know, right? There's a lot of unknown when a company actually gets attacked. And what comes into place is, what do we know? Any nugget of information lets us use that when we want to actually pivot into their environment and start taking a look around. They know what is down. And most of the time, it's not the entire company that's down. There are use cases, very unfortunate ones, where the entire company does go down. Usually it's some process, or they have some indicator from the actual threat actor that's doing the ransom, or the threat actor has notified the organization that got attacked that, you know, they have something, right? Okay, okay. But then along the lines of logging, is there a set of best practices or recommendations around that? Because I know, for instance, from my own years working in cloud security, we provided software, not services, and in our software we had a set of checks that was like, hey, you know, for instance, I'm much more familiar with the Amazon side of things, it would be like, hey, your CloudTrail is not turned on in these environments. You better do that. It's either indicative that somebody has gotten in there and turned it off, or it means that you just never turned it on in the first place. And so when there is a breach, it's going to be really hard for you to figure that out. So what would you tell people about logging that they need to understand for, you know, if and when this day comes? Yeah, I mean, even now, you know, working with the clients I do, there's always that trade-off of cost versus security, right? Right. And realistically, we have to break it down to where the crown jewels are sitting, where the most valuable assets are sitting, and then focus the logging, you know, around those assets first. Okay. Because those are what's most important for your company, right?
They're the most important not just for the application running, but also for the clients that, you know, the company is dealing with. Yeah. And so when we take a look at what's most important, we try to fixate on what logs would help us in a given scenario. So for a GCP instance, if we're looking at a GCS bucket, we're talking about data access logs. We're talking about GCS usage logs. And we're talking about enabling those and retaining them for X period of time. A lot of organizations that I currently work with are regulated, so they have to retain these logs for a year. And that's a general practice that I give a lot of the clients, regulated or not. A year is a lot, but generally that's what I would say is the best practice for retaining them. Now, we talk about logs and enabling logs that are outside of the crown jewels and stuff like that. It's like VPC flow logs, right? There's a lot of them that get generated. It's very costly to store them. And they're kind of useless. They're kind of useless, right? Yeah. But there are ways that you can still log those things but filter down to the ones that you actually want, right? So a great example of this is load balancer logs, right? You don't have to record all of these 400s, 404s, 403s, right? All of those logs are going to be useless, because even if an attacker is spraying and praying, right, they're not actually hitting anything. Well, they might get some reconnaissance, right, but it's not actually hitting your environment where it's going to impact it enough. So what we often recommend is, although enabling the logs themselves can be costly, there are ways to reduce cost and still gain visibility while enabling these certain logs in the environment. So 401s and 404s, you say, throw them away? It depends on the use case that we're dealing with. But yeah, I would say most of the time we can filter out some of these logs, especially if cost is a big factor for the company in actually enabling these logs in the first place. It's interesting, because one of the areas where this comes up, and I'm a little bit single-mindedly focused on APIs, because API security is kind of what we do here at Firetail. But one of the things that we see is every API that we put online gets probe traffic within minutes. And I'm saying less than three to five minutes, typically. And some of it is very stupid, like drive-by bots, whatever, who cares? But then some of it is a little bit more interesting in the sense that it's like, OK, got a response. Maybe it was a 404. But at least that says that there is some service of some type running at this IP address. And then you see the smarter series of 404 requests, where it's not just kind of dictionary-style attacks against whatever, but it's really trying to enumerate the tech stack there and discover what's running, and then possibly understand a set of known vulnerabilities associated with that tech stack. So you're saying those, from an incident response perspective, are not hugely valuable? I mean, they are valuable, right? They are valuable, but when you're looking at the cost perspective from the client side of things, it's a trade-off that they're willing to make. Now, from a response perspective, of course, if it was our dream, if we wanted to completely do it, then there would be every log for every service that they have, right? Even if we're talking about something like Log4j, right?
A lot of times those came in as 400s, and then they ended up being successful, right? So in a use case like that, yes, those logs are definitely valuable. But in a use case where we're talking about probing or even just web crawlers in general, you're going to get so many of them that it's really just going to be more of a, hey, they're adding to the haystack for a threat actor to hide their needle in. Gotcha, gotcha. You kind of answered a question that I was going to ask relative to logs as well, which is, when you think about those crown jewels, let's say I've got a workload of some type. And so I know it's in a GCP project that I've designated as my production project, for lack of a better term. And in there, I've got, I don't know, a load balancer, some compute instances, and then maybe I've got a database as a service, and then I've got some object storage behind that. So you're saying if I say to you that the crown jewels are actually what's in the database and what's in the object store, then those logs I need to turn on, but it's not necessarily the case that turning on every log for every service that I use within that GCP project is, you know, maybe the best practice. I would say it's more so that when we talk about something like that, we need to understand the architecture that's in place. And there are also use cases like, let's say the compute instance is the one that's accessing the resources, the crown jewels. In that case we need to understand, is it actual people, right? Is it personas or a service account, right? There are individual ways that we can lock down each entity and each principal so that we can secure those routes themselves, rather than enabling logs and running up a billing cost. Okay, so we could put in place things like, you know, network ACLs. We could put in place things like, you know, deny policies that only explicitly allow either the compute instance itself or an IAM role associated with the compute instance that issues the calls to the backend. Exactly. Yeah, it's really just evaluating, well, I think one of the biggest things is evaluating the IAM and the policies that wrap these things, yeah. In the cloud, you can run fast. You can run super fast, right? And that's why a lot of people move to the cloud, because it scales so easily. And then security kind of just lags behind. And one of the things it lags behind on is auditing the actual IAM around service accounts, users, groups, et cetera. But yeah, there's a lot of things that you can put in place, like you mentioned before: ACLs, firewalls, VPC Service Controls. There's a whole bunch of different things that you can put in place to audit how users are interacting with your current tools. All right, so that's on the logging side. I guess maybe one last logging question, because then I want to get back to, not the day-to-day, but let's say the incident response process. You mentioned something there that I think is really an important takeaway that shouldn't just be left alone as kind of a side comment, which is that, OK, so we've set in place this architecture. We've analyzed what the access route is from the compute service to the crown-jewel data, in whatever store format it's in. But then actually also logging the IAM credentials around who slash what has access to modify that is actually really critical.
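As an aside for readers who want to try the kind of IAM review described here, below is a minimal sketch in Python. It assumes you have exported a project's IAM policy to JSON (the bindings/role/members shape that gcloud produces) and simply flags members holding broad or admin-style roles so a human can decide whether those principals really need a path to the crown jewels. The file name and the list of "sensitive" roles are illustrative assumptions, not an authoritative set.

```python
import json
import sys

# Illustrative set of roles worth a second look; adjust to your own
# crown-jewel services. This is an assumption, not an exhaustive list.
SENSITIVE_ROLES = {
    "roles/owner",
    "roles/editor",
    "roles/iam.serviceAccountTokenCreator",
    "roles/storage.admin",
    "roles/storage.objectAdmin",
    "roles/cloudsql.admin",
}

def flag_broad_bindings(policy_path: str) -> None:
    """Print members holding sensitive or admin-style roles from an exported
    IAM policy JSON of the form {"bindings": [{"role": ..., "members": [...]}]}."""
    with open(policy_path) as f:
        policy = json.load(f)

    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        if role in SENSITIVE_ROLES or role.endswith(".admin"):
            for member in binding.get("members", []):
                print(f"REVIEW: {member} holds {role}")

if __name__ == "__main__":
    # e.g. python audit_iam.py prod-policy.json (hypothetical file name)
    flag_broad_bindings(sys.argv[1])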
So that if we do find later on that it was access that came in through our own compute instances to these services, but it was unauthorized access because the IAM role was compromised, well, we need to understand where that role was compromised. Yeah, there are safety protocols that you can put around auditing the access around the IAM, the service account, the user, right. Auditing those allows you, as an analyst, to pivot off of it and kind of track down the initial attack vector. Yeah, the initial, like, how did they get access? You know, where did they get access, et cetera? Yeah, and oftentimes it's like... Sorry, can you ask that again? Yeah, so if we find that the breach vector actually came in through the compute service, through an IAM role that got compromised or got utilized for that purpose, then we need to have audit around the IAM to understand how that was compromised in the first place. Yeah, audit around the IAM, and then also just looking at the infrastructure around the compute instance. What is specifically there? How was the user able to access the service in the first place? Is it through an application? Is it through an exposed compute instance, right? Or was it, let's say, through a developer's IAM credentials off of their endpoint or something? It can most certainly be that. Yeah, I've dealt with that before, right? Yeah. Yeah, and it's also not always external users. It's also sometimes internal users, right? And that's something that we have to be aware of as well. Yeah, interesting. So let's get back to the process, right? So we've talked about logs. We've talked about the importance of having logs, because when that day comes... hopefully for your organization, all of you out there listening, hopefully for you it doesn't come. But if it does, you need them. And then what actually kicks off? So we've talked about how we're going to check the availability of the services. We're going to assess our current state. We might reach out to the CSPs for more information, try to understand what they're seeing from their side, as far as, I don't know, what new services we've deployed or what bills we're racking up at the same time. And then what, from an incident responder's perspective, happens next? Yeah, what happens next is, and it's weird that you're like, let's pivot off of logs, but I think incident response is really a lot about log analysis. Okay. And sure, there's also traditional forensics when it comes to looking at compute instances and databases, et cetera. But in the cloud, one of the biggest things about the cloud is you have so many logs you're able to gain visibility into, through compute instances, through the different services in the cloud itself. Yeah. The analysis itself in the cloud is log analysis. And so what really kicks off is: okay, we already asked our client all the questions that we need to know, we got the initial scoping done, we have a general understanding of what the issue is and why the client originally contacted us. From there, what we actually do is we grab all the logs in their cloud environment, if they have them, right? And we start performing frequency analysis against these logs. We start understanding what is unique in these logs, and that's looking at user agents, IPs, the method names themselves, right? And then kind of just building a timeline out from there. So most of the time, we don't know when the initial access was, right?
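For readers who want a concrete picture of the frequency analysis described here, below is a minimal sketch in Python. It assumes audit logs exported as one JSON object per line with field paths in the usual Cloud Audit Logs shape (protoPayload.authenticationInfo.principalEmail, protoPayload.requestMetadata.callerIp, and so on); the file name, field paths, and the one-week window either side of the suspected incident are assumptions you would adapt to your own export.

```python
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

# Field paths assumed to follow the Cloud Audit Logs export shape.
FIELDS = (
    "protoPayload.authenticationInfo.principalEmail",
    "protoPayload.requestMetadata.callerIp",
    "protoPayload.requestMetadata.callerSuppliedUserAgent",
    "protoPayload.methodName",
)

def dig(entry, dotted_path, default="unknown"):
    """Walk a dotted path through nested dicts, returning default if absent."""
    for key in dotted_path.split("."):
        if not isinstance(entry, dict) or key not in entry:
            return default
        entry = entry[key]
    return entry

def frequency_analysis(log_path, incident_time, window_days=7):
    """Count principals, caller IPs, user agents, and method names seen
    within +/- window_days of the suspected incident time."""
    start = incident_time - timedelta(days=window_days)
    end = incident_time + timedelta(days=window_days)
    counters = {field: Counter() for field in FIELDS}

    with open(log_path) as f:
        for line in f:
            try:
                entry = json.loads(line)
                # Keep only "YYYY-MM-DDTHH:MM:SS" to sidestep fractional-second quirks.
                ts = datetime.strptime(entry["timestamp"][:19],
                                       "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
            except (KeyError, ValueError):
                continue  # skip malformed lines
            if not (start <= ts <= end):
                continue
            for field, counter in counters.items():
                counter[dig(entry, field)] += 1

    for field, counter in counters.items():
        print(f"\n== {field} (rarest first) ==")
        for value, count in sorted(counter.items(), key=lambda kv: kv[1])[:20]:
            print(f"{count:6d}  {value}")

if __name__ == "__main__":
    # Hypothetical export file and incident date; adjust to your own case.
    frequency_analysis("audit_logs.jsonl", datetime(2024, 6, 1, tzinfo=timezone.utc))
```

The rarest-first ordering is the point: one-off user agents, principals, or API methods often stand out immediately and give you the pivot values to search for across the rest of the environment.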
We always work our way backwards. We always need to understand, OK, we have this user account or service account that we see doing malicious activity. Right. So at the point that you start to respond, you look at what's going on right now and then work back from there? Exactly. OK, OK. Sorry, I didn't mean to interrupt you. But we'll continue. Yeah, absolutely. I mean, that's the most obvious thing. You know what has been impacted. Let's say your server goes down. When you start looking at those logs, you start seeing who or what has been interacting with that server itself. You start looking at the actual API calls that it's been making. And you're able to be like, OK, is this anomalous? You do a back and forth with the client. You get an understanding, because this is, as an incident responder, your first time in the environment. You're not sure how things work, right? So there's a lot of back and forth and getting an understanding of, what should this service account be doing? Who is this person and what is their activity in the organization? And once you start identifying anomalous activity, you'll be able to be like, okay, we know it's from this service account, this user, this IP address. What other anomalous activity have we been seeing from this user, from this IP address? Because oftentimes it's not just a single identity that the actors are using, or a single IP or user agent, right? And so that's where we start to build our profile, like, okay, they've probably moved from user A to user B, or they've taken a break between performing reconnaissance and actually deploying the attack itself. Yep. And once we grab all the analysis from there, we start looking at the actual assets that they've touched. Okay. So we're like, OK, we have a default service account that is normally attached to this compute instance over here, performing anomalous activity against this database or GCS bucket, right? Yep. And from there, what we do is we'll be like, OK, we know that this service account or user is often interacting with this compute instance. Let's take a look at that compute instance. Let's take a look at that container, right? And let's image it. We'll do a light forensics pass first. What we'll do is grab the most important logs from that actual instance itself, as a triage of it. And if we identify that there's a malicious startup script or there's something that's actually embedded into the compute instances themselves, we'll take a full image of it. Basically, at that point, it's traditional forensic analysis against the image itself. And that allows us to start getting an understanding of, OK, this may be one of the initial attack vectors. This is probably how the threat actor got in. And then from there, we start getting a general understanding of, OK, did they come from another method inside? Did they come through this load balancer? Because the load balancer provides a lot of valuable information as well. It's usually sitting in front of applications or assets. And really, we just start to build out our timeline from that point on. And we're able to start communicating with the client: this is what we're seeing. These are the services. This is how they got in. This is the exploit that they used during the attack here. So running through with the client, there's two different pathways here.
It's like, if the threat actor is still live in the environment, we often try to isolate the threat actor, and we try to not tip them off that we know that they're in the environment. That's the lesser use case, but it's happened before. The other use case is the threat actor has gone in, done their thing, and they've already ransomed the client or their service is down. And from there, we start actually working on remediation with the client, like saying, all right, let's rotate these credentials, let's remove this user completely, let's try to stand the infrastructure back up if we can. We want to not rebuild if possible. I was going to ask that question, because I'm dating myself a little bit here. And I think it's fair to say I might have been in the tech space a little bit longer than you. But going back to the days when I was hands on keyboard, one of the things that we took for granted at that time was that it was far cheaper to re-image something than to actually try to, let's say, remove every last remnant of it. In those days, we were mostly dealing with viruses and kind of early malware, right? Nobody wanted to go back and try to figure out, well, what are the extraneous registry keys that have been planted on this system that nobody knows about and shouldn't be there? Nobody wanted to go through that process. So for us, it was far cheaper to just re-image the box. But it sounds like you're saying that's not the best practice in cloud environments. Well, I'm going to say it's not always best, because oftentimes rebuilding it takes a lot of time, and it's pretty costly. And from that point, we need to understand, OK, we need to weigh our pros and cons. And this also comes from the understanding of the analysis itself. If we know the true attack vector of how this threat actor initially got in, then great. We can remediate that actual artifact itself or remediate that compute instance itself. And we don't have to rebuild. But a lot of times, it's hard to find the initial attack vector of how they came in. So sometimes we have to do the rebuilding process from there. And like I said earlier, in the cloud, you can run fast and quick. You can put up things really fast. And so what we do from there is we actually advise the client to build almost a golden image. We evaluate that golden image, and then we have them redeploy it back into their infrastructure so that they can get back up and running. Yeah, I mean, it's funny, because again, my default inclination would have been, you know what? New project, new IAM roles, new... let's maybe even bake the image for the compute instance again. Let's rerun our IaC. Let's double-check our IaC before we run it again. That totally would have been my default set of assumptions. Yeah, it also depends on what asset you're looking at, right? Oftentimes, if you're looking at a server that's the main thing that's running your application, it's going to take a lot of time, because a lot of time has already gone into building this, right? Yeah. Sure, they might have backups of this, but then the threat actor will just get back in, right? Because the vulnerability is still present at that point. Yeah. So oftentimes we look at it and say, okay, this server is really important to this pipeline or this application, right? Let's assess the vulnerability that actually happened here, and we should try to remediate this instead of rebuilding the entire thing. Yeah, yeah.
And also, the other point is, if you're rebuilding, it also adds to the time that the company is down. Yeah. Right? And time is money at that point. Yeah, yeah. Well, let's talk about timelines, because there's something that's been kind of in the back of my head as we've been having this conversation. First of all, I think this has been super educational for me. I think there's a lot more that goes into incident response than people realize. I think a lot of people think it's like, oh, OK, I've got a runbook, execute, right? Or I'm just going to take an image of things, hand it off to somebody, and we try to continue life as normal, which may sort of be the case if you're working with an outside firm. I don't know. But what it brings to my mind is a couple of questions. Number one is, actually gathering the logs and then putting them into whatever system you're going to use for your analysis, that's going to take time. Yeah, from prior experience working at an incident response firm, there's a lot of automation and in-house tooling that we build to automatically sync those logs over to either our cloud environment or to our SIEM. That way we're able to start performing the analysis while we're in the client environment, working out what's going on in there. Okay, but these log files, A, they can be big. They can be big, yes. And B, then the analysis, let's say the queries that you're going to run across them. You mentioned this kind of frequency analysis, and I know there's any number of flavors of different anomaly detection algorithms you might apply to them. Getting them, loading them, then deploying these algorithms on that data, that takes time as well. It does take time. But from experience, a lot of companies don't have terabytes of data that's just sitting there as logs retained for forensic purposes. The most I've seen in an actual cloud incident response is maybe a terabyte of logs. Okay. And realistically we can get that over like an hour or two. Okay. So yeah, we are losing a little bit of time. Yeah. However, for the companies that do have a larger amount of forensic logs, or just logs in general, what we'll do is start looking at the logs as they're coming in and start, you know, seeing what we can find. Yeah. In addition, we also pick out the timeframe that we know the impact happened in, and we grab those logs first. And once we know the threat actor came in before that timeline, then we'll start to pull logs further back. We don't pull all the logs in the environment, because it would just waste time, right? We look at the timeline; we usually do a week before and a week after the actual incident that occurred. And then from there, we start building our timeline. Okay, if it's still before that week, let's pull back another week of logs, right? Okay. And then what about the cases where, for instance, data shows up on a darknet forum and you don't really know? Obviously, the customer has been breached, but they don't know the timeline of when they were breached. So how do you start that process? Yeah, for that process, when you work at an actual incident response firm like Unit 42 or Mandiant or likewise, right, oftentimes you'll have the threat intel team that is identifying the actual data that's on the darknet. And they're working with the incident responders to be like, hey, this is what we found from the client's perspective.
And often that's a sore subject, you know, bringing it up to the client, whether they know or not. Right. If they don't know, which oftentimes they don't, it's also really sitting down with them. We have to be transparent with the client. First, it's our job. And second, it's due diligence, right? Yeah. Because, not only for cyber insurance reasons, but also for their users themselves, we need to get ahead of these things so that the company is aware and can start building a process for how they're going to publicly talk about this. Yeah. And I mean, there may be things like identity protection, and there may be financial implications and so on. Exactly. Okay. And so part of the reason this whole question of the timeline comes to mind is these new guidelines from the SEC about reporting periods, and I think it's 96 hours, right? Yes. It's like four days. It seems to me that that's super high pressure; it is a relatively short timeframe. Yeah, it is. I mean, when you work with incident response, the incident responders, at least from my experience, they know that, right? And so they're working hours on end to uncover what is happening. That way we can get the client information rapidly. Yeah. Most times we'll have a live bridge between the client and the incident response firm themselves. Most of the time the analysts aren't a part of that. It's more of the manager level, right? Or the C-suite or executives at that point. And then the analysts will start feeding the data and information as we're finding it, right? But oftentimes we need to be sure, and we work with our threat intelligence community, we work with our malware analysts, to make sure that we have all the facts about what we're doing before we present it to the actual client themselves. Got it, got it. I mean, this is super fascinating. I mean, we could probably keep talking about it for hours. But within the couple of minutes that we've got left, I guess if I had to take away a few high-level themes that have come out of this, number one is, even before you just say, hey, turn on logs, you actually need to understand where the crown jewels are and what the key things are that you really need to be worried about logging around, right? And then number two, if I hear you right, is you need to think about how that crown-jewel data is accessed. Absolutely. And then really it's those critical paths around that. And then an abstract layer of audit on top of the IAM and everything that can access and make changes to those paths. Exactly, yeah. Oftentimes what I tell customers is, before we implement architecture, as we're creating these major changes to our infrastructure, what we need to do is actually loop back around our architecture and start running through the use cases, the personas, the requirements for the infrastructure, identifying what assets live where, how they're going to be accessed, and then doing a threat model against this, not only looking at external actors but also internal actors. I mean, the three biggest ways people get infiltrated in the cloud: one is misconfiguration. The other one is actually the OWASP Top 10, whether it's a zero-day or something wrong with the application. The last one is credential exposure, which I guess you can kind of bleed into misconfiguration. But you're right.
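Since credential exposure comes up here as one of the three big infiltration vectors, below is a minimal sketch of the kind of hygiene check that catches the committed key pairs discussed next. The regex patterns cover only a few well-known credential shapes and are illustrative assumptions, not a substitute for a real secret scanner.

```python
import os
import re
import sys

# A few well-known credential shapes; illustrative only, not exhaustive.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Google API key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> int:
    """Walk a checked-out repo and report lines matching credential patterns."""
    hits = 0
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
                                hits += 1
            except OSError:
                continue  # unreadable file; skip
    return hits

if __name__ == "__main__":
    # Exit nonzero if anything suspicious is found, e.g. for use in CI.
    sys.exit(1 if scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```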
People are doing GitHub repos, they're doing GitLab, and they're just committing things because they want to get the infrastructure up and running. Yeah, I mean, the number of times you see an access key and secret pair in a committed code repo is pretty staggering. Exactly. So really, it's just the whole threat model, and then building security recommendations or requirements around that to match what you've identified as an attack vector through external or internal actors. Got it, got it. OK. And then if and when that day comes, let's say you've gone through that and so you're relatively prepared. When you engage an incident response firm as a customer, as an organization that's going through an incident, what are best practices for that customer organization to know when they first reach out to a firm like a Mandiant or whoever? Things like: you've already been breached, right? I get that stuff's on fire right now. You're getting a lot of pressure from management. It's going to happen. I understand that. But you have to allow time for the analysts to go through and analyze what is happening. So I think time is one. It's OK. It's not OK, but it is OK. It's happened, right? Unless the threat actor is live in the environment, in which case we worry a little bit more. The other thing is we need as much information as possible. Whether it's relevant or not to what we're doing, the more information we understand about the environment, how it's accessed, and what has happened in your environment that you know of, the faster we can pivot on your logs, right? If we know that the reason you're contacting us is because you got a GuardDuty or an SCC alert and this is the first sign of an infiltration or an attack, let us know, because the fastest thing is for us to go into GuardDuty or SCC and start pulling out the alerts and related things that are associated with that. Yep. As we're grabbing the logs, as we're doing the entire process. Right, so you can correlate, you can look for timeframes. Exactly. And incident response, it really is just doing it as fast as possible so that we know what is going on and how we can remediate as fast as possible. So the more that we know, you know, the more that we can assist you, and the better fashion we can assist you in. Okay, so customers need to be open. Exactly. Basically. Yeah. Well, Noah McDonald, this has been really an eye-opening and very interesting episode. I learned a lot of things, and I had a lot of assumptions about incident response that turned out to be wrong, and I'm sure a lot of our listeners will have had the same experience, so thank you so much for taking the time to share your expertise with us today. Yeah, I appreciate it. Thank you for having me. Awesome. Well, we will talk to you next time on the next episode of Modern Cyber. In the meantime, sharing is caring, so please do, you know, share this with a coworker, rate, review, subscribe, all that stuff. You know what we're going to say, and we'll talk to you next time on Modern Cyber. Thank you for joining us on this episode of the Modern Cyber Podcast. We hope you've gained valuable insights into safeguarding your digital world. Stay tuned for future episodes where we'll continue to explore the ever-evolving world of cybersecurity. Don't forget to subscribe and share with your network, and please take a moment to write a review. And if you know someone who should come on the show, let us know by sending an email to podcast at firetail.io.