In this episode of Modern Cyber, Jeremy is 'down under' in sunny Australia for an in-person chat with Daniel Grzelak. Dan is the Chief Innovation Officer at Plerion, an agentless cloud platform that allows clients to identify, prioritize, and remediate the risks that matter most.
Jeremy and Dan discuss Dan's journey in cloud security, finding unintended and interesting uses for technology, modern attack paths, dealing with incidents when they happen and the importance of principles.
About Dan Grzelak:
Dan Grzelak is Chief Innovation Officer at Plerion, where he leads technical security research and evangelism for the cloud security platform. A seasoned CISO, Dan previously worked at Linktree and Atlassian.
Dan's Linkedin - https://www.linkedin.com/in/danielgrzelak/
Plerion Website - https://plerion.com/
Plerion Blog - https://blog.plerion.com/
About Jeremy Snyder:
Jeremy is the founder and CEO of FireTail.io, an end-to-end API security startup. Prior to FireTail, Jeremy worked in M&A at Rapid7, a global cyber leader, where he worked on the acquisitions of 3 companies during the pandemic. Jeremy previously led sales at DivvyCloud, one of the earliest cloud security posture management companies, and also led AWS sales in southeast Asia. Jeremy started his career with 13 years in cyber and IT operations.
0:00
[Music]
0:08
hello welcome to another episode of the modern cyber podcast for a change we're actually recording in person for once
0:13
which I am excited for and we're here with today's guest Daniel Grzelak Daniel thank you so much for making the time to
0:18
join us appreciate it excited to do it awesome I've got a few things I want to get through today but just before we
0:24
kind of kick things off why don't you start with a little background on yourself what your current role is with Plerion and then maybe some of the past
0:30
few positions you've been in and kind of how you got into cyber security okay um well I'm extremely weird okay my current
0:38
position is officially Chief Innovation Officer at Plerion okay uh which is a cloud security company and there I do mostly
0:46
uh research and like technical security research and evangelism based on that
0:52
research and also trying to get some innovative stuff differentiation into our product yeah before that I was CISO
0:58
at Linktree and before that head of security at Atlassian yeah okay I mean those are two organizations that are
1:04
kind of that I think of as being very Cloud forward and operating at scale so
1:09
it must have been a lot of learning along those journeys and those stops along the way yeah absolutely Atlassian
1:14
was the company that really introduced me to the cloud like it runs on AWS and that's where I got started with
1:21
learning everything I could about it gotcha gotcha it's really interesting you know some of the stuff that you guys
1:26
have been doing recently on I think AWS specifically the last couple of posts that I've seen from Plerion around um
1:34
discoveries and findings that you've had in security research I think they've been AWS specific right um for those who
1:40
haven't seen the articles there was one on kind of extracting account IDs and then another one on extracting metadata
1:47
of another type from instances or from tags or remind me from policies
1:54
from policies right okay and so like uh what first of all how did you find these things and second of all what led you to
2:00
even go look for them so I I'm I'm not the kind of person that will find
2:06
vulnerabilities in things like deep technical vulnerabilities I'm just not smart enough for that okay um but I love
2:12
figuring out how I can manipulate things that work their
2:18
intended way but then you can get them to do something that's unintended and interesting yeah um so for example with
2:26
the the policy one where you could basically read another company's tags
2:31
yeah so tags are a way you organize your stuff you put the department Etc on a
2:36
resource yeah um and the way that you can actually do that is by abusing an IAM
2:43
policy in AWS um because typically what you say is when you're accessing something um you can write a policy
2:50
that checks something about you like right your username or your department or something like that yeah but in this
2:58
case there's also a couple of ways that you can check something about the thing that you're accessing right and in this
3:04
case you could check does it have the appropriate tag um and there's a special
3:09
condition that right the IAM conditional statement yeah exactly there's a special condition where you
3:15
can start matching things character by character for instance does the tag value start with an S yeah and then does
3:22
it start with an s t and so on and so on and so you can enumerate the value of the tag and that's really interesting
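The character-by-character matching Dan describes can be sketched as a small loop. This is a simplified local simulation: the `tag_starts_with` oracle stands in for a real AWS request whose success is gated by a `StringLike` condition on `aws:ResourceTag`, and the tag value here is made up.

```python
import string

# Hypothetical secret tag value; in the real attack this lives in another
# account and is never read directly -- only probed via policy conditions.
SECRET_TAG = "staging-7"

def tag_starts_with(prefix: str) -> bool:
    """Stand-in for: does a request allowed only when the tag matches
    '<prefix>*' (a StringLike condition) succeed?"""
    return SECRET_TAG.startswith(prefix)

def enumerate_tag(alphabet=string.ascii_lowercase + string.digits + "-") -> str:
    """Recover the tag value one character at a time, exactly as in the
    's', then 'st', then 'sta' ... probing described above."""
    recovered = ""
    while True:
        for ch in alphabet:
            if tag_starts_with(recovered + ch):
                recovered += ch
                break
        else:
            return recovered  # no character extends the prefix: done

print(enumerate_tag())  # -> staging-7
```

The cost is linear in the alphabet size per character, which is why this kind of oracle is practical to exploit.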
3:29
because I know from my own experience and look I started at AWS in 2010 in those days there were no vpcs
3:37
most companies had one AWS account by the way like everybody using the root user um there weren't really you know
3:42
there was no IAM there wasn't a possibility for me to give a DG user versus a Jeremy user versus whatever
3:48
right um but tags were the main way that we let's say differentiated workloads from each other right everybody's living
3:54
in this one big control plane and network plane as well and all we really had to go on to differentiate my
4:01
instances in this mass fleet of EC2 from yours is tags and so the reason I bring
4:07
that up is that I think that that habit of using tags to Mark workloads and
4:13
various things about the workloads continues to this day but has also evolved in a way where I've seen
4:19
companies storing secrets in tags and using let's say a boot script to read in the tags via the instance metadata
4:26
calls and that might give me an API key that then I you know go use to access a
4:31
bucket or something so there's actually like potentially valuable information that you could extract or enumerate from
4:37
this yeah so AWS explicitly says in a big red box in their documentation do
4:43
not put confidential information in tags they're just meant to be organizational elements like we talk
4:49
about Department whatever yeah billing code whatever yeah exactly um but you're right we've seen customers put secrets
4:57
in there um the one mitigating factor probably is well two
5:02
really um one is you have to know the name of the tag okay so if the tag is called something really obscure it's
5:10
very hard to find unless it's in code somewhere and you find it but AWS secret key is not a super obscure tag name sure
5:17
or password or something exactly yeah the other thing is the resource has to be accessible by the attacker okay in
5:24
some way so for example the most the most common and easiest example is if you make an S3 bucket accessible let's
5:30
say you publish your website on it then now you can enumerate the tags of that
5:36
bucket Etc yeah but there's also situations where for instance um and I
5:42
think this maybe ties to the second piece of your research the marketplace relationships and um establishing
5:49
something of a trust relationship between account a and account B in order for me to sell you something via the AWS
5:54
Marketplace would that make resources available for query or not not so much so one of the other ways that you can
6:01
expose a resource is um allowing an account a role to be assumed right so if
6:07
if you can assume a role into someone else's account um then you can enumerate the tags on it yeah yeah and I feel like
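A rough sketch of the kind of cross-account trust relationship being discussed, plus a toy linter for the risky patterns; the account ID and policy below are illustrative, not from any real SaaS integration.

```python
# Hypothetical cross-account trust policy of the kind many SaaS
# integrations ask customers to create (all identifiers are made up).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # vendor account
        "Action": "sts:AssumeRole",
        # Note: no Condition block. Without e.g. sts:ExternalId, anyone who
        # can act in the vendor account can assume this role -- and, per the
        # research discussed above, enumerate tags on it.
    }],
}

def flags(policy: dict) -> list[str]:
    """Very rough linter for risky trust statements (illustration only)."""
    problems = []
    for stmt in policy["Statement"]:
        principal = stmt.get("Principal", {}).get("AWS", "")
        if principal == "*":
            problems.append("wildcard principal")
        if "Condition" not in stmt:
            problems.append("no ExternalId / condition on AssumeRole")
    return problems

print(flags(trust_policy))  # -> ['no ExternalId / condition on AssumeRole']
```

The point of the sketch: every such trust statement is an intentional exposure, and each one widens the surface the tag-enumeration trick can reach.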
6:14
there's maybe more instances of that than people realize considering the number of SAS solutions that are
6:20
designed to integrate with your AWS environment and probably do create these kinds of trust relationships with assume
6:27
role capabilities and you know trust and things like that exactly and then it all depends on
6:33
how you build your applications if you expose a queue there's all sorts of weird and wonderful assets that you
6:39
intentionally expose to AWS yeah one of the interesting things I mean you made the point that there's this big red box
6:46
that says you know don't put secrets and tags you know we've also known don't store passwords and text files on
6:52
operating systems for a long long time but that happens you know X thousands of
6:58
times per day I'm pretty sure not everybody either sees or heeds that warning about don't
7:04
put secrets in tags absolutely yeah absolutely this is just one of those
7:09
almost like security behaviors that you know training hasn't really broken
7:15
through on so in general I think what's happened with the AWS
7:21
ecosystem is it's proved out that training and documentation don't work at scale so if you think about the S3
7:29
problem yeah for a long time people were exposing S3 through a variety of making
7:34
them public making them available to all authenticated users bucket policy no IP restriction whatever right and then and
7:41
it was all documented don't do it this way but the interface sort of led you down that path it's the easiest way to do it and
7:48
eventually AWS took action at scale and really changed their user interface to
7:54
make it all off by default all blocked by default and you had to really try very hard to make a bucket public yeah
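The "easy to make public" problem above comes down to policies like the one below; this is a minimal sketch of the kind of check that block-public-access style controls perform, with a made-up bucket name.

```python
# Sketch: the check that "block public access" style controls perform.
# Policy shape follows AWS's JSON policy format; names are made up.
def is_public(policy: dict) -> bool:
    """True if any Allow statement grants access to everyone."""
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            return True
    return False

# The classic "publish your website on it" bucket policy:
website_bucket = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-website/*",
    }]
}

print(is_public(website_bucket))  # -> True
```

As the conversation notes, the real fix wasn't better documentation of this check but flipping the default so a bucket starts non-public.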
8:01
exactly even though they had the documentation and even oh and you've had five control mechanisms for eight years
8:07
now that that can make a bucket nonpublic exactly and so like I think
8:13
this has been proven out again like the S3 example is just one but it's been proven out over and over in AWS is if
8:20
it's easy y people will just do it yeah yeah yeah and the other place that I
8:25
find that that really kind of um that this kind of like anti-security Behavior
8:32
manifests itself again and again is in debugging and troubleshooting primarily in non-production environments right I'm
8:38
a developer I'm working on something that's not working um I could painstakingly go through the I am policy
8:44
permission by permission to try to figure out what is blocking it or I could Grant myself star get it to work
8:50
again and move on with life absolutely and now AWS has tools to help you do exactly that IAM Access Analyzer will tell
8:57
you which policy you need but it's hard yeah yeah yeah IAM is really really hard I mean I was working at
9:04
AWS when IAM was introduced and I remember thinking at the time like oh this is great but then about a couple
9:11
months later I remember thinking I don't know if we properly understood the complexity that we were about to unleash
9:17
on the world I say we like I had anything to do with it that the IAM tool was about to unleash on the world
9:23
and all the complexity and there was a there's a slack community that I'm in where um there was a great quote that I
9:30
screenshotted a number of years ago and it was something like every time I work with IAM I want to rip my hair out I
9:36
would pay somebody literally anything they asked for to make this problem go away yeah the beautiful thing is you can
9:43
pretty much do anything you want yeah the terrible thing is you yeah so you guys have published
9:50
these two articles people can find those on the blog blog.plerion.com or is it plerion.com/blog I can't remember blog.
9:56
plerion.com okay good that's Plerion P-L-E-R-I-O-N for those of you who are watching or listening um so do check that out I'm
10:04
curious you know Plerion is maybe a couple years old as an organization right now a little bit older a little bit yeah okay
10:09
young company young company and entering a cloud security space that you know I started working in Cloud Security in 2016 and I would say at the time it was
10:17
an early stage to work on cloud security customers generally didn't understand
10:23
Cloud security security teams that we went and talked to didn't understand Cloud security we would get a lot of
10:28
questions like where's my firewall um we would get a lot of questions like what agent should I be installing on all of my instances things
10:35
like that and I would say that the understanding of cloud security is very different from what it was you know
10:42
eight years ago but by the same token there's any number of companies in the cloud security space and some of them
10:49
very large right you've got the likes of Palo Alto with the Prisma suite and Check Point and on and on and on down the
10:55
list so I'm really curious from your perspective what makes Plerion different what's different about the approach that you're taking towards cloud security
11:01
sure um so first it starts with our mission which is to simplify Cloud security one of the things we've already
11:07
talked about is just how insanely complex IAM is but everything in the
11:12
cloud is complex AWS has hundreds of services yeah no no one person can
11:17
possibly understand that and each service may have 10 15 20 configuration items some of which by the way
11:23
counteract each other and you know some have a default deny but then an allow and yeah so yes there's the complexity
11:31
on the resource or service by service basis as well and then you've got the complexity of the stuff that you're
11:36
building in the cloud you've got the complexity of all the vulnerability management and where's your data what
11:42
your permissions are etc it's just really complicated it's a lot yeah um and what we found is that you log into a
11:49
bunch uh some of the older tools and you just get a list of all the problems
11:55
right that you have right and now you're expected to figure out what to do and what do you do
12:01
I don't I don't know and so our mission is to simplify that process and so help
12:06
you get to the top things that you need to do today okay top things that you
12:11
need to do after today okay what's going to reduce your risk the most right now
12:16
basically and help customers along that journey to mature their cloud security posture and when you think about
12:22
figuring out the things that are going to have the greatest risk reduction impact I I'm sure there's some Secret
12:28
Sauce there which I'm going to ask you about but I'm curious do you look at it from the standpoint of what has the
12:33
greatest blast radius or what has the um easiest fixability or is it like some
12:39
combination thereof or how do you think about that prioritization question right so we we use this concept of attack
12:45
paths okay and assets at risk okay so an asset at risk might be your production
12:51
database customer database S3 bucket with something my production RDS instance whatever yeah yeah exactly uh
12:58
and then we build and then so we build potential attack paths so starting from the edge yep to say a vulnerability
13:06
sitting inside a Lambda function that is accessible through API Gateway okay now
13:12
can that eventually through trust relationships or through other things go and touch that uh asset at risk or that
13:19
data at risk and if it can then we want to break that chain and we present that whole attack path to uh to the user and
13:26
give them an option to um fix some part of it or break that chain in some way so
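The attack-path idea Dan describes can be modeled as graph search; below is a toy sketch with invented resource names, where an edge means "can reach or act on" and breaking any hop on the returned path breaks the chain.

```python
from collections import deque

# Toy cloud graph (all names made up): edges mean "can reach / act on".
edges = {
    "internet": ["api_gateway"],
    "api_gateway": ["lambda_fn"],   # vulnerable function behind the gateway
    "lambda_fn": ["lambda_role"],   # function executes with this role
    "lambda_role": ["customer_db"], # role's policy can read the database
    "admin_role": ["customer_db"],
}

def attack_path(graph, src, dst):
    """Shortest chain from an internet-facing entry point to an asset at
    risk, via breadth-first search; None if the asset is unreachable."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(edges, "internet", "customer_db"))
# -> ['internet', 'api_gateway', 'lambda_fn', 'lambda_role', 'customer_db']
```

A real implementation correlates far richer edge types (network reachability, trust relationships, permissions), but prioritization then reduces to ranking paths and the cheapest hop to cut on each.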
13:31
that's really interesting because Breaking the Chain is not a response that I've seen from a lot of
13:38
solutions what I've seen from a lot of solutions is like okay here's the attack path here's all the implicated elements
13:45
maybe that vulnerability maybe the IAM role that links the instance to the RDS or maybe the ACL between VPC 1 and 2
13:53
whatever that thing is but then it's really like up to the user to figure out what the remediation is and what the
13:58
remediation steps they want to take is are you saying that you guys kind of like intelligently suggest the most
14:04
effective remediation to the attack path or how do you think about that so so yes and no okay like we're we're still
14:11
working on that part but the thing we do is is prioritize the things within that
14:17
attack path so you can see the vulnerabilities that are in that attack path and you get the worst one at the
14:23
top for example etc um and so we're still working on that part but absolutely the idea is to make it
14:30
super simple for the user so they don't have to make decisions they can if they want to they can see everything and like
14:37
maybe for their organization it doesn't make sense to patch that vulnerability for a reason maybe they want to change the trust relationship between the Lambda
14:44
doing something else okay or trim down the IAM role whatever yeah got it got it
14:49
I think that to me it sounds like there
14:55
are parallels to the vulnerability management problem that we as a cyber security industry have
15:00
been trying to eliminate for like 20 plus years I find it super depressing by the way that like um you know 20 years
15:07
ago when I started in IT to kind of date myself it's actually more like 25 but the
15:12
average time that a vulnerability lived on a system was more than six months and
15:18
it's still the case and I think that there's when I see organizations turn on
15:24
vulnerability management for the first time they're always overwhelmed by the number of vulnerabilities that exist in
15:29
their environment and they have no idea about how to figure out which ones that they need to tackle is that kind of you
15:37
know is there a parallel there in what you're doing in Cloud security yeah exactly so so part of cloud security is
15:42
vulnerability management yeah of the stuff in the cloud and so absolutely right we prioritize the assets at risk
15:50
and then we prioritize the vulnerabilities on those okay assets at risk okay okay yeah there's a there's a
15:56
lot there um there's a lot of data that has to be correlated to kind of calculate that attack path at least
16:02
from my own understanding of AWS so I think that's a really um powerful solution and probably something that a
16:09
lot of organizations aren't going to immediately understand like first time
16:14
they see it so when you think about walking somebody through that attack path you know do you kind of go step by
16:21
step explaining what's the link between step one step two step three yeah
16:27
absolutely and I mean they can see it visually visually sure yeah but they may not understand like how thing one
16:34
connects to thing two like why a non-obvious IAM role or a you know
16:40
secondary set of permissions that is attached to an IAM user might
16:46
accidentally create a secondary exposure that they were not aware of yeah exactly so um we we we describe all of the
16:54
relationships um for example uh if something might be accessible through uh a
17:00
NACL right and so we'll say that there's network access available through this relationship another thing
17:06
might be that that a role can be assumed okay and so we'll describe that relationship okay but the idea over time
17:14
is we make again going back to the Simplicity we want to make all of this simpler and simpler over time so we
17:19
abstract all of that complication and give the user the simplest explanation that they can get
17:26
and then if they want to yeah they can go dig in they can press a button and see the policy and dig into it yeah and
17:32
how do you think about automated remediation as part of this whole world like is it something that you guys do or
17:39
is it something that like that you offer but customers have to turn on or do you think like customers just aren't quite there yet it's something we discuss a
17:47
lot okay um whether we should offer it or not yeah um but so personally I feel
17:54
like it's a bit of an antipattern okay um so in in principle what we want to do
18:00
is shift all of that stuff left right whether it's build the right
18:05
policy at the start or uh make sure that the container doesn't get into the
18:10
registry or make sure that the vulnerable code isn't vulnerable in the first place or never gets committed
18:15
to prod yeah exactly um and if it does um you want to find it and
18:21
establish a baseline so that it never happens again um so like in principle I
18:27
think we want to or you might build uh a service control policy to prevent that kind of
18:33
thing ever again in the future so it's sort of like if it if it does happen um
18:40
we don't want it to be like a crutch where you just auto remediate everything over and over and over and over we want
18:45
it to be fix your environment systemically so it never happens again
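One way to make that systemic fix concrete is the service control policy Dan mentions; below is a sketch assuming the goal is to stop anyone from loosening S3 Block Public Access settings (the action names are real AWS actions, but real-world SCPs usually add a condition exempting a break-glass role).

```python
import json

# Sketch of a service control policy (SCP) expressing "fix it so it never
# happens again": deny any change to S3 Block Public Access, org-wide.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLooseningBlockPublicAccess",
        "Effect": "Deny",
        "Action": [
            "s3:PutAccountPublicAccessBlock",   # account-level settings
            "s3:PutBucketPublicAccessBlock",    # per-bucket settings
        ],
        "Resource": "*",
    }],
}

print(json.dumps(scp, indent=2))
```

Unlike auto-remediation that keeps re-fixing the same drift, a guardrail like this removes the failure mode entirely, which is the distinction being drawn in the conversation.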
18:50
but we do discuss it I think what I found is that customers uh
18:57
in a discussion will say they they want that thing but when it goes to actually
19:02
putting it in place yeah they very rarely actually use the auto remediation
19:07
for some reason making sure the production doesn't go down they're worried about the risks etc you ever work in an
19:14
environment that used a web application firewall yes did you have the experience
19:20
that like only 20% of the rules that you in theory wanted to implement actually
19:26
got turned into blocking mode in production yeah absolutely like what you just described kind of describes or like
19:34
captures that Essence to me where I and I've heard this from organizations by the way around the world this is not a
19:41
geographic industry company size or anything specific this is
19:46
the nobody wants to be the person responsible for the one false positive
19:52
block in production that prevented an e-commerce checkout a successful partner
19:59
transaction or what have you and yet I feel like in some
20:04
way organizations don't learn to stop making those mistakes unless they actually put constraints on them I'm
20:11
curious like you know you worked at two large fast moving organizations how did you think about that in those context
20:17
like did you have to try to put up these walls to force correct behavior no okay I was lucky okay part
20:26
luck part by choice the executive teams in both of those both Linktree and Atlassian
20:31
really believed in security okay and so that fed through the entire organization
20:36
exactly and so um if we found something that was wrong we would go back and fix it I think where something like a WAF is
20:44
really really good is that first level triage response or like something's
20:50
something's bad and it'll take a day or two to fix it let's just put something in place to block it temporarily
20:56
before we do the proper fix yeah I think that works that use case worked really well for us that's interesting because
21:02
that's not one of the primary blocking use cases that I've observed in in customers that I've dealt with which is
21:08
the customers that I've dealt with that are able to really Implement a blocking
21:14
rule in production it's because it's such a universal truth that it's really really easy it's like block all traffic
21:21
from North Korea easy right like that's universally 100% of the time there is no
21:27
valid business case ever where we would want to allow something but I find beyond that it gets really it
21:33
gets very fuzzy very fast yeah absolutely and then the thing you find with that specific technology web
21:39
application firewalls is that bug bounties show over and over and over again that clever researchers will go
21:46
find a way around your rule because the rule is so static yeah look I mean this is something we you know not to flip the
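The brittleness of static rules is easy to demonstrate; here is a toy example with a deliberately naive SQL-injection pattern (not any real WAF's ruleset) and the kind of trivial bypass bug-bounty researchers find.

```python
import re

# A deliberately naive, static WAF-style rule (illustration only):
# block requests containing "union select".
rule = re.compile(r"union\s+select", re.IGNORECASE)

blocked = "1 UNION SELECT password FROM users"
bypass = "1 UNION/**/SELECT password FROM users"  # SQL comment splits the keywords

print(bool(rule.search(blocked)))  # -> True (caught by the rule)
print(bool(rule.search(bypass)))   # -> False (sails straight past it)
```

The payload still means the same thing to the database, but the signature no longer matches, which is why static pattern-matching keeps losing to researchers, and why it cannot catch business logic flaws at all.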
21:51
conversation but we work on API security right at FireTail and um one of the questions that we run into very often is
21:57
like well why can't I just solve my API security problem with the web application firewall and to exactly your
22:03
point I think it's been proven I don't know how many times or anybody interested look on our blog for why
22:10
WAFs aren't enough or something like that is the title of the article and we link five instances from the last six
22:16
months maybe seven months by now that prove demonstrably that there are easy
22:21
workarounds and it could be as simple as you know VPN cycle IP address whatever but more often than not it's like oh
22:28
actually it's a business logic flaw it's not a demonstrable wrong type of call or
22:34
something that the WAF can pick up so um so yeah I'm
22:42
consistently surprised that they're not more universally implemented but also that
22:49
people go to them and think of them as a solution and I just see this kind of circular logic flaw where like you
22:55
know you can't implement it in production with the blocking that you want but then you think it's a solution
23:01
that is demonstrably proven not to solve the problem why are we having this conversation um so anyway a little bit
23:06
of a tangent there from my side and I want to come back to something from your past experience which is around incident
23:13
response and incident communication and um there's something that we've observed in the API security
23:20
space which is that um APIs are really complex in a way because they kind of sit on a network so they have some
23:26
Network exposure they run on top of infrastructure so there's like infrastructure correlation they front
23:32
business logic and application calls and they usually also front data sets and so you have like all these moving parts
23:38
around them and when there is a flaw or a breach around an API organizations
23:44
really struggle to kind of understand what happened what the scale of the breach is and what the scope of the
23:50
breach is and so we do things from the product side to try to help simplify that but I'm curious as somebody who's
23:56
worked at you know some of these organizations operating at scale as a practitioner how did you think about the
24:03
whole kind of incident response process and like importantly communicating that to key stakeholders
24:10
whether that's like the rest of the executive team or to customers boards Partners like what's important in that
24:15
whole process and how do you approach it the investigation communications reporting right um and
24:22
and I know there's a lot there so there's a lot to unpack yeah uh I think one of the most important things
24:28
security incident response differs dramatically from um reliability incident
24:34
response okay the most important factor in a reliability incidence like your website's gone down is to get it back up
24:41
as quickly as possible security is kind of the inverse of that okay is you sacrifice speed to try and deeply
24:48
understand what's going on so you don't make any bad decisions okay so for example you might find that uh I don't
24:55
know an actor got into your environment through something and your first instinct might be to go immediately block that thing
25:03
right um but maybe the actor has been in there for months maybe they have persistence mechanisms in a different
25:09
direction and what you've done is told them that they now have to exfiltrate as quickly as possible right and so what
25:16
you're really trying to do is scope the incident before you start making decisions about what to do the
25:23
other thing you're trying to do is communicate as fast as you can to your customers what's happened and
25:30
what they can do to mitigate their risk okay um so so for example um like if
25:37
their passwords have left the building sure okay those passwords are going to be used by that actor or some other
25:42
actor in some other environment and we see this over and over again one SaaS company gets compromised and the
25:49
username and password data immediately gets used to attack every other SaaS company with those
25:56
same users and so hackers have automation too they're going to try this everywhere absolutely um and so like
26:02
what you want to do is like tell the customers as fast as possible while
26:08
information to be able to explain it to them and control remediate that incident
26:14
contain that incident all of those fun things but there's a there's a couple things in what you said that I want to kind of dig deeper on because
26:21
there's some of that runs counter to I think what a lot of us would have experienced as consumers of organizations that have been
26:28
breached I think the so first of all there's got to be business pressure to restore Services I know you know that's
26:34
a real thing that everybody faces on a day-to-day basis and you're saying like in a way you don't want that reaction
26:40
you want the time to kind of go do the incident research you know it's a matter
26:45
of tradeoffs okay or it's a matter of where on the Spectrum you are so you
26:51
absolutely want to go fast yeah like you don't want the actor to be rummaging around your infrastructure
27:01
but in a reliability incident it's the number one thing you concentrate on it's
27:07
not necessarily the number one thing like you want to know exactly what's happened and you see over and over
27:12
organizations that rush to put out incident Communications they'll be vague they'll
27:18
make some uh some statements about what happen and then a week later they'll come back and have to retract those
27:25
statements and explain why they were wrong about these things and it becomes completely embarrassing and you have
27:31
this Loop over and over that's I can think of some some examples as well yeah
27:37
yeah exactly and so you you want to avoid that and you want to avoid the actor taking actions that are going to
27:44
be detrimental to the whole incident response yeah by the fact that they know that you know they're in there
27:50
but at the same time we're also entering an era where there's regulatory requirements around the incident
27:56
reporting right and so you know in the US I think 96 hours is you know four
28:01
days is kind of what's becoming the standard now and I'm I'm curious like I
28:08
don't know you know not to call out any of your past employers but is that a reasonable time frame or is there there
28:14
there seems to me that there's still a chance that you're still doing forensics you may not understand how deeply you
28:20
might have been compromised and how long you might have been compromised at that point in time yeah so you're absolutely
28:25
right I think the intent is right yeah the pressure on organizations to move
28:32
fast is right you want to tell customers because the other end of the spectrum is you take forever and don't do anything
28:38
about it and then don't notify customers about everything that's happened so all of that is right but on the flip side
28:46
sometimes you don't know after three or four days what happened and you'll see like I've been involved in incident response
28:52
where it took weeks to unravel everything yeah and so yeah there's definitely balancing it and sometimes you
28:59
just you've got to keep investigating so in that case when you're when you're
29:05
trying to kind of not lose customer trust also not lose executive and board
29:11
support as you're going through weeks of Investigation what's the communication strategy because to me I can see very
29:18
good reasons why I would remain as as unclear and vague and kind of General in
29:23
everything that I put out to avoid the embarrassment that might come later for getting it wrong so I mean the the
29:29
employers that I've been at and my philosophy in particular has always been trying to be as transparent as possible
29:36
and don't use weasel words like there is no evidence that is the one that's always used meaningless no evidence that
29:44
credit cards were compromised because you didn't have any logs um so it all depends on
29:51
the exact situation but like if the investigation is ongoing you should say the investigation is ongoing
29:58
um if we've taken some actions uh in the investigation and there's some conclusive results we should share that
30:05
I think that's perfectly reasonable if there's especially if there's things that we know customers have to do to
30:11
mitigate their risk we should tell them that as soon as possible whether it's privately through sort of like corporate
30:18
communications or publicly uh whatever yeah exactly yeah I mean
30:24
there's a lot of tension there right because you know the loss of customer trust
30:30
the reputational damage like these are real risks to an organization and you know
30:35
recently there was a um a microblog I guess we're not supposed to use the
30:41
brand name for whatever the stupid bird company used to be called um you know an up-and-coming wannabe usurper who
30:50
had an API that was just like horribly constructed and we've deconstructed it a little bit on our blog if anybody
30:56
wants to check it out but it had everything down to um password reset
31:02
codes returned through an API call that was pretty easily obtained even in an
31:08
unauthorized manner like I could obtain your user record including password reset codes which by the way were not
31:13
encrypted only encoded and quite easily decoded with open-source software so I
31:18
could you know kind of figure out the email address associated with your account
31:24
potentially update it assert myself as admin use your reset codes reset your password probably take over your account
31:31
assume your identity on that platform and you know go make statements in your name and to your point the
31:38
communications around it were very much no evidence of this this is not a real risk there's no cases of this a actually
31:46
having happened in the wild but I'm pretty sure for that company it's you know got to be close to game over so
31:54
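The encoding-versus-encryption distinction at the heart of that finding can be shown in a few lines. This is only a sketch: the episode says the codes were "encoded and quite easily decoded with open-source software" without naming the scheme, so base64 and the reset-code value here are assumptions for illustration:

```python
import base64

# Encoding is a reversible transformation, not a secret: anyone who
# sees the encoded value can recover the original without any key.
reset_code = "reset-8675309"  # hypothetical reset code, not from the episode
encoded = base64.b64encode(reset_code.encode()).decode()  # what a leaky API might return

# Reversing it takes a single standard-library call, no "cracking" required.
decoded = base64.b64decode(encoded).decode()
assert decoded == reset_code
```

This is why returning encoded secrets through an API offers no protection at all: encoding serves transport and formatting, while only encryption (or, for passwords, one-way hashing) withholds the underlying value.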
like how do you think about when you've had kind of a worst case scenario not using weasel words not
32:00
backing out like what's an effective communication strategy in that case so the way I've done it in the past is
32:07
probably the best way to put it is you've got to have principles that you've agreed on upfront okay so your
32:13
comms team your legal team uh your technical security team have agreed on principles going down to
32:21
these are the words we're not going to use like your security is important to us yeah like
32:27
avoid those kinds of things and if you have those principles if you have those approaches laid out you can just pull
32:33
those off the shelf when a bad thing happens if what you do is try and craft all of the communication at the time of
32:40
the incident when it's completely chaotic where you've got hundreds of people working on the incident you're
32:46
almost guaranteed to do something silly yeah in the moment um but if you've got those guidelines principles templates
32:53
etc you're more likely to make a good decision these principles I'm curious in your
32:59
experience are these things that you kind of agreed on in advance with the
33:06
executive suite and the board yes and no it depends on how the company functions uh sometimes it's just
33:13
within the security and legal team sometimes it's just with the security and comms team I've actually had it work
33:18
where my team has had the principles or the outlines of things and when something bad has happened and
33:24
everyone's kind of running around trying to figure out what to say and what to do we'll provide those things and say hey
33:30
we've already thought about this up front use this as a starting point yeah awesome awesome well Daniel I know we're
33:37
kind of running out of time for today's conversation um I wanted to check I had a list of things that I wanted to ask
33:42
you here and I'm trying to pick one that we might have like two three minutes to focus on and there's one that jumps
33:50
out to me from this list and this is I think a philosophical stance of yours which is build the security you expect
33:57
so when you think about that in some of your past experiences like what does that mean to you and like what are you
34:03
trying to say with that yeah so um I'm really big on principles right um
34:09
uh just writing down all of the instructions for everything I found in
34:15
my experience just doesn't work no like sometimes people follow the instructions but often they don't so like often what
34:21
you really want is like really I guess trivial things that people can
34:26
understand and use uh and so at Linktree for example we had engineering principles okay um and one of those
34:33
principles was build the security you expect and the idea of that is okay you're an engineer or a designer or product
34:39
manager whatever and you're trying to figure out like what is this thing going to look like what's the architecture
34:45
what's the design etc um we have a small security team so you're probably not going to be able to get
34:51
their expertise but you want to make good decisions one of the ways that you can make good decisions is thinking
34:57
about okay if Google made this thing that we're building how would I expect
35:03
it to work would I expect the data to be encrypted in the database would I expect
35:09
um it to require me to have a long password whatever the thing is would it would I expect them to have logs if
35:16
there was an incident yeah all of those kinds of things and so like there's really one trivial thing that you need
35:22
to remember and the person can hopefully make better decisions with that principle and so do you find that if you
35:28
put these principles out there and you kind of communicate educate get people to commit to them overall this should be
35:36
kind of a self-reinforcing behavior that actually improves the security quality of the organization both in the products
35:42
that it's building and in the way that you're kind of thinking about design and operations and incident response and so
35:47
on and yes and actually what you want to do is you want it to become a communication tool um so often people
35:55
find it hard to challenge others in a corporate culture or maybe hard to
36:01
challenge their manager or a person in authority yeah right but if they have a common language that allows them to do
36:07
that for example a principle like this instead of saying uh we
31:13
should encrypt passwords or whatever we should encrypt this data because it's a good thing to do you say
36:18
well our principle is build the security we expect and I expect these kinds of
36:23
things to be encrypted if Google did it as an example so like that common language if you repeat it enough
36:30
actually becomes really powerful cause the security person or the security team can kind of step away and it takes on a life of
36:36
its own yeah yeah makes a ton of sense and by the way I expect that password to be not only encrypted but salted with
36:41
the encryption one-way so awesome well Daniel Grzelak thank you so much for taking the time to join us on the modern
36:47
cyber podcast I've really enjoyed today's conversation if people want to find out more about you or the work that
36:52
you're doing the research where's the best place blog.plerion.com all right you heard
36:57
it here first thanks again that's it for today's episode talk to you next time
37:02
[Music]