Modern Cyber with Jeremy Snyder - Episode 37

Simon Wijckmans of c/side

In this episode of Modern Cyber, Jeremy Snyder speaks with Simon Wijckmans, CEO and founder of c/side, about the hidden complexities of client-side security in web environments.


Podcast Transcript

Welcome back to another episode of Modern Cyber. As always, I am your host, Jeremy, and I am delighted to be coming to you today with a guest who comes from the world of client side security, developer security, web security, WAF, all of it. I know he's got a long history in this space. I'm delighted to be joined today by Simon Wijckmans, the CEO of c/side. And c/side is providing security solutions to detect and block malicious client side execution.

So we're going to talk a lot about client-side stuff. We're going to talk a lot about browser security related topics today. But first, Simon, thank you so much for taking the time to join us today on Modern Cyber. And thank you so much for having me. It's great to have the chance to have a conversation with you.

Awesome. I want to start today's conversation with something that you mentioned to me that I honestly don't know anything about, and that is the polyfill attack. First of all, can you explain to us what it is, and then second, maybe why we should be worried about it? Yeah. So polyfill.io, right?

So cdn.polyfill.io was living on about half a million people's websites. That was a script originally created by the Financial Times.

They needed it to basically polyfill all JavaScript across however many websites they have. The problem is those large companies often don't even know how many sites they have. Yep. But a team within their company was tasked to make something to essentially create a consistent starting point across old sites and new sites of theirs. Other people obviously used it because it was quite efficient at getting new websites to work on Internet Explorer.

And so for many years it was on people's websites. Suddenly in February this year there was a change of ownership between a private owner, one of the maintainers of the project, and a Chinese company. Fast forward to June this year, 25th June, all of a sudden, people's traffic, like when people went to their websites and they had Polyfill loaded on that page, they would get redirected to adult content websites or online casinos. Depending on their user agents, only a subset of people, and then also, yeah, only once.

So you couldn't just open up your developer console and refresh again and see why it happens. So they were a little bit stealthy. That is what we know. I like to expand a little bit on that because there is a lot more there. Yeah, please.

Because client-side fetches are happening between, well, the browser of the user and a third-party server, we don't actually know what is happening in the browser of the user necessarily, depending on the headers. So the referer header, the IP, the user agent, various other things. Perhaps it could even be a random response, or depending on the time zone they're in, you could get a different response. What has happened more likely, and I think this is an important thing to recognize, is that between February and June, that was quite a big amount of time, likely something else happened. And we simply weren't there to keep an eye on it and recognize the real situation that happened.

And here's sort of the turn of events that, within the security community, people are starting to understand probably has happened. There is a tiny amount of evidence that's backing this up, but we haven't quite gotten to the full case yet. So taking a step back, Financial Times owned this domain. They were trying to figure out what the future was for the polyfill domain that was on so many websites. They didn't really want to maintain it.

They tried to get the folks at Fastly to take ownership of it. Fastly didn't really want to because they're already sponsoring a project for traffic bandwidth, etcetera. And so it got transferred to private ownership, to one of the maintainers. Eventually, that maintainer, after maintaining it very well for many years, received a legitimate-looking business case from a Chinese company and was offered life-changing money for the domain name. Eventually he accepted because, well, it made sense.

And the business case looked legitimate. Okay. This was on half a million people's websites, including news websites, including platforms like Hulu, even HR platforms. So this was a very valuable asset to a bad actor. Yep.

What we expect has happened is that during that time frame where seemingly nothing bad was happening, they pulled off a very specific attack against a very specific company, depending on the referer header, depending on the time of the day, and they were successful at that. And then to blow up the domain and to basically kill the story, in June, they made a very loud attack happen, and now everybody remembers that one, and that's the end of it. That's likely what happened here. The real problem here is that we will never know, because nobody's keeping an eye on client-side executions. There's tools out there that say that they do, but the reality is that they don't.

They try to fetch it after the fact, or they're relying on threat feed intel to tell them what a bad script is. Likely what happened here was way more serious, and we will never know why or how. Yeah. Well, I mean, we can certainly, like, come to logical suspicions about why and so on. But let me make sure I understand a couple points in there, because there's a couple things that I'm not super, super familiar with myself.

But if we kind of think about how this likely played out, okay, so we've got this, you know, project and it's got this domain name associated with it, and it's distributing a JavaScript via the CDN. Right? So, that's the first thing I want to make sure I understand properly, is that, like, the source of this script is from one central location controlled by whoever controls the domain name for polyfill.io. Is that a correct starting point?

Originally, yes. After the change of ownership, it changed. So it wasn't pointing to the same CDN anymore. It changed to a normal web server that could have hosted dynamic content depending on the header of the request. Right.

But exactly. My point is that, you know, whoever controls that domain name can choose what the source or, like, what the origin of the client scripts is going to be. Yes. Okay. So then, you know, in a way, people would say, oh, supply chain attack, supply chain attack, and, you know, maybe there's like a supply chain element to it.

But, effectively, it's like whoever controls the domain name can also control the source of authority and the root of the origin of the scripts that go out to everybody else. Okay. So they changed the source, they put their own scripts in place. Now, according to what you're saying, if I understood you correctly, they probably were targeting one or more high-value targets, probably based on domain names, probably based on headers, where they could figure out, like, they don't care about me browsing from my home address, browsing to a, just say Financial Times was the original owner. Like, they don't care about me looking at the Financial Times website.

They don't care about you looking at the Financial Times website. They definitely care about, I don't know, top five banks looking at the Financial Times website, or they care about government organizations looking at that website, etcetera. So they've got a target list, presumably, of one or more organizations that they're looking at based on, I don't know, AS number, IP address blocks, headers, whatever they want to use. What was the attack then likely looking for on the client side? Was it probably looking to exfiltrate things like sessions and cookies and tokens cached within the browser that could have been extracted by this malicious script?

Or what's our best assumption on that side? The problem is that we do not know. And the possibilities for a very wide-scale or targeted attack were both there. What the attacker could reasonably have done was, well, we're living in an interesting time where people are getting influenced by media quite a bit. Right?

These scripts were on a lot of big news websites. They could have changed content. They could have changed anything that you saw on the page. They could have exfiltrated credit card information, could have exfiltrated people's login credentials. Because this was also on a couple of large HR platforms, dashboards, they could have stolen way more sensitive information like social security numbers, previous employment, criminal records, background checks, the whole shebang.

The problem is we don't know, and it all could have happened. But the reality is that we don't keep an eye on client-side security today, because the community hasn't had an elegant solution to it and basically brushed it off as a non-issue for some time. The reality is that we don't actually know. And we are quite certain that as we protect our infrastructure more, and we protect our open source dependencies coming from registries, a very powerful attack vector is, of course, the browser. So we have to start doing things there as well.

Yeah. So it's kind of a logical progression, I would say, in the attack vectors. Let me push back on something that you said there for a second and get your reaction to it. And so you say, like, okay, we're not doing anything on client-side security, but that's not 100% true, right? I would point to two things: I would point to the advent of things like secure web gateways that kind of, you know, have allow-list or deny-list domains and things like that, and second I would point to, like, the rise of secure enterprise browsers.

I don't really know a ton about those myself, but, like, explain to me why those are not solutions for a situation like this. So there's a few more things that are happening that are helping the cause, but when you talk about secure web gateways blocking domain names, those are essentially just waiting on threat feed intel to block a domain name. Okay. They won't inspect the actual payloads of the scripts being served. They don't have access to that.

And that is where the real problem sits. Right? Because if I host a script from a web server somewhere and I see a certain request coming in with a certain referrer header, like from a certain user agent, on a certain IP, then I can give them different content. A secure web gateway is not gonna change any of that. And so that's the major issue.
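To make that concrete, here is a hypothetical sketch of the kind of server-side logic being described: the host keys its response on request headers and serves the malicious payload to a given client only once, so after-the-fact scanning and a simple refresh both see the clean script. All names and payloads here are illustrative, not taken from the actual polyfill.io incident.

```javascript
// Track which clients have already received the malicious payload once.
const servedOnce = new Set();

function choosePayload(headers, clientIp) {
  const referer = headers["referer"] || "";
  const userAgent = headers["user-agent"] || "";

  // Anything that looks like analysis tooling gets the clean script.
  if (/curl|wget|headless/i.test(userAgent)) return "/* benign polyfill */";

  // Only target traffic from a specific (hypothetical) victim site,
  // and only the first time we see that client.
  if (referer.includes("victim.example") && !servedOnce.has(clientIp)) {
    servedOnce.add(clientIp);
    return "/* malicious payload */";
  }
  return "/* benign polyfill */";
}

// First visit from the victim site gets the bad script...
console.log(choosePayload({ referer: "https://victim.example/news", "user-agent": "Mozilla/5.0" }, "203.0.113.7"));
// ...but a refresh from the same IP sees the clean one, as does a scanner.
console.log(choosePayload({ referer: "https://victim.example/news", "user-agent": "Mozilla/5.0" }, "203.0.113.7"));
console.log(choosePayload({ "user-agent": "curl/8.0" }, "198.51.100.9"));
```

This is why a scanner or gateway that fetches the URL itself sees nothing wrong: it is not the targeted client, so it never receives the targeted response.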

The fact is that we do not actually keep an eye on the payloads of these scripts. We wait on threat feed intel to flag the domain name, and when the domain name is flagged, that's when we start, like, blocking it, flagging it, etcetera. Now Yeah. Then there's the content security policy approach. So people do add content security policies to their websites in a small percentage of cases.

The adoption of it has been rather low. There's a few reasons why that is. But I mean, the thing about content security policies is that in general it's the same issue. It again comes down to trusting the source and not verifying what you get from them. And trusting the source isn't enough on its own.

So there's like two other things that people do. The third one, secure browsers. Well, there are secure browsers out there that focus mostly on data loss prevention. Yep. They do things with regards to SASE, there's a few secure browsers out there, but none of them have really built any detection capability specifically for third-party JavaScript or third-party executions.

And that's a problem. And it is a major hassle to build a good detection engine for this. People do all sorts of weird things with third-party JavaScript. Things that, like, honestly you would look at and be like, why? That looks pretty, pretty suspicious. But they do it for legitimate reasons, for legitimate outcomes.

So it is a very noisy attack vector. And so it takes a certain level of dedication to solve the problem. And so a browser security company has so many other things that they gotta build. They gotta support changes in W3C standards and new plugin things like Manifest V3, etcetera. It would take a proper, like, 20-person team to build client-side security well for, like, third-party scripts, and none of them are dedicated to it to the extent that we are.

Got it. Got it. And just coming back to something that you said there about secure web gateways and relying on a threat intel feed for domain names. I guess in a case like this, this is probably considered a trusted domain name. So like the likelihood of, let's say, polyfill being flagged as a malicious domain name is super low.

And, in fact, that's probably what made it valuable to the attacker, is that you've got this thing that has likely been allowlisted by lots of large organizations. And, yeah. Okay. So, effectively, the compromise of something that is already kind of trusted within many, many environments is highly valuable to a threat actor. Well, the worst thing is that in February, both Fastly, Cloudflare, and a bunch of other security companies started making noise about that domain ownership change.

And that resulted in absolutely nothing. So there was no threat feed flagging. That change of ownership looked very suspicious, but threat feed vendors decided not to take action, not to take any, like, alerting action or anything like that. Yeah. And so, by the way, flagging anything by source is just generally a bad idea.

I can write a bad script now and host it behind googletagmanager.com and give it to a whole bunch of people, and that would bypass any type of threat feed. There you go. Threat-feed-based, like, blocking of third-party scripts is fundamentally broken. And by the way, when polyfill.io did turn bad on the 25th of June, it took about 30 hours for any of the threat feeds to take action. And that was after Namecheap took ownership again of the domain.

So they removed the DNS records of the domain, they took ownership of it again. And then after that, the threat feeds started flagging it. And the reason why is because the threat feed vendors didn't wanna cause, like, false positives, quote unquote, because the script was still behaving fine on a certain percentage of requests. So they didn't wanna break websites, but that was in my opinion the wrong response. Yeah.

But it's funny, the response that you described there, and kind of, let's say, the hesitation around flagging something as a false positive, to me, that sounds very much like what I consistently hear about web application firewalls. Like, when we talk to customers about, like, hey, why didn't you go live in blocking mode in your production environment? It's the risk of blocking the one legitimate transaction based on a false positive detection of some type that usually keeps people from turning those on. So I wonder, like, short of just saying, hey, we do better with fidelity.

We do better with accuracy in terms of flagging false positives. Should we be thinking about this from a fundamentally different approach? I think so. So, historically, client side security was an area that we could look at and kind of accept and ignore to an extent because we had much bigger problems. Yep.

Nowadays, most websites have some type of proxy sitting in front of them. Yep. Web application firewalls became more accessible. You can get one for free or for $20 a month off Cloudflare. DDoS protection is quite accessible to people, even though there's still a lot of DDoS attacks happening, UDP style, harder to detect.

But the reality is that we have kind of progressed as a security community and protected our web apps way better on various different levels. But the client side is lagging behind. And that is an area that is incredibly valuable to an attacker. And we do not actually look at the payloads. In the best case, when people adopt solutions from large security companies like the Akamais and Cloudflares and F5s of this world and those kinds of companies, well, the majority of those are based on threat feed intel, or purely client-side behavior monitoring.

But you're essentially creating a list of behaviors not to do in the browser of a user. So guess what? A bad actor basically reverse engineers that in a heartbeat and avoids those specific acts you're monitoring. So we've not really given a good solution to this problem, and we should definitely start monitoring actual client-side behaviors and payloads that get served to browsers as a result. And by the way, browser specifications have not catered to this incredibly well over the years.

Yeah. Which is why we had to join the W3C and make sure that the future gives us some better ammo to deal with client-side executions and actually build better, more comprehensive security tools in the browser for the user. That's why our solution today is a proxy to third-party scripts, because there are certain things that we need to do that we can't do in a browser, that we should do as a proxy. Yeah. It's funny you mentioned, you know, joining W3C.

We faced a similar kind of challenge on the API security side, which is that, like, first of all, it's not something that people are super aware of. And if they are aware of that, a lot of it is kind of retrofitting past solutions and, you know, just saying, hey, we cover API security now as well. Honestly, like, the WAF is one of the chief examples of that. It's just saying, like, hey, we have a WAF. It's API security.

Great. But, you know, when we actually look at the nature of the threats and the nature of the attacks and the data exfiltrations that have happened on the API security side, the vast majority of them look like normal API traffic. And so to that end, we ended up having to do a similar thing to you, which is that we joined the Center for Internet Security. We started a community working group around API security. And just earlier this year, just about a month and a half ago from the time we're recording this, we released the first version of the API Security Guide, the CIS API Security Guide, based on partnership with them, based on pulling other people in from the community, and having to create, effectively, like, a new way to look at the problem from scratch.

So I'm pretty familiar. I'm curious from your side, like, working with W3C, how has that process been? So we only just joined very recently and we're gonna announce that soon. I mean, so it's funny. We record this now, and probably by the time this is out, we would have announced it.

But most likely. At Cloudflare, we were also part of W3C; at Vercel as well. I feel like if you are taking your position in the market seriously and you want to actually solve the problem for real, that is the right way to do it. These processes are not fast. They do not immediately lead to high impact, but they're the right thing to do for the future.

So I feel, honestly, a security company that is part of these communities and actually contributes to better specifications in the future, I hold them in way higher regard than just another company that fixes something with whatever is possible today without thinking about the future. By the way, this also takes a very specific type of engineer, right? Most engineers have to deal with a problem within the platform that they are given. This framework allows you to do things in x y z way. To find an engineer that can think beyond that and think, wouldn't it be better if there were certain other things we could do, and oh yeah, here by the way is a solution to that.

That's a very specific way of thinking that is very hard to find. And I would say in the JavaScript and TypeScript community that is almost impossible. It's like very, very, very hard to find good people for that. Yeah. Yeah.

I mean, I know we've struggled to hire people with, like, the right domain knowledge combined with JavaScript/TypeScript capabilities, or people with, like, good enough capabilities and a basic enough understanding of, let's say, the security problem around APIs that they can, you know, actively contribute to what we've done. There's something in what you said there that I don't want to spend too much time on, but I definitely want to echo, which is that, like, trying to solve the problem because it's the right thing to do very often feels like a very uphill effort, even though, like, you know, we know it's the right thing to do. Like, from our perspective, you know, we faced a similar problem again. But we realized, like, okay, you might start an API security journey from, like, almost any standpoint. You might start from, let's say, like, the code and design phase, or you might start with, like, an environment that's already live, and you're trying to understand what's going on.

And so we had to figure out, like, look, we just want the problem to go away and to be, you know, kind of solved. And so, for instance, we offer a free tier. We offer open source on a lot of what we're doing. Honestly, those things don't make us money. In fact, they cost us money to some extent.

And, like, they cost us money to maintain on the open source side, and they cost us money to run the infrastructure on the free tier side. But if we can actually help organizations and kind of improve the overall security of the situation, it's very much the right thing to do. So I certainly echo your sentiment on that side. I wanna come back to something. You know, we've talked a lot about kind of the nature of this script and the nature of this polyfill situation and so on. And you mentioned kind of the proxying solution to client-side scripts and so on.

Are client-side scripts the only threat against the browser, or are there other things that we need to think about as well? Well, so browsers became a lot more capable over the years, and for all the right reasons. Right? You want browsers to be able to do more good things. And a lot of mobile apps are basically browsers under the hood, progressive web apps, WebViews, etcetera.

So there's a lot of really cool stuff you can do there. It usually starts with JavaScript doing a fetch to some other API, and then that API then opens a much bigger, like, opportunity to do great stuff, but also bad stuff. WebAssembly in browsers is a major one. Generally doing all sorts of stuff inside of iframes is a really common one that we see. There's things like... From an attack perspective, you mean?

Like that, common attack vectors. Okay. Yeah. And then there's features like, for instance, IndexedDB, that are part of any browser nowadays, that basically allow you to have literally a database in a browser, right? And that's especially popular now with, like, progressive web apps, like web views, things inside of, like, mobile apps, because those have to work offline.

So more sensitive data is actually living in essentially glorified browsers. So there's a lot there. I feel "third party" here is kind of a loose term. When you use a node package manager, like an installation of anything, just any type of package, you have the ability to also inject things as a first-party script. Yep.

Those basically have the same issue. We see a lot of attacks going through inline scripts. That's a whole different level of issue, because detecting inline scripts and analyzing those well, that is a whole different thing to deal with. But yeah, I mean, it all comes down to JavaScript, to an extent, starting the chain of bad events, because that is essentially what allows us to do dynamic things in browsers. So that is the beginning of it all.

It's not necessarily the end. Yeah. Got it. Got it. And so, what would be the ideal-world scenario for the browser going forward? Is it pretty much a more secure browser with the same capabilities and then a proxy that scripts pass through? Or how do you see the ideal scenario going forward? So first things first, I think it's important to flag that the individual developers that are often held responsible for implementing these third-party scripts are not actually the ones that want them there. They're usually not the biggest fans of them in the first place, because they cause things like slower-loading websites or overlapping classes, and as a result, things on a website breaking.

It's usually, like, a marketing team asking for a tool to be added, sales teams, HR teams asking for some type of special tracking for applicants, or legal teams asking for cookie banners and things like that. So that's where the source of this issue sits. We're not gonna be able to fix that, ever. The reality is that we need third-party scripts for certain applications.

For instance, chat bots, captchas, ads, these types of things will basically always be third-party scripts. I don't think the secure browser approach is ever gonna work, and the reason why is because, largely, consumers will use whatever is on their device or whatever their nephew tells them to use, and that is always going to be the most feature-rich, fast, amazing browser that exists. Security is a very unsexy thing to people that like features, because it limits. So I think that's going to be a harder thing to make happen. I think the better approach here is making a very accessible and easy-to-use tool to deal with the risk of malicious third-party JavaScript, and make it as accessible to people as possible.

And then also creating some really neat governance around it, so that if, say, you're a small-medium business and a consultant came in and helped you with some marketing and added a bunch of scripts to your website, at least there is a log of it and you're keeping track of it over time, or there's a tool keeping track of it over time. I think that is the better way of doing it. I would love for browsers to be more capable of detecting malicious acts, but the reality is that browsers are meant to be feature-rich. People want to add more functionality to them, and any new functionality you add can be exploited for doing something bad. So it would be very difficult for a browser company, a company that makes browsers, to flag everything all the time.

Yeah. But a lot of what you're describing there to me sounds like effectively a browser plus an MDM solution. Right? It's like kind of a managed browser, where it has the ability to audit log all of the changes to the browser environment, to the plugins that the browser might have, to extensions that are added or removed or updated over time. And I've got, you know, kind of effectively, like, the EDR of my browser environment, right, where I've got some central point and an audit trail of all these changes, and I can look for potentially malicious changes to the browser of user Simon or user Jeremy or whatever the case may be.

Right? Well, so we try to position ourselves towards people that build websites or maintain the websites. So it's a different angle of users. We do Yeah. Like we're open to partnerships with companies that want to use some of our detection capabilities for their enterprise focused products.

But the majority of the traffic around the world that goes to websites is not coming from, like, company laptops. We as users of the internet in general need to be protected. And so what these third-party scripts are often on the lookout for are user credentials, yeah, credit card information, mining crypto on your 10-year-old MacBook. All sorts of stuff can happen there.

Right? And it would be, I think, the wrong approach to make this an enterprise-only accessible solution. The better approach is to basically make for safer websites, for a safer web, and essentially take away excuses for people to have bad things happening to their users. And therefore, having more accessible security solutions is the right thing to do. Of course, we do have an enterprise tier that is the most feature-rich and that, like, enterprises will happily use because of compliance.

But I don't think that web security is an enterprise-only problem, and we therefore shouldn't paywall it as an enterprise-only solution. Fair enough. I'm just curious, like, as somebody who has spent a lot of time looking at this space and looking at, you know, CDN, looking at large-scale Internet, you know, DDoS protection and whatnot, are there other systems, like, let's say, client-side security, that you think are kind of fundamentally broken? Like, I hear a lot about people increasingly arguing that DNS is kind of fundamentally flawed and/or has evolved issues over time. I'm just curious to get your... Are there other things that you see that we kind of take for granted every day that you're like, oh, yeah?

Actually, that really has big vulnerabilities, or that really is due for an update and some changes. Well, so at c/side, we go all the way down the, like, OSI stack, and we find that there is a lot of things broken at the most essential layer, and that's even the IP layer. We spoke about the domain ownership change, but the reality is that people can hijack a /24 if it's not protected by something called RPKI. That being said, a router can basically announce that those IPs are theirs, and the other routers around it would start trusting it. Well, that could also cause a major incident.

So the Internet is kind of an amazing thing, and the fact that it still works today is pretty great. But there are major gaps. And so I would say, if we look at layer 3, more and more ISPs, or people that own IP ranges, should be adopting RPKI. That's a very basic thing that's been around for a long time. They should.

When we talk about DNS, yes, DNSSEC has been very poorly adopted, because it's also not a great thing. And there's a lot of things that can happen at the DNS level. For instance, if you go into a Starbucks and you connect to their free hotspot, at the DNS resolver level on that, like, local network, you can actually just hijack the DNS request. And there you go. DNSSEC would not even stop that, because most websites don't have it enabled.

So there's a lot of areas where it's broken. The great thing is that there are solutions out there to this that, I would say, more security-conscious companies are pushing. So for instance, if you buy a MacBook nowadays, or an iPhone, and you get any level of iCloud premium, it starts at 99 US cents a month, you get this thing called Private Relay, which essentially proxies your traffic through an edge location, and the keys are split, so neither one of them actually has access to seeing your data. And as a result, that is fully encrypted.

They also use DNS over HTTPS as a result. DNS over HTTPS is way safer; it is a little bit slower than standard DNS. So there's a lot of things happening now within the space. The problem is not solutions. The problem is the adoption.
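As a small illustration of what DNS over HTTPS looks like in practice, here is a sketch that builds a query against Cloudflare's public JSON resolver endpoint. Because the lookup rides inside an ordinary HTTPS request, a hotspot's local resolver can't observe or rewrite it. The endpoint is Cloudflare's documented DoH JSON API; the function name is made up for this sketch.

```javascript
// Build a DNS-over-HTTPS (JSON API) query URL for Cloudflare's 1.1.1.1
// resolver. A client fetches this with the header: Accept: application/dns-json
function dohQueryUrl(name, type = "A") {
  const params = new URLSearchParams({ name, type });
  return `https://cloudflare-dns.com/dns-query?${params.toString()}`;
}

console.log(dohQueryUrl("example.com"));
// https://cloudflare-dns.com/dns-query?name=example.com&type=A
```

The response is JSON with the resolved records, delivered over TLS end to end, which is the property a plaintext UDP lookup on a shared network can't offer.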

And we have to make sure that when great things like that come to market, there's adoption for them. If these are well-intended projects and they, for some reason, are lowly adopted, we have to very quickly understand why that is and then come up with a fix. DNSSEC has been broken for many years. There should be a new specification for it that makes it easier for people to adopt. So, yeah, that's currently how I think about that.

I would say there's a lot of things broken. You could talk about web application firewalls as well. The fact that they are still very much just looking at network packets, and not taking into account any of the behaviors that led to a request. That's a big issue that affects API security. Apparently, you really have to sell it as a firewall looking at network packets, and it's way harder to get people to adopt a client-side package that basically gives more capabilities to API security to make sure that there's nothing bad happening.

There's a lot of things within the space I would like to change. But unfortunately, the gap between security teams and developers is still quite big. And I understand why that is. We have to make better security products that are more accessible and do not create a hassle for engineers. Awesome.

Well, I think that's a great note to close today's conversation out on. Simon, for anyone who wants to learn more about you guys, your projects, what you're working on, what's the best place for them to check out? The website, cside.dev. We've got a blog where we post multiple things a week. We do podcast recordings like these.

There's a contact form, so feel free to reach out. And we've also got our free tier, so feel free to use our product. Awesome. Alright.

Well, Simon Wijckmans, thank you so much for taking the time to join us today on Modern Cyber. And for our audience, we are probably going to be going on a brief hiatus towards the end of this year. We've got a backlog of episodes recorded. So if you do see a little bit of a slowdown in the release schedule, know that that is the reason why. And we should be coming back to you in a couple of months time with more from Modern Cyber.

Thank you so much. Bye bye.
