API Security in Serverless Computing: Risks and Mitigations

Serverless computing brings both benefits and risks to API security. With a solid grasp of the relevant best practices, however, we can reap the benefits while mitigating the threats.


Let's take a closer look at what serverless computing actually is, and what it promises to do. We’ll explore some specific benefits and drawbacks, as well as some best practices for implementing serverless at scale. We’ll also examine how serverless functions most commonly get deployed - to power API services - and the security risks associated with this.

What is Serverless Computing?

Serverless computing is a cloud computing paradigm in which services are provided on an as-needed basis according to a set of prescribed conditions or events. In essence, when a user issues a request, the cloud platform spins up and provides resources which exist for the duration of servicing that request. When the request is fulfilled, the resources are spun down and terminated, freeing up resources for other tasks.

The use of the term “serverless” is a bit of a misnomer in the sense that servers are very much still in play – instead, serverless means that the end user no longer has to be concerned with resource and server management, topics which the provider will handle. In effect, the experience for the end user becomes “serverless” despite servers still being utilized on the provider’s end. In fact, most “serverless” paradigms use a containerized compute environment to spin up underlying required operating system and server frameworks to then execute the code provided for the serverless function.

There are quite a few serverless providers currently in the market.

AWS Lambda is a great example, allowing developers to run code in response to an event utilizing resources hosted on the AWS platform. This decouples the business logic from the technical backing, allowing businesses to offer services without owning resources on-site or provisioning servers ahead of time.
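To make this concrete, below is a minimal sketch of the kind of function Lambda runs, written in Python against the standard Lambda handler convention. The API Gateway proxy event shape is assumed, and the query parameter is purely illustrative:

```python
import json

def lambda_handler(event, context):
    # Lambda invokes this entry point once per event; there is no server
    # process for the developer to provision or manage.
    # With an API Gateway proxy integration, HTTP details arrive in `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```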

Another strong example is Google Cloud Functions. This is a serverless execution platform that operates very similarly to AWS Lambda, with a slightly different pricing model and free tier limit.

Other providers in this space include Microsoft Azure Functions, OpenFaaS (open function-as-a-service), and OpenWhisk, with each offering features that may be more appropriate for certain kinds of environments. For instance, OpenWhisk is open-source and distributed, but comes with a very different cost model and approach to resource centralization.

Benefits of Serverless Computing

Serverless boasts quite a few benefits, so it’s no surprise that it has seen an explosion in adoption in recent years.

[Figure: growth of serverless cloud services across different categories.]

First and foremost, serverless is highly efficient. It offers low latency because resourcing is dispersed and decentralized, allowing local resources to serve local requests. Resources can be allocated separately by function and need, according to factors including request complexity, revenue share, performance targets, and even the end user’s model of engagement.

Serverless also has some major cost benefits. It’s expensive to have in-house servers, and even external server farms can cost a pretty penny at scale. Even virtual machines on cloud platforms can become costly if they are running continuously. In fact, if cloud virtual machines are running on allocated compute instances, but not being utilized at an efficient rate, there is a lot of waste. Serverless allows you to spin up only what you need, and often operates on a pay-as-you-go model, allowing costs to be lower and aligned against actual use as opposed to projected demand.
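As a rough illustration of the pay-as-you-go argument, here is a back-of-envelope comparison in Python. Every price and workload figure below is an assumption chosen for the example, not a quote from any provider:

```python
# Back-of-envelope: always-on VM vs. pay-per-use functions.
VM_MONTHLY_COST = 70.00             # assumed small always-on instance, USD/month
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed, USD
PRICE_PER_GB_SECOND = 0.0000167     # assumed, USD

requests_per_month = 2_000_000
avg_duration_s = 0.2                # average function run time
memory_gb = 0.5                     # memory allocated per function

gb_seconds = requests_per_month * avg_duration_s * memory_gb
serverless_cost = (requests_per_month / 1_000_000) * PRICE_PER_MILLION_REQUESTS \
                  + gb_seconds * PRICE_PER_GB_SECOND

print(f"VM: ${VM_MONTHLY_COST:.2f}/mo vs serverless: ${serverless_cost:.2f}/mo")
```

Under these assumptions the functions cost roughly $3.74 a month; the gap narrows as utilization approaches 100%, which is exactly the trade-off described above.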

The very nature of being serverless lends itself to being scalable, extensible, and flexible. Because resources are elastic to demand, this also means experimental features and modular functions can be tested with limited risk, easing the impact of iteration on the bottom line and making new development easier to deploy in context.

Finally, serverless is code-centric. Developers only have to focus on the code, leaving infrastructure management to the provider. This is a huge advantage, freeing up team resources for new development and iteration in a way that non-serverless approaches just can’t match.

Drawbacks of Serverless Computing

Of course, there are potential downsides as well. The biggest of these is that serverless systems experience cold starts. A cold start happens when no warm instance of a function exists, so the platform must initialize a fresh execution environment from scratch before the request can be served – this takes time, and can be expensive. When you are constantly “shifting gears up and down”, this can cause inefficiencies at scale in both cost and efficacy.
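One common way to soften cold starts is to hoist expensive initialization out of the handler body so that warm invocations reuse it. A minimal sketch, assuming Python on AWS Lambda with boto3; the DynamoDB table and key names are hypothetical:

```python
import boto3

# Runs once per execution environment, at cold start. Warm invocations
# reuse these objects, so keeping expensive setup out of the handler
# body shrinks per-request latency.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def lambda_handler(event, context):
    # Only per-request work happens here; the connection above is reused.
    response = table.get_item(Key={"order_id": event["order_id"]})
    return response.get("Item", {})
```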

There’s also a significant visibility impact with serverless systems. Because servers are turning on and off, monitoring and logging these distributed systems can become troublesome. You can resolve a lot of this with third-party integrations, but it’s a clear concern that should be factored into adoption. This is doubly true with the lack of a central authority – the shared responsibility model could introduce ambiguity.

Some of the compute cost savings can also be lost to network costs. Because you are spinning up new servers and sending requests, you don’t have the benefit of essentially “free” in-house networking (minus the cost of electricity and maintenance, of course) – if you own your server and network, data is cheap to push around, but if you rent out and have to pay for each new service, request or data transfer, this can get expensive.

Finally, there’s also the obvious fact that this is an entirely new compute paradigm. Serverless lacks much of the common tooling that other, more mature solutions enjoy – endpoint agents and APM solutions are good examples – and at every level, maturity is a core concern. This will change with time, but as of 2023, it’s still a consideration.

Security Implications

Serverless computing brings with it a variety of unique security risks.

First, the loss of centralized control can lead to serious ambiguity throughout an API ecosystem as to who owns and controls what. This lack of authority can be mitigated through domains of ownership, federation, and so on, but that ultimately begins to undermine some of the value of serverless implementations. One best practice is to ensure that serverless functions are deployed in the right locations, perhaps in dedicated workload-based accounts, and tagged with clear, descriptive labels. Effective security often benefits from a clear central authority, and serverless complicates exactly that.
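As a sketch of what descriptive labeling can look like in practice, the snippet below tags a function with ownership metadata via boto3’s tag_resource call. The ARN, tag keys, and values are all hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach ownership metadata that security tooling and auditors can
# query later; the ARN and tag scheme here are illustrative only.
lambda_client.tag_resource(
    Resource="arn:aws:lambda:us-east-1:123456789012:function:orders-api",
    Tags={
        "owner": "payments-team",
        "domain": "orders",
        "data-classification": "internal",
    },
)
```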

There is also the very real changing nature of the attack surface area that comes with the adoption of any distributed system. With more servers spinning up and down, the attack surface shifts dramatically, and the additional ambiguity can often lead to a state where you’re not aware of your own exposure.

In serverless, small issues also become larger. Security misconfigurations, network faults, and the like, which are typically simpler to resolve in non-distributed systems, become more significant and damaging in serverless environments. Consider the principle of least privilege: the idea that a system should have only the minimal knowledge and abilities needed to do what it was designed to do. In a serverless environment, applying it leads to an explosion of smaller components, each with tightly limited access and function. The result can be a less efficient system that nonetheless has a drastically expanded attack surface in a reduced-visibility environment.
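To ground the least-privilege point, here is a hedged sketch of a narrowly scoped IAM policy created through boto3: the function may read one table and write its own logs, and nothing else. All ARNs and the policy name are placeholders:

```python
import json
import boto3

# One function, one narrowly scoped policy: read a single table, write
# to its own log group, nothing else. ARNs below are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        },
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:"
                        "log-group:/aws/lambda/orders-api:*",
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="orders-api-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```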

Serverless also requires specialized skills above and beyond the typical API development skill set. These systems can be quite complex, and to pull it off effectively, you need to know what you’re doing and have the right tools on hand.

Finally, serverless makes for complex network topology. You can put the API behind a gateway or expose it directly; you can use a WAF, or not; you can use a load balancer, or not. This variability in network topology leads to highly inconsistent implementation designs.

Serverless and API Considerations

Serverless has become a very popular option for API developers, as it moves allocation and provisioning costs away from the developer and into the cloud. This ultimately results in a computational environment in which developers can just code and not worry about everything else. It’s a pretty enticing solution – but it does come with its own set of security considerations.

Firstly, serverless abstracts many security practices away from the developer. In a traditional environment, a developer has granular control over each resource, so security can be straightforward; in a serverless environment, security must be centered on each function. Since these functions are how serverless interacts with the service, they should be treated like any other piece of infrastructure.
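As one sketch of function-centered security, the handler below validates the caller’s bearer token itself rather than assuming a perimeter already did. It uses the PyJWT library; the signing key, audience, and event shape are illustrative assumptions:

```python
import jwt  # PyJWT

# Placeholder: in practice this would be the issuer's public signing key.
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n..."

def lambda_handler(event, context):
    # Each function verifies the caller itself, so security is centered
    # on the function rather than on a shared perimeter.
    auth_header = event.get("headers", {}).get("authorization", "")
    token = auth_header.removeprefix("Bearer ")
    try:
        claims = jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"],
                            audience="orders-api")  # assumed audience
    except jwt.InvalidTokenError:
        return {"statusCode": 401, "body": "unauthorized"}
    # ...authorized business logic, scoped to `claims`, goes here...
    return {"statusCode": 200, "body": f"hello {claims['sub']}"}
```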

The fact that these pieces are not permanent can create security concerns of their own. Cold starts, for instance, can delay authentication if improperly configured, and timeouts for functions that require an instant response can force a choice: trade some security for speed, or keep the service alive indefinitely – which is just server-full with extra steps.

Monitoring and logging can also be more complicated in this environment. Since serverless spins services up and down at a massive scale, there’s much more to document, and many more points where you can get it wrong. Auditing thus becomes even more important than in non-serverless environments – and it’s already pretty important in those!
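One way to keep those distributed logs auditable is to emit a single structured record per invocation that a log aggregator can correlate later. A minimal sketch, assuming Python on AWS Lambda and an API Gateway-style event; the field choices are illustrative:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # One structured record per invocation lets an aggregator reassemble
    # a coherent audit trail across many short-lived functions.
    logger.info(json.dumps({
        "request_id": context.aws_request_id,   # correlates invocations
        "function": context.function_name,
        "path": event.get("path"),
        "source_ip": event.get("requestContext", {})
                          .get("identity", {}).get("sourceIp"),
    }))
    return {"statusCode": 200, "body": "ok"}
```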

Finally, you should consider how much you actually trust your serverless provider. In server-full environments, you are asked to trust traditional infrastructure providers. This is typically not too difficult, but in serverless, you may be asked to trust startups and new entrants who may not have the proven experience or track record of larger infrastructure players. That’s not to say any of these companies are bad – it is to say that you need to do much more due diligence when researching providers and partners. Adopting a serverless provider is essentially hiring a management service for your infrastructural needs, and you should vet these partners as you would any other business partner with this level of integration into your systems and logic.

Best Practices for Mitigating API Security Risk in Serverless

All of that said, serverless is quickly becoming a default solution for services whose resource demands are high but far from static. Accordingly, adhering to a few best practices can help you get the most out of this paradigm.

  • Leverage Gateways – this can simplify some of the complexity of distributed systems by making everything happen at the gateway level. When everything collects at a single point, complexity becomes easier to manage.
  • Establish and Maintain Domains – make sure your serverless APIs are doing only what they should be doing within a set domain. Letting functions sprawl beyond their domain might save time in the short run and satisfy some design desires, but at scale this added complexity means you’re halfway serverless – with none of the benefits and all of the risks.
  • Create a Culture of Security – this requires thinking about how everything connects and proactively designing your security posture and systems to prevent common issues. Adopting a culture of security at every level will pay huge dividends in any environment, but this is especially true in serverless.
  • Centralize Authority Where Needed – serverless can give you huge benefits without requiring that you make every part of your system serverless. A central authority for authentication and authorization can absolutely coexist with serverless, and is a good design model to utilize.
  • Leverage Heuristics – use good heuristics to find broken implementations, inefficient usage, and other issues. Solving these issues at the base level keeps them from compounding the inherent complexities of the serverless paradigm.
  • Establish Controls – runaway costs from DDoS and similar attacks can be mitigated with proper planning and controls. Establish effective controls based on real usage scenarios and data, such as the concurrency cap sketched just after this list.
  • Adopt Trusted Solutions – work with only trusted partners or solutions, but assume zero-trust in all interactions. This will give you a platform you can trust!
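To illustrate the controls point above, here is a sketch of one simple control on AWS: capping a function’s concurrency so that a flood of requests hits a throttle rather than an unbounded bill. The function name and limit are illustrative:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap how many copies of this function can run at once; a request flood
# is then throttled instead of scaling the bill without bound.
lambda_client.put_function_concurrency(
    FunctionName="orders-api",            # hypothetical function
    ReservedConcurrentExecutions=50,      # assumed safe ceiling
)
```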

FireTail and Serverless

One highly effective option for security in a serverless environment is FireTail. FireTail offers incredibly powerful security features that are elevated beyond traditional WAFs or other solutions. Through the use of application layer visibility and real-time inline inspection, FireTail delivers security at every point in a call’s lifecycle.

Because serverless is so distributed, a strong solution that centralizes security while providing a strong and robust system for context is needed. FireTail provides this in spades through a variety of modules and features:

  • Alerting and Monitoring – FireTail allows you to set specific conditions, metrics, and thresholds, generating alerts precisely when, where, and how you need them. This provides for a much stronger security posture that does not contribute to fatigue or get lost in the complexity inherent in serverless.
  • Identifying and Eliminating Design Flaws – Because most serverless platforms are code-centric, the code for APIs destined for serverless deployment is normally available for retrieval and inspection. This allows FireTail’s API design flaw scanner to generate Findings that flag security design flaws in the APIs. These can be leveraged in a DevSecOps approach, either eliminating flaws before they reach production or providing enforcement and detection at runtime.
  • Full Visibility – FireTail’s API Security Posture Management allows you to get a comprehensive look at your API posture, maturing your security program and identifying potential issues.
  • Audit Trail – the Audit Trail feature centralizes logging, allowing you to create a single source of truth for auditing API traffic. This is huge, as it enables serverless adoption in a variety of situations and regulatory structures that would otherwise not permit it.
  • Agentless Integration – this feature allows you to discover and log API activity behind AWS’s (or other cloud providers’) native API gateway, which simplifies and centralizes logging in a serverless environment at the service level rather than the agent level.

There are so many more features on offer with FireTail, but it should be obvious from this small sample that it is an incredibly powerful tool for maintaining the security of APIs on serverless systems.

Conclusion

Serverless has some huge potential upsides, but to get those benefits without any of the drawbacks requires some clear forethought and planning. Utilizing a trusted solution like FireTail can help remove some of the complexity and fog around serverless implementations, aiding in effective deployment and management at scale.