The 12 Most Critical Risks for Serverless Applications
Here's what you should know about the security risks that you run in 2019 with serverless applications.
In January 2018, PureSec released the world’s first Serverless Security Top 10 Risks Guide, which was well-received by key players in the serverless industry and was covered by top news outlets. The report was based on preliminary data and feedback from serverless evangelists and thought leaders from leading companies.
Since this initial effort in 2018, serverless adoption has seen tremendous growth, providing access to more data regarding the ways organizations harness serverless, their approach to serverless development, and the most common recurring mistakes related to security and privacy of serverless applications.
In addition, over the past year, new mitigation approaches have surfaced and become standardized, such as serverless security platforms, as well as new features offered by cloud providers that can help improve the serverless security posture.
In February 2019, the Cloud Security Alliance (CSA) collaborated with PureSec to release a new guide titled "The 12 Most Critical Risks for Serverless Applications," which includes additional input and feedback from several dozen serverless industry thought leaders and is the most comprehensive effort to date to classify the potential risks for applications built on serverless architectures. The report was written for both security and development audiences dealing with serverless applications, and it goes well beyond pointing out the risks: it provides mitigations, best practices, and a comparison between traditional applications and their serverless counterparts.
The 12 risks identified in the document are listed below, with a short description of each:
1: Function Event-Data Injection
Serverless functions can consume input from each type of event source (AWS offers 47 different event sources that can trigger an AWS Lambda function), and such event input might include different message formats (depending on the type of event and its source). Various parts of these event messages can contain attacker-controlled or otherwise dangerous inputs. This rich set of event sources increases the potential attack surface and introduces complexities when attempting to protect serverless functions against event-data injections. This is especially true because serverless architectures are not nearly as well-understood as web environments, where developers know which message parts shouldn’t be trusted (e.g., GET/POST parameters, HTTP headers, etc.). More information on event-data injection can be found in PureSec's blog, as well as in a post by Jeremy Daly.
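As a minimal illustration of the point above, the sketch below shows a hypothetical Lambda handler for an S3 event that validates the attacker-influenceable object key before using it. The event shape follows the standard S3 notification format; the allow-list pattern is an illustrative assumption, not a universal rule.

```python
import re

# Allow only expected object keys (word characters, dots, slashes, dashes).
# This pattern is an example assumption -- tailor it to your own key scheme.
SAFE_KEY = re.compile(r"^[\w./-]+$")

def handler(event, context=None):
    """Hypothetical AWS Lambda handler for an S3 event."""
    for record in event.get("Records", []):
        key = record.get("s3", {}).get("object", {}).get("key", "")
        if not SAFE_KEY.match(key):
            # Reject attacker-controlled input instead of passing it on
            # to shells, queries, or file-system APIs.
            raise ValueError(f"rejected suspicious object key: {key!r}")
    return {"status": "ok"}
```

The same idea applies to every other event source: identify which fields a caller can influence, and validate them before they reach an interpreter.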
2: Broken Authentication
Since serverless architectures promote a microservices-oriented system design, applications built for such architectures may contain dozens (or even hundreds) of distinct serverless functions, each with a specific purpose. These functions are weaved together and orchestrated to form the overall system logic. Some serverless functions may expose public web APIs, while others may serve as an “internal glue” between processes or other functions. Additionally, some functions may consume events of different source types, such as cloud storage events, NoSQL database events, IoT device telemetry signals, or even SMS notifications. Applying robust authentication schemes—which provide access control and protection to all relevant functions, event types, and triggers—is a complex undertaking, and can easily go awry if not executed carefully.
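One concrete habit that helps here is having every function, not just the public-facing ones, verify the identity attached to the request. The sketch below assumes an upstream authorizer (e.g., a Cognito or custom Lambda authorizer on API Gateway) has attached claims to the event; the `custom:role` claim name is an illustrative assumption.

```python
def require_role(event, allowed_roles):
    """Hypothetical guard: read the claims an upstream authorizer
    attached to the event, and refuse to run for unauthorized callers."""
    claims = (
        event.get("requestContext", {})
             .get("authorizer", {})
             .get("claims", {})
    )
    role = claims.get("custom:role")
    if role not in allowed_roles:
        raise PermissionError("caller is not authorized for this function")
    return role
```

Because "internal glue" functions can also be triggered directly, they should run the same check rather than assume only trusted peers invoke them.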
3: Insecure Serverless Deployment Configuration
Cloud services in general—and serverless architectures in particular—offer many customization options and configuration settings to adapt for specific needs, tasks or surrounding environments. Certain configuration parameters have critical implications for overall security postures of applications and should be given attention, and settings provided by serverless architecture vendors may not be suitable for a developer’s needs. Misconfigured authentication/authorization is a widespread weakness affecting applications that use cloud-based storage. Since one of the recommended best practice designs for serverless architectures is to make functions stateless, many applications built for serverless architectures rely on cloud storage infrastructure to store and persist data between executions.
4: Overprivileged Function Permissions and Roles
A serverless function should have only the privileges essential to performing its intended logic—a principle known as "least privilege." Since serverless functions usually follow microservices concepts, many serverless applications contain dozens, hundreds, or even thousands of functions. As a result, managing function permissions and roles quickly becomes a tedious task. In such scenarios, organizations may be forced to use a single permission model or security role for all functions—essentially granting each function full access to all other system components. When all functions share the same set of overprivileged permissions, a vulnerability in a single function can eventually escalate into a system-wide security catastrophe.
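To make the contrast concrete, here is a hypothetical least-privilege IAM policy (expressed as a Python dict; the table ARN and account id are placeholders) next to a tiny lint that flags the overprivileged wildcard pattern the text warns about:

```python
# Hypothetical policy for a function that only reads one DynamoDB table.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

def has_wildcards(policy):
    """Flag statements whose Action or Resource is '*', i.e. the
    'one role fits all functions' anti-pattern."""
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            return True
    return False
```

A check like this can run in CI so that an overly broad role never reaches deployment unnoticed.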
5: Inadequate Function Monitoring and Logging
One of the key aspects of serverless architectures is the fact that they reside in a cloud environment, outside of the organizational data center perimeter. As such, “on-premise,” or host-based, security controls become irrelevant as a viable protection solution. This, in turn, means that any processes, tools, and procedures developed for security event monitoring and logging become obsolete. While many serverless architecture vendors provide extremely capable logging facilities, these logs, in their basic out-of-the-box configuration, are not always suitable for providing a full security event audit trail. To achieve adequate real-time security event monitoring with a proper audit trail, serverless developers and their DevOps teams must stitch together logging logic that fits their organizational needs: for example, collecting real-time logs from different serverless functions and cloud services, then pushing them to a remote security information and event management (SIEM) system—which often requires first storing the logs in an intermediary cloud storage service.
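A common starting point for that stitched-together logic is emitting one structured JSON line per security-relevant event; in AWS Lambda, anything printed to stdout lands in CloudWatch Logs, from which a SIEM can ingest it. The field names below are an illustrative convention, not a standard:

```python
import json
import time

def log_security_event(event_type, detail, function_name="my-function"):
    """Emit one machine-parseable JSON line per security-relevant event.
    The schema (ts/function/type/detail) is an example convention."""
    record = {
        "ts": time.time(),
        "function": function_name,
        "type": event_type,
        "detail": detail,
    }
    print(json.dumps(record))  # stdout -> CloudWatch Logs in AWS Lambda
    return record
```

Consistent, structured records make the downstream SIEM correlation the text describes far easier than free-form log lines do.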
6: Insecure Third-Party Dependencies
Generally, a serverless function should be a small piece of code that performs a single discrete task. Functions often depend on third-party software packages, open-source libraries and even the consumption of third-party remote web services through API calls to perform tasks. However, even the most secure serverless function can become vulnerable when importing code from a vulnerable third-party dependency.
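A first line of defense is knowing exactly which third-party code each function ships: pin dependencies to exact versions and audit them with a scanner (e.g., pip-audit or npm audit). The small sketch below flags unpinned lines in a Python `requirements.txt`, as a stand-in for that discipline:

```python
def unpinned(requirements_text):
    """Return requirement lines that are not pinned to an exact
    version with '=='; unpinned dependencies can silently pull in
    new (and possibly vulnerable) code at deploy time."""
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            bad.append(line)
    return bad
```

Pinning alone does not make a dependency safe, but it makes auditing and reproducing a function's exact supply chain possible.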
7: Insecure Application Secrets Storage
One of the most frequently recurring mistakes related to application secrets storage is to simply store these secrets in a plain-text configuration file that is part of the software project. In such cases, any user with “read” permissions on the project can gain access to these secrets. The situation gets much worse if the project is stored in a public repository. Another common mistake is to store these secrets in plain text as environment variables. While environment variables are a useful way to persist data across serverless function executions, in some cases such environment variables can leak and reach the wrong hands.
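Catching the first mistake early is possible with a naive source scan; real scanners (git-secrets, truffleHog, and the like) are far more thorough, so the patterns below are only a sketch:

```python
import re

# Naive example patterns for obviously hardcoded credentials.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key id format
    re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]

def find_hardcoded_secrets(source):
    """Return suspicious-looking strings found in source code;
    a non-empty result should fail the build."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits
```

The safer pattern is to store only a secret's identifier in configuration and fetch the value at runtime from a managed service such as AWS Secrets Manager or SSM Parameter Store.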
8: Denial of Service and Financial Resource Exhaustion
While serverless architectures bring promises of automated scalability and high availability, they also come with limitations and issues that require attention. If an application was not designed to handle concurrent executions properly, an attacker may eventually push the application to its concurrency limits and deny service to other users of the system or the cloud account.
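Mitigations usually combine platform controls (e.g., per-function reserved concurrency and API-gateway throttling) with application-level rate limiting. As an illustration of the latter, here is a minimal token-bucket limiter; in a real deployment the bucket state would live in a shared store such as Redis or DynamoDB rather than in-process:

```python
import time

class TokenBucket:
    """Minimal per-caller rate limiter: `capacity` burst tokens,
    refilled at `refill_per_sec` tokens per second."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Throttling malicious callers early keeps them from exhausting the concurrency pool—and the billing budget—shared by legitimate users.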
9: Serverless Business Logic Manipulation
Business logic manipulation may help attackers subvert application logic. Using this technique, attackers may bypass access controls, elevate user privileges or mount a DoS attack. Business logic manipulation is a common problem in many types of software and serverless architectures. However, serverless applications are unique, as they often follow the microservices design paradigm and contain many discrete functions. These functions are chained together in a specific order, which implements the overall application logic.
In a system where multiple functions exist — and each function may invoke another function — the order of invocation may be critical for achieving the desired logic. Moreover, the design might assume that certain functions are only invoked under specific scenarios and only by authorized invokers.
Business logic manipulation in serverless applications may also occur within a single function, where an attacker might exploit bad design or inject malicious code during the execution of a function, for example, by exploiting functions which load data from untrusted sources or compromised cloud resources.
Another relevant scenario, in which the multiple-function invocation process may become a target for attackers, is serverless-based state machines, such as those offered by AWS Step Functions, Azure Logic Apps, Azure Durable Functions, or IBM Cloud Functions sequences.
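When a managed state machine is not enforcing the ordering for you, each function can at least verify that the expected previous step really ran, rather than trusting the caller-supplied order. A minimal sketch using an HMAC-signed step token follows; the shared key is a placeholder and would come from a secrets manager in practice:

```python
import hashlib
import hmac

# Placeholder key for the sketch only -- never hardcode real keys.
KEY = b"demo-only-key"

def sign_step(step_name):
    """Issue a token proving that `step_name` completed."""
    return hmac.new(KEY, step_name.encode(), hashlib.sha256).hexdigest()

def verify_step(step_name, token):
    """A downstream function checks the token before running, so an
    attacker cannot invoke it out of order with a forged claim."""
    return hmac.compare_digest(sign_step(step_name), token)
```

This does not replace proper IAM-level invocation restrictions; it adds an application-level check on top of them.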
10: Improper Exception Handling and Verbose Error Messages
Line-by-line debugging options for serverless-based applications are limited (and more complex) when compared to debugging capabilities for standard applications. This is especially true when serverless functions utilize cloud-based services that are not available when debugging the code locally. As a result, developers frequently adopt verbose error messages, enable debugging environment variables, and eventually forget to clean up the code when moving it to the production environment.
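The standard remedy is to log the full exception server-side while returning only a generic message plus a correlation id to the client. A hedged sketch (the `do_work` business logic is a placeholder):

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger(__name__)

def do_work(event):
    # Placeholder business logic that fails on bad input.
    return {"result": 100 // event["divisor"]}

def handler(event, context=None):
    """Log full details internally; expose only a generic error
    message and a correlation id to the caller."""
    try:
        return {"statusCode": 200, "body": json.dumps(do_work(event))}
    except Exception:
        error_id = str(uuid.uuid4())
        log.exception("unhandled error (id=%s)", error_id)  # full traceback -> logs
        return {
            "statusCode": 500,
            "body": json.dumps({"message": "internal error", "id": error_id}),
        }
```

The correlation id lets support staff find the detailed stack trace in the logs without ever leaking it to an attacker.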
11: Legacy/Unused Functions and Cloud Resources
Similar to other types of modern software applications, over time some serverless functions and related cloud resources might become obsolete and should be decommissioned. Obsolete application components should be pruned periodically, both to reduce unnecessary costs and to reduce the avoidable attack surface. Obsolete serverless application components may include:
Deprecated serverless function versions
Serverless functions that are no longer relevant
Unused cloud resources (e.g. storage buckets, databases, message queues, etc.)
Unnecessary serverless event source triggers
Unused users, roles or identities
Unused software dependencies
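Finding such candidates can be automated. The helper below assumes version metadata in the shape returned by boto3's `lambda` client (`list_versions_by_function()`), where each entry carries a `Version` and an ISO-8601 `LastModified`; the 90-day threshold is an arbitrary example:

```python
from datetime import datetime, timedelta, timezone

def stale_versions(versions, max_age_days=90, now=None):
    """Return old, non-$LATEST function versions that are candidates
    for decommissioning. `versions` mirrors the boto3
    list_versions_by_function() entries (Version, LastModified)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    candidates = []
    for v in versions:
        if v["Version"] == "$LATEST":
            continue  # the live version is never a removal candidate
        modified = datetime.fromisoformat(v["LastModified"])
        if modified < cutoff:
            candidates.append(v["Version"])
    return candidates
```

Feeding real API output through a report like this, on a schedule, turns pruning from a forgotten chore into a routine.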
12: Cross-Execution Data Persistency
Serverless platforms offer application developers local disk storage, environment variables, and memory (RAM) in order to perform compute tasks in a similar fashion to any modern software stack.
In order to make serverless platforms efficient in handling new invocations and to avoid cold starts, cloud providers might reuse the execution environment (e.g., a container) for subsequent function invocations.
In a scenario where the serverless execution environment is reused for subsequent invocations, which may belong to different end users or session contexts, it is possible that sensitive data will be left behind and might be exposed.
Developers should always treat the execution environment of serverless functions as ephemeral and stateless, and they should not assume anything about the availability, integrity, and, most importantly, the disposal of locally stored data between invocations.
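The leak is easy to reproduce: anything at module scope survives warm-container reuse, exactly as the text describes. The contrived pair below shows the leaky pattern next to the safe one:

```python
# Module-level state survives across invocations when the provider
# reuses a warm container -- data from one caller can leak to the next.
request_count = 0

def leaky_handler(event, context=None):
    global request_count
    request_count += 1  # persists between invocations in a warm container
    return request_count

def clean_handler(event, context=None):
    # Keep all per-request state local; treat the environment as ephemeral.
    count = 1
    return count
```

The rule of thumb: local variables are per-invocation; globals, `/tmp`, and mutated environment variables are not, so never park one user's sensitive data there.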