A First Look Into Java's New Serialization Filtering
Serialization Filtering is the minimum that Oracle could provide in order to stop being blamed for not doing anything about the critical Deserialization attacks. It is a first step in the right direction but it does not completely solve the problem and is not suitable for enterprise production environments.
Oracle has released the latest Critical Patch Update (Java SE 8 CPU Update 121) that provides security bug fixes for a wide range of its products. More specifically, this massive CPU contains 270 new security fixes across many of its product families, including Java SE 6, 7, and 8. The CPU also introduces a new security feature in Java SE that performs Serialization Filtering.
Serialization Filtering Overview
The Serialization Filtering feature was introduced by Oracle in order to improve the security of Java's Serialization facility, which is inherently dangerous. It is very common for software engineers to misuse this facility and to design and implement applications that do not follow proven security guidelines. Oracle introduced this mechanism into the JVM to provide a means of adhering to some of these guidelines. If enabled and properly configured, it allows serialized data to be validated before deserialization is performed. This validation is performed by a new filter that is essentially based on a white/black listing approach.
This new mechanism supports 3 types of filters:
Global Filter (Also Known as Process-Wide Filter)
The Global Filter can be defined and configured by either a system or a security property. To use the Global Filter, no source code changes are required, as it applies to all ObjectInputStream instances in the JVM. Users can set and configure the Global Filter patterns using jdk.serialFilter, which can be set as either a system or a security property.
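As a minimal sketch of the Global Filter in action, the program below sets the jdk.serialFilter property before any deserialization happens and then attempts to deserialize an allowed and a disallowed class. It assumes a JDK 9+ runtime, where the filter machinery is public as java.io.ObjectInputFilter (on 8u121 the equivalent API lives under sun.misc); the Payload class is a hypothetical stand-in for an untrusted type.

```java
import java.io.*;
import java.util.Date;

public class GlobalFilterDemo {
    // Hypothetical stand-in for an untrusted, non-whitelisted type.
    static class Payload implements Serializable {}

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Whitelist java.util and java.lang, reject everything else.
        // Must be set before the first ObjectInputStream is used in this JVM,
        // because the process-wide filter is read once, on first use.
        System.setProperty("jdk.serialFilter", "java.util.*;java.lang.*;!*");

        // java.util.Date matches the whitelist and round-trips normally.
        System.out.println("Date allowed: " + (deserialize(serialize(new Date())) instanceof Date));

        // Payload matches only the terminal !* pattern and is rejected.
        try {
            deserialize(serialize(new Payload()));
            System.out.println("Payload allowed");
        } catch (InvalidClassException e) {
            System.out.println("Payload rejected");
        }
    }
}
```

Note that the filter rejects with an InvalidClassException at read time, before the object is ever instantiated.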
Specific Filters (Custom Filters)
For new projects and for active projects, developers can implement custom filters using the new ObjectInputFilter class. These filters are implemented in the source code of the application and override the behavior of the Global Filter, so they require development effort.
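A minimal sketch of such a stream-specific filter is shown below, assuming a JDK 9+ runtime where ObjectInputFilter is public in java.io and can be attached per stream with setObjectInputFilter (on 8u121 the API is under sun.misc). The whitelist logic and the Payload class are illustrative assumptions, not part of the JDK.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class CustomFilterDemo {
    // Hypothetical stand-in for an untrusted type.
    static class Payload implements Serializable {}

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes, ObjectInputFilter filter)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(filter); // applies to this stream only
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Whitelist filter: allow java.lang and java.util, reject everything else.
        ObjectInputFilter whitelist = info -> {
            Class<?> c = info.serialClass();
            if (c == null) return ObjectInputFilter.Status.UNDECIDED; // limit-only checks
            while (c.isArray()) c = c.getComponentType();             // arrays are filtered too
            if (c.isPrimitive()) return ObjectInputFilter.Status.UNDECIDED;
            String name = c.getName();
            return name.startsWith("java.lang.") || name.startsWith("java.util.")
                    ? ObjectInputFilter.Status.ALLOWED
                    : ObjectInputFilter.Status.REJECTED;
        };

        List<String> list = new ArrayList<>(List.of("a", "b"));
        System.out.println("list ok: " + deserialize(serialize(list), whitelist));

        try {
            deserialize(serialize(new Payload()), whitelist);
        } catch (InvalidClassException e) {
            System.out.println("Payload rejected");
        }
    }
}
```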
Built-in Filters
There are two built-in filters that are specific to the RMI Registry and the Distributed Garbage Collector (DGC). These built-in filters implement pre-configured whitelists of classes and limits that are typical for the RMI Registry and DGC use cases. The pre-configured filter pattern is the following:
java.rmi.server.ObjID; java.rmi.server.UID; java.rmi.dgc.VMID; java.rmi.dgc.Lease; maxdepth=5; maxarray=10000
Users are allowed to add additional filter patterns to the Built-in Filters using the sun.rmi.registry.registryFilter and sun.rmi.transport.dgcFilter system or security properties.
The values of the above filters are sequences of string patterns. There are 2 types of patterns:
Graph and Stream Limits
These limits constrain specific aspects of the Object Graph and the Stream to be deserialized. Currently, the following are constrained:
- the maximum depth of the object graph
- the maximum number of internal object references
- the maximum number of bytes in the input stream
- the maximum array length allowed
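The limits above can be expressed as a limits-only filter pattern and attached to a stream. The sketch below assumes a JDK 9+ runtime, where ObjectInputFilter.Config.createFilter parses a pattern string; the specific limit values are arbitrary choices for illustration.

```java
import java.io.*;

public class LimitsDemo {
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes, ObjectInputFilter filter)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(filter);
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Limits-only filter: no class patterns, just graph/stream constraints.
        ObjectInputFilter limits = ObjectInputFilter.Config.createFilter(
                "maxarray=100;maxdepth=5;maxbytes=100000");

        // An array within the limit deserializes normally.
        int[] small = (int[]) deserialize(serialize(new int[10]), limits);
        System.out.println("small array ok: " + small.length);

        // An array exceeding maxarray is rejected before allocation.
        try {
            deserialize(serialize(new int[1000]), limits);
        } catch (InvalidClassException e) {
            System.out.println("big array rejected");
        }
    }
}
```

Rejecting the oversized array before it is allocated is the point of these limits: they defend against resource-exhaustion payloads, not just malicious classes.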
Class and Package Names
These are patterns for classes or packages. A user can specify which classes or packages should be allowed or rejected during deserialization. The pattern syntax does not support regular expressions; it is limited to a list of strings that represent class or package names, along with wildcards that match packages, subpackages, or suffixes.
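To illustrate the syntax: patterns are separated by semicolons, a leading ! rejects the match, and * and ** act as wildcards. The package names below are hypothetical examples, and the annotations after // are explanatory only, not part of the syntax.

```
com.example.Invoice        // this exact class
com.example.*              // classes in com.example, but not its subpackages
com.example.**             // com.example and every subpackage beneath it
!org.example.Evil          // reject this class; everything else is undecided
com.example.**;!*          // allow the com.example tree, reject all other classes
maxdepth=20;maxrefs=500    // limits may be mixed with class patterns
```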
The Serialization Filters validate the input stream being deserialized and are called for each object in the stream. Note that filters are not applied on primitive values and on strings, which are allowed by default.
It is important to clarify that the Global Filter is disabled by default; security administrators must manually opt in and configure it. Finally, note that this feature is available only if the CPU is applied, and it is not known whether it will be backported to older versions of Java.
Serialization Filtering Evaluation
Deserialization in Java is one of the most dangerous facilities and so far Java provided no way of protecting against Deserialization attacks. Undeniably, the Serialization Filtering is a feature that was missing from the JVM. It is the first time that Oracle has acted to harden the JVM and provide some protection against Deserialization attacks. Despite the attack’s criticality, it took Sun/Oracle more than 20 years to take such action!
To configure Serialization Filtering, the application first needs to be fully profiled. Profiling an app can be a complex process that requires specialized tools and has to be performed by domain experts. Typically, the process requires the app to run normally for a period of time in order for all of its paths to be executed. A dynamic profiling tool can log the class names that are required for normal operation. This list of class names then becomes the basis for configuring the white/black list of the Serialization Filters. Even after going through this process, there is no guarantee that all of the execution paths were run and all the required class names were logged. Of course, the same process needs to be performed every time a new release goes into production, or even when a third-party library is upgraded. The lifecycle of this process becomes even more complex, since any change in the Serialization Filters must first go through QA and UAT before it reaches production.
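One way to perform the profiling pass described above is to install a permissive filter that records every class name seen during deserialization without rejecting anything. The sketch below assumes a JDK 9+ runtime with the public java.io.ObjectInputFilter API; the single Date round-trip stands in for a real application workload.

```java
import java.io.*;
import java.util.Date;
import java.util.Set;
import java.util.TreeSet;

public class ProfilingFilterDemo {
    public static void main(String[] args) throws Exception {
        Set<String> seen = new TreeSet<>();

        // Permissive process-wide filter: record every class, reject nothing.
        // UNDECIDED lets deserialization proceed as if no filter were present.
        ObjectInputFilter.Config.setSerialFilter(info -> {
            if (info.serialClass() != null) {
                seen.add(info.serialClass().getName());
            }
            return ObjectInputFilter.Status.UNDECIDED;
        });

        // Stand-in for normal application traffic: one serialized Date.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Date());
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();
        }

        // The recorded names would seed a whitelist for jdk.serialFilter.
        System.out.println("Classes seen during profiling: " + seen);
    }
}
```

The recorded set is only as complete as the execution paths exercised while the recorder was running, which is exactly the weakness described above.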
The Serialization Filtering mechanism follows a very similar approach to the Security Manager. The Security Manager also works based on a whitelist and suffers from the same scalability problems. Java’s Security Manager has proved to be unsuitable for enterprise, large-scale environments, given that it moves the responsibility of protecting the system to the user. The user is responsible for understanding the application’s security requirements and technical details and correctly configuring the security policy, which in essence is a whitelist of permissions. Such security policies are typically very complicated in enterprise applications that change frequently and integrate with numerous different systems and components. The operational cost of correctly configuring and maintaining such security policies is so high that the Security Manager is rarely deployed in production environments.
Not only has Oracle failed to learn from the mistakes and the complexity of the Security Manager policies, but the complexity of configuring the filters (security policies) for Serialization Filtering is now even greater. This is because users have to configure and maintain not just one policy but several: the policy of the Global Filter, the policy of each Specific Filter for every component, and tweaks to the Built-in Filters according to their specific requirements. It will not be clear, even to security experts, which of the several available filters needs to be configured for each use case, and great confusion will arise. To make things even worse, defining filter limits correctly requires knowledge of OOP as well as graph theory, skills that are very rarely found in security professionals. Given such complexity, it is very likely that the Serialization Filtering mechanism will share the fate of the Security Manager.
Apart from its complexity, its effectiveness is also doubtful. Due to the fact that Serialization Filtering allows users to define both blacklists and whitelists, it is up to the users to decide what kind of policy will be defined. If a user decides to blacklist dangerous classes, such as the InvokerTransformer, then such an approach can protect only against known attacks but cannot protect against unknown, zero-day attacks. Security teams need to update and maintain the blacklist every time a new deserialization gadget is found. Also, if the application’s normal functionality depends on such classes, then it is not possible to blacklist them, opening a big backdoor to the system. Therefore, a blacklist approach requires constant maintenance and it can never provide complete protection.
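The blacklist weakness is easy to demonstrate. In the sketch below, a filter rejects only the known InvokerTransformer gadget by name, so any class that is not explicitly listed sails through, including a hypothetical UnknownGadget standing in for a future, as-yet-unknown gadget class. A JDK 9+ runtime with the public java.io.ObjectInputFilter API is assumed.

```java
import java.io.*;

public class BlacklistDemo {
    // Hypothetical stand-in for a future gadget class not on anyone's blacklist.
    static class UnknownGadget implements Serializable {}

    public static void main(String[] args) throws Exception {
        // Blacklist only the known Commons Collections gadget by name.
        ObjectInputFilter blacklist = ObjectInputFilter.Config.createFilter(
                "!org.apache.commons.collections.functors.InvokerTransformer");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new UnknownGadget());
        }

        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            ois.setObjectInputFilter(blacklist);
            // UnknownGadget matches no pattern, so the filter stays UNDECIDED
            // and deserialization proceeds unhindered.
            System.out.println("Unknown gadget deserialized: " + (ois.readObject() != null));
        }
    }
}
```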
On the other hand, in many cases a whitelist approach cannot sufficiently protect against the so-called golden gadget chains, which are deserialization payloads that depend only on JRE classes. Whitelists also do not protect against zero-day vulnerabilities found in the whitelisted classes themselves. For example, as we saw previously, the Built-in Filter blindly allows any instance of the java.rmi.dgc.VMID class to be deserialized. If a vulnerability is introduced in this class in the future, any payload that exploits it will have a free pass to compromise the system, with Oracle's blessing.
Another important aspect of this approach is that it is based on pattern matching. If incorrect patterns are configured, applications are very likely to break or even crash, resulting in major service disruptions. Additionally, a simple mistake in a filter pattern could easily generate hundreds of false-positive log messages, and analyzing all of these false positives would be a waste of security analysts' valuable time.
Therefore, adopting Serialization Filtering could dramatically complicate the deployment of applications; it is an expensive, error-prone, and time-consuming activity that does not scale for enterprises with tens or hundreds of production applications.
On the bright side, this feature could be suitable for small apps with a limited serialization scope that require only a quick workaround. It also makes the existing third-party black/white-list Java agents, such as Contrast's rO0 and SerialKiller, pretty much redundant, since they all follow the same approach and Oracle's Serialization Filtering can easily replace such external agents.
Serialization Filtering is the minimum that Oracle could provide in order to stop being blamed for not doing anything about the critical Deserialization attacks. It is a first step in the right direction, but it does not completely solve the problem and, as we have described, it is not suitable for enterprise production environments.
A Non-Heuristic Solution Is Needed
It is clear that a better way of solving the problem is needed. Fortunately, there is a state-of-the-art solution that does not require any kind of black or white lists; does not require manual configuration, tuning, or source code changes; does not depend on heuristics such as pattern matching; and guarantees zero false positives. This new solution is provided only by Waratek’s unique Application Security Platform. The approach is based on intelligent compartmentalization that takes place inside a RASP container in the JVM. The compartmentalization occurs at runtime and isolates deserialization operations based on identified trust boundaries and contextual access control. This is achievable because the RASP container has full visibility of the application and its execution context.
This new solution has many benefits compared to the pattern matching and black/white listing required by Serialization Filtering, and it does not depend on profiling the application or understanding its internal technical details. Because of its ease of use, it can easily be deployed even by non-security professionals. Also, because it does not produce false positives, it can be safely enabled in production environments. Waratek provides this solution for all versions and releases of Java, even legacy versions. It is a solution that is definitely worth considering as part of a holistic application security strategy.