Building a Java 17-Compatible TLD Generator for Legacy JSP Tag Libraries
Solving broken TLD generation in Java upgrades: an annotation-based, build-time approach that keeps JSP tags working and compatible with Java 17.
When TLD Generation Tooling Falls Behind Java 17
When upgrading the Java platform, the risk often lies not in the application code itself but in the ecosystem of build-time tools that enterprise systems quietly depend on. A migration to Java 17 made this clear when a long-standing dependency on TldDoclet, used to generate Tag Library Descriptor (TLD) files, stopped working.
TldDoclet, a widely used tool for generating TLD metadata from Java tag handler classes, is no longer maintained and is incompatible with current Java versions. The impact of this gap was not immediately obvious. The application compiled and ran correctly on Java 17, and the underlying JSP tag handlers remained functional. But with no working mechanism for TLD generation, the build hit a hard blocker late in the pipeline. What had been a stable, invisible part of the toolchain became a high-risk migration issue.
One alternative was to rewrite all custom tags as .tag files, which was technically feasible but operationally impractical. The system contained more than 700 custom tags spread across more than 6,000 production files, embodying years of production experience and implicit contracts that multiple teams depended on. Large-scale rewrites at this layer carry substantial regression risk, extended validation cycles, and a level of unpredictability that large organizations often cannot afford. This article explains the constraints that prompted the design and development of a custom, annotation-based TLD generator that works on Java 17 and earlier versions without altering any of the existing JSP tag libraries. The focus is not on replacing JSP or tag libraries, but on bridging a tooling gap that the adoption of modern Java exposes in large, long-lived enterprise codebases.
The Hidden Cost of Abandoned TLD Generation Tooling
TldDoclet solved a small but important problem: locating metadata within Java tag handler classes and automatically generating valid TLD files. Over time, this function was absorbed into the build pipelines of many enterprise systems, usually without active maintenance or scrutiny. That dependence stayed invisible as long as Java versions evolved slowly and backward compatibility could be assumed.
With Java 9 and later releases, that assumption no longer holds. The Java Platform Module System, changes to compiler APIs, and restrictions on reflection and doclets fundamentally changed the environment in which TldDoclet operates. The legacy doclet APIs and classpath behavior it relies on simply do not work on Java 17.
The cost of such a tool is rarely noticed until it fails. In this case, the TLD generator's failure forced a rethink of how tag metadata is generated and validated. The build failures surfaced late in the pipeline, after most of the migration work was already done, turning what should have been a mechanical upgrade into an architectural decision.
At enterprise scale, situations like this impose unwelcome choices: halting Java version upgrades, maintaining obsolete tools in-house, or undertaking high-risk refactorings that deliver no business value. The challenge, then, was not merely to eliminate TldDoclet, but to do so with minimal disruption and in a way that would remain sustainable across future Java releases.
Why Rewriting .tag Files Was Not a Viable Option
From a modern Jakarta perspective, tag files are a clean and supported way to define custom JSP tags. For small systems or greenfield projects, migrating to .tag files can be an attractive solution. In large enterprise systems, however, the cost profile is very different. The application in question contained hundreds of custom tags, many with complex attribute definitions and custom data types. These tags were referenced across thousands of JSP files, often with implicit assumptions about attribute behavior that were not fully documented. Rewriting these tags would have required coordinated changes across multiple teams, extensive regression testing, and prolonged stabilization periods.
More importantly, the migration effort was not driven by a desire to modernize the presentation layer, but by a Java platform upgrade. Introducing widespread functional changes unrelated to the original goal would have increased risk without delivering proportional value. In regulated or business-critical systems, this level of uncertainty is often unacceptable. The guiding constraint, therefore, was clear: existing JSPs and tag handlers had to remain unchanged. Any solution that required widespread rewrites was considered non-viable, regardless of its theoretical elegance.
Defining the Design Goals for a Replacement TLD Generator
The replacement did not need to match TldDoclet feature for feature; it needed to fit modern Java realities and the business constraints. The core design goals, defined at the outset, were:
- Java version independence: The solution should be compatible with Java 17 and all prior versions of Java, without relying on deprecated APIs.
- Zero JSP changes: Existing JSP files and tag usages must not require modification.
- Exact TLD compatibility: Generated TLD files must match the legacy descriptors with no semantic differences.
- Build integration: TLD generation must slot into the existing build pipeline.
- Scalability: The solution must handle hundreds of tags and thousands of files.
These goals ruled out heavyweight runtime reflection approaches and favored a deterministic build-time model with explicit metadata.
Using Annotations to Model Tag and Attribute Metadata
At the core of the solution is an annotation-based metadata model. Tag and attribute information is declared through explicit custom annotations rather than derived from Javadoc parsing or doclet APIs.
At the class level, a tag handler is annotated with @Tag, which carries tag-level information such as the tag name. Individual attributes are modeled with @Attribute annotations, placed primarily on setter methods, the conventional way JSP tag attributes are implemented.
Each @Attribute annotation captures:
- The attribute name
- Whether the attribute is required
- The attribute type
- Whether runtime expressions are supported
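To make the model concrete, here is a minimal sketch of what such annotations and an annotated handler might look like. The annotation names @Tag and @Attribute come from the approach described above, but their members (name, required, rtexprvalue) and the example handler are illustrative assumptions, not the production API.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative annotation definitions: member names and defaults
// are assumptions, not the exact production API.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Tag {
    String name();
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Attribute {
    String name();
    boolean required() default false;
    boolean rtexprvalue() default true; // runtime expressions allowed
}

// A tag handler declaring its metadata explicitly on the setter,
// the conventional place JSP tag attributes are implemented.
@Tag(name = "formatDate")
class FormatDateTag {
    private String pattern;

    @Attribute(name = "pattern", required = true)
    public void setPattern(String pattern) {
        this.pattern = pattern;
    }
}
```

Because the metadata lives in the annotation itself, it travels with the class into the compiled bytecode, which is exactly what the build-time generator consumes.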
One of the more complex aspects of this approach is type resolution. Attribute types are often not primitives or strings but custom domain types. Rather than relying entirely on reflection, the generator uses a symbol and type resolver that works from compiled class files to determine the correct attribute types across module boundaries.
The explicit annotation model records developer intent rather than tooling heuristics, making the metadata more explicit, more maintainable, and less fragile than parsing-based approaches.
Build-Time Class Scanning in a Modern Java Environment
To remain compatible with modern Java, the generator runs entirely at build time, scanning compiled class files rather than source code. The generation step runs as part of a Maven phase, coordinated by an Ant task, and integrates cleanly with the existing Jenkins pipeline.
Working from compiled classes has several benefits:
- Stable behavior across Java versions
- Accurate type resolution
- No reliance on deprecated compiler or documentation APIs
The generator scans the configured classpath for classes annotated with @Tag, analyzes their annotated setter methods, resolves the attribute metadata, and builds the corresponding TLD representation deterministically.
Because the process operates on bytecode and annotations, it is unaffected by changes in source formatting or documentation style.
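As a simplified illustration of the scanning stage, the sketch below extracts tag and attribute metadata from a loaded class via reflection. The real generator works directly on class files without loading them; the annotation shapes are assumptions, re-declared minimally here to keep the example self-contained.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the generator's annotations (illustrative only).
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
@interface Tag { String name(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Attribute { String name(); boolean required() default false; }

record AttributeInfo(String name, boolean required, String type) {}
record TagInfo(String name, String handlerClass, List<AttributeInfo> attributes) {}

class MetadataExtractor {
    // Collect tag-level and attribute-level metadata from one handler class.
    static TagInfo extract(Class<?> handler) {
        Tag tag = handler.getAnnotation(Tag.class);
        List<AttributeInfo> attrs = new ArrayList<>();
        for (Method m : handler.getDeclaredMethods()) {
            Attribute a = m.getAnnotation(Attribute.class);
            if (a == null) continue;
            // The setter's single parameter gives the attribute's declared
            // type (including custom domain types) from the compiled class.
            String type = m.getParameterTypes()[0].getName();
            attrs.add(new AttributeInfo(a.name(), a.required(), type));
        }
        return new TagInfo(tag.name(), handler.getName(), attrs);
    }
}

// Hypothetical example handler for demonstration.
@Tag(name = "paginator")
class PaginatorTag {
    @Attribute(name = "pageSize", required = true)
    public void setPageSize(int pageSize) { /* store for rendering */ }
}
```

In the production tool, the same extraction is driven by bytecode analysis rather than a live class loader, which is what keeps it independent of doclet and compiler APIs.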
Generating Deterministic and Backward-Compatible TLD Files
One of the most important requirements was that the generated TLD files match the legacy descriptors exactly, including tag names, attribute definitions, ordering, and namespace declarations. Even small variations can cause unexpected behavior or complicate validation in enterprise settings.
The generator therefore produces TLD files through a controlled, deterministic process, and the results are compared against the existing descriptors to verify semantic equivalence. In practice, the generated TLDs matched the legacy files with zero differences, so they could be dropped in without further modification.
This fidelity was a prerequisite for adoption. Teams could trust that the new generator changed nothing except how the metadata was produced.
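A deterministic emission step can be sketched as below. The element names follow the standard TLD schema, but the record types are illustrative stand-ins for the generator's internal model, and the alphabetical attribute ordering is purely for the sketch (the production generator reproduces the legacy descriptors' exact ordering).

```java
import java.util.Comparator;
import java.util.List;

// Illustrative metadata records standing in for the generator's model.
record Attr(String name, boolean required, String type) {}
record TagMeta(String name, String tagClass, List<Attr> attrs) {}

class TldWriter {
    // Emit one <tag> element. Attributes are written in a stable order so
    // repeated builds produce byte-identical output; the real generator
    // instead preserves the legacy descriptor's ordering.
    static String toTagElement(TagMeta t) {
        StringBuilder sb = new StringBuilder();
        sb.append("  <tag>\n")
          .append("    <name>").append(t.name()).append("</name>\n")
          .append("    <tag-class>").append(t.tagClass()).append("</tag-class>\n");
        List<Attr> ordered = t.attrs().stream()
                .sorted(Comparator.comparing(Attr::name))
                .toList();
        for (Attr a : ordered) {
            sb.append("    <attribute>\n")
              .append("      <name>").append(a.name()).append("</name>\n")
              .append("      <required>").append(a.required()).append("</required>\n")
              .append("      <type>").append(a.type()).append("</type>\n")
              .append("    </attribute>\n");
        }
        return sb.append("  </tag>\n").toString();
    }
}
```

Stable ordering plus fixed formatting is what makes the output diffable against the legacy descriptors, which in turn is what allows semantic-equivalence checks to run as a simple comparison.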
Integrating the Generator Into Existing CI/CD Pipelines
The generator is part of the standard build and runs automatically in CI. Because it operates on compiled classes and produces artifacts identical to the previous ones, existing pipelines needed little modification.
This integration meant that TLD generation remained:
- Automated
- Repeatable
- Transparent to tag consumers
From a CI/CD perspective, the generator behaves like any other build step, which reduced the cognitive and operational load of the migration.
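As an illustration of how such a step might be wired in, the Ant fragment below invokes a generator main class against the compiled classes. The class name, arguments, targets, and property names here are all hypothetical, not the actual project's configuration; only the `java` task itself is standard Ant.

```xml
<!-- Hypothetical wiring: the main class, arguments, and property
     names are illustrative, not taken from the actual project. -->
<target name="generate-tld" depends="compile">
  <java classname="com.example.tld.TldGenerator" fork="true" failonerror="true">
    <classpath refid="build.classpath"/>
    <arg value="--classes-dir=${build.classes.dir}"/>
    <arg value="--output=${web.dir}/WEB-INF/custom.tld"/>
  </java>
</target>
```

With `failonerror="true"`, a missing or malformed descriptor fails the build immediately instead of surfacing later as a runtime JSP error.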
Migration Outcomes at Enterprise Scale
The effects of this strategy became visible at scale. More than 700 custom tags across more than 6,000 files were supported without rewriting a single JSP or tag handler. The Java 17 migration completed without presentation-layer risk, and the same generator remains compatible across Java versions. Perhaps most importantly, the solution is future-proof: because it relies on annotations and bytecode scanning, it is insulated from the kinds of changes that broke previous tools.
Lessons for Enterprise Java Modernization
The experience provides several general lessons:
- Tooling gaps can be the real impediments to platform upgrades.
- Large-scale rewrites are often unacceptable in enterprise settings.
- Explicit metadata models are more sustainable than heuristic-based tooling.
- Build-time solutions, while not universally applicable, can be safer than runtime adaptations during a migration.
To manage such issues, tooling should be treated as a first-class concern rather than an afterthought.
Looking Ahead
JSP and tag libraries are no longer fashionable, but they remain embedded in many enterprise systems. As Java continues to evolve, similar tooling gaps will likely appear in other areas. The approach described here, annotation-driven, build-time, and backward-compatible, offers a pragmatic model for addressing such gaps.
Rather than forcing teams to rewrite code that does not need rewriting, it lets them keep functionality stable while adapting their tooling to current platform constraints. In large, long-lived systems, that balance can be the difference between a successful modernization and a stalled one.