
Several Years Later: A Case of the Telephone Game


A Zone Leader recalls a great number of challenges when modernizing an application, as if the developers were playing the telephone game.


Remember the Telephone Game? 

The gist is this: a message is whispered to the first person. That person then whispers the same message (to the best of their recollection) to the next person. The process continues until the last person receives the message and announces it for all to hear. The fun of the game is hearing how much the story has changed as the message was passed from person to person.

I recall encountering a project that seemed to resemble the Telephone Game, more than ten years after the initial release was deployed.

Below are some things our team encountered while navigating through the system in an effort to modernize the application.

Several Apps for the Price of One

When we opened the source for the project, instead of finding a single application stored in the repository, we found several different applications living in a single repository.

The code was intertwined to the point where all of the applications had to be deployed and run from the same application server. Of course, any time one of the modules changed, the entire suite of applications had to be redeployed. This led to an involved deployment process that took far more time than if each module's source code had been managed in its own repository.

As part of our modernization, we introduced a shared module and moved it into its own repository. From there, we broke the remaining modules out into their own repositories, each depending on the shared library for functionality used by more than one module.

Picking the Wrong Package

In order to keep our application container as lean as possible, our team wanted to use OpenJDK over the Oracle counterpart. From a quick look, this seemed to be an easy transition from Oracle Java to OpenJDK. However, we encountered issues when our build kicked off. The JavaFX library could not be found.

JavaFX? We were confused.

Why was JavaFX, the GUI library positioned as the replacement for Swing, being pulled into this application server project?

We traced the error and found that a Pair class was needed. Instead of picking the one included with the Apache Commons library, which was already on the classpath, the developer had opted to use the Pair class that ships with JavaFX. Since JavaFX is not part of OpenJDK, the error was thrown. We made the change to use the Apache Commons Pair class.
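For illustration, here is a minimal sketch of the swap, assuming the Apache Commons Lang dependency is on the classpath; the class name and values below are made up for this example and are not the project's actual code:

// Problematic: javafx.util.Pair is not available when building against OpenJDK without JavaFX.
// import javafx.util.Pair;

// Preferred: the Pair that ships with Apache Commons Lang, which was already on the classpath.
import org.apache.commons.lang3.tuple.Pair;

public class PairExample {
    public static void main(String[] args) {
        // Pair.of() builds an immutable left/right pair.
        Pair<String, Integer> moduleAndPort = Pair.of("servlet-a", 8080);
        System.out.println(moduleAndPort.getLeft() + " -> " + moduleAndPort.getRight());
    }
}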

This was likely a case where the developer's IDE offered a list of import options and the wrong choice was made. Either way, it should have been caught during a code review.

Transactions Without Boundaries

We noticed that transaction boundaries were never really established, even though one servlet would often find itself calling another servlet on another service.

Imagine this pattern:

Client -> Servlet A -> Servlet B -> Servlet C

The client makes a request to the application server running Servlet A. The underlying code realizes it needs to make a call to Servlet B, which is running on another module within the application. During the processing by Servlet B, a call ends up getting made to Servlet C — on yet another module running within the application. 

While this pattern appears to work for the client along the happy path, no real thought was given to the state of the transactions if an exception were to occur. We found that when an exception was thrown in Servlet A after Servlet B and Servlet C had finished processing, the resulting data was left in an incomplete state: the updates from Servlet B and Servlet C were persisted, but the updates from Servlet A were discarded due to the exception.
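To make the gap concrete, here is a minimal sketch of an explicit transaction boundary, assuming a Java EE container that can inject a JTA UserTransaction; the persistence call and the downstream servlet call are hypothetical placeholders, not the project's actual code:

import javax.annotation.Resource;
import javax.servlet.http.HttpServlet;
import javax.transaction.UserTransaction;

public class ServletA extends HttpServlet {

    // Container-injected JTA transaction, so begin/commit/rollback can wrap the local work.
    @Resource
    private UserTransaction utx;

    protected void handleRequest() throws Exception {
        utx.begin();
        try {
            // saveLocalUpdates();  // hypothetical: Servlet A's own persistence work
            // callServletB();      // hypothetical: a remote call that does NOT automatically join this transaction
            utx.commit();           // local changes are committed together
        } catch (Exception e) {
            utx.rollback();         // ...or discarded together when something fails
            throw e;
        }
    }
}

Even with a boundary like this, calls that cross module boundaries over HTTP do not automatically participate in the transaction, which is exactly why the original pattern left the updates from Servlet B and Servlet C in place while Servlet A's work was discarded; making those downstream calls safe would require a distributed transaction or a compensation step.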

Deciding to Fork a Library

At some point, the development team decided that one of the open-source frameworks they were using was having issues working in their code base. The decision was made to fork that repository, make the necessary updates to the open-source library, and build a jar version of their own. That was several years ago.

As our team attempted to modernize the application, we quickly found issues with the now very stale version of that library. While the client had forked the repository to make things work in their environment, they failed to keep their fork up to date with the changes being made to the upstream library.

Effort was required to extract the original custom changes and apply them to the latest version of the framework. Going forward, we recommended that, if the decision remains not to submit their changes back to the upstream project as pull requests, time be taken to periodically review upstream updates and keep the fork in sync.

Shared Code That Wasn't Really Shared

Another aspect we encountered in the original code design was shared code that wasn't actually shared anywhere else within the application. In the original namespace, business-layer logic was housed in what became the common module, yet only a single module was ever using that code.

I could understand if the intention was to place the code in this location to facilitate easy sharing across modules. However, this logic was highly specific to a single business need. Even after ten years, a second consumer of this logic never appeared.

Hardcoding... Everywhere

When I find myself hard-coding anything, even while quickly prototyping, I feel somewhat ashamed of the approach. After working on i18n applications, leaving even a single value hard-coded instead of externalized in a properties file makes me feel like I am developing things the wrong way.

This application was filled with hard-coded values. The biggest challenge they created was determining the location of the application modules whenever one service or request had to route to another module.

Unfortunately, the DRY (don't repeat yourself) principle was not in play here, and every time we felt we had all of the hard-coded URLs resolved, we would encounter another variation of the same not-so-favorable pattern.

What I did not understand is why these values were not at least captured in some type of properties file, or passed in as parameters when the application was started.
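Here is a minimal sketch of what that could have looked like; the ModuleLocator class, the modules.properties file name, and the ".url" key convention are assumptions for illustration, not the project's actual code:

// Loads module locations from a properties file instead of hard-coding URLs throughout the code.
import java.io.InputStream;
import java.util.Properties;

public class ModuleLocator {

    private static final Properties MODULES = new Properties();

    static {
        try (InputStream in = ModuleLocator.class.getResourceAsStream("/modules.properties")) {
            MODULES.load(in);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // e.g. modules.properties contains: billing.url=http://app-server:8080/billing
    public static String urlFor(String moduleName) {
        return MODULES.getProperty(moduleName + ".url");
    }
}

With something like this living in the shared module, a call such as ModuleLocator.urlFor("billing") replaces the scattered hard-coded URLs, and pointing the application at a different environment means changing one file instead of hunting through the code.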

The More Things Change, the More They Remain the Same (A == A)

At one point, we encountered an if/else statement that used two different constants, yet both branches evaluated to the exact same result:

if (isCallFromFOO) {
  newURI = SomeUtil.rewriteServletPath(baseURI, Constants.EXAMPLE_ONE, Constants.SERVLET_BASE);
} else {
  newURI = SomeUtil.rewriteServletPath(baseURI, Constants.EXAMPLE_TWO, Constants.SERVLET_BASE);
}

While the example above has been changed to protect the actual project, at some point the EXAMPLE_ONE and EXAMPLE_TWO constants were set to the same value. As a result, newURI always ends up with the same value regardless of which branch is taken.

Here is another case where a simple code review should have caught the issue, even if the developer making the change did not stop to understand what was actually being compared.
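Even a tiny guard test, sketched below as a hypothetical JUnit 4 check against the Constants class from the example above, would have flagged the problem the moment the two constants collapsed into the same value:

// Hypothetical JUnit 4 test guarding against the two constants silently becoming equal.
import static org.junit.Assert.assertNotEquals;

import org.junit.Test;

public class ConstantsTest {

    @Test
    public void exampleConstantsShouldRemainDistinct() {
        assertNotEquals(Constants.EXAMPLE_ONE, Constants.EXAMPLE_TWO);
    }
}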

Unit Tests That Cannot Be Run

The project actually had an impressive test package, complete with tests that appeared to handle each of those modules that were stored in the single repository noted above. We were pleased to know that the developers had included tests to make sure their code was working as expected — helping to identify issues when a bug was fixed or an enhancement was made.

Unfortunately, running the tests only provided a massive list of failures and very few successful tests.

It turns out the tests were written for the initial deployment, but were no longer being maintained due to a lack of available time by the production support team.

As a result, none of the tests were in a state where they could be modified to work... over ten years later.

I am a big fan of unit test coverage. In fact, on my current project I have fought (and won) to make sure all aspects of our API have considerable unit test coverage. Just last week, a unit test helped us uncover some missing logic in how our API was responding. Without the test, a bug would have made it into the hands of the tester, slowing down our delivery back to the customer.

Overall Code Challenges... the Telephone Game

Finally, the biggest challenge with the application was how much it seemed like the Telephone Game was being played. What started out as an unfavorable but functional approach established a pattern that entered the code stream. Then, as more work was completed, this anti-pattern was repeated across other classes and modules.

From there, when a defect was encountered in this pattern, instead of refactoring the code, more of the same bad pattern was employed to fix the problem, further building on a not-so-great design. As time passed and bugs were discovered, the original patterns remained in place. From what I understand, new developers on the team were warned not to change the anti-pattern, out of fear of not understanding the background behind the original design decisions.

If it were the Telephone Game, the first message would probably include a high-level design along with the warning, "this was the only way I could get it to work." At the end of the game, the final message would likely be something like, "because of all of the issues the team encountered, the code must stay this way or else the system will crash."

Conclusion

When our team started the project, the backlog on the production support team's Kanban board had so many bugs that the team really didn't know where to begin. This was largely due to some of the aspects I noted in this article.

While I understand that every application will have design decisions that could be improved upon, I believe senior developers have an accountability to take a step back and review the prior work that has been delivered. This is no different from a retrospective or a Lessons Learned session.

  • The code should have been broken into multiple repositories when the second module was introduced.

  • Code reviews should validate that the right imports are being used before the code is pushed into test/production environments.

  • Setting transaction boundaries should always be top-of-mind when complex events need to be processed, including exception handling.

  • A commitment needs to be made when making the decision to modify/maintain an open-source library.

  • Shared code that lives in only one place should probably be refactored to avoid confusion.

  • Hardcoding should be avoided, and the DRY principle should be followed.

  • With every fix, a review of what is happening with said code should be completed to avoid A == A comparisons.

  • Unit tests should be considered just as important as the functional code itself.

  • Finding and fixing bad designs and anti-patterns should be handled with 15-20% of a given Sprint reserved for technical debt updates.

Ten years is a very long time for an application to be running in today's world. While I would expect some challenges to be introduced as staff members rotate through the project, I would also expect the bullet points noted above to be kept in mind in order to avoid the scenario my team faced when modernizing this application.

Have a really great day!


