When to Go for Two and When to Play It Safe
Using the two extra-point options in American football as a metaphor, Zone Leader John Vester wonders how often project teams attempt to go for two, instead of taking the safer route on their projects.
Since 1994, the National Football League (NFL) has offered the two-point conversion option, allowing a team to advance its score to 8 points, instead of 7, following a touchdown. A few commonly used decision-making concepts drive when a team should go for two instead of kicking the extra point for a single additional point. Two of the major triggers: go for two if (after the touchdown) your lead is five points, or if (after the touchdown) you trail by two.
The problem with the two-point conversion is that it is not as successful as the one-point extra point. Even after the NFL made one-point extra points more difficult, two-point conversions succeed roughly 50% of the time, compared to about 95% for one-point attempts. Quite a difference.
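Using the rough figures above (approximations, not official NFL statistics), the trade-off can be sketched as a quick expected-value calculation:

```java
// Sketch of the expected-points math behind the two-point decision,
// using the approximate success rates cited above.
public class ExpectedPoints {
    public static void main(String[] args) {
        double goForTwo = 2 * 0.50;   // two points at ~50% success
        double kickOne  = 1 * 0.95;   // one point at ~95% success

        System.out.printf("Go for two: %.2f expected points%n", goForTwo);
        System.out.printf("Kick:       %.2f expected points%n", kickOne);
        // The raw averages are nearly identical, which is why coaches lean
        // on situational charts: the kick trades a sliver of expected
        // points for far less variance -- the "safer route."
    }
}
```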
In the most recent Super Bowl, between the Philadelphia Eagles and the New England Patriots, the Eagles' decision to go for two (following the common decision-making logic) failed, ultimately giving New England a chance to win the game on their final drive. Had the Eagles opted to kick the extra point, the game would have been out of reach. Since 1994, I have personally seen these situations harm the team going for two more often than they help, compared with taking the higher-probability route.
This made me wonder: how many times do we attempt the less probable two-point conversion on our projects, instead of taking the one-point (and far more successful) route?
Over-Engineering at Its Finest
At the highest level, I believe over-engineering a solution acts as a metaphorical two-point conversion attempt on projects. I typically see over-engineering at three different levels: program code, database design, and infrastructure.
Experienced developers take pride in their program code. They are quick to update existing methods to promote re-use and avoid duplication. This fundamental approach lessens the opportunity for unexpected results in the code base. However, over-engineering at the code level is a temptation that developers need to keep in check:
Refactoring - refactoring for the sake of refactoring. This practice not only creates an opportunity for working functionality to break, but also places extra demand on the QA team to re-test something that has already been tested.
Redesign - breaking up existing service classes follows the same rule. Perhaps you are trying to make an existing service class smaller, so the decision is made to break up the code into several smaller services. Most IDEs automate these tasks, but the changes will still require testing.
New Functionality - as a developer, you might review a piece of code and discover a way it could be expanded for future use. The functionality is added, but there is no current business requirement (yet) to utilize it. The problem with this approach is that future requirements often change, so time was spent building something that wasn't actually needed...even in the future.
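The "new functionality" trap can be sketched with a small hypothetical example (the class, requirement, and names here are invented for illustration). The requirement calls for formatting a US-dollar price; the speculative multi-currency abstraction is built "for the future," with no caller to justify it:

```java
import java.util.Locale;

public class PriceFormatter {

    // What the requirement actually asks for: format a USD amount.
    static String formatUsd(double amount) {
        return String.format(Locale.US, "$%,.2f", amount);
    }

    // The two-point attempt: a speculative abstraction with no current
    // business requirement. It must still be written, reviewed, and
    // re-tested by QA, even though nothing exercises anything but USD.
    interface CurrencyStrategy {
        String format(double amount);
    }

    static String format(double amount, CurrencyStrategy strategy) {
        return strategy.format(amount);
    }

    public static void main(String[] args) {
        System.out.println(formatUsd(1234.5));  // $1,234.50
        // The "flexible" path produces the same answer at higher cost:
        System.out.println(format(1234.5, PriceFormatter::formatUsd));
    }
}
```

If multi-currency support never lands in the requirements, the extra interface is pure carrying cost.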
I am a big fan of a normalized database structure. As with program code, I am not a fan of duplication across tables in a database design. Where I stop being a fan, however, is when tables become over-abstracted.
As an example, I worked on one project where the object model appeared to be driven from the back-end application server code. At the time, I remember reading how Hibernate could read the object model within a Java application and generate a database design from the code. However, I had never seen anyone take this approach...until this project.
In this instance, there was a high degree of interrelated tables, because of how the Java code extended parent classes. If all classes inherited from a class called BaseWidget, the data for every object would have a record stored in the BaseWidget table. Now, consider when Account extends BaseWidget and Sale extends Account: the Sale table would include a reference back to Account, which in turn included a reference back to BaseWidget.
Over time, the client experienced performance issues that were tied to the many levels of abstraction. Additionally, from a development perspective, it was challenging to build the queries needed to obtain data while adding features to the application.
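A minimal sketch of the shape this mapping takes, with maps standing in for tables (the row fields are invented for illustration; only the class names come from the example above). Fetching a single Sale walks the whole chain, the in-memory analogue of a three-way join:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each class in the hierarchy gets its own table,
// so loading one Sale means touching three tables.
public class JoinedInheritanceSketch {

    record BaseWidgetRow(long id, String createdBy) {}
    record AccountRow(long id, long baseWidgetId, String accountName) {}
    record SaleRow(long id, long accountId, double amount) {}

    static final Map<Long, BaseWidgetRow> BASE_WIDGET = new HashMap<>();
    static final Map<Long, AccountRow> ACCOUNT = new HashMap<>();
    static final Map<Long, SaleRow> SALE = new HashMap<>();

    // The in-memory analogue of:
    //   SELECT ... FROM Sale s
    //   JOIN Account a    ON s.account_id = a.id
    //   JOIN BaseWidget b ON a.base_widget_id = b.id
    static String loadSale(long saleId) {
        SaleRow s = SALE.get(saleId);
        AccountRow a = ACCOUNT.get(s.accountId());
        BaseWidgetRow b = BASE_WIDGET.get(a.baseWidgetId());
        return b.createdBy() + "/" + a.accountName() + "/" + s.amount();
    }

    public static void main(String[] args) {
        BASE_WIDGET.put(1L, new BaseWidgetRow(1L, "jvester"));
        ACCOUNT.put(10L, new AccountRow(10L, 1L, "Acme"));
        SALE.put(100L, new SaleRow(100L, 10L, 250.0));
        System.out.println(loadSale(100L)); // three lookups for one row
    }
}
```

Every additional level of the hierarchy adds another join to every query that touches the leaf table, which is exactly where the performance and query-complexity pain showed up.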
I remember a commercial from a few years ago in which a company revealed its new e-commerce website. After the site went live, the release team watched the order count coming out of the e-commerce solution. They cheered when the first order came in, cheered even louder when the orders reached 100 and then 1,000. Then they started to panic when the order count exceeded 25,000. While the commercial was really about a shipping company being able to meet the aggressive demands of your business, I believe infrastructure designers too often put this same fear at the forefront of their designs.
On one project, I was in a meeting with the infrastructure team, who were presenting the design of the production instance of the application. I was expecting a short answer, perhaps a few minutes of explanation. What I received was a presentation that spanned multiple slides and required nearly twenty minutes to walk through.
In my mind, I had an understanding of the number of users and the CPU/memory demands on the system, which I felt had not been fully communicated to the infrastructure team. When that question was raised, the response was that the infrastructure team had decided to use their own metrics to design the system.
From a project-team perspective, this might sound like a great idea: the infrastructure behind the application will be ready to handle any need. From a corporate perspective, however, it added cost to the project. The application launch was also delayed by issues getting every aspect of the complex solution to work together. Not long after going live, the infrastructure team struggled whenever updates were required to components within their design, as well.
In each of the three examples above, time and effort were invested in a project to provide benefits that were not required. While some may argue the time was a wise investment that will pay off in the future, without solid requirements dictating the extra work, the end result is simply added time and cost. In fact, I would confidently estimate that the extra effort pays off less than 50% of the time, a worse rate than the NFL's two-point conversion.
In an era of trying to turn around features and functionality in a more agile fashion, teams must maintain a laser-sharp focus on providing solutions that meet the needs stated by the product owner or business representative. While it might seem innovative and exciting to add more functionality at the code level, in the database design, or within the infrastructure, most of the time the actual need can be met by taking the more probable one-point route.
Have a really great day!
Opinions expressed by DZone contributors are their own.