15 Tools Java Developers Should Use After a Major Release
The Ultimate Survival Kit for New Deployments
Unlike toying around with zombie apocalypse scenarios, debating the machete versus the shotgun, trouble in Java production environments is quite real, especially after new deployments (but it's good to be ready for zombies as well). Taking this a step further, it's much easier to get into trouble today than ever before, when new code shipping cycles are cut down to weeks, sometimes days, or even multiple times a day. To avoid being run down by the zombies, here's the survival kit you need to fully understand the impact of new code on your system. Did anything break? Is it slowing you down? And how do you fix it? Here's the tool set and architecture to crack it once and for all.
Logging
Other than shrinking release cycles, another property of the modern development lifecycle is ever-expanding log files that can reach GBs per day. Let's say some issue arises after a new deployment: if you'd like to produce a timely response, dealing with GBs of unstructured data from multiple sources and machines is close to impossible without the proper tooling. In this space we can essentially divide the tools into the heavy-duty enterprise on-premise Splunk, and its SaaS competitors like Sumo Logic, Loggly, and others. There are many choices available with similar offerings, so we wrote a more in-depth analysis of log management that you can read right here.
Takeaway #1: Set up a sound log management strategy to help you see beyond the pale lines of bare logfiles and react fast after new deployments.
One logging architecture we've found to be super useful after deploying new code is the ELK Stack. It's also worth mentioning because it's open source and free.

The ELK Stack: Elasticsearch, Logstash, and Kibana
So what is this ELK we're talking about? A combination of Elasticsearch's search and analytics capabilities, Logstash as the log aggregator, and Kibana for the fancy dashboard visualizations. We've been using it for a while, feeding it from Java through our logs and Redis, and it's in use both by developers and for BI. Today, Elasticsearch pretty much comes built in with Logstash, and Kibana is an Elasticsearch product as well, making integration and setup easy peasy.
When a new deployment rolls out, the dashboards follow custom indicators that we've set up about our apps' health. These indicators update in real time, allowing close monitoring when freshly delivered code takes its first steps after being uploaded to production.
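To make log events easy for Logstash to parse and Kibana to filter, it helps to attach structured context to every log line instead of burying it in the message text. Here's a minimal sketch using plain SLF4J with Logback's MDC; the class name, the app.version property, and the field names are illustrative assumptions, not a prescribed setup.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class CheckoutService {

    private static final Logger log = LoggerFactory.getLogger(CheckoutService.class);

    public void processOrder(String orderId) {
        // Attach context to every log line in this scope; a JSON encoder
        // (e.g. logstash-logback-encoder) turns MDC entries into fields
        // that Logstash can ship and Elasticsearch can index and filter.
        MDC.put("deployVersion", System.getProperty("app.version", "unknown"));
        MDC.put("orderId", orderId);
        try {
            log.info("order processing started");
            // ... business logic ...
            log.info("order processing finished");
        } finally {
            MDC.clear();
        }
    }
}
```

With a deployVersion field on every event, a Kibana dashboard can compare error rates before and after a rollout with a single filter.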
Takeaway #2: Search, visualization, and the ease of aggregating logs from multiple sources are key factors in determining your log management strategy.
Takeaway #3: From a developer's perspective, evaluating the impact of a new deployment can include BI aspects as well.
Tools to check:
1. On-premise: Splunk
2. SaaS: Sumo Logic
3. SaaS: Loggly
4. Open source: Graylog2
5. Open source: Fluentd
6. Open source: The ELK Stack (Elasticsearch + Logstash + Kibana)
Performance Monitoring
So release cycles are getting shorter and log files are getting larger, but that's not all: the number of user requests grows exponentially, and they all expect peak performance. Unless you work hard on optimizing it, simple logging will only take you so far. With that said, dedicated Application Performance Management (APM) tools are no longer considered a luxury and are rapidly becoming a standard. At its essence, APM means timing how long it takes to execute different areas in the code and to complete transactions. This is done either by instrumenting the code, monitoring logs, or including network/hardware metrics, both on your backend and on your users' devices. The first two modern APM tools that come to mind are New Relic, which just recently filed for its IPO, and AppDynamics.
[Screenshot: AppDynamics on the left, New Relic on the right, main dashboard screens]
Each traditionally targeted a different type of developer, from enterprises to startups, but as both step toward their IPOs after experiencing huge growth, the lines are getting blurred. The choice is not clear-cut, though you can't go far wrong: if you need on-premise, it's AppDynamics; otherwise, it's an individual call depending on which better fits your stack (and which of all the features they offer you actually think you're going to use). Check out the analysis we recently released that compares these two head to head right here.
Two additional interesting tools that were recently released are Ruxit (by Compuware) and Dripstat (by Chronon Systems), each coming from a larger company with its own attempt to address the SaaS monitoring market pioneered by New Relic. Looking into hardcore JVM internals, jClarity and Plumbr are definitely worth checking out as well.
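To get a feel for what these tools automate, here's a minimal hand-rolled sketch of the core idea: timing a transaction and reporting how long it took. The class and method names here are made up for illustration; real APM agents do this via bytecode instrumentation, with no code changes required.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class TransactionTimer {

    // Wraps a unit of work, measures how long it took, and reports it.
    // This is the essence of what APM agents automate for every
    // instrumented method and transaction.
    public static <T> T timed(String name, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            // A real agent reports this to a metrics backend, not stdout.
            System.out.printf("transaction=%s durationMs=%d%n", name, elapsedMs);
        }
    }
}
```

Usage would look like TransactionTimer.timed("checkout", () -> processCheckout(order)), where processCheckout is a stand-in for your own code.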
Takeaway #4: New deployments may affect your application's performance and slow it down; APM tools can provide an all-around overview of your application's health.
Tools to check:
7. AppDynamics
8. New Relic
New players:
9. jClarity
10. Plumbr
11. Ruxit
12. Dripstat
Debugging in Production
Release cycles are down, log files grow large, user requests explode, and... the margin for error simply doesn't exist. When an error does happen, you need to be able to solve it right away. Large-scale production environments can produce millions of errors a day from hundreds of different locations in the code. While some errors may be trivial, others break critical application features and affect end-users without you even knowing it. Traditionally, to identify and solve these errors you'd have to rely on your log files or a log management tool just to know an error occurred, let alone how to fix it; a bare-bones sketch of that manual baseline follows the list below. With Takipi, you're able to know which errors pose the highest risk and should be prioritized, and receive actionable information on how to fix each error.
Looking at errors arising after new deployments, Takipi addresses three major concerns:
1. Know which errors affect you the most
- Detect 100% of code errors in production, including JVM exceptions and log errors. Use smart filtering to cut through the noise and focus on the most important errors. Over 90% of Takipi users report finding at least one critical bug in production during their first day of use.
2. Spend less time and energy debugging
- Takipi automatically reproduces each error and displays the code and variables that led to it, even across servers. This eliminates the need to manually reproduce errors, saves engineering time, and dramatically reduces time to resolution.
3. Deploy without risk
- Takipi notifies you when errors are introduced by a new version, and when fixed errors come back to haunt you.
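For contrast, here's roughly what the do-it-yourself baseline looks like: a global handler that at least makes sure uncaught errors reach your logs. This is a minimal sketch under standard JDK and SLF4J APIs; unlike an agent-based tool, it can't show you the variable state that led to the error.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GlobalErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalErrorHandler.class);

    // Call once at startup, e.g. from main(). Any exception that escapes
    // a thread is logged with the thread name and full stack trace,
    // which is the bare minimum needed to know an error occurred at all.
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
            log.error("uncaught error on thread {}", thread.getName(), throwable));
    }
}
```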
Takeaway #5: With Takipi you're able to act quickly to resolve any issue, and you're no longer in the dark after a new release.
Tools to check:
13. Takipi
Alerting and Tracking
Release cycles, log files, user requests, no margin for error, and... how are you going to follow up on it all? You might think this category overlaps with the others, and the truth is you're probably right, but when each of these tools has its own pipeline for letting you know what went wrong, it gets quite cluttered. Especially in the soft spot after a new deployment, when all kinds of unexpected things are prone to happen.
One of the leading incident management tools that tackles this is PagerDuty: it collects alerts from your monitoring tools, creates schedules to coordinate your team, and delivers each alert to the right person through emails, SMS, or push notifications.
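Most monitoring tools integrate with PagerDuty out of the box, but you can also trigger incidents straight from your own code. Here's a minimal sketch against PagerDuty's Events API v2 (the endpoint and payload shape reflect the current API, newer than the version available when this article was written); the routing key is a placeholder you'd take from your PagerDuty service's integration settings.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PagerDutyAlert {

    // Placeholder: use the integration key from your PagerDuty service.
    private static final String ROUTING_KEY = "YOUR_ROUTING_KEY";

    public static void trigger(String summary) throws Exception {
        // A real implementation would build this with a JSON library
        // so the summary text is escaped safely.
        String body = """
            {"routing_key":"%s","event_action":"trigger",
             "payload":{"summary":"%s","source":"my-app","severity":"error"}}
            """.formatted(ROUTING_KEY, summary);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://events.pagerduty.com/v2/enqueue"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        // 202 Accepted means the event was queued for processing.
        System.out.println("PagerDuty responded: " + response.statusCode());
    }
}
```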
Takeaway #6: Consider using an incident management system to handle information overload.
A specialized tool we really like using here is Pingdom (which also integrates with PagerDuty). What it does is quite simple and just works: tracking and alerting on our website's response times 24/7, probing it from different locations all over the globe, and answering a crucial question that seems trivial: is the website available?
[Pingdom status screenshot: "All systems are go!"]
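The core check is simple enough to hand-roll for a single location; here's a minimal sketch (the URL is a placeholder), though it also makes clear why you'd want a service that probes from many regions, retries, and alerts for you.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class UptimeProbe {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com"))   // placeholder URL
            .timeout(Duration.ofSeconds(10))
            .GET()
            .build();

        long start = System.nanoTime();
        HttpResponse<Void> response =
            client.send(request, HttpResponse.BodyHandlers.discarding());
        long ms = (System.nanoTime() - start) / 1_000_000;

        // A real monitor runs this on a schedule, from several regions,
        // and pages someone when checks fail repeatedly.
        System.out.printf("status=%d responseTimeMs=%d%n",
            response.statusCode(), ms);
    }
}
```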
Another angle from which to tackle information overload is error tracking that goes beyond the features of log analyzers: smart dashboards to manage your exceptions and log errors, aggregating data from all your servers and machines into one single place, either through your log events or other hooks coming from your code. For a deeper dive into the error tracking tools landscape, check out this post that covers the most popular options.
Takeaway #7: Code errors come in all shapes and sizes; it's worth giving them some special treatment with an error tracking tool (and smashing some bugs while we're at it, muhaha).
Tools to check:
14. PagerDuty
15. Pingdom
Conclusion
We've experienced first hand how modern software development affects the release lifecycle and zoomed in on how you can assess the impact of new rapid deployments, when new code can come in before you've even fully understood the last update's impact. In the grand scheme of things, any tool you consider should address these 5 characteristics:
- Shrinking release cycles
- Expanding log files
- Growing user requests
- Smaller margins for error
- Information overload