
Extracting Metrics From Jenkins Job Output

Learn how to extract data from Jenkins jobs to produce useful metrics.

By Akshay Ranganath · Oct. 16, 18 · Tutorial

When working with Jenkins, you may be running jobs that produce some kind of metrics. For example, on a website, you may be monitoring the page load time every hour, or the median and 90th-percentile CPU load, and so on. If you are running this as a Jenkins job, the output may be stored as a flat file in JSON or a similar format. Typically, this is dumped into the archive folder. In this blog post, I will show how to extract that data and produce some meaningful metrics.

Initial Setup

In this case study, I am using a website's performance information. Specifically, I am pulling the median page load time for this blog. The data is pulled every hour and stored in a flat file called summary.json. The format of the JSON file is as follows:

{
    "p98": "2611", 
    "median": "2611", 
    "p95": "2611", 
    "moe": "0.0", 
    "n": "1"
}

For this workflow, we only care about the "median" metric. We could just as easily swap in another metric and compute the same stats for the other percentiles.
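Since every value in summary.json arrives as a string (as in the sample above), a minimal parsing sketch looks like this. The raw string here is just the sample file inlined for illustration:

```python
import json

# Every value in summary.json is a string, so the median must be
# cast to an int before doing any math with it.
raw = '{"p98": "2611", "median": "2611", "p95": "2611", "moe": "0.0", "n": "1"}'
summary = json.loads(raw)
median_ms = int(summary["median"])
```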

I am actually using the Query API for mPulse. I’ll have a follow-up post on the exact workflow.

This Jenkins job runs every hour. Jenkins then archives this JSON file under a path like $JENKINS_PATH/jobs/$JENKINS_PIPELINE/builds/$BUILD_NUMBER/archive/summary.json.

Computing Variance

The next step is to use this summary.json that is generated every hour to run a check: Is the current performance any different from the past performance?

In this use case, past performance is determined by three parameters:

  • median/mean of the past 20 runs
  • variance of the past 20 runs
  • standard deviation of the past 20 runs

If you’d like to brush up on the stats, please refer to this MathIsFun page on standard deviation, variance, and more.
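As a quick refresher, here is a worked example with five hypothetical page load times (the numbers are made up for illustration), showing what Python's statistics module returns for each measure:

```python
import statistics

# Five hypothetical page load times in milliseconds
samples = [2500, 2600, 2550, 2700, 2650]

mean = statistics.mean(samples)          # 2600
variance = statistics.variance(samples)  # sample variance: 6250
stddev = statistics.stdev(samples)       # square root of the variance
```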

Here’s my algorithm to compute the moving performance benchmark:

  • Find all the Jenkins job output folders.
  • Sort the job folders by their numeric job numbers. For this, I had to adopt the logic discussed in this Human Sorting blog post by Ned Batchelder.
  • Within each archive folder, read the summary.json and pull out the value of the node named median.
  • Store this value in an array.
  • After completing this process, extract the last 21 values and ignore the latest. Basically, take the array slice [-21:-1].
  • Compute the standard deviation for those 20 values.
  • If the performance from the latest run is more than 1 standard deviation above or below the median of those values, alert on it as an outlier.
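The sorting step matters because Jenkins build folders are named 1, 2, 3, …, so a plain lexicographic sort would put build 10 before build 2. A natural-sort key in the spirit of Ned Batchelder's Human Sorting post can be sketched like this:

```python
import re

# Natural-sort key: split runs of digits out of the string so that
# they compare numerically instead of character by character.
def natural_keys(text):
    return [int(chunk) if chunk.isdigit() else chunk
            for chunk in re.split(r'(\d+)', text)]

builds = ['10', '2', '1', '21']
builds.sort(key=natural_keys)
# builds is now ['1', '2', '10', '21'] rather than ['1', '10', '2', '21']
```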

Code Snippets

The first part I described earlier is to read each summary.json and extract the median metric. We can do this with code along these lines (JENKINS_PATH and JENKINS_PIPELINE are strings set elsewhere in the script, and natural_keys is the human-sorting key from Ned Batchelder's post):

import json
import os

jobs_path = JENKINS_PATH + '/jobs/' + JENKINS_PIPELINE + '/builds'
directories = os.listdir(jobs_path)
directories.sort(key=natural_keys)
page_load_times = []
for each_directory in directories:
    each_job = jobs_path + '/' + each_directory
    if os.path.isdir(each_job) and not os.path.islink(each_job):
        each_summary_file = each_job + '/archive/summary.json'
        if os.path.isfile(each_summary_file):
            with open(each_summary_file) as f:
                data = json.loads(f.read())
            if 'median' in data and data['median'] is not None:
                page_load_times.append(int(data['median']))

The next step is to get the last 20 values, without the latest run.

last_20_values = page_load_times[-21:-1]
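To see what that slice does, here is a quick sketch with hypothetical data. With 25 recorded medians, [-21:-1] keeps the 20 runs immediately before the latest one and drops the latest itself:

```python
# Hypothetical history of 25 medians; build 24 is the latest run.
page_load_times = list(range(25))

last_20_values = page_load_times[-21:-1]   # builds 4 through 23
latest = page_load_times[-1]               # build 24, excluded from the benchmark
```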

Next up is to compute the stats with Python's statistics module.

import statistics

median = statistics.median(last_20_values)
stddev = statistics.stdev(last_20_values)
variance = statistics.variance(last_20_values)

Finally, compare the latest value against the benchmark and flag any anomaly. Here, last_median is the most recent entry in page_load_times.

import sys

last_median = page_load_times[-1]
if (last_median < (median - stddev)) or (last_median > (median + stddev)):
    print("***** ALERT ********")
    print("Performance anomaly detected: " + str(last_median))
    # force a build failure on Jenkins
    sys.exit(1)

In this case, I am failing the build when the current median is more than 1 standard deviation away from the historical median. This gives me an easy visual indication of an error.

By doing this, you now have easy-to-use anomaly detection code! If you want to make it more interesting, you could replace the simple 1-standard-deviation rule with a more involved computation like the Nelson rules.
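As a taste of that direction, here is a sketch of one of the Nelson rules (rule 2): nine consecutive points on the same side of the mean suggest a sustained shift, even when no single point strays far from it. The run length of 9 follows the standard rule set; the function itself is a hypothetical helper, not part of the pipeline above:

```python
# Nelson rule 2 (sketch): nine consecutive points on the same side of
# the mean indicate a shift. Points exactly equal to the mean reset
# the streak.
def nelson_rule_2(values, mean, run_length=9):
    streak, last_side = 0, 0
    for v in values:
        side = 1 if v > mean else (-1 if v < mean else 0)
        if side != 0 and side == last_side:
            streak += 1
        else:
            streak = 1 if side != 0 else 0
        last_side = side
        if streak >= run_length:
            return True
    return False
```

For example, nine runs of 2700 ms against a mean of 2600 ms would trigger this rule even if each individual point sits within one standard deviation of the mean.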

Hope this has been fun! I’ll have a follow-up blog post that explains the entire Jenkins pipeline and demos the mPulse API used to extract the raw information.

(Note: I had originally published this blog here: https://akshayranganath.github.io/Python-Stats-From-Jenkins-Job-Output/)


Published at DZone with permission of Akshay Ranganath, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
