Using Python Pandas for Log Analysis

In this quick article, I'll walk you through how to use the Pandas library for Python to analyze a CSV log file for offload analysis.

By Akshay Ranganath · Mar. 06, 19 · Tutorial


In this short tutorial, I would like to walk through the use of Python Pandas to analyze a CSV log file for offload analysis. This is a typical use case that I face at Akamai.

Background

Python Pandas is a library that brings data science capabilities to Python. It provides data structures like the DataFrame, which lets you model your data like an in-memory database and gives you query-like capabilities over the data set.
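As a minimal sketch of those query-like capabilities (the data here is made up, not from the article's report), filtering a DataFrame reads much like a SQL WHERE clause:

```python
import pandas as pd

# A tiny in-memory "table" to illustrate query-like access
df = pd.DataFrame({
    "URL": ["/a.css", "/b.jpg", "/c.html"],
    "Hits": [120, 45, 300],
})

# Row filtering reads much like SELECT URL FROM df WHERE Hits > 100
popular = df[df["Hits"] > 100]
print(popular["URL"].tolist())  # → ['/a.css', '/c.html']
```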

Use Case

Suppose we have a URL report taken from either the Akamai Edge server logs or the Akamai Portal report. In this case, I am using the Akamai Portal report. In this workflow, I am trying to find the top URLs that have a volume offload of less than 50%. I've attached the full code at the end and will walk through it line by line. Here are the column names within the CSV file for reference.

Offloaded Hits,Origin Hits,Origin OK Volume (MB),Origin Error Volume (MB)

Initialize the Library

The first step is to initialize the Pandas library. In almost all the references, this library is imported as pd. We'll follow the same convention.

import pandas as pd

Read the CSV as a DataFrame

The next step is to read the whole CSV file into a DataFrame. Note that this function for reading CSV data also has options to skip leading rows, skip trailing rows, handle missing values, and a lot more. I am not using these options for now.

urls_df = pd.read_csv('urls_report.csv')
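The options mentioned above could look like the sketch below. The file layout here (a banner line at the top, a total row at the bottom, and "-" for missing values) is hypothetical, but `skiprows`, `skipfooter`, and `na_values` are standard `read_csv` parameters:

```python
import pandas as pd

# Write a small sample file so the sketch is self-contained
# (hypothetical layout: a 1-line banner, data rows, then a total row).
with open('urls_report_sample.csv', 'w') as f:
    f.write("Report generated 2019-03-06\n"
            "URL,OK Volume\n"
            "/a.css,10.5\n"
            "/b.jpg,-\n"
            "Total,10.5\n")

urls_df = pd.read_csv(
    'urls_report_sample.csv',
    skiprows=1,        # ignore the leading banner row
    skipfooter=1,      # ignore the trailing total row
    engine='python',   # skipfooter requires the python engine
    na_values=['-'],   # treat "-" as a missing value
)
print(urls_df['URL'].tolist())  # → ['/a.css', '/b.jpg']
```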

Pandas automatically detects the right data formats for the columns. So the URL is treated as a string and all the other values are considered floating point values.
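A quick way to confirm the inferred types is the `dtypes` attribute; the frame below is illustrative, not the real report:

```python
import pandas as pd

# Strings become object columns; numeric values become float64
df = pd.DataFrame({"URL": ["/a", "/b"], "OK Volume": [1.5, 2.0]})
print(df.dtypes)
```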

Compute Volume Offload

The default URL report does not have a column for Offload by Volume. So we need to compute this new column.

urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])

We are using the columns named OK Volume and Origin OK Volume (MB) to arrive at the percent offloads.

Filter the Data

At this point, we have the entire data set with the offload percentage computed. Since we are interested in URLs with a low offload, we add two filters:

  • Keep only the rows with a volume offload of less than 50% that also have at least some traffic (we don't want rows with zero traffic).
  • We will also remove some known patterns. This is based on the customer context but essentially indicates URLs that can never be cached.
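These two filters appear in the full code at the end; here is the same logic as a self-contained sketch with made-up rows (the domain patterns are the author's examples):

```python
import pandas as pd

# Illustrative frame with the columns the filters rely on
urls_df = pd.DataFrame({
    "URL": ["/img/logo.png", "https://some-pattern.net/x", "/stateful-apis/v1"],
    "OK Volume": [500.0, 900.0, 0.0],
    "Volume Offload": [30.0, 20.0, 0.0],
})

# Filter 1: some traffic, and an offload below 50%
low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) &
                           (urls_df['Volume Offload'] < 50.0)]

# Filter 2: drop URL patterns that can never be cached
low_offload_urls = low_offload_urls[
    (~low_offload_urls.URL.str.contains("some-pattern.net")) &
    (~low_offload_urls.URL.str.contains("stateful-apis"))
]
print(low_offload_urls['URL'].tolist())  # → ['/img/logo.png']
```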

Sort Data

At this point, we have the right set of URLs, but they are unsorted. We need the rows sorted so that the URLs with the most volume and least offload come first. We can achieve this multi-column sorting with the sort_values method.

low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], inplace=True, ascending=[False, True])

Print the Data

For simplicity, I am just listing the URLs. We can export the result to CSV or Excel as well.

First, we project the URL (i.e., extract just one column) from the DataFrame. We then list the URLs with a simple for loop, since the projection yields a Series we can iterate over.

for each_url in low_offload_urls['URL']:
    print(each_url)
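Exporting instead of printing is a one-liner via to_csv or to_excel; the frame below is illustrative, and the output filenames are arbitrary:

```python
import pandas as pd

# Illustrative result set; the real script builds low_offload_urls from the report
low_offload_urls = pd.DataFrame({
    "URL": ["/a.css", "/b.js"],
    "OK Volume": [500.0, 300.0],
    "Volume Offload": [30.0, 42.5],
})

# Write the filtered rows out as CSV (index=False drops the row labels)
low_offload_urls.to_csv('low_offload_urls.csv', index=False)
# Excel export works the same way but needs the openpyxl package:
# low_offload_urls.to_excel('low_offload_urls.xlsx', index=False)

# Read it back to confirm the round trip
round_trip = pd.read_csv('low_offload_urls.csv')
print(round_trip['URL'].tolist())  # → ['/a.css', '/b.js']
```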

I hope you found this useful and get inspired to pick up Pandas for your analytics as well!

References

I was able to pick up Pandas after going through an excellent course on Coursera titled Introduction to Data Science in Python. During this course, I realized that Pandas has excellent documentation.

  • Pandas Documentation: http://pandas.pydata.org/pandas-docs/stable/

Full Code

import pandas as pd

urls_df = pd.read_csv('urls_report.csv')

# compute the Volume Offload percentage for each URL
urls_df['Volume Offload'] = (urls_df['OK Volume']*100) / (urls_df['OK Volume'] + urls_df['Origin OK Volume (MB)'])

# keep rows with some traffic and an offload below 50%
low_offload_urls = urls_df[(urls_df['OK Volume'] > 0) & (urls_df['Volume Offload'] < 50.0)]
# drop URL patterns that can never be cached
low_offload_urls = low_offload_urls[(~low_offload_urls.URL.str.contains("some-pattern.net")) & (~low_offload_urls.URL.str.contains("stateful-apis"))]

# sort: most volume first, then least offload
low_offload_urls.sort_values(by=['OK Volume','Volume Offload'], inplace=True, ascending=[False, True])

for each_url in low_offload_urls['URL']:
    print(each_url)

Published at DZone with permission of Akshay Ranganath, DZone MVB. See the original article here.

