Data Visualization of Healthcare Expenses by Country Using Web Scraping in Python
Data visualizations help us understand data more deeply.
I know that the data is pretty clear about who spends the least and who spends the most, but I wanted to take the idea further using this table. I had been looking for a chance to practice web scraping and visualization in Python and decided this was a great short project.
Although it almost certainly would have been faster to manually enter the data into Excel, I would not have had the invaluable opportunity to practice a few skills! Data science is about solving problems using a diverse set of tools, and web scraping and regular expressions are two areas I need some work on (not to mention that making plots is always fun). The result was a very short — but complete — project showing how we can bring together these three techniques to solve a data science problem.
Generally, web scraping is divided into two parts:
- Fetching data by making an HTTP request
- Extracting important data by parsing the HTML DOM
Libraries and Tools
- Beautiful Soup is a Python library for pulling data out of HTML and XML files.
- Requests allows you to send HTTP requests very easily.
- Pandas is a Python package providing fast, flexible, and expressive data structures.
- matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python.
Our setup is pretty simple. Just create a project folder and install Beautiful Soup and requests with `pip install beautifulsoup4 requests`. I am assuming that you have already installed Python 3.x.
Now, create a file inside that folder by any name you like. I am using scraping.py. Then, just import Beautiful Soup and requests in your file as shown below.
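A minimal sketch of what the top of scraping.py looks like:

```python
# scraping.py
# Install the dependencies first with:  pip install beautifulsoup4 requests
import requests
from bs4 import BeautifulSoup
```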
What we are going to scrape:
- Name of the country
- Expense per capita
Now, since we have all the ingredients to prepare the scraper, we should make a GET request to the target URL to get the raw HTML data.
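A sketch of that request. The article does not show its exact source page, so the Wikipedia list of countries by total health expenditure per capita stands in here as an assumed target; swap in whichever page holds the table you want to scrape.

```python
import requests

# Assumed target URL -- the original article's source page is not shown.
target_url = (
    "https://en.wikipedia.org/wiki/"
    "List_of_countries_by_total_health_expenditure_per_capita"
)

response = requests.get(target_url, timeout=10)
response.raise_for_status()  # fail loudly on a 4xx/5xx status
html = response.text         # the raw HTML of the page
```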
This will give you the raw HTML of the target URL.
Now, you have to use BeautifulSoup to parse the HTML.
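A sketch of the parsing step. The sample markup below is hypothetical; it only mirrors the “item” structure described later, since the real page’s markup isn’t shown. In the real script, `html` is the text returned by requests.

```python
from bs4 import BeautifulSoup

# Hypothetical sample markup; in the real script, "html" is the
# text returned by requests.get().
html = """
<item><name>Norway</name><expense>6187</expense></item>
<item><name>India</name><expense>63</expense></item>
"""

soup = BeautifulSoup(html, "html.parser")
print(soup.find("name").text)  # -> Norway
```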
I have declared two empty lists to store the country names and each country's daily (per-24-hour) expense.
As you can see, each country is stored in an “item” tag. We’ll store all the item tags within a list.
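Sketching those two steps together, again with hypothetical sample markup that follows the structure just described:

```python
from bs4 import BeautifulSoup

# Hypothetical sample markup standing in for the fetched page.
html = """
<item><name>Norway</name><expense>6187</expense></item>
<item><name>India</name><expense>63</expense></item>
"""
soup = BeautifulSoup(html, "html.parser")

# Two empty lists that will hold the scraped values.
countries = []
expenses = []

# Every country sits inside an <item> tag, so collect them all into a list.
items = soup.find_all("item")
print(len(items))  # 2 for this sample
```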
Since there are 190 countries in the world, we are going to run a for loop for each of those countries.
I have divided the expense by 365 because I want to see how much these countries spend on an everyday basis. Obviously, this would have been easier if I had directly divided the given data by 365, but then there would be no point in learning, right?
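A sketch of that loop, reusing the hypothetical sample markup from before; the regular expression that strips currency symbols and commas is an assumed cleanup step, which fits nicely since practicing regular expressions was one of the goals.

```python
import re
from bs4 import BeautifulSoup

# Hypothetical sample markup; the real values come from the scraped page.
html = """
<item><name>Norway</name><expense>$6,187</expense></item>
<item><name>India</name><expense>$63</expense></item>
"""
soup = BeautifulSoup(html, "html.parser")

countries = []
expenses = []

for item in soup.find_all("item"):
    countries.append(item.find("name").text.strip())
    # Strip the currency symbol and commas with a regular expression,
    # then divide the yearly figure by 365 to get a per-day value.
    yearly = float(re.sub(r"[^\d.]", "", item.find("expense").text))
    expenses.append(round(yearly / 365, 2))

data = dict(zip(countries, expenses))
print(data)  # {'Norway': 16.95, 'India': 0.17}
```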
Now you can print “data” to see what it looks like: each country name paired with its daily expense.
Before even starting to plot a graph, we have to prepare a DataFrame using Pandas. If you don’t know what a DataFrame is: it is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). I know that’s a mouthful, right? So just read this article; it will help you a lot.
Creating one is very simple and straightforward.
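For example, with placeholder values standing in for the scraped lists (the column names here are assumptions, not the author's originals):

```python
import pandas as pd

# Placeholder lists standing in for the scraped results.
countries = ["Norway", "India"]
expenses = [16.95, 0.17]

# One column per list; pandas lines the values up row by row.
df = pd.DataFrame({"Country": countries, "Daily expense (USD)": expenses})
print(df)
```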
This project is indicative of data science because the majority of the time was spent collecting and formatting the data. However, now that we have a clean dataset, we get to make some plots! We can use matplotlib or seaborn to visualize the data.
If we aren’t too concerned about aesthetics, we can use the built-in data frame plot method to quickly show results:
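A quick sketch of that built-in plot method, using the same placeholder data and assumed column names as above; the Agg backend keeps the snippet runnable on a machine without a display.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line to show a window
import matplotlib.pyplot as plt
import pandas as pd

# Placeholder values standing in for the scraped results.
df = pd.DataFrame(
    {"Country": ["Norway", "India"], "Daily expense (USD)": [16.95, 0.17]}
)

# The built-in DataFrame plot method gives a quick bar chart.
ax = df.plot(kind="bar", x="Country", y="Daily expense (USD)", legend=False)
ax.set_ylabel("Daily expense (USD)")
plt.tight_layout()
plt.savefig("daily_health_expense.png")
```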
I know the country names are pretty small in the figure, but you can download it and take a closer look. The main thing you can see is that many countries spend way less than a dollar a day, which is pretty shocking. The majority of these countries are in Asia and Africa. In my opinion, the WHO should focus more on these countries than on developed countries in the West.
This is not necessarily a publication-worthy plot, but it’s a nice way to wrap up a small project.
The most effective way to learn technical skills is by doing. While this whole project could have been done manually by inserting values into Excel, I like to take the long view and think about how the skills learned here will help in the future. The process of learning is more important than the final result, and in this project, we were able to see how to use 3 critical skills for data science:
- Web Scraping: Retrieving online data
- Beautiful Soup: Parsing our data to extract information
- Visualization: Showcasing all our hard work
Now, get out there and start your own project and remember: it doesn’t have to be world-changing to be worthwhile.
Feel free to comment and ask me anything. You can follow me on Twitter. Thanks for reading!
Opinions expressed by DZone contributors are their own.