HtmlUnit vs JSoup


Continuing from my earlier blog, Jsoup: nice way to do HTML parsing in Java, in this post I will compare JSoup with a similar framework, HtmlUnit. Both are good HTML parsing frameworks, and both can be used for web application unit testing and web scraping. I will explain why HtmlUnit is better suited for automated web application unit testing and JSoup is better suited for web scraping.

Web application unit testing automation typically means driving web tests from a JUnit framework, while web scraping means extracting unstructured information from the web into a structured format. I also recently tried two decent web scraping tools, Webharvy and Mozenda.

For any HTML parsing tool to be useful, it should support either XPath-based or CSS-selector-based element access. There are a lot of blogs comparing the two approaches, such as Why CSS Locators are the way to go vs XPath, and CSS Selectors And XPath Expressions.


HtmlUnit is a powerful framework that can simulate pretty much anything a browser can do, such as click and submit events, which makes it ideal for automated web application unit testing.
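To illustrate, here is a minimal sketch of that kind of browser simulation, assuming an HtmlUnit 2.x release with the com.gargoylesoftware.htmlunit package names; the URL, form field name, and button name are hypothetical placeholders:

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSubmitInput;
import com.gargoylesoftware.htmlunit.html.HtmlTextInput;

public class HtmlUnitFormTest {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            // Load the page, fill in a form field, and submit it, just as a browser would
            HtmlPage page = webClient.getPage("http://example.com/login");    // hypothetical URL
            HtmlForm form = page.getForms().get(0);
            HtmlTextInput userField = form.getInputByName("username");        // hypothetical field name
            userField.type("testuser");
            HtmlSubmitInput submit = form.getInputByName("submit");           // hypothetical button name
            HtmlPage result = submit.click();
            System.out.println(result.getTitleText());
        }
    }
}
```

In a real test you would wrap this in JUnit assertions on the resulting page rather than printing the title.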

XPath-based parsing is simple and the most popular approach, and HtmlUnit is heavily based on it. In one of my applications I wanted to extract information from the web in a structured way, and HtmlUnit worked out very well for that. The problem starts when you try to extract structured data from modern web applications that use jQuery and other Ajax features and rely extensively on div tags; HtmlUnit and other XPath-based HTML parsers did not work for me there. There is also a JSoup variant that supports XPath via Jaxen. I tried that as well, and guess what? It was also unable to access the data from modern web applications like ebay.com.
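As a point of reference, XPath extraction with HtmlUnit looks roughly like the sketch below; the URL and the XPath expression are made-up placeholders:

```java
import java.util.List;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitXPathExample {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("http://example.com/listings");   // hypothetical URL
            // getByXPath returns the DOM nodes matching the expression
            List<?> links = page.getByXPath("//div[@class='item']//a");         // hypothetical XPath
            for (Object node : links) {
                HtmlAnchor anchor = (HtmlAnchor) node;
                System.out.println(anchor.getTextContent() + " -> " + anchor.getHrefAttribute());
            }
        }
    }
}
```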

Finally, my experience with HtmlUnit was that it is a bit buggy, or perhaps I should call it unforgiving, unlike a browser: if the target web application has missing or broken JavaScript, it throws exceptions. We can get around this, but out of the box it will not work.
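One common way around it, assuming a 2.x release where these settings live on WebClientOptions, is to tell HtmlUnit to swallow script and status-code errors instead of throwing:

```java
import com.gargoylesoftware.htmlunit.SilentCssErrorHandler;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ForgivingHtmlUnitClient {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            // Don't throw on JavaScript errors or on failing HTTP status codes
            webClient.getOptions().setThrowExceptionOnScriptError(false);
            webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);
            // Quietly ignore CSS problems as well
            webClient.setCssErrorHandler(new SilentCssErrorHandler());

            HtmlPage page = webClient.getPage("http://example.com");   // hypothetical URL
            System.out.println(page.getTitleText());
        }
    }
}
```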


The latest version of JSoup deliberately does not support XPath, but it supports CSS selectors very well. In my experience it is excellent for extracting structured data from modern web applications. It is also far more forgiving if the web application has missing or broken JavaScript.
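A minimal JSoup sketch of that kind of extraction, again with a hypothetical URL and selector:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupCssSelectorExample {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page
        Document doc = Jsoup.connect("http://example.com/listings").get();   // hypothetical URL
        // Select elements with a CSS selector instead of an XPath expression
        Elements links = doc.select("div.item h3 > a");                      // hypothetical selector
        for (Element link : links) {
            System.out.println(link.text() + " -> " + link.attr("href"));
        }
    }
}
```

Note that JSoup only parses the HTML it is given; it does not execute JavaScript, which is exactly why missing scripts never trip it up.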

Extracting XPath and CSS Selector data

In most browsers, if you point to an element, right-click, and choose "Inspect element", you can extract that element's XPath. I noticed that Firefox with Firebug can also extract the CSS selector path, as shown below.

Figure: HtmlUnit vs JSoup: extracting the CSS path and XPath of an element in Firebug.
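Once the browser has given you a path, it can be pasted straight into the calls shown earlier. A small hedged sketch, where the page URL and both copied paths are made-up placeholders:

```java
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.select.Elements;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class CopiedPathExample {
    public static void main(String[] args) throws Exception {
        // A copied XPath goes into HtmlUnit's getByXPath...
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("http://example.com");               // hypothetical URL
            List<?> byXPath = page.getByXPath("/html/body/div[2]/ul/li[3]/a");     // made-up copied XPath
            System.out.println(byXPath.size() + " nodes matched");
        }
        // ...and a copied CSS path goes into JSoup's select.
        Elements byCss = Jsoup.connect("http://example.com").get()                 // hypothetical URL
                .select("html body div.content ul > li:nth-child(3) > a");         // made-up copied CSS path
        System.out.println(byCss.size() + " elements matched");
    }
}
```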

I hope this blog helped.



Published at DZone with permission of Krishna Prasad, DZone MVB.

Opinions expressed by DZone contributors are their own.
