Dirty Way to Read the Hacker News Homepage

By Denzel D. · February 27, 2012

I am developing Hackernator, a Hacker News client for Windows Phone. One of the challenges I faced at the start of the project was that I needed to get Hacker News data somehow, and there are two ways to do it. First, I could use the RSS feed: a pretty good solution, but it only gives me the titles of the posted links, while I also wanted the number of comments and points, plus the domain and the author of each link. Second, I could use the iHackerNews API. It would be really easy to handle, but the API itself sometimes has availability issues.

So I decided to go the hard way and read the page manually, walking through the raw HTML content. It ended up being a much more interesting task than I expected. This is what I had to work with:
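
Roughly, each front-page entry looked like the fragment below. This is a reconstruction based on the markers the parser later searches for, not the exact markup of the time; note that some attributes are quoted while others are not:

<td class="title"><a href="http://example.com/article">Gigabit Internet for $80</a> <span class="comhead">(example.com)</span></td>
...
<td class="subtext"><span id=score_3637599>123 points</span> by <a href="user?id=someuser">someuser</a> | <a href="item?id=3637599">45 comments</a></td>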

Classic work with tables. Do you see what some elements have in common? You get bonus points if you noticed td class="title" and td class="subtext". Those are the primary markers for the HTML regions I need to pay attention to: the first wraps the link title, and the second holds the metadata. For a single link, I came up with this model implementation:

namespace Hackernator.Models
{
    public class Link
    {
        public string Title { get; set; }
        public string ID { get; set; }
        public string Domain { get; set; }
        public string Url { get; set; }
        public int Points { get; set; }
        public string Author { get; set; }
        public int Comments { get; set; }
    }
}

Every property here is self-explanatory other than ID, which is the unique identifier assigned to every link on HN. For example, "Gigabit Internet for $80" has the ID 3637599. Using the ID, I can read the comment data for a given link directly on Hacker News instead of going to the link's URL.
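
For instance, turning a parsed link into its comment-thread address is a one-liner; the values below are purely illustrative:

// Build the comment-thread URL from the parsed link's ID
// (illustrative values; any Link with a valid ID works).
Link link = new Link { ID = "3637599", Title = "Gigabit Internet for $80" };
string commentsUrl = "http://news.ycombinator.com/item?id=" + link.ID;
// commentsUrl is now "http://news.ycombinator.com/item?id=3637599"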

Here is the class that parses the content:

using System;
using System.Collections.Generic;
using Hackernator.Models;

namespace Hackernator.HNAPI
{
    public class HackerNewsParser
    {
        public IEnumerable<Link> GetLinksFromRaw(string content)
        {
            List<Link> links = new List<Link>();
            string titleFlag = "<td class=\"title\">";
            string closingFlag = "</td>";
            string dataToManipulate = content;
            int titleLocation = dataToManipulate.IndexOf(titleFlag);
            while (titleLocation != -1)
            {
                Link hLink = new Link();
                // Drop everything before the current title cell.
                dataToManipulate = dataToManipulate.Remove(0, titleLocation);
                int closingIndex = dataToManipulate.IndexOf(closingFlag);
                string subParsedItem = dataToManipulate.Substring(0, closingIndex + closingFlag.Length);
                // Cut the consumed fragment off the front. Remove is safer than
                // Replace here: Replace would strip every identical occurrence.
                dataToManipulate = dataToManipulate.Remove(0, subParsedItem.Length);
                int index = subParsedItem.IndexOf("href");
                if (index != -1)
                {
                    // The URL sits in href="..."; the attribute name plus the
                    // opening quote account for the first six characters.
                    subParsedItem = subParsedItem.Remove(0, index);
                    index = subParsedItem.IndexOf("\">");
                    hLink.Url = subParsedItem.Substring(6, index - 6);
                    // The anchor text up to the closing tag is the title.
                    subParsedItem = subParsedItem.Remove(0, index + 2);
                    index = subParsedItem.IndexOf("</");
                    hLink.Title = subParsedItem.Substring(0, index);
                    // "More" is the pagination link at the bottom of the page.
                    if (hLink.Title != "More")
                    {
                        subParsedItem = subParsedItem.Remove(0, index);
                        // The domain follows the title in parentheses; self posts
                        // have no external domain.
                        index = subParsedItem.IndexOf("(");
                        if (index != -1)
                            hLink.Domain = subParsedItem.Substring(index, subParsedItem.IndexOf(")") - index + 1);
                        else
                            hLink.Domain = "HackerNews";
                        // Points, author, and comments live in the subtext cell.
                        index = dataToManipulate.IndexOf("<td class=\"subtext\">");
                        if (index != -1)
                        {
                            dataToManipulate = dataToManipulate.Remove(0, index);
                            index = dataToManipulate.IndexOf(closingFlag);
                            subParsedItem = dataToManipulate.Substring(0, index + closingFlag.Length);
                            dataToManipulate = dataToManipulate.Remove(0, subParsedItem.Length);
                            // The item ID hides in the score span, e.g.
                            // <span id=score_3637599> (no quotes around this id).
                            index = subParsedItem.IndexOf("score_");
                            if (index != -1)
                            {
                                subParsedItem = subParsedItem.Remove(0, index);
                                index = subParsedItem.IndexOf(">");
                                hLink.ID = subParsedItem.Substring(6, index - 6);
                                subParsedItem = subParsedItem.Remove(0, index + 1);
                            }
                            // The number right before the word "points" is the score.
                            index = subParsedItem.IndexOf("points");
                            if (index != -1)
                            {
                                hLink.Points = Convert.ToInt32(subParsedItem.Substring(0, index));
                                // The next anchor is the author's profile link.
                                index = subParsedItem.IndexOf("\">");
                                subParsedItem = subParsedItem.Remove(0, index + 2);
                                index = subParsedItem.IndexOf("<");
                                hLink.Author = subParsedItem.Substring(0, index);
                                // Skip ahead to the comments anchor.
                                subParsedItem = subParsedItem.Remove(0, index);
                                index = subParsedItem.IndexOf("\">");
                                subParsedItem = subParsedItem.Remove(0, index + 2);
                            }
                            else
                                hLink.Points = 0;
                            // Same idea for the comment count; job posts have none.
                            index = subParsedItem.IndexOf("comments");
                            if (index != -1)
                                hLink.Comments = Convert.ToInt32(subParsedItem.Substring(0, index));
                            else
                                hLink.Comments = 0;
                            links.Add(hLink);
                        }
                    }
                }
                titleLocation = dataToManipulate.IndexOf(titleFlag);
            }
            return links;
        }
    }
}

To sum it all up: I am directly manipulating the string data, iterating through sections and cutting off content I have already processed. My plan to use an XML serializer, or even to go as far as XDocument.Parse on specific fragments, failed because of the markup HttpWebRequest was returning; in fact, I suspect the folks running HN are at the root of the problem. Some attributes use quotes and some don't, which throws the XML parser off with an exception, so I never get the content I need.
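
For completeness, here is a minimal sketch of how the raw HTML could reach the parser on Windows Phone, where downloads are asynchronous; the FrontPageLoader class and the debug output are illustrative, not actual Hackernator code:

using System;
using System.Net;
using Hackernator.HNAPI;
using Hackernator.Models;

namespace Hackernator
{
    // Hypothetical helper: downloads the HN front page and feeds it to the parser.
    public class FrontPageLoader
    {
        public void Load()
        {
            WebClient client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error != null)
                    return; // no connectivity, HN down, and so on

                HackerNewsParser parser = new HackerNewsParser();
                foreach (Link link in parser.GetLinksFromRaw(e.Result))
                {
                    // In the app this would feed a list control; here we just log.
                    System.Diagnostics.Debug.WriteLine(link.Title + " " + link.Domain);
                }
            };
            client.DownloadStringAsync(new Uri("http://news.ycombinator.com/"));
        }
    }
}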

For some fields I first had to make sure they exist at all; the following can be missing (a defensive-parsing sketch follows the list):

  • domain
  • full URL
  • number of points
  • author
  • number of comments
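
Because of that, every numeric conversion deserves a guard. As promised above, the points extraction inside the parser could be hardened with int.TryParse, which returns false on malformed input instead of throwing like Convert.ToInt32 does; a small sketch of the idea, reusing the parser's local variables:

// Defensive variant of the points extraction.
int points;
string rawPoints = subParsedItem.Substring(0, index); // e.g. "123 "
hLink.Points = int.TryParse(rawPoints.Trim(), out points) ? points : 0;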

A good example of such an outlier can be found here. So far I have managed to avoid exceptions around these links thanks to the index verification before each extraction. If you are curious to see a visible result of the work done by the parser shown above, take a look at this screenshot:

PS: This was a quick implementation. An alternative parser could be built with regular expressions, and I will write about it once I complete it. If you have other suggestions on parsing the HN home page, let me know.
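
As a taste of what a regex-based approach might look like, here is a minimal sketch that pulls just the URL and title out of each title cell, where content is the same raw HTML string the parser receives; the pattern is my own guess and would need the same care around unquoted attributes as the string-based parser:

using System.Text.RegularExpressions;

// Hypothetical starting point for a regex-based parser: capture the
// href value and the anchor text inside each <td class="title"> cell.
Regex titlePattern = new Regex(
    "<td class=\"title\"><a href=\"(?<url>[^\"]+)\">(?<title>[^<]+)</a>",
    RegexOptions.IgnoreCase);

foreach (Match m in titlePattern.Matches(content))
{
    string url = m.Groups["url"].Value;
    string title = m.Groups["title"].Value;
    // Points, author, and comments would need similar patterns
    // against the <td class="subtext"> cells.
}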
