# PageRank in 5 minutes

PageRank is one of the base algorithms of Google Search, which uses it to rank web pages by importance and authority, independently of the current query to satisfy. Since it was patented in 1998, PageRank has come to be taught in university courses, much like Quicksort or RANSAC. I promise I won't take more than 5 minutes of your time to explain the fundamentals of how it works.

## The web seen as a directed graph

I'm sure you already know the content of this paragraph, or have already guessed most of it.

The web can be modelled as a directed (oriented) graph, where each page A is a node, connected by an arc (aka directed edge) to B if and only if there is an outgoing link to B on A.

Search engines follow most of the links to get to know a large subset of the graph; they don't follow forms and submit buttons, or links tagged with the *nofollow* directive.

Crawling is not a simple process, and not-so-important pages may be overlooked for a while. It can take some weeks for Google to index a new website, but significantly less if the website is already known to the crawler.

## What Lycos and Yahoo! were doing

The search engines before the advent of Google performed **full-text search over the set of web pages** they had previously crawled. The issue with this kind of search is that there is a gap between the external information provided by the graph and the single web page: being able to use something more than the HTML code of a page would produce better results. After all, the web is a **graph** and not simply a **set** of documents.

For example, the number and text of the links to a page carry some information on how famous that page is.

The problem then becomes how to confirm that these links carry some authority, and are not simply a bunch of pages created to make the linked one seem important. Google is criticized today for an excess of spam in the top results, but in the last 10 years it has greatly reduced the issue thanks to PageRank; still, many Google bombs work even today.

## What Google contributed

PageRank is a query-independent system that can be applied offline to compute a basic score, which can then be merged at query time to select the most relevant pages from a set of results matched with full-text and/or link analysis.

To counteract spamming, **incoming links are weighted with the score of the pages containing them**. Those pages in turn have a score that depends on other pages' scores; there is a recursion in this model, although recursion is not used to solve it.

In practice, **if your blog is linked by adobe.com, it makes a big difference compared to gaining a backlink from a Geocities website** (not sure Geocities exists anymore).

Moreover, **each page's score, when used to vote for other pages, is divided between all its outgoing links**. If your page tries to game the system with some thousands of links, it would only slightly boost the score of each of them.
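The dilution effect can be sketched with a line of arithmetic (the numbers are purely illustrative):

```python
# Hypothetical sketch: a page with score 1.0 splits its vote evenly
# among its outgoing links, so piling on links dilutes each individual
# endorsement rather than multiplying the page's influence.
score = 1.0
for n_links in (1, 10, 1000):
    vote_per_link = score / n_links
    print(n_links, vote_per_link)
```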

## Mathematically speaking...

If we denote by *x* the vector containing the PageRank of web pages, each component *x*j equals the sum, for *i* in *I*, of *x*i/*n*i, where *I* is the set of pages that link to *j* and *n*i is the number of outgoing links on page *i*.

If we define *A*ji as 1/*n*i when page *i* links to page *j* (and 0 otherwise), we get the equation to solve in matrix form:

x = Ax

If you have an engineering background, you may agree with me in saying that this is a well-known problem: since every column of this matrix sums to 1, it has an eigenvalue equal to 1, and we have to find a corresponding eigenvector. That eigenvector is the PageRank.
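To make the formulation concrete, here is a minimal sketch on a hypothetical 3-page web (the graph and the numbers are mine, not from the article): it builds the matrix A and checks that a vector satisfying x = Ax exists.

```python
# Hypothetical 3-page web: page 0 links to 1 and 2,
# page 1 links to 2, page 2 links back to 0.
links = {0: [1, 2], 1: [2], 2: [0]}
n = 3

# A[j][i] = 1/n_i if page i links to page j, else 0:
# each page splits its vote evenly among its outgoing links.
A = [[0.0] * n for _ in range(n)]
for i, outgoing in links.items():
    for j in outgoing:
        A[j][i] = 1.0 / len(outgoing)

# Every column sums to 1, which is why 1 is an eigenvalue of A.
for i in range(n):
    assert abs(sum(A[j][i] for j in range(n)) - 1.0) < 1e-12

# For this particular graph the eigenvector, normalized to sum to 1,
# is x = [0.4, 0.2, 0.4]; verify that it satisfies x = Ax.
x = [0.4, 0.2, 0.4]
for j in range(n):
    assert abs(sum(A[j][i] * x[i] for i in range(n)) - x[j]) < 1e-12
print(x)
```

Note how page 1, whose only inbound link is half of page 0's vote, ends up with the lowest score.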

The equation can be solved numerically, for example with the power iteration method. Theoretically, this method converges only if the graph is strongly connected (which means there are no "islands" consisting of pages which do not share links with the main part of the web); so we add a damping factor *m*, divided between the *n* pages, in order to avoid any 0 value in A; the matrix is then normalized again.
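A minimal power iteration sketch, with the damping factor folded into each step (the graph, *m* = 0.15, and the iteration count are illustrative assumptions; dangling pages are not handled):

```python
def pagerank(links, n, m=0.15, iters=100):
    """Power iteration sketch with damping factor m.

    links: dict mapping each page to the list of pages it links to.
    Each step: a page's new score is m/n (the damping/random-jump
    share) plus (1 - m) times the votes received via incoming links.
    """
    x = [1.0 / n] * n
    for _ in range(iters):
        new = [m / n] * n
        for i, outgoing in links.items():
            share = (1 - m) * x[i] / len(outgoing)
            for j in outgoing:
                new[j] += share
        x = new
    return x

# Toy 3-page graph: 0 -> 1, 2; 1 -> 2; 2 -> 0.
links = {0: [1, 2], 1: [2], 2: [0]}
scores = pagerank(links, 3)
print(scores)  # page 1, fed only by a split vote, scores lowest
```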

Values of PageRank are usually presented on a logarithmic scale, for example in the Google toolbar, due to the large range of possible values for *x*i.

## The random surfer interpretation

The *random surfer interpretation* tries to explain PageRank with a human-powered example:

- Place a user on a page with PageRank *x*.
- He will stay on that page for a time proportional to *x*.
- He will then move to one of the outgoing links, each with equal probability (remember this is a model, and a simplified one, that has to scale over the whole web).
- At each step, with a small probability *m* (the damping factor), the random surfer jumps away to another, unrelated web page; for example, he grows tired of following links and enters a URL in the location bar. With probability *1 - m* he continues to follow links.
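The random surfer can be simulated directly: the fraction of time spent on each page approximates its PageRank. A Monte Carlo sketch under assumed parameters (*m* = 0.15, step count and seed chosen for illustration):

```python
import random

def random_surfer(links, n, m=0.15, steps=200_000, seed=42):
    """Monte Carlo sketch of the random-surfer model.

    At each step the surfer jumps to a uniformly random page with
    probability m, otherwise follows a random outgoing link.
    """
    random.seed(seed)
    visits = [0] * n
    page = 0
    for _ in range(steps):
        visits[page] += 1
        if random.random() < m or not links.get(page):
            page = random.randrange(n)          # random jump (damping)
        else:
            page = random.choice(links[page])   # follow an outgoing link
    return [v / steps for v in visits]

# Same toy 3-page graph as before: 0 -> 1, 2; 1 -> 2; 2 -> 0.
links = {0: [1, 2], 1: [2], 2: [0]}
freq = random_surfer(links, 3)
print(freq)  # visit frequencies approximate the PageRank vector
```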

## Resources

The Wikipedia page has an example that involves a small web of 4 pages.

The 25 Billion Dollar Eigenvector explains more of the mathematical side of PageRank; it is linear algebra you could learn in any graduate-level course.

In *The Myths of Innovation*, Scott Berkun recounts how the Google founders' ideas were rejected by Yahoo! and how they decided to start out on their own.

Opinions expressed by DZone contributors are their own.
