Search at the Guardian Newspaper

I had the privilege of attending a Search at the Guardian event a while back, organised by Tyler Tate of the Enterprise Search London MeetUp group and Martin Belam of the Guardian newspaper. I say “privilege”, as it seems all 60 places were snapped up within a matter of days, so I consider myself fortunate to have grabbed a place. Seems like Search has gone viral this last week … anyway, it was well worth the trip as the Guardian put on a great show, consisting of talks from their technology development team about the challenges in providing a searchable archive of all their content.

To add an extra note of personal interest, they are actually in the process of migrating from Endeca (for whom I currently work) over to Apache Solr, and with it embracing the wider opportunities of providing open access to their data and search services. One of their key goals in doing this is to enable the development community at large to create value-adding apps and services on top of their data and API, thus transforming the Guardian’s role from publisher to content platform.

By their own admission, they haven’t got the best out of their Endeca investment, and have allowed their installation to get wildly out of date and unsupported. So what they have on their live site is hardly representative of a typical Endeca deployment. That said, I think there are some basic user experience issues they could improve, regardless of platform. In particular, I think there are significant issues around their implementation of faceted search and the overall design of their results pages. In addition, I think there are some missed opportunities regarding the extent to which the current site supports a serendipitous discovery experience (something which a site like this, if designed appropriately, should really excel at). If I get the chance I’ll provide a fuller review, but for now it is probably instructive to refer to the Endeca UI Design Pattern Library, in particular the entries for Faceted Navigation: Vertical Stack, Search Box, and Search Results: Related Content. These patterns provide much of the background necessary for addressing the immediate issues. (NB although these patterns are published by my colleagues at Endeca, the guidance is essentially platform-agnostic and applies to search and discovery experiences in general.)

But let’s get back to talking about the event itself. All the half dozen or so presentations were valuable and instructive, but as a UX specialist I particularly enjoyed Martin Belam’s talk on “Why news search fails… and what you can do about it”. I have a lot of sympathy with Martin’s observations about the Guardian’s site users and their expectations that the search engine should be able to “read minds”. In particular, he cited the classic problems caused by underspecified or incomplete queries (i.e. should a search for a single word such as “Chile” return stories of mining accidents or football reports?). Interestingly, this type of phenomenon is exactly the sort that should be reduced through features such as Google Instant – if you can see the mix of results your query will return before you hit enter, you are more likely to provide the context needed for adequate disambiguation.

Martin also talked about the “long tail” of search queries, i.e. the hapax legomena that occur in any search log. Search logs, like most natural (language) phenomena, display a Zipfian distribution, i.e. term rank and frequency are inversely related by a power law. In the Guardian’s case, this means a typical day can produce some 17,000 unique queries, most consisting of idiosyncratic edge cases. However, a few common patterns do re-occur, including:

  • People’s names (which are often incomplete, as alluded to above)
  • Dates (which Martin argued were highly generative and therefore not easily matched by regular expressions, but based on my experiences with named entity recognition at Reuters, I’d be more optimistic about the prospects for this)
  • Misspellings and typographic errors (which in many cases I’d argue are addressable through Auto-correct and Did You Mean techniques, i.e. string-edit distance against a cumulative list of known terms)

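The third bullet describes the classic “Did You Mean” technique: compare the user’s query against a cumulative list of known terms using string-edit distance. A minimal sketch of that idea might look like the following (the vocabulary and threshold are illustrative assumptions, not the Guardian’s actual implementation):

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                    # deletion
                            curr[j - 1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))      # substitution
        prev = curr
    return prev[-1]

def did_you_mean(query, known_terms, max_distance=2):
    """Return the closest known term within max_distance edits, or None."""
    best, best_dist = None, max_distance + 1
    for term in known_terms:
        d = edit_distance(query.lower(), term.lower())
        if d < best_dist:
            best, best_dist = term, d
    return best

# Hypothetical vocabulary harvested from past queries and article text
vocab = ["chile", "chess", "boxing", "guardian", "football"]
print(did_you_mean("guardain", vocab))  # -> guardian
```

In practice one would index the vocabulary (e.g. with n-gram buckets or a BK-tree) rather than scan it linearly, and weight candidates by query-log frequency so common corrections win ties.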
Also intriguing was the observation that only 1% of their current page views are search-driven – I wonder how this will change as consumption of their content increasingly occurs in a mobile context, with users engaging in highly goal-driven, spontaneous or impulsive tasks, for which search is the obvious entry point. He also outlined some of the ways in which their site search exploits context and metadata to deliver a richer experience (than web search), and uses manually assigned tags to dynamically generate topical landing pages for arbitrary query combinations (e.g. “chess” and “boxing”). Martin also alluded to a vision of using “multiple search boxes” to infer the user’s intent based on local context (but I’d prefer to think of this as multiple instances of a single search box).
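Generating a landing page for an arbitrary tag combination boils down to an AND query over the tag sets attached to each article. A toy sketch, assuming a hypothetical in-memory article store (the Guardian’s real tagging pipeline and Content API are of course far richer):

```python
# Illustrative data only - titles and tags are invented for this example.
articles = [
    {"title": "Carlsen retains world title",   "tags": {"chess", "sport"}},
    {"title": "Chess boxing comes to London",  "tags": {"chess", "boxing"}},
    {"title": "Heavyweight bout preview",      "tags": {"boxing", "sport"}},
]

def landing_page(*tags):
    """Return titles of articles carrying every requested tag (an AND query)."""
    wanted = set(tags)
    return [a["title"] for a in articles if wanted <= a["tags"]]

print(landing_page("chess", "boxing"))  # -> ['Chess boxing comes to London']
```

At scale this is exactly the kind of conjunctive filter an inverted index (Solr’s filter queries, for instance) answers cheaply, which is what makes tag-driven landing pages feasible for arbitrary combinations.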

One final point – surely all that manual tagging is insanely time-consuming and non-scalable? I understand of course the need to apply human editorial quality control, but at Reuters even back in 2002 we were using semi-automated text categorization solutions to successfully tag over 11,000 stories a day (and had been doing so for many years previously). I’m a bit surprised the Guardian appears to be so reliant on manual methods, and am curious to know how they view the trade-off between efficiency, accuracy, and throughput.

So all in all, a very productive and enjoyable evening – thanks again to Tyler and Martin for making this happen.
