Measuring Search Quality: Retrieval and Relevance Metrics
I am trying to put together a framework for search quality evaluation for a specialist information provider.
At the moment, quality is measured by counting the number of hits for certain key documents across various queries and monitoring changes on a regular schedule. I’d like to broaden this out into something more scalable and robust, from which a more extensive range of metrics can be calculated. (As an aside, I know there are many ways of evaluating the overall search experience, but I’m focusing solely on ranked retrieval and relevance here.)
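For illustration, here is a minimal Python sketch of this kind of check, assuming a hypothetical search(query) function that returns a ranked list of document IDs:

```python
# Minimal sketch of the current monitoring approach (illustrative only):
# for each query, check how many of its key documents appear in the
# top N results, and track the overall fraction over time.

def key_doc_hit_rate(search, key_docs_by_query, top_n=10):
    """search: hypothetical function mapping a query string to a
    ranked list of document IDs; key_docs_by_query: dict mapping
    each query to the IDs of its key documents."""
    hits, total = 0, 0
    for query, key_docs in key_docs_by_query.items():
        top = set(search(query)[:top_n])
        hits += len(top & set(key_docs))
        total += len(key_docs)
    return hits / total if total else 0.0
```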
We are in the fortunate position of being able to acquire binary relevance judgements from subject-matter experts (SMEs), so we can aspire to something like the TREC approach:
http://trec.nist.gov/data/reljudge_eng.html
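Binary judgements of this kind are enough to compute the standard ranked-retrieval metrics. As a rough sketch (again in Python, with illustrative names), precision@k and average precision for a judged query could be computed like this:

```python
# relevant: set of document IDs judged relevant by the SMEs for a query;
# ranking: the engine's ranked result list for the same query.

def precision_at_k(ranking, relevant, k):
    """Fraction of the top k results that are judged relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def average_precision(ranking, relevant):
    """Mean of precision@k at each rank k where a relevant document
    is retrieved; averaging this over queries gives MAP."""
    if not relevant:
        return 0.0
    score, found = 0.0, 0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            found += 1
            score += found / rank
    return score / len(relevant)
```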
But of course we are running just a single site search engine here, so we can’t pool results across runs from many different systems to produce a consolidated ‘gold standard’ result set, as you would in the TREC framework.
I am sure this scenario repeats the world over. One solution I can think of is to run your existing search engine with various alternative configurations, e.g. precision-oriented, recall-oriented, freshness-oriented, etc., and aggregate the top N results from each to emulate the pooling approach; a rough sketch of this idea follows below. Can anyone suggest any others? Or perhaps an alternative method entirely?
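For concreteness, here is a rough sketch of that pooling idea, where run_config(name, query) is a hypothetical function that runs a query against a named engine configuration:

```python
# Emulate TREC-style pooling with a single engine: take the union of
# the top N results from several alternative configurations, and send
# the pooled documents to the SMEs for judgement.

def build_pool(run_config, query, config_names, top_n=20):
    pool = set()
    for name in config_names:
        pool.update(run_config(name, query)[:top_n])
    return pool

# Illustrative usage with hypothetical configuration names:
# pool = build_pool(run_config, "pension tax relief",
#                   ["precision", "recall", "freshness"])
```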
Jason Hull, one of DZone's MVBs, recommended looking at this article for some answers.