How do You Measure the Impact of Tagging on Search Retrieval?

by Tony Russell-Rose · May 31, 2012

A client of mine wants to measure the difference between manual tagging and auto-classification of unstructured documents, focusing in particular on the impact on retrieval (i.e. relevance ranking). At the moment they are considering two contrasting approaches:

  1. Create a list of all the insertions and deletions (i.e. instances where the auto and manual tags differ for a given document) and sort it by frequency. Take the tags that appear more than a given number of times (say 20) and count how often each appears as a search term in the top 1000 queries for the past 6 months. Include exact matches (where a tag and a query term are identical) and partial matches (where a tag is wholly included in a query), but exclude everything else. For tags that don't appear in the top 1000, assume a notional frequency of, say, 70. Then divide the resulting figure by the total number of queries over the past 6 months. This gives you a measure of how important those insertions and deletions are, and thus of the impact of manual tagging on retrieval. (A sketch of this calculation appears after the list.)

  2. Run a controlled experiment in which the tagging condition is the independent variable and the relevance ranking is the dependent variable. Use a benchmark set of queries and relevance judgements, and calculate precision and recall. (See the second sketch after the list.)
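
To make approach 1 concrete, here is a minimal sketch of the calculation in Java. The Document record, the method signature, and the parameter names are illustrative assumptions rather than an existing API; the thresholds simply echo the figures above (20, 1000, 70).

import java.util.*;

public class TagDiffImpact {

    // Hypothetical document record: an ID plus its manual and auto tag sets.
    record Document(String id, Set<String> manualTags, Set<String> autoTags) {}

    /**
     * Returns the share of overall query volume accounted for by tags on
     * which the manual and automatic tagging disagree.
     */
    static double tagDiffImpact(List<Document> docs,
                                Map<String, Long> topQueries,   // top 1000 queries -> frequency
                                long totalQueries,              // all queries, past 6 months
                                int minDiffCount,               // e.g. 20
                                long notionalFrequency) {       // e.g. 70
        // 1. Count how often each tag appears as an insertion or deletion,
        //    i.e. in one tag set for a document but not the other.
        Map<String, Integer> diffCounts = new HashMap<>();
        for (Document d : docs) {
            Set<String> diff = new HashSet<>(d.manualTags());
            diff.addAll(d.autoTags());                       // union of the two tag sets...
            Set<String> common = new HashSet<>(d.manualTags());
            common.retainAll(d.autoTags());
            diff.removeAll(common);                          // ...minus their intersection
            diff.forEach(tag -> diffCounts.merge(tag, 1, Integer::sum));
        }

        // 2. Keep only the tags that differ more than the threshold.
        List<String> frequentDiffs = diffCounts.entrySet().stream()
                .filter(e -> e.getValue() > minDiffCount)
                .map(Map.Entry::getKey)
                .toList();

        // 3. For each such tag, sum the frequencies of the top queries that
        //    contain it (covering exact matches and partial matches where the
        //    tag is wholly included in the query). Tags that never appear in
        //    the top queries get the notional frequency instead.
        long impact = 0;
        for (String tag : frequentDiffs) {
            long freq = topQueries.entrySet().stream()
                    .filter(e -> e.getKey().contains(tag))
                    .mapToLong(Map.Entry::getValue)
                    .sum();
            impact += (freq > 0) ? freq : notionalFrequency;
        }

        // 4. Normalise by the total query volume over the same period.
        return (double) impact / totalQueries;
    }
}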

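And a minimal sketch of approach 2, assuming a hypothetical benchmark: a set of relevance judgements per query and one ranking produced under each tagging condition. Precision and recall are computed at a fixed cut-off; the document IDs and the cut-off of 5 are made up for illustration.

import java.util.*;

public class RankingEvaluation {

    /** Precision at k: the fraction of the top-k results that are relevant. */
    static double precisionAtK(List<String> ranking, Set<String> relevant, int k) {
        long hits = ranking.stream().limit(k).filter(relevant::contains).count();
        return (double) hits / k;
    }

    /** Recall at k: the fraction of all relevant documents retrieved in the top k. */
    static double recallAtK(List<String> ranking, Set<String> relevant, int k) {
        long hits = ranking.stream().limit(k).filter(relevant::contains).count();
        return (double) hits / relevant.size();
    }

    public static void main(String[] args) {
        // Hypothetical benchmark: relevance judgements for one query, and the
        // ranking produced under each tagging condition (the independent variable).
        Set<String> relevant = Set.of("d1", "d3", "d7");
        List<String> manualTagRanking = List.of("d1", "d3", "d2", "d7", "d9");
        List<String> autoTagRanking   = List.of("d2", "d1", "d9", "d4", "d3");

        int k = 5;
        System.out.printf("manual tags: P@%d=%.2f R@%d=%.2f%n",
                k, precisionAtK(manualTagRanking, relevant, k),
                k, recallAtK(manualTagRanking, relevant, k));
        System.out.printf("auto tags:   P@%d=%.2f R@%d=%.2f%n",
                k, precisionAtK(autoTagRanking, relevant, k),
                k, recallAtK(autoTagRanking, relevant, k));
    }
}
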
Surprisingly (to me, at least), there seems to be some debate as to which is the better approach.

Which one would you choose, and why?

Published at DZone with permission of Tony Russell-Rose, DZone MVB. See the original article here.
