Once upon a time, as a young database developer, I ran into the Soundex algorithm and felt like I had experienced a miracle.
Years later, I tried to use Soundex on a much larger and more varied dataset, and was disappointed: the vast majority of similar items went uncaught.
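For readers who haven't met it, American Soundex is tiny: keep the first letter, map the remaining consonants to six digit classes, skip vowels (and treat h/w as transparent), collapse repeats, and pad to four characters. A minimal sketch:

```python
def soundex(name: str) -> str:
    """American Soundex: first letter plus three digits, e.g. 'Robert' -> 'R163'."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = codes.get(name[0], "")      # first letter's code suppresses an immediate repeat
    for ch in name[1:]:
        if ch in "hw":                 # h/w are transparent: they do not reset prev
            continue
        code = codes.get(ch, "")       # vowels and y map to "" and reset prev
        if code and code != prev:
            digits.append(code)
        prev = code
    return (first + "".join(digits) + "000")[:4]
```

So "Robert" and "Rupert" both become R163, while a near-match like "Roberts" (R1632, truncated to R163) still collides; but anything that diverges early in the word, or in its first letter, is lost entirely, which is exactly the weakness I ran into.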
More recently I discovered Freebase, whose ability to integrate many different kinds of data from many different sources, very flexibly, impressed me again. Freebase's data is user-maintained, with completely flexible schemata (a folksonomy); so how did it avoid massive loads of inaccessible or poorly linked data?
Well, Google was impressed too, and about a year ago turned Freebase's technology into one of their own products: Google Refine (see intro videos on the official site if you don't know the product already).
Freebase is fun to browse, and can easily eat up too much of your time (it's like a semantic Wikipedia), which is why I've just closed it. But how does it work? More specifically, how do Freebase and Google Refine solve the clustering problem, a very specific data-centric subset of the massive general 'what is similarity?' problem in AI (which Douglas Hofstadter's Copycat famously tried to address; see also its follow-up, Metacat)?
Google Refine provides a very brief introduction to its clustering methods here. For deeper discussion, with technical papers and a library, see the Vicino project. (I enjoyed The Similarity Metric, which applies the basic information-theoretic notion of Kolmogorov complexity to the problem of informational proximity.)
If you prefer to learn about Freebase's Graphd database system through its query language (which usually helps me, and which kind of lies behind the whole idea of 'NoSQL'), look over the manual for Freebase's Metaweb Query Language (MQL).
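To give a taste of why MQL feels so different from SQL: queries are JSON templates, and the empty slots are what you're asking for. A query of the kind the manual opens with (band name assumed for illustration) looks like:

```json
{
  "type": "/music/artist",
  "name": "The Police",
  "album": []
}
```

You hand the graph a partially filled-in object, and it hands it back with the empty list of albums filled in; query-by-example against a schemaless graph, rather than joins against tables.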