Eric Fischer used CLD to create the colorful Twitter language map, and since then further language maps have appeared, e.g. for New York and London. What a multi-lingual world we live in!
Then, just a few weeks ago, I received an out-of-the-blue email from Dick Sites, creator of CLD, with great news: he was finishing up version 2.0 of CLD and had already posted the source code as a new project.
So I've now reworked the Python bindings and ported the unit tests to Python (they pass!) to take advantage of the new features. It was much easier this time around since the CLD2 sources were already pulled out into their own project (thank you Dick and Google!).
There are a number of improvements over the previous version of CLD:
- Improved accuracy.
- Upgraded to Unicode 6.2 characters.
- More languages detected: 83 languages, up from 64 previously.
- A new "full language table" detector, available in Python as a separate `cld2full` module, that detects 161 languages. This increases the C library size from 1.8 MB (for 83 languages) to 5.5 MB (for 161 languages). Details are here.
- An option to identify which parts (byte ranges) of the text contain which language, in case the application needs to do further language-specific processing. From Python, pass the optional `returnVectors=True` argument to get the byte ranges, but note that this incurs non-trivial additional CPU cost. This wiki page shows very interesting statistics on how frequently different languages appear in one page, across top web sites, underscoring the importance of handling multiple languages in a single text input.
- A new `hintLanguageHTTPHeaders` parameter, which you can pass from the `Content-Language` HTTP header. CLD2 will also spot any `lang=X` attribute inside the `<html>` tag itself (if you pass it HTML).
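To make the byte-range option above concrete, here is a minimal sketch of consuming the vectors result. It assumes each vector entry is an `(offset, numBytes, languageName, languageCode)` tuple, as in these bindings' documentation; the sample data below is hand-written for illustration, not output from a real `cld2.detect` call:

```python
# -*- coding: utf-8 -*-
# The detector works on UTF-8 bytes, so slicing must use byte offsets.
text = u"Hello world. Bonjour tout le monde.".encode("utf-8")

# What the vectors component of a call like
# cld2.detect(text, returnVectors=True) might look like (hypothetical values):
vectors = (
    (0, 13, "ENGLISH", "en"),
    (13, 22, "FRENCH", "fr"),
)

# Slice out each byte range and hand it to language-specific processing.
for offset, num_bytes, lang_name, lang_code in vectors:
    span = text[offset:offset + num_bytes]
    print("%s (%s): %r" % (lang_name, lang_code, span.decode("utf-8")))
```

Slicing the original bytes (rather than a decoded string) matters here, since the offsets count bytes, not characters.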
In the new Python bindings, I've exposed CLD2's `debug*` flags, to add verbosity to CLD2's detection process. This document describes how to interpret the resulting output.
The `detect` function returns up to the top 3 detected languages. Each detected language includes the percentage of the text detected as that language, along with a confidence score. The function no longer returns a single "picked" summary language, and the `pickSummaryLanguage` option has been removed: that option was apparently present for internal backwards-compatibility reasons and did not improve accuracy.
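As a sketch of working with this result, the snippet below unpacks a hand-written sample value. It assumes the bindings return an `(isReliable, textBytesFound, details)` tuple whose `details` holds up to 3 `(languageName, languageCode, percent, score)` entries; confirm the exact shape against the README, since the sample here is not from a real `cld2.detect` call:

```python
# Hypothetical detect() result for a mostly-French input (made-up values).
result = (
    True,   # isReliable: detector's overall confidence in the result
    35,     # textBytesFound: how many bytes of real text were examined
    (       # up to 3 (languageName, languageCode, percent, score) entries
        ("FRENCH", "fr", 62, 1187.0),
        ("ENGLISH", "en", 38, 1100.0),
    ),
)

is_reliable, text_bytes_found, details = result
for name, code, percent, score in details:
    print("%s (%s): %d%% of text, score %.1f" % (name, code, percent, score))
```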
Remember that the provided input must be valid UTF-8 bytes; otherwise anything from wrong results to a segmentation fault can occur.
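One simple way to guard against that is to decode and re-encode the input, so the bytes handed to the detector are guaranteed valid UTF-8 (invalid sequences are replaced rather than passed through). This is a stdlib-only sketch; `cld2` itself is not called here:

```python
# -*- coding: utf-8 -*-
def to_valid_utf8(raw_bytes):
    """Return raw_bytes with any invalid UTF-8 sequences replaced by U+FFFD."""
    return raw_bytes.decode("utf-8", errors="replace").encode("utf-8")

good = u"h\u00e9llo".encode("utf-8")
bad = b"h\xe9llo"  # Latin-1 bytes; 0xE9 is not valid UTF-8 here

assert to_valid_utf8(good) == good         # valid input passes through unchanged
print(to_valid_utf8(bad))                  # the stray 0xE9 becomes U+FFFD
```

Replacement does lose information, so if your pipeline knows the real source encoding (e.g. Latin-1), decoding from that encoding first is better than scrubbing.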
To see the list of detected languages, just run `python -c "import cld2; print cld2.DETECTED_LANGUAGES"`, or `python -c "import cld2full; print cld2full.DETECTED_LANGUAGES"` to see the full set of languages.
The README gives details on how to build and install CLD2.
Once again, thank you Google, and thank you Dick Sites for making this very useful library available to the world as open-source.