Simple Photo Search with Solr and Tika
Recently we had a chance to help with a non-commercial project that included search. One of the requirements, although not a key one, was photo search functionality, so that users could find their pictures quickly and accurately. Because the search had to work with the metadata of JPEG files, the idea was simple – use Apache Solr with Apache Tika.
The assumptions were quite simple – the user should be able to find photos by their file name, author, and other data available in EXIF, like aperture, shutter speed, focal length, or ISO value. Another requirement was that Solr should take care of extracting the metadata from the JPEG files, so this was definitely something we wanted to use Solr Cell for. As you can see, those assumptions were simple.
The index structure was very simple and contained only the most needed fields. The fields section of the schema.xml file looked as follows:
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="text" indexed="true" stored="true" />
<field name="author" type="text" indexed="true" stored="true" />
<field name="iso" type="text" indexed="true" stored="true" multiValued="true" />
<field name="iso_string" type="text" indexed="true" stored="true" multiValued="true" />
<field name="aperture" type="double" indexed="true" stored="true" />
<field name="exposure" type="string" indexed="true" stored="true" />
<field name="exposure_time" type="double" indexed="true" stored="true" />
<field name="focal" type="string" indexed="true" stored="true" />
<field name="focal_35" type="string" indexed="true" stored="true" />
<dynamicField name="ignored_*" type="string" indexed="false" stored="false" multiValued="true" />
The dynamic field was added to ignore the data we weren't interested in. A copyField was also introduced to copy the iso field value to the iso_string field to enable faceting.
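The copyField itself is not shown in the schema excerpt above; assuming the standard schema.xml syntax, it would look like this:

```xml
<copyField source="iso" dest="iso_string" />
```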
The following handler definition was added to solrconfig.xml file:
<requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
 <lst name="defaults">
  <str name="uprefix">ignored_</str>
  <str name="lowernames">true</str>
  <str name="captureAttr">true</str>
  <str name="fmap.stream_name">name</str>
  <str name="fmap.artist">author</str>
  <str name="fmap.exif_isospeedratings">iso</str>
  <str name="fmap.exif_fnumber">aperture</str>
  <str name="fmap.exposure_time">exposure</str>
  <str name="fmap.exif_exposuretime">exposure_time</str>
  <str name="fmap.focal_length">focal</str>
  <str name="fmap.focal_length_35">focal_35</str>
 </lst>
</requestHandler>
A few words about the configuration. The uprefix parameter tells Solr which prefix to use for fields that were not explicitly mentioned in the handler configuration. In the above case, such fields will be prefixed with ignored_, which means they will be matched by the dynamic field and thus neither indexed nor stored (indexed="false" and stored="false"). The lowernames parameter set to true causes all field names to be lowercased. The captureAttr parameter tells Solr to capture file attributes. The remaining parameters define the mapping between the fields returned by Tika and the fields in the index. For example, fmap.exif_fnumber with the value of aperture tells Solr to place the value of Tika's exif_fnumber field in the aperture index field.
Additional, needed libraries
In order for the above configuration to work we need some additional libraries (similar to the ones described in language identification). From the dist directory of the Solr distribution we copy the apache-solr-cell-3.5.0.jar file to a tikaLib directory created at the same level as the webapps directory in the Solr deployment (of course this is just an example layout). Next we add the following line to the solrconfig.xml file:
<lib dir="../tikaLib/" />
The above tells Solr to include all the libraries from the given directory. Next we need to copy all the jar files from the contrib/extraction/ directory of the Solr distribution to the newly created tikaLib directory. No further solrconfig.xml changes are needed.
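Putting the library setup together, a small script like the following could automate it. This is only a sketch – SOLR_HOME, SOLR_DIST, and the directory layout are assumptions about a local installation, not from the original deployment:

```shell
# Create the shared library directory next to webapps and copy in
# Solr Cell plus the Tika jars from contrib/extraction.
# SOLR_HOME and SOLR_DIST are assumptions about your local layout.
SOLR_HOME="./solr-deployment"
SOLR_DIST="./apache-solr-3.5.0"

mkdir -p "$SOLR_HOME/tikaLib"

# Guarded so the script is a no-op when run outside a real install:
if [ -d "$SOLR_DIST/dist" ]; then
  cp "$SOLR_DIST/dist/apache-solr-cell-3.5.0.jar" "$SOLR_HOME/tikaLib/"
fi
if [ -d "$SOLR_DIST/contrib/extraction/lib" ]; then
  cp "$SOLR_DIST/contrib/extraction/lib/"*.jar "$SOLR_HOME/tikaLib/"
fi
```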
The assumption was that there would be about 10,000 new photos a week to index. Those photos would be stored in a shared file system location. A simple bash script was responsible for choosing the files that needed to be indexed, and for each file it ran the following command:
curl 'http://solrmaster:8983/solr/photos/update/extract?literal.id=9926&commit=true' -F 'myfile=@Wisla_2011_10_10.JPG'
The above command sends a file named Wisla_2011_10_10.JPG to the /update/extract handler and tells Solr to run a commit after processing it. In addition, the unique id of the document is set (the literal.id parameter).
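The original bash script is not shown; a minimal sketch of its indexing loop could look like the following. The host, core name, file list, and sequential id scheme are assumptions. Instead of calling curl directly, this version prints each command so the batch can be reviewed (or piped to sh) – swap the echo for a direct call in production:

```shell
# Sketch of the indexing loop: one /update/extract request per photo,
# each with a unique literal.id. All concrete values are assumptions.
SOLR_EXTRACT="http://solrmaster:8983/solr/photos/update/extract"

index_photo() {
  # $1 = photo file, $2 = unique document id
  echo "curl '${SOLR_EXTRACT}?literal.id=${2}&commit=true'" \
       "-F 'myfile=@${1}'"
}

id=9926
for f in Wisla_2011_10_10.JPG Wisla_2011_10_11.JPG; do
  index_photo "$f" "$id"
  id=$((id + 1))
done
```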
In addition to standard filtering by author or other attributes of the photo, it was also desired for the search to just work. Yeah, just work. We decided that if we were the users of the application, we would want fields like author or file name to be important. So, we decided to start with a query in which those fields are boosted at query time.
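The original query was not preserved in this excerpt; assuming the dismax query parser, a query of this shape, with purely hypothetical boost values, might look like:

```
q=Wisla&defType=dismax&qf=name^10 author^5 exposure focal iso_string
```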
The idea is simple. Two fields in the index are more valuable than the others: the name of the photo and its author. The importance of those fields was raised by adding query-time boosts. The rest of the fields carry no explicit boost, so the default boost of 1 applies.
To sum up
The described deployment is really simple, and both the application and the search work well. The next steps will be JVM and Solr tuning. One of the most important tasks will be looking at user behavior and tuning the searches to make the search experience as good as possible. But let's leave that for another solr.pl post.