How I Made a Neural Network Web Application in an Hour with Python
I decided to rapidly prototype an image recognition web application that uses a neural network for computer vision.
Computer vision is an exciting and quickly growing set of data science technologies. It has a broad range of applications, from industrial quality control to disease diagnosis. I have dabbled with a few different technologies that fall under this umbrella before, and I decided it would be a worthwhile endeavor to rapid-prototype an image recognition web application that used a neural network.
I used a deep learning framework called Caffe, created by the Berkeley Vision and Learning Center. There are several other comparable deep learning frameworks, like Chainer, Theano, and Torch7, that were candidates, but I chose Caffe due to my previous experience with it. Caffe has a set of Python bindings, which is what I made use of for this project. If you're interested in more of the theory behind deep learning and neural networks, I recommend this page by Michael Nielsen.
To begin, I installed all the Caffe dependencies onto an AWS t2.medium instance running Ubuntu 14.04 LTS. (Installation instructions for 14.04 LTS can be found here.) I elected to build Caffe in CPU-only mode, skipping CUDA, because I'm not training my own neural network for this project. I obtained two pre-trained models from the BVLC Model Zoo, called GoogLeNet and AlexNet. Both of these models were trained on ImageNet, a standard set of about 14 million images.
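With Caffe built in CPU-only mode, the Python bindings need to be pointed at the build and told to stay on the CPU. A minimal sketch, assuming Caffe was cloned and built under the home directory (the caffe_root path is my assumption; adjust it to your install):

import sys

caffe_root = "/home/ubuntu/caffe/"  # assumed install location
sys.path.insert(0, caffe_root + "python")  # make the pycaffe bindings importable

import caffe
caffe.set_mode_cpu()  # run all forward passes on the CPU; no CUDA required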
Now that I had all the prerequisites installed, I opened up the Exaptive IDE and started a fresh xap (what we like to call web applications built in Exaptive). I started by creating a new Python component to hold the Caffe code necessary to identify an image. I named the new component "GoogLeNet" after the neural net model I wanted to use first.
My new GoogLeNet component in the IDE, ready for coding.
Then I wrote the Caffe code in Python.
First, we instantiate a Caffe image classifier.
net = caffe.Classifier(
    reference_model,         # network definition (.prototxt)
    reference_pretrained,    # pretrained weights (.caffemodel)
    mean=imagenet_mean,      # mean ImageNet pixel to subtract from inputs
    channel_swap=(2, 1, 0),  # Caffe's reference models expect BGR, not RGB
    raw_scale=255,           # rescale pixel values from [0, 1] to [0, 255]
    image_dims=(256, 256))   # resize inputs to 256x256 before cropping
The reference_model is a filepath to a set of config options for the network; Caffe provides a stock model definition for this. The reference_pretrained is another filepath that points to the pretrained GoogLeNet model from the Model Zoo.
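For reference, here is roughly what those inputs look like. The exact paths are my assumptions and depend on where the Model Zoo files were downloaded; the ImageNet mean comes from the .npy file that ships with Caffe's Python bindings:

import numpy as np

reference_model = caffe_root + "models/bvlc_googlenet/deploy.prototxt"
reference_pretrained = caffe_root + "models/bvlc_googlenet/bvlc_googlenet.caffemodel"

# Per-channel mean pixel of the ImageNet training set, computed the same
# way as in Caffe's stock classification example.
imagenet_mean = np.load(
    caffe_root + "python/caffe/imagenet/ilsvrc_2012_mean.npy").mean(1).mean(1)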
We grab the input image filepath and use Caffe methods to load it.
image_file = inevents["image"]  # filepath handed in by the file drop target
input_image = caffe.io.load_image(image_file)  # RGB float array with values in [0, 1]
Then we simply call predict on our image classifier, with the input image as an argument.
output = net.predict([input_image])
predictions = output[0]  # softmax probabilities over the 1,000 ImageNet classes
predicted_class_index = predictions.argmax()  # index of the single best guess
Then we get the indices of the top three predictions for our image.
ind = np.argpartition(predictions, -3)[-3:]  # indices of the 3 highest scores, unordered
Then we build some nice HTML to return for the text component.
# Order the top-3 indices from highest to lowest probability.
top3 = ind[np.argsort(predictions[ind])][::-1]

pretty_text = "<h3>GoogLeNet:</h3>"
for i, idx in enumerate(top3):
    pretty_text += "#%d. %s (%2.1f%%) <br>" % (
        i + 1, name_map[idx], predictions[idx] * 100)
return pretty_text
note that we’re grabbing the id and using a name_map, which corresponds to the image class’s imagenet ids. then the pretty_text will be returned for the user.
Now that the Python was written, I needed to wire up a user interface. To get the image into the xap, I chose a file drop target, one of Exaptive's commonly used JavaScript components for handling file input.
The drop target will be used to hand an image to the neural net component.
The file drop target, ready to accept our images.
All that was left at this point was to create a text display for the HTML generated inside the neural net component. For that, I chose a JavaScript component named "Text," which renders the HTML.
Three components later, we're ready to identify some images.
At this point, the code was done. I added some HTML and inline styles, then saved the xap and opened it in another tab. Here is the page when we load it.
Then we drag in a picture. I used one of a bunny and a kitten. The app processes for a few seconds, and then I see:
It works! You can see the neural net's predictions (and their ImageNet ID numbers) along with the percent certainty the neural net assigns to each. From here, we've laid the groundwork for plenty of other applications, because we can plug in any pre-trained neural net model. For example, if a model existed for a life-sciences application, all we'd need to do is upload that model; the component we just wrote could point to it instead of the GoogLeNet model and give us results from this same web app.
To illustrate this, I added a second component that uses the AlexNet model, so that I get results for the same image from two separate neural net models trained on the same set of images.
The AlexNet component differs from the GoogLeNet one only in the model filepaths we use in the code.
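Concretely, the only change is the pair of filepaths handed to the classifier; everything downstream stays the same. The paths below again assume a stock Model Zoo download into caffe_root:

net = caffe.Classifier(
    caffe_root + "models/bvlc_alexnet/deploy.prototxt",
    caffe_root + "models/bvlc_alexnet/bvlc_alexnet.caffemodel",
    mean=imagenet_mean,
    channel_swap=(2, 1, 0),
    raw_scale=255,
    image_dims=(256, 256))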
Running the same image through both neural nets, we now get:
All told, this process of writing the code and wiring up components took me just under an hour. As I wrote before, we can substitute any Caffe neural network model and use it through this basic xap. From here, I think it would be interesting to create a neural network training interface as a xap. It would be helpful to have a nice front end for training neural networks, from specifying the number of hidden layers, to the composition and configuration of those layers, to visualizing the test scores of the new models. Perhaps a follow-up blog post will be in order once that's done.