
Faster AI Development With Serverless


The two most trending technologies are AI and serverless — and guess what? They even go well together. Learn the basics and check out some cool examples.


AI involves a learning phase in which we observe patterns in historical datasets, identify or learn patterns through training, and build machine-learned models. Once the model has been created, we use it for inferencing (serving) to predict some outcome or to classify some inputs or images.

Traditional machine learning methods involve a long batch or iterative process, but we're seeing a shift towards more continuous processes, such as reinforcement learning. The inferencing part is becoming more event-driven: for example, a bot accepts a line of text from a chat and responds immediately; an e-commerce site accepts customer features and returns buying recommendations; a trading platform monitors market feeds and responds with a trade; or an image is classified in real time to open a smart door.

AI spans many categories. Different libraries and tools may be better suited to certain tasks or only support a specific coding language, so we need to learn how to develop and deploy with each of them. Scaling the inferencing logic, making it highly available, and addressing continuous development, testing, and operation make it even harder.

This is where serverless comes to the rescue and provides the following:

  • Accelerated development

  • Simplified deployment and operations

  • Integrated event triggers and autoscaling

  • Support for multiple coding languages and simplified package dependencies

Serverless also comes with some performance and usability drawbacks (mentioned in an earlier post), but those are addressed by nuclio, a new high-performance, open-source serverless platform.

We wrote a few nuclio functions using TensorFlow, Azure APIs, and VADER to demonstrate how simple it is to build an AI solution with serverless. These solutions will be fast, auto-scalable, and easy to deploy.

nuclio’s stand-alone version can be deployed with a single Docker command on a laptop, making it simpler to play with the examples below. These functions can be used with other serverless platforms like AWS Lambda with a few simple changes.

Sentiment Analysis

In the following example, we used vaderSentiment, a Python library for detecting sentiment in text. The function accepts a text string through an event source like HTTP and responds with the classification result.

In nuclio, we can specify build dependencies by adding special comments to the header of the function. As demonstrated below, this can save quite a bit of hassle. Also notice nuclio's built-in logging capability, which helps us debug and automate function testing.

# @nuclio.configure
#
# function.yaml:
#   spec:
#     runtime: "python"
#     build:
#       commands:
#       - "pip install requests vaderSentiment"

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def handler(context, event):
    body = event.body.decode('utf-8')
    context.logger.debug_with('Analyzing ', 'sentence', body)

    analyzer = SentimentIntensityAnalyzer()

    score = analyzer.polarity_scores(body)

    return str(score)

The function can be invoked with an HTTP POST containing the text in the body, and it responds with the sentiment scores in the following format:

{'neg': 0.0, 'neu': 0.323, 'pos': 0.677, 'compound': 0.6369}
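Since the function returns `str(score)`, a client can parse the response back into a dictionary and act on the compound score. Here is a minimal sketch of such a client-side helper (the `dominant_sentiment` name and the ±0.05 thresholds are our own choices, not part of the example function):

```python
import ast

def dominant_sentiment(response_text):
    # the function returns str(score), i.e. a Python dict literal,
    # so we can parse it back with ast.literal_eval
    scores = ast.literal_eval(response_text)
    compound = scores['compound']  # normalized aggregate in [-1, 1]
    if compound >= 0.05:
        return 'positive'
    if compound <= -0.05:
        return 'negative'
    return 'neutral'

print(dominant_sentiment(
    "{'neg': 0.0, 'neu': 0.323, 'pos': 0.677, 'compound': 0.6369}"))  # positive
```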

To test the function, run nuclio's playground using the following Docker command and access the UI by browsing to <host-ip>:8070 (port 8070). You can find this and the other examples on GitHub or in the pre-populated functions list. Modify it to your needs and click deploy to build and run it.

docker run -p 8070:8070 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp nuclio/playground:0.2.1-amd64

nuclio playground UI (in port 8070)

For more details on using and installing nuclio in different configurations, see nuclio’s website.

Image Classification With TensorFlow

One of the most popular AI tools is TensorFlow, which was developed by Google. It implements neural network algorithms which can be used to classify images, speech, and text.

In the following example, we will use the pre-trained inception model to determine what’s in a picture. See the full source in nuclio’s examples repository.

TensorFlow presents us with a few challenges, as we need to use a larger baseline Docker image like "jessie" with more tools (nuclio uses a tiny Alpine image by default to minimize footprint and function loading time), and we need to add the requests, TensorFlow, and numpy Python packages.

The AI model can be a large file; it is more practical to load it and decompress it once than doing it per event. We will load the model into the function image through build instructions by adding the following comment/declaration to the header of the function. Alternatively, we can specify the build instructions in the function configuration UI tab:

# @nuclio.configure
#
# function.yaml:
#   spec:
#     runtime: "python:3.6"
#     build:
#       baseImageName: jessie
#       commands:
#       - "apt-get update && apt-get install -y wget"
#       - "wget http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz"
#       - "mkdir -p /tmp/tfmodel"
#       - "tar -xzvf inception-2015-12-05.tgz -C /tmp/tfmodel"
#       - "rm inception-2015-12-05.tgz"
#       - "pip install requests numpy tensorflow"

We will also use a thread to load the model into memory during the first invocation and keep it there for subsequent calls. We make our function flexible by using optional environment variables to specify various parameters.
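A minimal sketch of this lazy-loading pattern (simplified from the full example: the `FunctionState.done_loading` flag matches the real code below, but the placeholder model load here is our own stand-in for reading the TensorFlow graph from /tmp/tfmodel):

```python
import threading

class FunctionState:
    # shared state, kept in memory across invocations of the same
    # function instance
    model = None
    done_loading = False

def load_model():
    # stand-in for the real work: in the TensorFlow example this is
    # where the inception graph would be read from /tmp/tfmodel
    FunctionState.model = {'name': 'inception'}
    FunctionState.done_loading = True

# start loading in the background on the first invocation, so the
# handler can reject requests quickly until the model is ready
loader = threading.Thread(target=load_model)
loader.start()
loader.join()
print(FunctionState.done_loading)  # True
```

While `done_loading` is False, the handler denies requests with a "service unavailable" status rather than blocking, as the full code shows.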

The function’s main part (see link for the full code):

def classify(context, event):

    # create a unique temporary location to handle each event,
    # as we download a file as part of each function invocation
    temp_dir = Helpers.create_temporary_dir(context, event)

    # wrap with error handling such that any exception raised
    # at any point will still return a proper response
    try:

        # if we're not ready to handle this request yet, deny it
        if not FunctionState.done_loading:
            context.logger.warn_with('Model data not done loading yet, denying request')
            raise NuclioResponseError('Model data not loaded yet, cannot serve this request',
                                      requests.codes.service_unavailable)

        # read the event's body to determine the target image URL
        # TODO: in the future this can also take binary image data
        # if provided with an appropriate content-type
        image_url = event.body.decode('utf-8').strip()

        # download the image to our temporary location
        image_target_path = os.path.join(temp_dir, 'downloaded_image.jpg')
        Helpers.download_file(context, image_url, image_target_path)

        # run the inference on the image
        results = Helpers.run_inference(context, image_target_path, 5, 0.3)

        # return a response with the result
        return context.Response(body=str(results),
                                headers={},
                                content_type='text/plain',
                                status_code=requests.codes.ok)

    # convert any NuclioResponseError to a response.
    # the response's description and status will appropriately
    # convey the underlying error's nature
    except NuclioResponseError as error:
        return error.as_response(context)

    # in case of any error, respond with internal server error
    except Exception as error:
        context.logger.warn_with('Unexpected error occurred, responding with internal server error',
                                 exc=str(error))

        message = 'Unexpected error occurred: {0}\n{1}'.format(error, traceback.format_exc())
        return NuclioResponseError(message).as_response(context)

    # clean up regardless of whether we succeeded or failed
    finally:
        shutil.rmtree(temp_dir)

The classify function is executed per event with the nuclio context and event details. The event can be triggered by various sources (e.g. HTTP, RabbitMQ, Kafka, Kinesis) and contains a URL of an image in the body, which is obtained with:

image_url = event.body.decode('utf-8').strip()

The function downloads the image into a newly created temp directory, classifies it (Helpers.run_inference), and returns the top scores and their probability in the response.
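The `(5, 0.3)` arguments passed to `Helpers.run_inference` above bound the response: at most five labels, each with probability of at least 0.3. A sketch of that filtering step (the `top_results` helper is hypothetical, illustrating the idea rather than the repository's actual implementation):

```python
def top_results(predictions, max_results=5, min_score=0.3):
    # rank labels by probability, drop those below the confidence
    # threshold, and keep at most max_results entries
    ranked = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)
    return [(label, p) for label, p in ranked if p >= min_score][:max_results]

scores = {'tabby cat': 0.71, 'tiger cat': 0.12, 'remote control': 0.02}
print(top_results(scores))  # [('tabby cat', 0.71)]
```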

We delete the temporary directory at the end of the invocation by calling shutil.rmtree(temp_dir) in the finally clause, so the function doesn't accumulate leftover files.

Notice the extensive use of structured and unstructured logging in nuclio functions: we can log every step at one of several levels (debug, info, warn, error) and attach parameters to log events. Log entries can be used to trace function execution and to debug during development and production. We can set the desired logging level at run-time or per function call (e.g. via the playground), printing debug messages only when diagnosing a problem and avoiding performance and storage overhead in normal production operation.

Log usage example:

context.logger.debug_with('Created temporary directory', path=temp_dir)

Note: Having a structured log simplifies function monitoring or testing. We use structured debug messages to auto-validate function behavior when running regression testing.
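To illustrate how structured logs enable that kind of auto-validation, here is a sketch of a test-harness helper (the record shape and the `find_log_entry` name are hypothetical; a real harness would parse the platform's log output into dicts like these):

```python
def find_log_entry(entries, message, **params):
    # entries: structured log records collected during a test run, e.g.
    # {'message': 'Created temporary directory', 'path': '/tmp/abc'}
    # returns the first record matching the message and all given
    # key/value parameters, or None
    for entry in entries:
        if entry.get('message') == message and \
                all(entry.get(k) == v for k, v in params.items()):
            return entry
    return None

logs = [{'message': 'Created temporary directory', 'path': '/tmp/abc'}]
print(find_log_entry(logs, 'Created temporary directory',
                     path='/tmp/abc') is not None)  # True
```

A regression test can then assert that each expected step appeared in the log, without scraping free-form text.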

nuclio supports binary content, so the function can be modified to accept a JPEG image directly through an HTTP POST event instead of the slower process of receiving a URL and fetching its content. See this image resizing example for details.

We plan on simplifying this process further with nuclio and adding Volume DataBinding, which will allow mounting a file share into the function and allow us to change models on the fly. We also plan on adding GPU support with Kubernetes, enabling faster and more cost-effective classification. Stay tuned.

Using Cloud AI Services from nuclio (Azure Face API)

Leading cloud providers are now delivering pre-trained AI models that can be accessed through APIs. One such example is Azure's Face API, which accepts an image URL and returns a list of face objects found in the picture.

We created a nuclio function that accepts a URL, passes it onto Azure’s Face APIs with the proper credentials, parses the results, and returns them as a table of face objects sorted by their center’s position in the given picture (left-to-right and then top-to-bottom). For each face, the function returns the face’s rectangle location, estimated age, gender, and emotion, and whether it contains glasses.
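The sorting step can be sketched as follows. The `faceRectangle` fields (top/left/width/height) are what the Face API returns; the `sort_faces` helper and the exact tie-breaking between the two axes are our own simplification of the function's logic:

```python
def face_center(face):
    # the Face API returns a faceRectangle with top/left/width/height
    r = face['faceRectangle']
    return (r['left'] + r['width'] / 2.0, r['top'] + r['height'] / 2.0)

def sort_faces(faces):
    # order by horizontal center, then vertical center, matching the
    # left-to-right, top-to-bottom order described above
    return sorted(faces, key=face_center)

faces = [
    {'faceRectangle': {'left': 200, 'top': 10, 'width': 60, 'height': 60}},
    {'faceRectangle': {'left': 20, 'top': 40, 'width': 50, 'height': 50}},
]
print([f['faceRectangle']['left'] for f in sort_faces(faces)])  # [20, 200]
```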

We used build instructions to specify library dependencies and environment variables to specify required credentials. Make sure you obtain and set your own Azure keys before trying this at home (see instructions in the code comments).

The sources are available here, or you can just find them on our list of pre-installed playground examples (called “face”).

Summary

Serverless platforms such as nuclio help us develop, test, and productize AI applications faster. We plan to add more AI examples to nuclio on a regular basis, allowing developers to take pre-developed and pre-tested code and modify it to their needs.

Help us expand nuclio's bank of function examples by signing up for our online hackathon and building the greatest serverless application on nuclio. You just might win a Phantom 4 Pro drone; more details on the nuclio website. Also, give nuclio a star on GitHub and join our Slack community.

Special thanks to Omri Harel for implementing and debugging the AI functions above.


Topics:
serverless ,ai ,sentiment analysis ,image classification ,tensorflow ,neural networks ,algorithms ,azure ,tutorial

