Tutorial: Faster AI Development with Serverless

by Yaron Haviv, December 6th, 2017
The two most trending technologies are AI and serverless, and guess what? They even go well together. Before getting into some cool examples, let’s start with some AI basics:

AI involves a learning phase, in which we observe historical datasets, learn their patterns through training and build machine-learned models. Once a model has been created, we use it for inferencing (serving) to predict an outcome or to classify inputs such as images.
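As a minimal sketch of the two phases (scikit-learn is used here for brevity; it is our assumption and not one of the libraries used later in this post):

from sklearn.linear_model import LogisticRegression

# learning phase: fit a model on historical (features, label) pairs
X_train = [[0.1, 1.2], [0.8, 0.3], [0.4, 0.9], [0.9, 0.1]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# inferencing (serving) phase: predict the outcome for a new input
print(model.predict([[0.7, 0.2]]))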

Traditional machine learning methods involve a long batch or iterative process, but we’re seeing a shift towards more continuous processes, such as reinforcement learning. The inferencing part is becoming more event-driven: a bot accepts a line of text from a chat and responds immediately; an e-commerce site accepts customer features and returns buying recommendations; a trading platform monitors market feeds and responds with a trade; an image is classified in real-time to open a smart door.

AI has many categories. Different libraries and tools may be better at certain tasks or only support a specific coding language, so we need to learn how to develop and deploy each of them. Scaling the inferencing logic, making it highly available, and addressing continuous development, testing and operation make it even harder.

This is where serverless comes to the rescue and provides the following benefits:

- Accelerated development

- Simplified deployment and operations

- Integrated event triggers and auto scaling

- Support of multiple coding languages and simplified package dependencies

Serverless also comes with some performance and usability drawbacks (mentioned in an earlier post), but those are addressed with the new high-performance and open source serverless platform — nuclio.

We wrote a few nuclio functions using TensorFlow, Azure APIs and VADER to demonstrate how simple it is to build an AI solution with serverless. The resulting solutions are fast, auto-scale and are easy to deploy.

nuclio’s stand-alone version can be deployed with a single Docker command on a laptop, making it simpler to play with the examples below. These functions can be used with other serverless platforms like AWS Lambda with a few simple changes.

Sentiment Analysis

The following example uses vaderSentiment, a Python library for detecting sentiment in text. We feed in a text string through an event source such as HTTP and respond with the classification result.

In nuclio we can specify build dependencies by adding special comments in the header of the function, as demonstrated below; this can save quite a bit of hassle. Notice nuclio’s built-in logging capability, which helps us debug and automate function testing.

# @nuclio.configure
#
# function.yaml:
#   spec:
#     runtime: "python"
#     build:
#       commands:
#       - "pip install requests vaderSentiment"

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer


def handler(context, event):
    body = event.body.decode('utf-8')
    context.logger.debug_with('Analyzing ', 'sentence', body)

    analyzer = SentimentIntensityAnalyzer()
    score = analyzer.polarity_scores(body)

    return str(score)

The function can be invoked using an HTTP POST with text in the body, and responds with the sentiment scores in the following format:

{'neg': 0.0, 'neu': 0.323, 'pos': 0.677, 'compound': 0.6369}
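For example, assuming the function is exposed through an HTTP trigger (host and port below are placeholders):

curl -X POST --data 'I love this tutorial!' http://<function-host>:<port>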

To test the function, run nuclio’s playground using the following Docker command and access the UI by browsing to <host-ip>:8070 (port 8070). You can find this and the other examples on GitHub or in the pre-populated functions list. Modify the function to your needs and press deploy to build it.

docker run -p 8070:8070 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp nuclio/playground:stable-amd64

nuclio playground UI (on port 8070)

For more details on using and installing nuclio in different configurations, see nuclio’s website.

Image Classification with TensorFlow

One of the most popular AI tools is TensorFlow, developed by Google. It implements neural network algorithms which can be used to classify images, speech and text.

In the following example we will use the pre-trained Inception model to determine what’s in a picture. See the full source in nuclio’s examples repository.

TensorFlow presents us with a few challenges: we need a larger base Docker image like “jessie” with more tools (by default, nuclio uses a tiny Alpine image to minimize footprint and function loading time), and we need to add the requests, TensorFlow and numpy Python packages.

The AI model can be a large file, so it is more practical to download and decompress it once at build time than per event. We load the model into the function image through build instructions, by adding the following comment/declaration to the header of the function; alternatively, we can specify the build instructions in the function configuration UI tab:

# @nuclio.configure
#
# function.yaml:
#   spec:
#     runtime: "python:3.6"
#     build:
#       baseImageName: jessie
#       commands:
#       - "apt-get update && apt-get install -y wget"
#       - "wget http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz"
#       - "mkdir -p /tmp/tfmodel"
#       - "tar -xzvf inception-2015-12-05.tgz -C /tmp/tfmodel"
#       - "rm inception-2015-12-05.tgz"
#       - "pip install requests numpy tensorflow"

We also use a thread to load the model into memory during the first invocation and keep it there for subsequent calls. We make the function flexible by using optional environment variables to specify various parameters.
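The background-loading pattern might look roughly like this sketch (FunctionState mirrors the real example below; load_model and MODEL_DIR are illustrative placeholders, not the example’s actual names):

import os
import threading

class FunctionState(object):
    # flipped to True by the loader thread once the model is in memory
    done_loading = False
    model = None

def load_model(path):
    # placeholder for the real TensorFlow graph loading (see the full code)
    return object()

def _load_model_in_background():
    # the model path is taken from an optional environment variable
    path = os.environ.get('MODEL_DIR', '/tmp/tfmodel')
    FunctionState.model = load_model(path)
    FunctionState.done_loading = True

# kick off loading in a background thread, so early invocations can be
# denied with "not ready yet" instead of blocking until loading completes
threading.Thread(target=_load_model_in_background).start()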

The function’s main part (see link for the full code):

def classify(context, event):

    # create a unique temporary location to handle each event,
    # as we download a file as part of each function invocation
    temp_dir = Helpers.create_temporary_dir(context, event)

    # wrap with error handling such that any exception raised
    # at any point will still return a proper response
    try:

        # if we're not ready to handle this request yet, deny it
        if not FunctionState.done_loading:
            context.logger.warn_with('Model data not done loading yet, denying request')
            raise NuclioResponseError('Model data not loaded yet, cannot serve this request',
                                      requests.codes.service_unavailable)

        # read the event's body to determine the target image URL
        # TODO: in the future this can also take binary image data
        # if provided with an appropriate content-type
        image_url = event.body.decode('utf-8').strip()

        # download the image to our temporary location
        image_target_path = os.path.join(temp_dir, 'downloaded_image.jpg')
        Helpers.download_file(context, image_url, image_target_path)

        # run the inference on the image
        results = Helpers.run_inference(context, image_target_path, 5, 0.3)

        # return a response with the result
        return context.Response(body=str(results),
                                headers={},
                                content_type='text/plain',
                                status_code=requests.codes.ok)

    # convert any NuclioResponseError to a response.
    # the response's description and status will appropriately
    # convey the underlying error's nature
    except NuclioResponseError as error:
        return error.as_response(context)

    # in case of any error, respond with internal server error
    except Exception as error:
        context.logger.warn_with('Unexpected error occurred, responding with internal server error',
                                 exc=str(error))

        message = 'Unexpected error occurred: {0}\n{1}'.format(error, traceback.format_exc())
        return NuclioResponseError(message).as_response(context)

    # clean up regardless of whether we succeeded or failed
    finally:
        shutil.rmtree(temp_dir)

The “classify” function is executed per event with the nuclio context and event details. The event can be triggered by various sources (e.g. HTTP, RabbitMQ, Kafka, Kinesis, etc.) and contains a URL of an image in the body, which is obtained with:

image_url = event.body.decode('utf-8').strip()

The function downloads the image into a newly created temp directory, classifies it (Helpers.run_inference) and returns the top scores and their probability in the response.
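Assuming an HTTP trigger, invoking the classifier could look like this (host, port and image URL are placeholders):

curl -X POST --data 'https://example.com/cat.jpg' http://<function-host>:<port>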

We delete the temporary directory at the end of the invocation by calling shutil.rmtree(temp_dir), to make sure the function doesn’t waste memory.

Notice the extensive use of structured and unstructured logging in nuclio functions: we can log every step at one of several levels (debug, info, warn, error) and attach parameters to the log events. Log entries can be used to trace function execution and to debug during development and production. We can define the desired logging level at run-time or per function call (e.g. via the playground), for example printing debug messages only while diagnosing a problem, which avoids performance and storage overhead in normal production operation.

Log usage example:

context.logger.debug_with('Created temporary directory', path=temp_dir)

Note that structured logs simplify function monitoring and testing. We use structured debug messages to automatically validate function behavior when running regression tests.
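Assuming the same *_with convention at the other levels (the code above uses debug_with and warn_with), parameters can be attached to any log call; image_url and elapsed below are illustrative values:

# illustrative values; each *_with variant accepts structured key/value pairs
context.logger.info_with('Downloaded image', url=image_url, seconds=elapsed)
context.logger.error_with('Inference failed', err=str(error))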

nuclio supports binary content, so the function can be modified to accept a JPEG image directly through an HTTP POST event, instead of the slower process of receiving a URL and fetching its content. See this image resizing example for details.
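A handler accepting raw JPEG bytes might look roughly like the following sketch; the content-type check and the temporary path are our assumptions, not the image resizing example’s code:

import os

def handler(context, event):
    if event.content_type != 'image/jpeg':
        return context.Response(body='expected a JPEG image',
                                headers={},
                                content_type='text/plain',
                                status_code=400)

    # event.body holds the raw JPEG bytes when binary content is POSTed
    image_path = os.path.join('/tmp', 'posted_image.jpg')
    with open(image_path, 'wb') as f:
        f.write(event.body)

    # ... run inference on image_path as in the classify function above ...
    return context.Response(body='accepted',
                            headers={},
                            content_type='text/plain',
                            status_code=200)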

We plan on simplifying this process further with nuclio and adding “Volume DataBinding” which will allow mounting a file share into the function and allow us to change models on the fly. We also plan on adding GPU support with Kubernetes, enabling faster and more cost-effective classification. Stay tuned.

Using Cloud AI Services from nuclio (Azure Face API)

Leading cloud providers now deliver pre-trained AI models which can be accessed through APIs. One such example is Azure’s Face API, which accepts an image URL and returns a list of the face objects found in the picture.

We created a nuclio function which accepts a URL, passes it on to Azure’s Face API with the proper credentials, parses the results and returns them as a table of face objects, sorted by the position of their centers in the picture, left-to-right and then top-to-bottom. For each face, the function returns the face’s rectangle location, estimated age, gender, emotion and whether it wears glasses.

We used build instructions to specify library dependencies and environment variables to specify required credentials. Make sure you obtain and set your own Azure keys before trying this at home (see instructions in the code comments).
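Under the hood, the call to the Face API is a plain REST request. A minimal sketch follows; the region, key and attribute list are placeholders, and the endpoint is as documented at the time of writing, so check Azure’s docs for the current form:

import requests

def detect_faces(image_url, subscription_key, region='westus'):
    # Face API "detect" endpoint; returnFaceAttributes selects the
    # attributes the function above reports
    endpoint = 'https://{0}.api.cognitive.microsoft.com/face/v1.0/detect'.format(region)
    headers = {'Ocp-Apim-Subscription-Key': subscription_key,
               'Content-Type': 'application/json'}
    params = {'returnFaceAttributes': 'age,gender,emotion,glasses'}
    resp = requests.post(endpoint, headers=headers, params=params,
                         json={'url': image_url})
    resp.raise_for_status()

    # sort faces by the center of their rectangle, left-to-right
    # and then top-to-bottom, as described above
    def center(face):
        r = face['faceRectangle']
        return (r['left'] + r['width'] / 2, r['top'] + r['height'] / 2)

    return sorted(resp.json(), key=center)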

The sources are available here, or just find them in our list of pre-installed playground examples (called “face”).

Summary

Serverless platforms such as nuclio help test, develop and productize AI faster. We plan on adding more AI examples to nuclio on a regular basis, allowing developers to take pre-developed and tested code and modify it to their needs.

Help us expand nuclio’s bank of function examples by signing up for our online hackathon and building the greatest serverless application on nuclio. You just might win a Phantom 4 Pro drone… more details on the nuclio Devpost site. Also give nuclio a star on GitHub and join our Slack community.

Special thanks to Omri Harel for implementing and debugging the AI functions above.