
Deploying ML Models Using Container Technologies: FnProject


Machine learning is one of the hottest topics of our time. Almost every company, and nearly every IT professional and student, is working in this field and deepening their knowledge day by day.

As machine learning projects become widespread, new practices keep emerging around how these projects are moved to production environments. In this article, I will walk through an example of how to move a machine learning model to production in a fast and effective way. I hope it will be useful in raising awareness of the approach.

Before starting the example, I want to briefly describe this deployment infrastructure.

When we examine modern software development architectures today, we see new practices and frameworks emerging at every step of the software development life cycle, from developing the application to testing and deploying it. One of the most fundamental components of these modern architectures is container technology. Briefly, container technologies offer us the following advantages:
  • They enable the applications we develop to be run very easily and quickly.
  • By packaging all the additional library dependencies an application requires together with the application itself, they let us deploy and distribute the application in an easy, fast, and effective way.
  • Applications put into use as containers are very easy to manage and maintain.

Container technologies are a topic that deserves to be examined in depth, so I will not go into more detail in this article. For more detailed information, you can review the Docker documentation and the hundreds of blog articles written on the subject.

In a world where we talk about container technologies, Docker is undoubtedly the locomotive. Having become a de facto standard in the industry, Docker is used by almost everyone who develops container-based software. Today, I will build this example using Fn Project, an open-source technology that runs on the Docker platform.

Fn Project is an open-source, container-native platform. It turns the software we develop into containers that can run as functions, either in the cloud or on on-premises servers. The platform supports the Python, Java, Ruby, Node.js, and C# programming languages.

To follow this example, Docker and Fn Project must first be installed in your working environment. You can prepare the necessary infrastructure in your own environment by following the links below.
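
As a reference point, on macOS or Linux the Fn CLI is typically installed with the project's install script, and the Fn documentation also lists a Homebrew formula. This is only a minimal sketch; verify the exact commands against the current Fn Project documentation:

Shell

# Install script published in the Fn Project documentation
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

# Alternatively, on macOS with Homebrew
brew install fn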

After preparing the necessary infrastructure, our first job is to create a machine learning model and store it on disk.

Here I will build a simple regression model using the Boston Housing dataset from scikit-learn. My goal is not to build a state-of-the-art machine learning model, just to create a simple one and save it to disk.

Python

from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import numpy as np
import pickle

# Load the Boston Housing dataset and hold out 10% of it for testing
X, y = load_boston(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.1, random_state=5)

# Fit a simple linear regression model
regres = LinearRegression()
regres.fit(X_train, Y_train)

# Evaluate the model on the held-out test set
pred = regres.predict(X_test)
rmse = np.sqrt(mean_squared_error(Y_test, pred))
r2 = round(regres.score(X_test, Y_test), 2)

# Serialize the trained model to disk
filename = 'boston_model.pkl'
pickle.dump(regres, open(filename, 'wb'))

Yes, the model I developed is now saved on disk.
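
Before moving on, as an optional sanity check (a quick sketch that assumes the Python session from the training script above is still open, so X_test is available), we can reload the pickle and confirm the model predicts:

Python

import pickle

# Reload the serialized model to confirm it round-trips correctly
with open('boston_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

# Predict on one row of the held-out test data from the training session
print(loaded_model.predict(X_test[:1]))

Now, let's move this model to the directory where we will work to turn it into a container.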

Shell

mkdir faas
cd faas
mkdir boston
cd boston
mv /Users/emete/boston_model.pkl .

First of all, I will turn the prediction model I produced into a function. Fn Project will then automatically turn this function into a container that can be deployed in any environment. To do this, there are some files that Fn Project expects me to prepare. (Since I am developing a Python project, my code files have a .py extension; the extensions will differ for other languages. You can access the formats of these files from the link.) The resulting directory layout is sketched after the list below.

  • func.py: The Python function that will generate the prediction. It is called for every prediction request.
  • func.yaml: This file defines the configuration of the environment in which the function will run: the function's name, the amount of memory it needs, the version information, the language runtime, and the entrypoint.
  • requirements.txt: The library dependencies of the function/application we wrote. While the function is turned into a container, these dependencies are installed into the container.
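
Once all three files are created in the steps below, the working directory should look roughly like this:

Plain Text

faas/
└── boston/
    ├── boston_model.pkl
    ├── func.py
    ├── func.yaml
    └── requirements.txt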

First, we will write func.py. This function will load the boston_model.pkl model that we saved, use the parameters passed in with each request to make a new prediction, and return the prediction result. We save this file (func.py) in the same directory as the model file (boston_model.pkl).

Python

import io
import json
import cloudpickle as pickle
from fdk import response

def load_model(model_name):
    # Deserialize the model saved during training
    return pickle.load(open(model_name, 'rb'))

def pred(model, data):
    # Run the model on a list of feature rows and return a JSON-friendly dict
    return {'prediction': model.predict(data).tolist()}

# Load the model once, when the function's container starts
model_name = 'boston_model.pkl'
model = load_model(model_name)

def handler(ctx, data: io.BytesIO = None):
    # Entrypoint: parse the JSON payload, predict, and return a JSON response
    prediction = {}
    try:
        input = json.loads(data.getvalue())['input']
        prediction = pred(model, input)
    except (Exception, ValueError) as ex:
        print(str(ex))

    return response.Response(
        ctx, response_data=json.dumps(prediction),
        headers={"Content-Type": "application/json"}
    )

As can be seen, we have three functions, but our main one is the handler method. We will give this function as the entrypoint when creating the func.yaml file; that is, this method will be the first code block to run when the function is called.
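
To make the function's contract concrete, here is a sketch of the request body the handler expects and the shape of the response it returns (the prediction value is purely illustrative, not a real model output):

Plain Text

# Request body: one house, 13 float features
{"input": [[0.23, 18.3, 2.3, 0.45, 0.51, 6.6, 65.2, 4.1, 1.1, 296.56, 15.3, 396.9, 5.30]]}

# Response body (illustrative value)
{"prediction": [21.9]}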

Let's create our func.yaml file. In this file, we enter the function's environment parameters, its entrypoint, and its name. We also save this file in the same directory.

YAML

schema_version: 20200101
name: pythonfn
version: 0.0.1
runtime: python
entrypoint: /python/bin/fdk /function/func.py handler
memory: 256

That completes this file. It remains to create the requirements.txt file, in which we list the library dependencies our function needs to run. We also save this file in the same directory.

Plain Text

scikit-learn==0.21.3
cloudpickle==1.2
pandas==0.24.2
numpy==1.17
scipy==1.3
fdk==0.1.12

That is everything the Fn Project infrastructure expects from us. Now, using Fn commands and the files we created, we will turn our model into an image that lets us use it as a function. Then we will deploy this image to our local machine as a container and test the function.

First of all, we need to start the Fn server from a terminal.

Shell

fn start

The Fn infrastructure is now running. Next, we will create an Fn application in the directory where we saved our files and then deploy it to the Fn server installed on our local machine.

Shell

fn create app boston

We created the application. Now we will deploy it as a container. In this step, the required libraries are downloaded using the configuration files we created, packaged together with our function as a Docker image, and saved to the local Docker registry. The new image is then automatically deployed for use as a container. The duration of this step varies with the number and size of the libraries listed in requirements.txt.

Shell

fn --verbose deploy --app boston --local

We have deployed our application to the Fn server running locally. Before moving on to the test, let's take a look at what happened behind the scenes in Docker. First, let's check our Docker images.

Shell

docker image ls

As we can see, our new Docker image has appeared in the local registry. Now let's check whether a container has been started from this new image.

Shell

docker container ls

As we can see, the function is now up and running as a container. Let's test it.

As a reminder, the function estimates house prices based on the machine learning model we trained earlier. Each house has 13 float-type features, and we need to send these 13 features of the house whose price we want to estimate as a parameter to the function. (You can find details about the Boston Housing dataset we used to train the model at the link.)
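
For reference, the 13 features in the scikit-learn Boston Housing dataset, in the order the model expects them, are:

Plain Text

CRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, RAD, TAX, PTRATIO, B, LSTAT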

I will test the function with two different methods, the first of which uses the fn CLI.

Shell

echo -n '{"input":[[0.23,18.3,2.3,0.45,0.51,6.6,65.2,4.1,1.1,296.56,15.3,396.9,5.30]]}' | \
fn invoke boston pythonfn --content-type application/json

The other method is a standard curl call, but before calling the function with curl, I need to look up the endpoint of the running function.

Shell

fn inspect function boston pythonfn

I now have the endpoint information, so we can test the function with a curl command.

Shell

curl -X "POST" -H "Content-Type: application/json" -d \
 '{"input":[[0.23,18.3,2.3,0.45,0.51,6.6,65.2,4.1,1.1,296.56,15.3,396.9,5.30]]}' \
 http://localhost:8080/invoke/01E46TAQKPNG8G00GZJ0000001

As we can see, we called our function with curl as well.

As I explained at the beginning of the article, the images produced by the infrastructure we created are container-native: they need nothing but Docker to run. The images can therefore be deployed in any environment that can run a container, whether a cloud environment or an on-prem cluster.

In the next article, I will push the image produced here to a Docker registry in the cloud and deploy it as a function there. As we have seen, we transformed our model into a very simple standalone image, deployed it as a container, and tested it.

Topics:
artificial intelligence, cloud, container, machine learning, tutorial

