Deploying ML Models Using Container Technologies: FnProject
In this article, I will walk through an example of one of the fastest and most effective ways to move a machine learning model into production.
Machine learning is one of the most trending topics of our time. Almost every company, along with IT professionals and students, is working in this field and deepening its knowledge day by day.

As machine learning projects become widespread, the practices for moving them into production environments keep evolving. In this article, I will work through an example of taking a machine learning model to production quickly and effectively. I hope it proves a useful study in terms of awareness.
Before starting the example, let me briefly summarize what container technology, the backbone of this deployment approach, gives us:
- It lets us run the applications we develop very easily and quickly.
- It bundles all the additional library dependencies an application needs together with the application itself, so we can deploy and distribute the application in an easy, fast, and effective way.
- Applications put into use as containers are very easy to manage and maintain.
Container technology is a subject that deserves in-depth study, so I will not go into more detail here. For more information, you can review the Docker documentation or any of the hundreds of blog posts written on the subject.
In a world of container technologies, Docker is undoubtedly the locomotive. Having become a de facto standard in the industry, Docker is used by almost everyone who develops container-based software. Today, I will build this example using Fnproject, an open-source technology that runs on top of Docker.
Fnproject is an open-source, container-native platform. It turns the software we develop into containers that run as functions, either in the cloud or on on-premises servers. The platform supports Python, Java, Ruby, Node.js, and C#.
To follow this example, Docker and Fnproject must first be installed in your working environment. You can prepare the necessary infrastructure in your own environment by following the links below.
With the infrastructure requirements in place, our first job is to create a machine learning model and store it on disk.
Here I will build a simple regression model using the Boston Housing dataset in scikit-learn. My goal is not to build a state-of-the-art machine learning model, just to create a simple one and save it to disk.
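A minimal sketch of this step is shown below. Note that scikit-learn's `load_boston` loader was deprecated and later removed (in scikit-learn 1.2), so to keep the sketch runnable everywhere it generates synthetic data with the same shape as the Boston Housing set: 506 rows of 13 float features. The file name `boston_model.pkl` matches the one used later in the article.

```python
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Stand-in for the Boston Housing dataset (load_boston was removed in
# scikit-learn 1.2): 506 samples with 13 float features, as in the original.
rng = np.random.default_rng(42)
X = rng.normal(size=(506, 13))
y = X @ rng.normal(size=13) + rng.normal(scale=0.1, size=506)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A deliberately simple model; the goal is deployment, not accuracy.
model = LinearRegression()
model.fit(X_train, y_train)
print("R^2 on test set:", model.score(X_test, y_test))

# Save the trained model to disk for the function to load later.
with open("boston_model.pkl", "wb") as f:
    pickle.dump(model, f)
```

With a real dataset, only the loading lines change; the fit/pickle steps stay the same.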
The model is now saved to disk. Next, let's move it into the directory where we will work on turning it into a container.
First, I will turn the prediction model into a function; Fnproject will then automatically turn that function into a container that can be deployed in any environment. To do this, there are some files Fnproject expects from me. (Since I developed a Python project, my code files have a .py extension; the extensions differ for other languages. You can find the formats of these files at the link.)
- func.py: The Python function that generates the prediction. It is called for every request for which we want a prediction.
- func.yaml: Determines the configuration of the environment the function runs in. Here we set the function's name, the amount of memory it needs, the version, the language runtime, and the entrypoint.
- requirements.txt: Lists the library dependencies of the function we wrote. While the function is being turned into a container, these dependencies are installed into it.
First, we will write func.py. This function will load the boston_model.pkl model we saved, make a new prediction using the parameters passed to the function, and return the prediction result. We save this file (func.py) in the same directory as the model file (boston_model.pkl).
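A sketch of what func.py might look like, assuming the Python FDK (the `fdk` package) that Fnproject provides for Python functions; the helper names `load_model` and `predict`, the `features` field in the request body, and the model path are illustrative choices, not fixed by the platform:

```python
import io
import json
import pickle

from fdk import response  # Fn's Python FDK


def load_model(path="boston_model.pkl"):
    """Load the pickled regression model from disk (once per container)."""
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_model()


def predict(features):
    """Run a single prediction for a list of 13 float features."""
    return float(model.predict([features])[0])


def handler(ctx, data: io.BytesIO = None):
    """Entrypoint called by Fn for every incoming request."""
    body = json.loads(data.getvalue())
    prediction = predict(body["features"])
    return response.Response(
        ctx,
        response_data=json.dumps({"prediction": prediction}),
        headers={"Content-Type": "application/json"},
    )
```

Loading the pickle at module level means the model is read from disk only when the container starts, not on every request.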
As you can see, we have three functions, but the main one is handler. We will set this function as the entrypoint when creating the func.yaml file; in other words, it will be the first code block to run when the function is called.
Now let's create our func.yaml file. In this file, we enter the function's environment parameters, the entrypoint, and the function's name. We save this file in the same directory as well.
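A func.yaml for a Python function might look like the sketch below (the function name `predict-boston` is an illustrative choice; `fn init --runtime python` generates a file of this shape for you):

```yaml
schema_version: 20180708
name: predict-boston
version: 0.0.1
runtime: python
entrypoint: /python/bin/fdk /function/func.py handler
memory: 256
```

The entrypoint line is what tells the FDK to route requests to the `handler` function in func.py, and `memory` caps the container's memory in megabytes.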
That file is done, too; all that remains is requirements.txt. In it, we list the libraries our function needs in order to run. We save this file in the same directory as well.
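For the function sketched above, the file would list the FDK plus whatever the model needs to unpickle and predict, for example:

```text
fdk
scikit-learn
numpy
```

Pinning exact versions (e.g. matching the scikit-learn version used to train the model) is a good idea, since a pickle saved by one version may not load cleanly in another.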
That is everything the Fnproject infrastructure expects from us. Now, using the files we created and the Fnproject commands, we will turn our model into an image that lets us use it as a function. Then we will deploy this image as a container on our local machine and test the function.
First, we need to start the Fnproject server from a terminal.
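With the fn CLI installed, this is a single command:

```shell
# Start a local Fn server (runs the fnproject/fnserver container
# in the foreground; leave this terminal open)
fn start
```

By default the server listens on port 8080, which we will use later when calling the function over HTTP.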
The Fnproject server is up and running. Now we will create an fn application in the directory where we saved our files and then deploy it to the fn server running on our local machine.
With the application created, we will now deploy it as a container. In this step, the required libraries are downloaded according to the configuration files we created, packaged together with our function as a Docker image, and saved to the local Docker registry. The new image is then automatically deployed for use in a container. How long this step takes depends on the number and size of the libraries listed in requirements.txt.
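From a second terminal, in the directory holding func.py, func.yaml, requirements.txt, and the model file, the create-and-deploy steps look like this (the app name `boston-app` is an illustrative choice):

```shell
# Create an application to group our function under
fn create app boston-app

# Build the Docker image and deploy the function to the local Fn server;
# --local keeps the image in the local registry instead of pushing it out
fn --verbose deploy --app boston-app --local
```

The `--verbose` flag shows the underlying Docker build output, which is useful the first time through.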
We have deployed our application to the locally installed Fnproject server. Before moving on to testing, let's look at what happened behind the scenes in Docker. First, let's check our Docker images.
As we can see, our new Docker image has arrived in the registry. Now let's check whether this image is up and running as a container.
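Both checks are plain Docker commands; the image name below assumes the function was named `predict-boston` in func.yaml:

```shell
# The deploy step built a function image and stored it locally
docker images | grep predict-boston

# List running containers to see the function container the Fn server manages
docker ps
```

Note that Fn starts function containers on demand, so the function's container may only appear in `docker ps` while requests are being served.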
The function is now up and running as a container, so let's test it.
As we know, the function estimates house prices using the machine learning model we trained earlier. Each house has 13 float-type features, and we need to send these 13 properties of the house whose price we want to estimate as parameters to the function. (You can find details on the Boston Housing dataset we used to train the model at the link.)
I will test the function in two different ways, the first of which uses fn commands.
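With `fn invoke`, the request body is piped in on stdin; the app and function names below (`boston-app`, `predict-boston`) and the `features` field are the illustrative choices used throughout this example, and the 13 values are one row of the original Boston Housing data:

```shell
# Pipe a JSON body with the 13 features into the function
echo '{"features": [0.02731, 0.0, 7.07, 0.0, 0.469, 6.421, 78.9, 4.9671, 2.0, 242.0, 17.8, 396.9, 9.14]}' | \
  fn invoke boston-app predict-boston
```

The first invocation is noticeably slower than the rest, since Fn has to start the function container cold.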
The second way is a standard curl call, but before I can call the function with curl, I need to find the endpoint of the running container.
With the endpoint information in hand, we can now test the function with a curl command.
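One way to do both steps, again assuming the example names `boston-app` and `predict-boston`: `fn inspect` prints the function's metadata, including an `fnproject.io/fn/invokeEndpoint` annotation, and that URL is what curl posts to (the function id in the URL is a placeholder; use the one from your own inspect output):

```shell
# Find the function's invoke endpoint
fn inspect function boston-app predict-boston

# Call the endpoint directly with curl
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"features": [0.02731, 0.0, 7.07, 0.0, 0.469, 6.421, 78.9, 4.9671, 2.0, 242.0, 17.8, 396.9, 9.14]}' \
  http://localhost:8080/invoke/<function-id>
```

Since this is plain HTTP, any client (a browser app, another service, a load tester) can call the model the same way.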
As we can see, we successfully called our function with curl.
As I explained at the beginning of the article, the images produced by this infrastructure are container-native: they need nothing but Docker to run. Therefore, they can be deployed in any environment that can run a container, whether that is a cloud environment or an on-prem cluster.
In the next article, I will push the image produced here to a Docker registry in the cloud and deploy it as a function there. As we have seen, we transformed our model into a very simple image that can run standalone, then deployed it as a container and tested it.
Opinions expressed by DZone contributors are their own.