What Is Machine Learning?

Machine learning is the art of using computer algorithms to learn from experience and use that experience to make future predictions.

Tom Mitchell gave a really simple definition of machine learning. Here we go:

A computer program is said to learn from experience (E) with respect to some task (T) and some performance measure (P), if its performance on (T), as measured by (P), improves with experience (E).

This definition dazzled me a bit, too. In plain language: if you want your program to predict, for example, buying patterns at a busy grocery store (task T), you can run it through a machine learning algorithm with data about past buying patterns (experience E) and, if it has successfully learned, it will do better at predicting future buying patterns (performance measure P). With those predictions you can respond accordingly and, for example, give the customer tailor-made discounts.

Supervised Machine Learning

If you look at our definition, we need some past buying patterns. This is called a training set. Using machine learning with a training set is called supervised machine learning. The program is trained on a pre-defined set of training examples, which it then uses to reach accurate conclusions when given new data.

Under supervised Machine Learning, there are two major subcategories.

1. Regression Machine Learning Systems

These are systems where the value being predicted falls somewhere on a continuous spectrum. These systems help us with questions of "How much?" or "How many?"

2. Classification Machine Learning Systems

These are systems where we seek a yes-or-no prediction, such as "Is this tumor cancerous?" or "Does this cookie meet our quality standards?" or "If I bought cookies last week, will I buy shampoo?" and so on.

In this blog post, we will be experimenting with classification. We are going to use Naive Bayes classifiers, a family of classifiers based on the popular Bayes' probability theorem that are known for producing simple yet well-performing models, especially in the fields of document classification and disease prediction.
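To make that concrete, here is a minimal sketch of a Naive Bayes classifier using the Natural package we will use later. The second training sentence and the labels are made up for illustration:

```javascript
// Minimal Naive Bayes example with the "natural" package.
const natural = require('natural');

const classifier = new natural.BayesClassifier();

// Each document is a piece of text plus the label we want the model to learn.
classifier.addDocument('this is a beautiful world', 'positive');
classifier.addDocument('what a terrible day', 'negative'); // made-up example sentence

classifier.train();

// The classifier returns the most likely label for unseen text.
console.log(classifier.classify('I like beautiful people')); // "positive"
```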

If you want to know more about Naive Bayes classifiers, here is a great article by Alexandru Nedelcu.

Let's get started.

A Simple Machine Learning App

I created an example app using Node that consists of two services: the API and the trainer.

The API is responsible for feeding historical data to the trainer and asking prediction questions of the classifier with a trained model. We use MongoDB for document storage and for storing our trained model as a serialized object.
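A minimal sketch of what the API service could look like is below. The collection names, connection string, payload fields, and the GET method for /predict are assumptions for illustration; the actual code lives in the GitHub repo:

```javascript
// api/main.js -- illustrative sketch, not the exact code from the repo.
const express = require('express');
const monk = require('monk');
const natural = require('natural');

const db = monk(process.env.MONGO_URL || 'localhost/ml'); // connection string is an assumption
const trainingData = db.get('trainingdata');              // collection names are assumptions
const models = db.get('models');

const app = express();
app.use(express.json());

// POST /training: store a labelled text sample for the trainer to pick up later.
app.post('/training', async (req, res) => {
  await trainingData.insert({ text: req.body.text, label: req.body.label });
  res.json({ status: 'stored' });
});

// GET /predict: load the latest serialized model and classify the given text.
app.get('/predict', async (req, res) => {
  const latest = await models.findOne({}, { sort: { createdAt: -1 } });
  if (!latest) return res.status(503).json({ error: 'no trained model yet' });
  const classifier = natural.BayesClassifier.restore(JSON.parse(latest.model));
  res.json({ label: classifier.classify(req.query.text) });
});

app.listen(3000);
```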

The trainer is responsible for training the model using the historical data found in the datastore. The trainer worker will train continuously and will store the trained model in MongoDB. If somebody requests a prediction through the API, it will retrieve the latest trained model. The diagram below shows all the components.
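The trainer could look roughly like the following sketch; again, the collection names, field names, and the training interval are assumptions made for illustration:

```javascript
// trainer/main.js -- illustrative sketch of the training loop.
const monk = require('monk');
const natural = require('natural');

const db = monk(process.env.MONGO_URL || 'localhost/ml');
const trainingData = db.get('trainingdata');   // collection names are assumptions
const models = db.get('models');

async function trainOnce() {
  const samples = await trainingData.find({});
  if (samples.length === 0) return;

  const classifier = new natural.BayesClassifier();
  samples.forEach((s) => classifier.addDocument(s.text, s.label));
  classifier.train();

  // Persist the trained model as a serialized object so the API can restore it.
  await models.insert({ model: JSON.stringify(classifier), createdAt: new Date() });
}

// Train continuously; in the real app a process manager keeps this running.
setInterval(() => trainOnce().catch(console.error), 10000);
```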

[Image: diagram of the components: the API, the trainer, and MongoDB]

The code for the machine learning application can be found on GitHub. The code has comments and I think it's self-explanatory. Check it out and jump on the machine learning bandwagon today!

Start the App

The code has two subdirectories called API and trainer. Both contain a package.json, a Dockerfile, and a main.js. They use the Node package monk for MongoDB management, Express.js for the API, and an awesome natural language processing package called Natural.
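If you want to poke around outside Docker, those three packages can be installed in each service directory with npm:

```bash
npm install express monk natural
```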

Docker

If you are into Docker development, the easiest way to start the example is to use docker-compose. Just run docker-compose up. Don't forget to build the images first with docker-compose build.
[Image: output of docker-compose up]
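For reference, a docker-compose.yml for this setup might look roughly like the sketch below; the service names, build paths, and the MONGO_URL environment variable are assumptions for illustration, and the real file is in the repo:

```yaml
# Illustrative docker-compose.yml -- see the repo for the real file.
version: "3"
services:
  mongo:
    image: mongo

  api:
    build: ./API
    ports:
      - "3000:3000"          # the API is reachable on http://127.0.0.1:3000
    environment:
      - MONGO_URL=mongo/ml   # assumed connection string pointing at the mongo service
    depends_on:
      - mongo

  trainer:
    build: ./trainer
    environment:
      - MONGO_URL=mongo/ml
    depends_on:
      - mongo
```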

Adding Historical Data for Training

When the app is started, the trainer will start training, and you can feed the model by POSTing some historical data to the API endpoint http://127.0.0.1:3000/training.

[Image: POST request to the /training endpoint]

Here is an example using Postman. We are going to post the text "this is a beautiful world" with the label "positive." Play with this API call and add more "positive" and "negative" text. Once the model is trained, you can start predicting whether a sentence is "positive" or "negative."

[Image: Postman request posting training data with the label "positive"]
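You can do the same with curl; note that the JSON field names (text and label) are an assumption about the payload format, so check the repo if the request fails:

```bash
curl -X POST http://127.0.0.1:3000/training \
  -H "Content-Type: application/json" \
  -d '{"text": "this is a beautiful world", "label": "positive"}'
```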

Training the Model

The trainer runs in a loop under a process manager and will improve the model as more historical data becomes available.

Here is a snapshot of the data in the database:

[Image: training data documents in MongoDB]

This is the output of the trainer with this very minimal data set:

[Image: trainer output]

You can see that after training, it saved the model in MongoDB for later retrieval when doing predictions. If you look in the database, you will find the persisted model:

[Image: the persisted model in MongoDB]

Doing Predictions

OK. We trained the model. Time to do some predictions. Just hit the /predict endpoint with a text, and the classifier, using the loaded trained model, will tell you whether the text is "positive" or "negative."

[Image: prediction request in Postman]
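With curl, a prediction request might look like this (the HTTP method and the text query parameter are assumptions about the API):

```bash
curl "http://127.0.0.1:3000/predict?text=I%20like%20beautiful%20people"
```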

I sent a new sentence, "I like beautiful people," to the API. It is not in the training set, and the classifier still knows it belongs to the "positive" class. So cool!