Build Custom Visual Recognition Model Using Watson Studio

In this article, you will learn how to build a custom visual recognition model using Watson Studio.

The Watson Visual Recognition service helps you accurately analyze and classify images using machine learning. Watson Visual Recognition ships with a set of built-in models that provide highly accurate results without any training:

  • General model — classifies images into general categories.
  • Face model — locates faces within an image and estimates gender and age.
  • Explicit model (Beta) — determines whether an image is inappropriate for general use.
  • Food model (Beta) — built specifically for images of food items.

If the built-in General, Face, Food, and Explicit models don't give you accurate enough results, you can train the Watson Visual Recognition service with a custom model to classify images to suit your business needs. In this article, you will learn how to build a custom visual recognition model using Watson Studio.


Prerequisites

  1. An IBM Cloud account

Prepare Images

Collect a minimum of 10 images for each class and package each class's images in a .zip file. For this blog, I have selected 4 classes (Benz, Audi, BMW, Camry) with 10 images in each class.
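As a sketch, the per-class .zip files can be assembled with a short script. The one-directory-per-class folder layout below is an assumption for illustration, not something the article prescribes:

```python
import zipfile
from pathlib import Path

def zip_class_images(class_dir, out_dir="."):
    """Package every .jpg/.png in class_dir into <class>.zip for upload."""
    class_dir = Path(class_dir)
    out_path = Path(out_dir) / (class_dir.name + ".zip")
    images = [p for p in sorted(class_dir.iterdir())
              if p.suffix.lower() in (".jpg", ".jpeg", ".png")]
    if len(images) < 10:
        raise ValueError("Watson requires at least 10 images per class")
    with zipfile.ZipFile(out_path, "w") as zf:
        for img in images:
            zf.write(img, arcname=img.name)  # store flat, no folder prefix
    return out_path

# Example: one .zip per class directory (Benz/, Audi/, BMW/, Camry/)
# for d in ["Benz", "Audi", "BMW", "Camry"]:
#     zip_class_images(d)
```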

Requirements for preparing images to train custom models:

  • Supported image file formats: JPEG (.jpg) and PNG (.png)
  • Minimum image size: 32×32 pixels
  • Minimum number of image files per .zip file: 10
  • Maximum number of image files per .zip file: 10,000 images or 100 MB
  • The Watson Visual Recognition service accepts a maximum of 256 MB per training call.
  • Negative examples are optional. A negative example file is not used to create a class within the classifier; instead, it defines what the classifier is NOT. It should contain images that do not depict the subject of any of the positive classes. You can specify only ONE negative example file.
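The constraints above can be checked locally before uploading. A minimal sketch using only the standard library (the 32×32 minimum image size is skipped here, since checking dimensions would need an image library such as Pillow):

```python
import os
import zipfile

MIN_FILES = 10
MAX_FILES = 10_000
MAX_ZIP_BYTES = 100 * 1024 * 1024  # 100 MB per .zip file
ALLOWED_EXTS = (".jpg", ".jpeg", ".png")

def check_training_zip(path):
    """Return a list of problems with a training .zip; empty list means OK."""
    problems = []
    with zipfile.ZipFile(path) as zf:
        names = [n for n in zf.namelist() if not n.endswith("/")]
        if len(names) < MIN_FILES:
            problems.append("only %d images; minimum is %d" % (len(names), MIN_FILES))
        if len(names) > MAX_FILES:
            problems.append("%d images; maximum is %d" % (len(names), MAX_FILES))
        bad = [n for n in names if not n.lower().endswith(ALLOWED_EXTS)]
        if bad:
            problems.append("unsupported formats: %s" % bad)
    if os.path.getsize(path) > MAX_ZIP_BYTES:
        problems.append("zip exceeds 100 MB")
    return problems
```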

Steps to Train Your Custom Model

Step 1: Log in to your IBM Cloud account and search for the Watson Studio service in the Catalog.

Step 2: Create the Watson Studio service, checking that the Region, Organization, and Space you intend to deploy to are correct.

Step 3: Click on Get Started.

Step 4: Create a New project.

Step 5: Select the Complete option to have access to every tool within Watson Studio.

Step 6: Enter a Project Name and select Storage. If you are creating a Watson Studio project for the first time, you will have to add the storage and then click Refresh before proceeding!

Step 7: Associate the Watson Visual Recognition service with your project by going to the Settings tab.

Step 8: Scroll down to Associated Services, click on Add service and then select Watson.

Step 9: Scroll down to Add Visual Recognition service.

Step 10: You can either choose an existing Visual Recognition service or, if you are doing this for the first time, select New and create a Visual Recognition service with the Lite plan.

Step 11: In the popup window, ensure that Resource group is correct and click on Confirm.

Step 12: You can proceed to the next step if the Visual Recognition service is listed under Associated services; otherwise, repeat the previous steps. Associating the Visual Recognition service is important because that is the instance we will be training!

Step 13: In the Assets tab, create New visual recognition model.

Step 14: Browse or drag and drop the .zip files into Watson Studio; they will then be uploaded to Cloud Object Storage.

Step 15: Upon successful upload of the images to Cloud Object Storage (COS), the .zip files will be listed on the right side of your Watson Studio service.

Step 16: Create a class by entering a class name.

Step 17: Drag and drop the .zip files onto the corresponding classes.

Step 18: When the status changes to Model is ready to train, click on the Train Model button.

Step 19: Wait until Model training is completed.

Step 20: After the model trains successfully, click the hyperlink in the displayed popup to view and test the model.

Steps to Test Your Custom Model

After successful model training, you will be redirected to a page where you can see an Overview (Model ID, Status, and other metadata) of the model builder. Take note of the Model ID, as it will be required during the implementation stage.

You can also understand how your model is performing by uploading an image in the Test area of the same dashboard view. If you aren’t happy with the results, you can also edit and Retrain the model.

During the implementation phase, you will have to pass the Model ID as the Classifier ID when calling the Watson Visual Recognition service in your application (a Node.js sample implementation is linked from the original article).
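As a sketch of that call in Python: the endpoint, version date, and auth style below are assumptions based on the Visual Recognition v3 REST API of the time, and "hypothetical_model_123" is a placeholder, not a real Model ID.

```python
def build_classify_request(image_url, classifier_id, version="2018-03-19"):
    """Assemble the pieces of a Visual Recognition v3 /classify call.

    classifier_id is the Model ID noted from the Watson Studio
    overview page.
    """
    base = "https://gateway.watsonplatform.net/visual-recognition/api"
    return {
        "url": base + "/v3/classify",
        "params": {
            "version": version,
            "url": image_url,                # image to classify
            "classifier_ids": classifier_id  # your custom model
        },
    }

req = build_classify_request("https://example.com/car.jpg",
                             "hypothetical_model_123")
# With the requests library you would then issue, e.g.:
# requests.get(req["url"], params=req["params"],
#              auth=("apikey", "<your-api-key>"))
```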

Alternatively, you can also build a custom visual recognition model programmatically using the REST API or the Watson SDKs.
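For the REST route, training means POSTing the per-class .zip files to the v3 classifiers endpoint, each under a `<classname>_positive_examples` form field. A sketch of assembling that multipart payload mapping (the field-name convention follows the v3 API; the class names are this article's example classes):

```python
def build_training_payload(name, class_zips, negative_zip=None):
    """Map class names to the multipart form fields the v3 API expects.

    class_zips: {"Benz": "Benz.zip", ...}; each entry becomes a
    "<class>_positive_examples" field. Only one negative_examples
    file is allowed, matching the limit noted earlier.
    """
    fields = {"name": name}
    for cls, zip_path in class_zips.items():
        fields[cls + "_positive_examples"] = zip_path
    if negative_zip:
        fields["negative_examples"] = negative_zip
    return fields

payload = build_training_payload(
    "cars", {"Benz": "Benz.zip", "Audi": "Audi.zip"})
```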

Published at DZone with permission of Riya Roy, DZone MVB. See the original article here.
