
Using Machine Learning to Remotely Log Asset Performance

See how to use machine learning to remotely log asset performance.


For global manufacturing enterprises, or any industry that relies on automated machinery across multiple locations, keeping tabs on asset performance is crucial. Manual supervision works in such scenarios, but it is expensive; remote monitoring and automated logging of asset performance data offer a clear opportunity to cut those costs.

Our team recently built a solution for this use case using machine learning services from AWS. It was designed to remotely capture video of machine operation and create logs of when the asset was running and for how long.

Phase 1: Building and Deploying the Right Machine Learning Model

The first part of the solution involved designing and training a machine learning algorithm that could accurately identify the client's floor cleaning machine in a video feed from a proposed location. In practice, the video feed would be peppered with other objects and people, so the algorithm's ability to reliably recognize the cleaning machine was crucial.


To facilitate this, the team:

  • Designed a model based on a Convolutional Neural Network (CNN), using TensorFlow as the deep learning framework of choice. CNNs are a commonly used deep learning architecture for analyzing visual imagery.
  • Curated images of the client's floor cleaning machine from the internet, product brochures, and screen grabs from YouTube videos. Since the acquired data was insufficient, the dataset was expanded with data augmentation techniques, which render existing images into different profiles. For example, a left profile can be created by mirroring an existing image of the machine in right profile.
  • Normalized the image data so that all collected images shared the same format and dimensions before being fed into the model.
  • Tagged each image to define the area within which the machine was located, so the model could consistently learn to recognize it.
  • Fed the pre-processed data into the model, training it to spot the cleaning machine in a video feed.
  • After successful testing, deployed the model as an endpoint in Amazon SageMaker.
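The augmentation and normalization steps above can be sketched with plain NumPy. The mirroring trick (deriving a left-profile training example from a right-profile image) is just a horizontal flip, and normalization is a resize plus pixel rescale. The function names and target size here are illustrative, not from the original project:

```python
import numpy as np

def flip_profile(image: np.ndarray) -> np.ndarray:
    """Mirror an image left-right, e.g. to derive a left-profile
    training sample from a right-profile photo of the machine."""
    return image[:, ::-1, :]

def normalize(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize (by nearest-neighbor sampling) to a fixed shape and
    rescale pixels to [0, 1], so every training image has the same
    format and dimensions."""
    h, w = image.shape[:2]
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    resized = image[np.ix_(rows, cols)]
    return resized.astype(np.float32) / 255.0

# Example: one 480x640 RGB frame yields two normalized samples.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
samples = [normalize(frame), normalize(flip_profile(frame))]
```

In a real pipeline, a library such as TensorFlow's image preprocessing utilities would handle these transforms, but the underlying operations are the same.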

AWS Solution at Play

Amazon SageMaker

Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models. For this particular project, the team used SageMaker to create the algorithm, as well as for data augmentation and training. The AWS-optimized TensorFlow on SageMaker allowed training to scale quickly and efficiently, producing a more accurate machine learning model.

Amazon SageMaker also enabled automatic tuning of the model across repeated training and validation runs, thus creating a more accurate algorithm.
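As a rough illustration of this workflow (not the team's actual code), training an AWS-optimized TensorFlow model and deploying it as an endpoint with the SageMaker Python SDK typically looks like the sketch below. The entry-point script, role ARN, S3 paths, and instance types are placeholders:

```python
def deploy_detector(role_arn: str, training_data_s3: str) -> str:
    """Sketch: train a TensorFlow model on SageMaker and deploy it
    as a real-time inference endpoint; returns the endpoint name."""
    # Imported lazily: the sagemaker SDK and AWS credentials are only
    # needed when this actually runs against an AWS account.
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point="train.py",        # hypothetical training script
        role=role_arn,
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="2.11",
        py_version="py39",
    )
    estimator.fit({"training": training_data_s3})

    # deploy() returns a Predictor bound to a SageMaker endpoint;
    # a Lambda function can later invoke this endpoint by name.
    predictor = estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.xlarge",
    )
    return predictor.endpoint_name
```

SageMaker's automatic model tuning (hyperparameter search over repeated training/validation jobs) can be layered on top of such an estimator to squeeze out additional accuracy.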

Phase 2: Analyzing the Video Feed

Once the endpoint was in place, the second phase worked as follows:

  • CCTV video streams from the various client locations were collected and dropped into an Amazon S3 bucket.
  • Each new file landing in S3 triggered an AWS Lambda function.
  • The Lambda function invoked the SageMaker endpoint with the necessary parameters. The endpoint processed and analyzed the video feed, identifying the cleaning machine at five-second intervals and recording the timestamp and site location.
  • These sightings were written to a JSON file with timestamps and locations, creating a complete log of a machine cleaning (or failing to clean) at a particular customer site.
  • The resulting JSON log was stored back in an Amazon S3 bucket.
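A minimal sketch of such a Lambda handler is below. It is illustrative only: the endpoint name, target bucket, and the shape of the endpoint's response are assumptions, not the project's actual values.

```python
import json

def build_log_entry(site: str, timestamp: str, machine_seen: bool) -> dict:
    """One sighting record for the JSON performance log."""
    return {"site": site, "timestamp": timestamp, "machine_detected": machine_seen}

def lambda_handler(event, context):
    """Triggered by an S3 put: sends the new video to the SageMaker
    endpoint and writes the resulting sightings log back to S3."""
    import boto3  # available by default in the Lambda runtime

    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName="machine-detector",          # hypothetical name
        ContentType="application/json",
        Body=json.dumps({"bucket": bucket, "key": key}),
    )
    detections = json.loads(resp["Body"].read())  # assumed response shape

    log = [
        build_log_entry(d["site"], d["timestamp"], d["detected"])
        for d in detections
    ]
    boto3.client("s3").put_object(
        Bucket="asset-performance-logs",          # hypothetical target bucket
        Key=key + ".log.json",
        Body=json.dumps(log),
    )
    return {"entries": len(log)}
```

The S3 trigger itself is configured on the bucket (an event notification for `s3:ObjectCreated:*` pointing at the function), so no polling code is needed.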


Amazon Solutions at Play

Amazon S3

A highly scalable, available, and secure object storage service, Amazon S3 was used in this project both to store the incoming video streams and to hold the machine-performance logs in JSON format.

AWS Lambda

AWS Lambda is a serverless computing platform that runs code in response to event triggers, automatically managing the compute resources required to run and scale that code. Here, the Lambda function acts as middleware between the S3 bucket and the SageMaker endpoint, invoking the endpoint with the required parameters.

In this flow, a new video feed dropped into the S3 bucket triggers the Lambda function, which invokes the SageMaker endpoint with the required parameters, such as the source and target bucket information. The endpoint processes and analyzes the video feed to identify the machine and extract information such as the site name and timestamp, and finally creates a JSON file with the extracted information.

Business Benefits

While this is a small use case, the potential applications of machine learning solutions like this are significant:

  • They surface insights into machine behavior, which later support root-cause analysis of anomalous behavior and feed back into product improvement.
  • They make asset performance assessment and maintenance proactive rather than reactive.

So, that's what we did. Given the scenario, would you have built a different solution or built the same thing but differently? Any suggestions on what could have been done better?

Topics:
machine learning, amazon sagemaker, ml algorithm, aws lambda, amazon s3, asset performance, ai
