
Automatically Filter Image Uploads According to Their NSFW Score


Learn how to automatically filter and moderate image uploads according to their NSFW score via the PixLab NSFW API endpoint.


When developing, and later administering, a web or mobile application that handles a lot of image uploads, shares, likes, and so forth, you will want a smart, automated mechanism that moderates your users' image, GIF, or video uploads on the fly and acts accordingly when NSFW content is detected, for example by rejecting, flagging, or even censoring (i.e. applying a blur filter to) the suspect picture or video frame.

Since images and user-generated content dominate the Internet today, filtering NSFW content has become an essential component of web and mobile applications. NSFW is the acronym for "not safe for work" and refers to images or GIF frames that contain adult content, violence, or gory details.

At the scale of the modern web, it is impractical to rely on a human operator to moderate each image upload one by one. Instead, computer vision, a field of artificial intelligence, is used for the task: computers can now automatically classify NSFW image content with great precision and speed.

In this post, you will learn how to use the PixLab API to detect and filter unwanted content (GIFs included) and, based on the score number PixLab returns, blur the image in question or delete it.

The PixLab API


PixLab is a machine learning SaaS platform that offers computer vision and media processing APIs, either via a straightforward HTTP RESTful API or offline via the SOD Embedded CV library. The PixLab HTTP API feature set includes, but is not limited to:

  • Over 130 machine vision and media processing API endpoints.
  • State-of-the-art document-scanning algorithms for passports and ID cards via the /docscan API endpoint, face detection (/facedetect), facial landmark extraction (/facelandmarks), facial recognition (/facecompare), NSFW content analysis (/nsfw), and many more.
  • 1 TB of media storage served from a highly available CDN over SSL.
  • On-the-fly image compression, encryption, and tagging.
  • Proxy support for AWS S3 and other cloud storage providers.

Invoking the NSFW Endpoint

According to the PixLab documentation, the purpose of NSFW is to detect "not suitable for work" (i.e. nudity and adult) content in a given image or video frame. NSFW is of particular interest when mixed with media processing API endpoints like blur, encrypt, or mogrify to censor images on the fly according to their NSFW score. This helps developers automate tasks such as filtering user uploads.

HTTP Methods

The NSFW endpoint supports both the GET and POST HTTP methods, which means you can either send a direct link (public URL) to the target image via GET or upload the image directly to the NSFW endpoint, from your HTML form or web app for example, via POST. POST, on the other hand, is more flexible and supports two content types: multipart/form-data and application/json.
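As a hedged sketch of the difference (assuming Python's requests library, the https://api.pixlab.io base URL, and the img/key parameter names used elsewhere in this article), the two methods might be invoked like this:

```python
import json
import requests

NSFW_ENDPOINT = "https://api.pixlab.io/nsfw"  # assumed base URL for the endpoint

def scan_via_get(img_url, key):
    """GET: PixLab fetches the image itself from a public URL passed as a query parameter."""
    return requests.get(NSFW_ENDPOINT, params={"img": img_url, "key": key}).json()

def json_post_body(img_url, key):
    """POST with application/json: the same fields travel in the request body instead."""
    return json.dumps({"img": img_url, "key": key})

def scan_via_post(img_url, key):
    return requests.post(
        NSFW_ENDPOINT,
        data=json_post_body(img_url, key),
        headers={"Content-Type": "application/json"},
    ).json()
```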

API Response

The NSFW API endpoint always returns a JSON object (i.e. application/json) for each HTTP request, whether successful or not. The following fields are returned in the response body:

  • status
  • score
  • error

The field of interest here is the score value. The closer this value is to 1, the more likely the picture is NSFW, in which case you should act accordingly, for example by rejecting the picture or, better, applying a blur filter to it (see the example below).
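As a minimal sketch of acting on those fields (assuming a status value of 200 signals success, and using the 0.5 cutoff adopted later in this article):

```python
def interpret_reply(reply, threshold=0.5):
    """Map a decoded /nsfw JSON reply to an action: 'error', 'accept', or 'censor'."""
    if reply.get("status") != 200:  # assumed success code
        return ("error", reply.get("error"))
    score = reply["score"]
    return ("censor", score) if score >= threshold else ("accept", score)

# interpret_reply({"status": 200, "score": 0.91}) returns ("censor", 0.91)
```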

Real-World Code Sample

Given a freshly uploaded image, perform nudity and adult content detection first, and if the NSFW score is high enough, apply a blur filter to the target picture. A typical (highly NSFW) blurred image should look like the following after processing:

Censored NSFW Picture

Such a blurred image was obtained via the following Python script:

Python
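Here is a sketch of such a script, mirroring the PHP logic shown next; it assumes the requests library and the https://api.pixlab.io base endpoint, and the key is a placeholder:

```python
import requests

# Assumed base endpoint URLs (not spelled out in this article).
PIXLAB_NSFW = "https://api.pixlab.io/nsfw"
PIXLAB_BLUR = "https://api.pixlab.io/blur"

def is_nsfw(score, threshold=0.5):
    """Decision rule used in this article: scores of 0.5 or more are treated as NSFW."""
    return score >= threshold

def censor_if_nsfw(img_url, key):
    """Scan img_url; if its NSFW score is high, blur it and return the blurred link."""
    reply = requests.get(PIXLAB_NSFW, params={"img": img_url, "key": key}).json()
    if reply.get("status") != 200:
        raise RuntimeError(reply.get("error"))
    if not is_nsfw(reply["score"]):
        print("No adult content was detected in this picture")
        return img_url
    print("Censoring NSFW picture...")
    # Blur with a large radius and sigma, as in the PHP sample.
    reply = requests.get(
        PIXLAB_BLUR,
        params={"img": img_url, "key": key, "rad": 50, "sig": 30},
    ).json()
    if reply.get("status") != 200:
        raise RuntimeError(reply.get("error"))
    return reply["link"]

# Example invocation (requires a valid PixLab key):
# print(censor_if_nsfw("https://i.redd.it/oetdn9wc13by.jpg", "My_Pixlab_Key"))
```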

Similarly, the same code logic implemented in PHP:

PHP

/*
 * PixLab PHP client: a single-class PHP file without any dependency that you can get from GitHub:
 * https://github.com/symisc/pixlab-php
 */
require_once "pixlab.php";

# Target image: change to any link (possibly adult) you want, or switch to POST if you
# want to upload your image directly; refer to the sample set for more information.
$img = 'https://i.redd.it/oetdn9wc13by.jpg';
# Your PixLab key
$key = 'My_Pixlab_Key';

# Censor an image according to its NSFW score
$pix = new Pixlab($key);
/* Invoke NSFW */
if( !$pix->get('nsfw',array('img' => $img)) ){
    echo $pix->get_error_message();
    die;
}
/* Grab the NSFW score */
$score = $pix->json->score;
if( $score < 0.5 ){
    echo "No adult content was detected in this picture\n";
}else{
    echo "Censoring NSFW picture...\n";
    /* Call blur with the highest possible radius and sigma */
    if( !$pix->get('blur',array('img' => $img,'rad' => 50,'sig' => 30)) ){
        echo $pix->get_error_message();
    }else{
        echo "Blurred picture URL: ".$pix->json->link."\n";
    }
}

The code above is self-explanatory, and regardless of the programming language, the logic is always the same: we make a simple HTTP GET request with the input image URL as the sole parameter. Most PixLab endpoints support multiple HTTP methods, so you can easily switch to POST-based requests if you want to upload your images and videos directly from your mobile or web app for analysis. Back to our sample: only two API endpoints are needed for our moderation task:

  1. NSFW is the analysis endpoint and must be called first. It performs nudity and adult content detection and returns a score value between 0 and 1. The closer this value is to 1, the more likely the picture or frame is NSFW.
  2. The blur endpoint is called afterward, and only if the NSFW score returned earlier is greater than a certain threshold; in our case, it is set to 0.5. The blur endpoint returns a direct link (URL) to the blurred image, which is stored on the PixLab storage cluster, or on your own AWS S3 bucket if you connect your bucket (i.e. your S3 access/secret keys) from the PixLab dashboard, giving you full control over your processed (i.e. censored) images.
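For direct uploads from a mobile or web app, a hedged multipart/form-data sketch might look like the following (the "file" field name and the 200 success status are assumptions, not confirmed API details):

```python
import requests

def upload_fields(key):
    """Non-file form fields sent alongside the multipart upload."""
    return {"key": key}

def scan_upload(path, key):
    """POST a local image file to the /nsfw endpoint and return its NSFW score (0..1)."""
    with open(path, "rb") as fh:
        reply = requests.post(
            "https://api.pixlab.io/nsfw",  # assumed base URL
            files={"file": fh},            # assumed multipart field name
            data=upload_fields(key),
        ).json()
    if reply.get("status") != 200:
        raise RuntimeError(reply.get("error"))
    return reply["score"]
```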

Conclusion

Surprisingly, thanks to open computer vision technologies such as the ones provided by PixLab, automatically filtering unwanted content is straightforward even for the average web developer or site administrator who lacks machine learning skills. Find more code samples at https://github.com/symisc/pixlab, and at https://github.com/symisc/sod if you are a C/C++ developer.

Topics:
api, artificial intelligence, computer vision, javascript, machine learning, php, python, web, webdev

Opinions expressed by DZone contributors are their own.
