How to Classify NSFW (Not Safe for Work) Imagery with AI Content Moderation using Java
Moderate your website's content uploads with an artificial intelligence service and tackle the problem effectively while conserving precious time and resources.
The purpose of this article is to highlight some of the contemporary challenges in moderating degrees of explicit NSFW (Not Safe for Work) image content on websites and to demonstrate a cloud-based Artificial Intelligence Content Moderation API which can be deployed to increase the efficacy of the content moderation process.
Pornographic images are typically banned on mainstream websites and professional networks. That’s because failing to ban such content means exposing website patrons and employees alike to unsolicited, sexually explicit imagery, which can amount to charges of sexual harassment, depending on how litigious your region of the world is. Enforcing a ban on such content is no small task, however, due to the large volume of image files that are uploaded to content-curating networks each day, and in part due to the difficulty of clearly defining policies against imagery which is some degree of sexually suggestive (i.e., content that is racy) rather than fully pornographic.
Before moving on, I will first elaborate on the distinction between categories of explicit content, largely to highlight the complexity of "raciness" within the content moderation process. Pornography directly refers to content displaying human sexual organs or sexual activity, and it is necessarily created for an erotic purpose. Raciness relates generally to a degree of sexual suggestiveness that reaches a noticeable (or even shocking) level without constituting pornography. Most people have little trouble identifying the former when they see it; definitions of raciness, however, can vary somewhat depending on the audience and the situation. For example, a picture of men and women in bathing suits on a beach might strike you as completely acceptable at work if it’s a family photo from a vacation, and not acceptable if it’s a photo from a swimsuit magazine.
Now let’s look at where high volumes of image uploads come from. Many online businesses rely on User-Generated Content (UGC) images for product reviews, insurance claims, profile pictures, and more. Further, at most workplaces, employees can easily upload images via their personal machines for a multitude of reasons, such as to share with colleagues across professional communication channels (Slack, Teams, etc.) or to use as background photos for their office computer’s home screen. These are just a few examples of common situations that can yield hundreds, thousands, or even tens of thousands of new uploads each week, depending on the size and scale of the company. The challenge is that each new image carries the risk of a potential NSFW policy breach, regardless of how low the odds are, and that makes identifying problematic files a "needle in a haystack" affair. This problem ironically worsens as a business becomes larger and more successful: accepting more image uploads at scale raises the likelihood of encountering unsafe images in that pool, and it only gets harder to identify them as a result.
Before approaching this problem with an API service, preventing the upload of unsafe images begins with laying out clear business policies against explicit images for patrons and employees alike. Pornography is easy to define and can be explicitly discouraged through various user agreements, education/awareness training sessions, handbooks, and other such documentation. Racy content, while harder to define, can still be discouraged through similar avenues by taking the time to identify categories of non-pornographic content which are not considered acceptable for your business. To expand on my earlier example, a company may choose to ban images of men and women in bathing suits only if those images come from certain magazines or online publications. It’s unavoidable, however, that unsafe content will slip through the cracks regardless of such policies due to negligent or deviant actors on the client-side.
To enforce an NSFW content policy broadly and effectively, the inclusion of an AI content moderation service for image uploads is essential. In recent years, it has become a common addition to mainstream content-curating platforms. Without such a service, moderating image uploads to a website can only be done by manually opening, reviewing, and tagging each individual file based on its contents. That task is as tedious as it is expensive, typically requiring large teams of employees working around the clock to keep up with continual image uploads. In many cases, the quantity of uploads is completely insurmountable, which means explicit images are more likely to go undetected for longer and longer periods of time, continually increasing the likelihood that such images will be opened and cause harm. With an AI content moderation service in place, it’s uniquely possible to moderate a high volume of image uploads in a cost-effective way while simultaneously distinguishing between degrees of severity and avoiding unnecessary “overkill” content flagging.
The Cloudmersive NSFW Image Classification API provides an effective layer of AI content moderation as a cloud service for any website or application. It is designed to identify and classify images containing explicit material using a dynamic scoring system that creates an actionable distinction between outright pornography and varying degrees of raciness. Each scanned file receives a numerical score and an accompanying probability tag that describes the score in natural language. The scoring scale runs from 0.0 to 1.0: values from 0.8 to 1.0 indicate a “High Probability” that pornographic content is present, values from 0.2 to 0.8 represent a “Medium Probability” (an increasing degree of raciness), and values from 0.0 to 0.2 indicate a “Low Probability” that such content is present.
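To give a concrete sense of how these probability bands might translate into moderation decisions, here is a minimal Java sketch. The thresholds simply mirror the scale described above; the class, enum, and method names are hypothetical and not part of the Cloudmersive SDK.
// Hypothetical helper that maps an NSFW score (0.0 to 1.0) to a moderation action,
// using the probability bands described above.
public class NsfwScoreInterpreter {

    public enum Action { ALLOW, FLAG_FOR_REVIEW, BLOCK }

    public static Action interpret(double score) {
        if (score >= 0.8) {
            // High Probability: pornographic content is likely present
            return Action.BLOCK;
        } else if (score >= 0.2) {
            // Medium Probability: some degree of raciness
            return Action.FLAG_FOR_REVIEW;
        } else {
            // Low Probability: unlikely to contain explicit content
            return Action.ALLOW;
        }
    }
}
In practice, you might tune how the medium band is handled (automatic rejection versus human review) to match your own raciness policy.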
Below, I’ve provided instructions to help you structure your NSFW Image Classification API call, along with ready-to-run Java code examples.
Let’s begin by installing the Java SDK with Maven. First, add a reference to the repository in pom.xml:
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
Next, add a reference to the dependency in pom.xml:
<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>
With installation complete, you can now include the import classes at the top of your file and call the API. The parameters for this API call include the following:
- Your image file path
- Your Cloudmersive API key (you can obtain one by registering a free account).
// Import classes:
import com.cloudmersive.client.invoker.ApiClient;
import com.cloudmersive.client.invoker.ApiException;
import com.cloudmersive.client.invoker.Configuration;
import com.cloudmersive.client.invoker.auth.*;
import com.cloudmersive.client.NsfwApi;
import com.cloudmersive.client.model.NsfwResult;
import java.io.File;

// Configure API key authorization: Apikey
ApiClient defaultClient = Configuration.getDefaultApiClient();
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

// Classify the image and print the result
NsfwApi apiInstance = new NsfwApi();
File imageFile = new File("/path/to/inputfile"); // Image file to perform the operation on; common formats such as PNG and JPEG are supported
try {
    NsfwResult result = apiInstance.nsfwClassify(imageFile);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling NsfwApi#nsfwClassify");
    e.printStackTrace();
}
With that, you're all done: no further code is required. Below, I've included a sample response model for your reference.
{
  "Successful": true,
  "Score": 0,
  "ClassificationOutcome": "string"
}
Moderating content doesn't end with classifying explicit images, of course. There are many other ways your website's content safety can be compromised, such as unfiltered uploads of images or text files containing profanity, hate speech, or various forms of harassment. Boosting the power and scope of your application's content moderation processes whenever possible is highly recommended.