

Published Date: 2022/11/30

How should you respond to inappropriate online posts? How can content moderation protect your company's brand value and CX?

The online world we interact with every day maintains order through "content moderation", the practice of removing inappropriate posts so that users are not subjected to unpleasant content. Content moderation is becoming a critical issue not only for platform companies but for any business that operates social media accounts or its own media. This article considers what content moderation should look like by exploring the question: does content moderation protect brands and improve CX?

What Is Content Moderation in Internet Services?

Have you ever encountered unpleasant posts while using social media? Content moderation (sometimes called content regulation) is the set of countermeasures against such inappropriate content. It refers to monitoring the posts, images, and videos uploaded to internet services, and it helps keep a community healthy through measures such as removing illegal content, content that users find offensive or inappropriate, and harmful material such as slander, as well as banning the users who post such content.

Monitoring methods include using tools, employing AI, and having humans oversee the process.

【Tool-Based Monitoring】
Tools can filter text containing inappropriate expressions or automatically detect offensive images and videos. While relatively inexpensive to implement, they face challenges in monitoring accuracy, such as overlooking content not matching predefined keywords or blocking harmless posts.
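To make that accuracy trade-off concrete, the sketch below (with an invented word list and example posts, not taken from any real tool) shows how naive keyword matching can both miss harmful posts that avoid the listed words and flag harmless posts that happen to contain them.

```python
# Minimal sketch of tool-based keyword filtering.
# The word list and example posts are invented for illustration; real tools
# use far larger dictionaries, pattern matching, and image/video detection.

BLOCKED_WORDS = {"scam", "idiot"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any blocked word as a substring."""
    text = post.lower()
    return any(word in text for word in BLOCKED_WORDS)

# False negative: abusive intent phrased without a listed keyword slips through.
print(is_flagged("You people are completely worthless"))        # False

# False positive: a harmless post is blocked because "scam" appears in "scams".
print(is_flagged("A great read on how to avoid scams online"))  # True
```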

【AI-Based Monitoring】
AI that uses machine learning can raise monitoring accuracy. Although it has attracted significant attention recently, it costs more than simple tools, and when the underlying model performs poorly it has difficulty interpreting context.

【Human Visual Monitoring】
Content moderators visually monitor posts. They remove harmful content in response to user complaints and may also check posts one by one in real time. While highly accurate, this method is costly.

Since each method has its pros and cons, monitoring often combines tools with visual checks, or AI with human oversight. Some major platform operators employ large teams alongside AI-powered automatic detection to address issues, highlighting the importance of content moderation in maintaining order within communities.
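As a rough illustration of such a hybrid setup, the sketch below routes each post based on an automated violation score: clear-cut cases are handled automatically, and ambiguous ones go to a human review queue. The scoring function, thresholds, and examples are assumptions made for illustration; in practice the score would come from a trained model or moderation tool, and the thresholds would be tuned to the platform's policy.

```python
# Minimal sketch of a hybrid moderation pipeline: an automated score handles
# clear-cut cases, and uncertain posts are routed to human moderators.
# `score_post`, the thresholds, and the examples are illustrative assumptions.

from typing import Callable

REMOVE_THRESHOLD = 0.9   # above this, remove automatically
REVIEW_THRESHOLD = 0.5   # between the thresholds, queue for human review

def moderate(post: str, score_post: Callable[[str], float]) -> str:
    """Return 'remove', 'review', or 'publish' for a single post."""
    score = score_post(post)  # estimated probability the post violates policy
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= REVIEW_THRESHOLD:
        return "review"       # humans handle ambiguous, context-heavy cases
    return "publish"

# A dummy scorer standing in for a trained classifier.
def dummy_scorer(post: str) -> float:
    text = post.lower()
    if "hate" in text:
        return 0.95
    if "scam" in text:
        return 0.6
    return 0.1

for post in ["I hate this group of people",
             "Is this product a scam?",
             "Nice photo, thanks for sharing!"]:
    print(post, "->", moderate(post, dummy_scorer))
```

The point of this design is that automation absorbs volume, while human judgment is reserved for the borderline posts where context matters most.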

However, the approach to content moderation is heavily influenced by the platform's policies. What constitutes optimal rules depends on the platform's objectives and governance structure. Therefore, establishing policies that clearly align with the platform's core purpose is crucial.

For instance, major IT companies that operate a variety of platforms publicly disclose their moderation workflows as part of the mechanisms supporting "Trust and Safety" (the overall effort to keep a service sound). Content services such as blogs typically check for spam posts and hate speech, while services where users interact with one another set policies suited to their nature, such as excluding malicious businesses or users seeking encounters.

As another example, a video-sharing platform has established community guidelines so that users can express themselves safely. These guidelines define inappropriate content in detail, such as dangerous acts or behavior that leads to bullying and harassment, and are enforced through monitoring by both technology and human reviewers. The platform also maintains a reporting channel through which users can flag posts that violate the guidelines, helping preserve the health of the community.

How Far Should Platform Operators Go in Content Moderation?

While content moderation is critically important, drawing the line on how much regulation is necessary is indeed challenging. Let's examine the approach platform operators take when tackling content moderation.

When content moderation is inadequate, users may experience discomfort or struggle to find desired information amid excessive noise. This can lead to users leaving the community. To prevent such outcomes, posts that risk damaging the customer experience (CX) must be filtered out of users' view. However, determining what constitutes a violation is an extremely complex issue. On platforms like social media used by people worldwide, posts must be moderated by interpreting their intent and considering diverse cultural backgrounds and legal frameworks.

Overly strict regulations that infringe on freedom of expression can create a "censorship"-like environment, causing users to feel constrained and ultimately leave the community. For example, regulations designed to exclude certain words often lead to unintended consequences, such as the deletion of posts that contain no harmful intent. In Japan, the Ministry of Internal Affairs and Communications has begun considering platform regulations, but establishing uniform boundaries is difficult, and creating standards and rules is no simple task. While the necessity of content moderation is recognized and the technology to support it is becoming relatively easier to implement, careful and thoughtful consideration is required regarding how content should actually be moderated.

What Companies Can Do to Convey Their Brand Message Accurately

So far, we've examined the approach platform operators take to content moderation. But what about companies outside the platform space? These considerations are equally relevant for businesses operating official social media accounts or e-commerce sites where users post reviews.

In recent years, corporate account management on social media has become widespread. Platforms where users share opinions, such as review sites and message boards, have also proliferated. However, these touchpoints with consumers always carry inherent risks. Negative discussions—such as complaints about the brand or dissatisfaction with purchase experiences—may occur in these open forums. Furthermore, corporate controversies and negative opinions, regardless of their truthfulness, tend to attract significant user attention. Consequently, they can spread rapidly and become newsworthy. This not only risks diminishing brand value but also potentially leads to misinterpretation of facts due to inaccurate information and the spread of distorted perceptions.

Adopting a content moderation approach as a countermeasure to such situations may reduce the risk of damaging brand image. By closely monitoring online opinions about the company and responding swiftly when concerning posts are found—verifying facts, investigating causes, offering explanations, or even requesting information disclosure from site administrators when necessary—companies can potentially minimize damage.

In practice, there have been cases where a company targeted by spreading misinformation responded immediately, posted accurate information on social media the next day, and brought the situation under control. On the other hand, some companies deliberately neither confirm nor deny rumors or online criticism, choosing instead to wait for the situation to calm down or fade away. Immediately countering unfounded slander or unreasonable criticism on social media can sometimes add fuel to the fire. Early intervention is sometimes best; at other times it is better to watch and wait. Judging each case on its own circumstances is crucial.

When criticism floods in toward a specific target and spirals out of control, it's called a "backlash." However, the damage inflicted on a company's brand value can be far greater than what the term "backlash" alone can convey. Responding appropriately to negative posts and preventing malicious posts before they occur helps ensure the company's message is accurately conveyed to the public and protects its brand image.

Furthermore, publicly disclosing the rules that govern content moderation communicates the values the brand holds dear and the CX it aims to provide, demonstrating the brand's stance. For example, a company that strictly enforces rules against discriminatory content may be perceived as one that values diversity and equality. Conversely, keeping the rules from becoming overly strict and prioritizing user autonomy can also be a valid stance for a brand.

In other words, the health of a platform or community service significantly influences not only CX but also the company's brand image. To give users a better experience, both platform operators and the companies involved must carefully establish guidelines on what to regulate, to what extent, and how much freedom to allow. Ultimately, this also helps build a base of fans who share the values the company or brand cherishes and deepens their engagement.

 

Content moderation aimed at improving the health of a platform or community also prompts a rethink of the relationship with users and how close that relationship should be. Start by reviewing the current state of communication with customers, such as posts on your company's social media accounts and comments on your various services, and consider what policies are appropriate to protect your brand value. From there, you may discover ways to improve CX that fit your brand.

 

The information in this article is current as of the time of publication.
