Artificial Intelligence

A Facebook spokesperson has revealed that the social media giant is redoubling its efforts to counter harmful content on its platform using artificial intelligence.

In effect, the company will use artificial intelligence to prioritize harmful content for review. The move is aimed at helping its more than fifteen thousand human reviewers and moderators deal with reported content.

During the press interaction, the company said it would make sure to get to "the worst of the worst," prioritizing real-world imminent harm above all.
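To make the idea concrete, here is a minimal sketch of severity-based triage, assuming a hypothetical upstream classifier that scores each reported post for severity, virality, and probability of violating policy. The names and the multiplicative scoring rule are illustrative only, not Facebook's actual system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReportedPost:
    # Negative combined score so the min-heap pops the highest-risk post first.
    sort_key: float = field(init=False)
    severity: float        # e.g. imminent real-world harm scores highest
    virality: float        # how fast the post is spreading
    violation_prob: float  # model confidence that the post violates policy
    post_id: str = field(compare=False, default="")

    def __post_init__(self):
        self.sort_key = -(self.severity * self.virality * self.violation_prob)

queue: list[ReportedPost] = []
heapq.heappush(queue, ReportedPost(0.9, 0.7, 0.95, post_id="a"))
heapq.heappush(queue, ReportedPost(0.2, 0.9, 0.80, post_id="b"))

worst_first = heapq.heappop(queue)  # human reviewers see the riskiest post first
print(worst_first.post_id)          # -> "a"
```

Ranking reported content by a combined risk score like this is one simple way to surface "the worst of the worst" first; the production criteria are undisclosed.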

There have been numerous attempts in the past to bring AI into the content moderation process on Facebook's platform.

However, not all have met with success. This article tracks a few of Facebook's significant past efforts and how the company has fared in tackling these issues.

Facebook's Efforts Toward AI-Based Moderation

In the past, Facebook used XLM, a cross-lingual language model that trains a single shared encoder on massive amounts of multilingual data. It brought improvements in both supervised and unsupervised machine translation of low-resource languages, enabling better detection of hate speech and harmful content in languages other than English as well.

The system lets classifiers trained in one language, in most cases English, be applied across different languages. With this method, Facebook was able to proactively detect harmful language and content in about forty languages.
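A rough illustration of this cross-lingual transfer idea, using the publicly released XLM-R encoder from Hugging Face transformers as a stand-in for Facebook's internal XLM models; the fine-tuning step is elided and the label set is invented for the example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Shared multilingual encoder: fine-tune on English hate-speech labels once,
# then apply the same weights to other languages with no extra training.
MODEL = "xlm-roberta-base"  # public stand-in for Facebook's internal encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# ... fine-tune `model` on English examples labelled {0: benign, 1: harmful} ...

def harm_score(text: str) -> float:
    """Probability that `text` is harmful, regardless of its language."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# The same classifier head, trained only on English, can score other languages
# because all languages share one embedding space.
print(harm_score("This is an example post"))  # English
print(harm_score("Ceci est un exemple"))      # French: no French labels needed
```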

Whole Post Integrity Embeddings (WPIE) later succeeded this method. WPIE is a pretrained universal representation of content for integrity problems. Compared to the previous system, it is trained on a much larger set of violation types and training inputs. Alongside it, Facebook improved performance across modalities by using focal loss, which prevents easy-to-classify examples from overwhelming the detector during training, together with gradient blending, which computes an optimal blend of modalities based on their overfitting behaviour.
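To make the focal-loss idea concrete, here is a minimal binary focal loss in PyTorch, following the formulation from Lin et al.'s "Focal Loss for Dense Object Detection"; it is a sketch of the general technique, not Facebook's WPIE code:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Binary focal loss (Lin et al., 2017).

    Down-weights easy, well-classified examples by (1 - p_t)^gamma so that
    hard examples dominate the gradient, preventing the flood of obviously
    benign posts from overwhelming the violation detector during training.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# A confident, correct prediction contributes almost nothing to the loss,
# while the uncertain example dominates:
logits = torch.tensor([-4.0, 0.1])   # confident-benign, uncertain
targets = torch.tensor([0.0, 1.0])   # actual labels: benign, harmful
print(focal_loss(logits, targets))
```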

Facebook claimed that deploying these tools substantially improved the performance of its integrity systems.

For example, these tools helped proactively detect about 97.6 percent of the roughly 4.4 million pieces of drug sale content taken down from the platform.

Earlier, as the COVID-19 situation escalated, Facebook started utilizing SimSearchNet, a convolutional neural network-based model originally built to detect near-exact duplicate images, to fight misinformation.

SimSearchNet performs end-to-end image indexing to recognize and flag near-duplicate matches.
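A toy version of that near-duplicate pipeline, assuming a pretrained ResNet from torchvision as a stand-in for SimSearchNet's purpose-built embeddings and illustrative file paths; production systems add an optimized similarity index (e.g. FAISS) on top:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Stand-in embedder: a pretrained ResNet with its classifier head removed.
# SimSearchNet uses a purpose-built CNN; the indexing idea is the same.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()   # output 512-d embeddings, not class logits
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = resnet(img).squeeze(0)
    return vec / vec.norm()        # unit-normalize for cosine similarity

# Index of known flagged images (paths are hypothetical).
index = {"debunked_claim.jpg": embed("debunked_claim.jpg")}

def is_near_duplicate(path: str, threshold: float = 0.95) -> bool:
    """Flag an upload whose embedding nearly matches a known flagged image."""
    query = embed(path)
    return any(float(query @ ref) > threshold for ref in index.values())
```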

Recently, Facebook introduced its machine translation model M2M-100, which is trained on 2,200 language directions, roughly ten times the training data used by the preceding English-centric models.

The model is built on a many-to-many dataset of 7.5 billion sentences covering a hundred languages, mined with novel techniques. The resulting parameters capture information from related languages and reflect a more diverse set of scripts and morphologies.

One salient feature is that the model does not require English as a link between two languages: text in one language can be translated directly into another without first being translated into English.
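This direct translation is easy to try with the publicly released 418M-parameter checkpoint of M2M-100 on Hugging Face; the following Hindi-to-French sketch follows the model card's usage pattern:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Public 418M-parameter checkpoint of M2M-100.
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

hindi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"

# Hindi -> French directly: English never appears as an intermediate step.
tokenizer.src_lang = "hi"
encoded = tokenizer(hindi_text, return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),  # target language token
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
# -> ["La vie est comme une boîte de chocolat."]
```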

The ultimate goal of such a model is to perform bidirectional translation across the world's roughly seven thousand languages, which would immensely benefit low-resource languages.

Apart from its prominent application in communication, Facebook anticipates that the M2M-100 model will help with content moderation across a more extensive set of languages.

How Successful Have These Moderation Techniques Been?

The worldwide pandemic has served as a litmus test for Facebook's AI content moderation systems.

While there have been a few hits, many loopholes were left exposed. The CEOs of Facebook and Twitter have been questioned over their platforms' failure to curb harmful and often inflammatory content, as well as over alleged bias.

Meanwhile, concerns around the health and safety of human moderators at Facebook have grown louder.

The moderators have accused the company of flouting safe working practices, underpaying them, and forcing them to return to work even at the height of the pandemic.

Notably, at the start of the coronavirus outbreak, thousands of these content moderators were sent home, and content moderation was left primarily to the AI systems.

Apart from these examples, there have been repeated criticisms of Facebook’s way of handling harmful content. 

It has been found that the platform pulls down seemingly harmless content while allowing toxic content to thrive.

Wrap-Up

Despite the advancements in AI-enabled content moderation that Facebook regularly announces, the fact remains that the company is still heavily dependent on human moderators. The problem with this arrangement is that those moderators are exposed to hours of triggering content daily, which affects their mental well-being.

They also complain about being overworked and underpaid.

Given all these issues, it is safe to say it will be a while before Facebook presents a truly breakthrough AI model for countering harmful, triggering, and biased content on its platform.