
3 Challenges Facing Facebook’s Plan to Use AI in Killing Hate Speech

Recently, Facebook CEO Mark Zuckerberg told the US Congress that his company would increasingly rely on artificial intelligence (AI) to curb the spread of hate speech on the platform.

He went further, predicting that within five to ten years Facebook would have AI tools capable of understanding the linguistic nuances of content.

Zuckerberg made these remarks while testifying before the United States Congress about the scandal surrounding Cambridge Analytica’s misappropriation of millions of users’ personal data.

Currently, Facebook employs about 15,000 human moderators who screen and remove offensive content, and it plans to recruit 5,000 more by the end of the year.

Nevertheless, these moderators can generally act only on posts that Facebook users have reported or flagged. Using AI to spot potentially offending content automatically could make removal easier and faster. Here are three challenges facing that plan:

Easy Words with Hard Meanings

Language remains a considerable barrier for AI. While it is straightforward for a computer to spot keywords or phrases, comprehending the meaning behind a post is far more difficult.

Doing so calls for in-depth knowledge of the world. Language is a powerful and sophisticated means of communication precisely because it relies on common-sense knowledge, and humans use a mental model of other people to pack a great deal of information into a few words.
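To see the gap, here is a minimal, hypothetical sketch in Python (this is not Facebook’s system, and the blocklist is invented purely for illustration). Keyword matching is trivial for a computer, yet without an understanding of meaning it both over-flags benign posts and misses hostile ones:

```python
# A toy keyword filter, illustrating why matching words is easy
# but understanding meaning is hard. Blocklist is hypothetical.

OFFENSIVE_KEYWORDS = {"trash", "vermin"}

def flag_by_keyword(post: str) -> bool:
    """Flag a post if it contains any blocklisted keyword."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & OFFENSIVE_KEYWORDS)

# False positive: a benign use of a blocklisted word.
print(flag_by_keyword("Remember to take the trash out tonight"))  # True

# False negative: hostile intent with no blocklisted word.
print(flag_by_keyword("People like that don't belong here"))      # False
```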

Ernest Davis, an NYU professor who studies common-sense reasoning in computers, said that recognizing fake news is difficult. He credited Snopes.com for examining a broad range of factors, and pointed out that fake news regularly contains half-truths.

It Is Proving to Be an Arms Race

Even if natural-language understanding improves, purveyors of misinformation and hate could simply adopt new ways to evade detection. Sean Gourley, chief executive officer of Primer, issued this warning while speaking at an MIT Technology Review event.

Primer uses artificial intelligence (AI) to produce reports for United States intelligence agencies and is backed by In-Q-Tel. Gourley also warned that AI itself will likely be used to optimize and mass-produce targeted fake news in the future.
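As a toy illustration of that arms race (again hypothetical code, unrelated to any real moderation system), even a trivial character substitution defeats the kind of exact keyword matching sketched earlier:

```python
# How trivially adversaries can evade exact keyword matching:
# swapping letters for look-alike digits ("leetspeak") slips past it.

BLOCKLIST = {"vermin"}  # hypothetical blocklist for illustration

def flagged(post: str) -> bool:
    """Flag a post if any word is an exact blocklist match."""
    return any(word.lower() in BLOCKLIST for word in post.split())

def obfuscate(post: str) -> str:
    """Replace letters with look-alike digits to dodge exact matching."""
    return post.translate(str.maketrans({"e": "3", "i": "1"}))

post = "They are vermin"
print(flagged(post))             # True  -- caught by the filter
print(flagged(obfuscate(post)))  # False -- "v3rm1n" slips through
```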

Video Will Worsen Things

We may be entering an even worse era of deceptive and fake news. Researchers have already demonstrated persuasive synthetic audio and video made with machine learning, including footage of politicians giving speeches they never delivered. The same trickery has raised alarm over the potential creation of fake revenge porn.

Because AI researchers have only recently begun to make real progress on video understanding, fake videos made with AI could prove especially hard to identify. These videos are typically produced by two neural networks that compete: one generates fake imagery while the other tries to detect it. Since the generation process works precisely by fooling the detector into judging the output real, building a system that reliably distinguishes fakes from genuine footage is inherently difficult.
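The two-network setup described above is known as a generative adversarial network (GAN). Below is a minimal sketch of the idea, assuming PyTorch is available; it learns to mimic a simple 1-D Gaussian rather than video, but the adversarial dynamic is the same: the generator improves exactly by fooling the discriminator.

```python
# Minimal GAN sketch (toy 1-D data, not video) showing the
# generator-vs-discriminator training loop. Requires PyTorch.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise into a fake "sample".
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (0 = fake, 1 = real).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: N(2, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Train the discriminator to tell real from fake.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # Train the generator to make the discriminator call its output real.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# The generated samples should now cluster near the real mean of 2.0.
print(f"mean of generated samples: {G(torch.randn(1000, latent_dim)).mean():.2f}")
```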

Source: MIT Technology Review


