Facebook recently released an intriguing research paper called “The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes.”
It’s well worth a read, as is a closer look at the data set developed over the course of the research. The idea is simple: use machine learning to help identify ‘mean’ and hateful memes.
If such a tool could be developed, it would help not just Facebook but the entire online ecosystem better police the internet, stopping a significant portion of hate speech in its tracks.
Unfortunately, as the research shows, the AI still has a ways to go before it can truly be considered effective.
To conduct the research, the company compiled a data set of a million memes from across the web. They eliminated any that were in clear violation of Facebook’s Terms of Service, which left them with 162,000 samples.
They re-created those memes by copying the text onto new images sourced through a partnership with Getty Images. Then they had human reviewers judge whether each meme was “hateful.” Once a consensus was reached, they culled the set down to a final 10,000 memes, using these to attempt to “teach” the AI to identify hateful memes.
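Each example in a benchmark like this pairs an image with its overlaid text and a binary label. The short Python sketch below shows one way such a record could be represented; the field names and structure are illustrative assumptions, not the dataset’s actual format.

```python
from dataclasses import dataclass

# Illustrative only: one way a single meme example in a benchmark like this
# could be represented. Field names are assumptions, not the real schema.
@dataclass
class MemeExample:
    image_path: str   # the re-created image (licensed stock photo)
    text: str         # the text overlaid on the image
    hateful: bool     # consensus label from human reviewers

example = MemeExample(
    image_path="memes/00001.png",
    text="example overlay text",
    hateful=False,
)
print(example)
```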
The result? The human reviewers achieved an average accuracy of 84.7 percent, versus 64.73 percent for the AI. That’s a respectable showing for a first attempt, but clearly much more work needs to be done before the AI can be considered truly effective.
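For context, those figures are straightforward classification accuracy: the share of memes where the reviewer’s (or model’s) hateful/not-hateful call matches the benchmark label. The sketch below is purely illustrative and not taken from Facebook’s evaluation code; the labels and predictions are made up.

```python
# Illustrative only: how a simple accuracy comparison like the one above
# might be computed. The labels and predictions are made-up sample data,
# not results from the Hateful Memes benchmark.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# 1 = hateful, 0 = not hateful (hypothetical sample data)
ground_truth = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
human_votes  = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]  # one mistake
model_votes  = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1]  # three mistakes

print(f"Human accuracy: {accuracy(human_votes, ground_truth):.1%}")  # 90.0%
print(f"Model accuracy: {accuracy(model_votes, ground_truth):.1%}")  # 70.0%
```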
A researcher involved in the project had this to say about the results:
“If we knew what is missing, it would be easy to fix and the gap between AI and humans would be lower. Generally speaking, we need to work on improving multimodal understanding and reasoning.”
One thing you can be sure of is this: the company will keep hammering away at the problem, and the AI’s accuracy will improve over time. It’s an interesting, challenging project that could have an enormous impact. Kudos to Facebook for their work to this point.
Used with permission from Article Aggregator