
Countering Online Extremism Is Too Important To Leave To Facebook

Facebook logo. (Jaap Arriens/NurPhoto via Getty Images)

Social media has a complicated history with terrorism. As groups like the Islamic State harnessed the power of social platforms for recruitment, incitement and propaganda, Silicon Valley's initial response was to rebuff government calls to take action, in deference to terrorists' free speech rights. In the face of overwhelming public pressure and threats of new legislation across the world, the Valley made an abrupt about-face and began actively touting its counter-terrorism efforts. Yet whistle-blower claims that Facebook's counter-terrorism efforts fall far short of the company's public statements, along with new reporting on just how pervasive and easily discoverable terrorist content is on Facebook, remind us how little we actually know about the success or failure of Silicon Valley's counter-terrorism efforts.

Facebook has become in many ways the public face of Silicon Valley’s embrace of machine learning and content blacklists to strip terrorist content from its platform.

Yet the company's public statements framing those efforts as a tremendous success stand in stark contrast to its refusal to release even the most basic statistics that would permit external evaluation of its claims.

To date, the company has steadfastly refused to release even the most routine indicators, such as its algorithms' false positive and false negative rates.

In fact, the only real numbers the company has released publicly are estimates of the volume of content it has deleted and the claim that 99% of the ISIS and Al Qaeda terror content it deletes is flagged by its algorithms.

Facebook’s “99%” figure has become one of its most famous counter-terrorism statistics, yet also its most misreported. Last year Facebook proudly touted that “in Q1 we took action on 1.9 million pieces of ISIS, al-Qaeda and affiliated terrorism propaganda, 99.5% of which we found and flagged before users reported them to us.” The New York Times at the time summarized this as “Facebook’s A.I. found 99.5 percent of terrorist content on the site, leading to the removal of roughly 1.9 million pieces of content in the first quarter,” while the BBC did slightly better with “the firm said its tools spotted 99.5% of detected propaganda posted in support of Islamic State, Al-Qaeda and other affiliated groups, leaving only 0.5% to the public.”

The reality, as Facebook subsequently confirmed, is that the statement means only that of the ISIS and Al Qaeda content it ultimately deleted, 99.5% came from its automated filters, much of it from its simple hash-based blacklists. It says absolutely nothing about how much of the total terrorist content across Facebook is being deleted, nor does it say anything about how often the algorithms are wrong.
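To see why the figure is so uninformative, consider a purely hypothetical back-of-the-envelope illustration; all of the numbers below are invented for the sake of argument and are not Facebook's:

```python
# Hypothetical illustration: why "99.5% of removals were machine-flagged"
# says nothing about how much terrorist content is actually being caught.
total_terror_posts   = 100_000  # assumed total terrorist posts on the platform (unknown in reality)
removed_posts        = 2_000    # posts the platform actually removed
machine_flagged      = 1_990    # removals that originated from automated flagging
benign_posts_removed = 500      # legitimate posts wrongly removed (false positives, also never reported)

machine_share = machine_flagged / removed_posts     # 0.995 -> the kind of figure Facebook publishes
recall        = removed_posts / total_terror_posts  # 0.02  -> only 2% of all terrorist content removed

print(f"Machine-flagged share of removals: {machine_share:.1%}")
print(f"Share of all terrorist content removed: {recall:.1%}")
print(f"Benign posts wrongly removed: {benign_posts_removed}")
```

In this invented scenario the headline number is still 99.5%, even though 98% of the terrorist content remains online and the false positive toll is never disclosed; without the denominators, the statistic tells us nothing.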

Most importantly, it does not break out how much of the content was blocked through Facebook’s simple hash-based blacklist of known preexisting material versus how much novel material was caught by its vaunted AI algorithms.
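For context, a hash-based blacklist of the kind shared across the industry works roughly like the minimal sketch below. The hash value and exact-match lookup are illustrative assumptions; real deployments use perceptual hashes (in the PhotoDNA mold) so that re-encoded or slightly altered copies still match.

```python
import hashlib

# Minimal sketch of hash-based blacklisting: exact-match lookup of known material.
# The blacklist entry and file paths are purely illustrative.
KNOWN_TERROR_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # hypothetical entry
}

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_blacklisted(path: str) -> bool:
    # Matches only previously catalogued files; brand-new propaganda passes straight
    # through, which is why novel material requires the ML classifiers discussed above.
    return sha256_of_file(path) in KNOWN_TERROR_HASHES
```

The distinction matters because blacklist matching of this kind is a decade-old technique, while catching never-before-seen material is the hard problem the company's AI claims rest on.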


To date we have no idea whether Facebook’s AI algorithms have reached the point of actually being useful or whether it is relying exclusively on its content blacklists to remove material.

In the case of the New Zealand attack, it was astonishing to see the company acknowledge that it had not been using audio fingerprinting as a secondary signal for its video blacklist. Such fingerprinting is so widely deployed and so standard that it is almost inconceivable Facebook had not already adopted it.
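Audio fingerprinting of the kind pioneered by services like Shazam reduces a clip's spectrogram to a compact set of peak-pair "landmark" hashes that survive re-encoding. The bare-bones sketch below illustrates the idea only; every parameter and the matching threshold are arbitrary assumptions, not a description of any production system.

```python
import numpy as np

def audio_fingerprint(samples: np.ndarray, rate: int, frame: int = 4096, hop: int = 2048) -> set[int]:
    """Toy spectral-peak fingerprint: hash pairs of dominant frequencies in adjacent frames."""
    hashes = set()
    prev_peak = None
    for start in range(0, len(samples) - frame, hop):
        window = samples[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(window))
        peak = int(np.argmax(spectrum))            # dominant frequency bin of this frame
        if prev_peak is not None:
            hashes.add(hash((prev_peak, peak)))    # consecutive peak pair -> landmark hash
        prev_peak = peak
    return hashes

def audio_matches_blacklist(candidate: set[int], known: set[int], threshold: float = 0.3) -> bool:
    # Flag the upload if enough landmark hashes overlap with a known clip's fingerprint.
    if not known:
        return False
    return len(candidate & known) / len(known) >= threshold
```

Because the fingerprint is computed from the audio track rather than the video frames, it can catch re-filmed or cropped copies whose visual hashes no longer match, which is precisely why it is treated as a standard secondary signal.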

Facebook’s New Zealand failure also exposed the fact that the company has been relying on binary classifiers to detect violent and terrorist content, which is again almost inconceivable.
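A binary classifier emits only an all-or-nothing label. The more common production pattern is a calibrated score with a middle band routed to human review, as in the purely illustrative sketch below; the thresholds are assumptions, not a description of Facebook's systems.

```python
def triage(score: float, remove_threshold: float = 0.95, review_threshold: float = 0.60) -> str:
    """Route content based on a model's confidence score (illustrative thresholds only)."""
    if score >= remove_threshold:
        return "auto-remove"
    if score >= review_threshold:
        return "human review"   # the uncertain middle band a hard yes/no decision throws away
    return "allow"
```

A hard yes/no decision discards exactly that uncertain middle band, where human judgment would add the most.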

For a company with such a vaunted team of deep learning experts, Facebook's apparent naivety about the technological basics of content matching is staggering. Its bewildering failure to utilize the myriad industry-standard approaches suggests its deep pool of AI talent has not been properly deployed toward counter-terrorism.

Of course, Facebook has little incentive to combat terrorism. After all, it makes money from every terrorist recruitment video, every call for violence and every propaganda video, and it has to date declined to respond when asked whether it would refund the money it earns from terrorist use of its platform.

Put simply, Facebook profits financially from terrorism.

The more terrorist content that is posted to Facebook, the more money the company makes.

Unlike copyright infringement and child exploitation imagery, which are among the only classes of illegal content for which Facebook can actually face liability, the company faces no legal repercussions of any kind for the terrorist material it hosts.

It is notable that Facebook is not in the news for an epidemic of copyright infringement on its site. It is not being attacked from all sides by Hollywood studios for hosting illegal copies of their latest blockbusters. Facebook recognizes that it could bear legal liability for failing to take reasonable steps to combat copyright infringement, and so it invests aggressively in countering it. Instead, it is terrorist material, which the company has little legal incentive to remove, that has run rampant through its walled gardens.

Given that Facebook makes money from terrorist content today, faces no legal obligation or incentive to remove it, and would have to spend considerable sums to meaningfully reduce terrorist use of its platform, it is clear why the company does not do more.

Unsurprisingly, the company did not respond to a request for comment.

Putting this all together, perhaps the best solution would be new legislation placing terrorist content in the same category as copyright infringement, forcing Facebook to make the necessary investments in combatting it. At the very least, Congress could compel Facebook to turn over, for the first time, detailed statistics that would offer external visibility into its counter-terrorism efforts, and order it to adopt the industry best practices it astonishingly claims to have only belatedly begun examining in the aftermath of each of its highly consequential public failures.

In the end, combatting terrorism is simply too important to leave up to Facebook.
