Tech vs ISIS

Written by asandre | Published 2017/06/22
Tech Story Tags: isis | social-media | technology | facebook | twitter

Coding and artificial intelligence are not enough. Facebook, Twitter, Google on combating violent extremism online.

Earlier this month, Facebook published a white paper detailing its approach to countering terrorism.

“In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online,” write Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, its counterterrorism policy manager.

Our stance is simple: There’s no place on Facebook for terrorism.

Facebook explains how it removes terrorists and posts that support terrorism “whenever we become aware of them.” That means when those posts are reported and subsequently reviewed. “And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities,” the paper points out.

Facebook and other digital platforms realize that, although radicalization primarily occurs offline, the Internet does play a role.

“We believe technology, and Facebook, can be part of the solution,” Bickert and Fishman write.

Facebook relies on three main pillars:

  • Artificial intelligence, including image matching; language understanding; removing terrorist clusters; detecting recidivism; and cross-platform collaboration. This last point is particularly important given that Facebook is only one platform in the company’s family of apps. (A minimal image-matching sketch follows this list.)
  • Human expertise, which includes reviewing flagged posts; drawing on the advice of terrorism and safety specialists; and understanding what constitutes a credible threat that merits escalation to law enforcement.
  • Partnering with others, including the tech industry, government, and civil society. Facebook also collaborates with law enforcement agencies to the extent it can (given, for instance, the constraints of encryption) and with organizations working to counter terrorists’ narratives online.
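
Facebook’s paper does not go into implementation detail, but the image-matching idea it describes is broadly similar to perceptual hashing: new uploads are reduced to compact fingerprints and compared against fingerprints of media previously removed as terrorism content. Below is a minimal sketch of that idea in Python; the hashing scheme, threshold, and file names are illustrative assumptions, not Facebook’s actual system.

```python
# A minimal perceptual-hash sketch; not Facebook's actual pipeline.
# Requires Pillow (pip install Pillow). File names below are placeholders.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint: shrink to 8x8 grayscale,
    then set one bit per pixel depending on whether that pixel is
    brighter than the mean pixel value."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(h1 ^ h2).count("1")


def matches_known_content(upload_path: str, banned_hashes: set,
                          threshold: int = 5) -> bool:
    """Flag an upload that is near-identical to previously removed media."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, b) <= threshold for b in banned_hashes)


if __name__ == "__main__":
    # Fingerprints of media already removed for policy violations
    # (placeholder file names, for illustration only).
    banned = {average_hash("previously_removed.jpg")}
    if matches_known_content("new_upload.jpg", banned):
        print("Block the upload and queue it for human review")
```

A small Hamming-distance threshold tolerates re-encoding and minor edits while keeping false matches rare; as the white paper itself stresses, this kind of automated matching is paired with human review.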

“We want Facebook to be a hostile place for terrorists,” the white paper concludes. “The challenge for online communities is the same as it is for real world communities — to get better at spotting the early signals before it’s too late.”

The role of Facebook, by far the largest social media platform in the world with almost 2 billion monthly active users, and of Silicon Valley companies more broadly, is key to the success of the campaign against ISIS and terrorism.

At Google and YouTube, senior vice president and general counsel Kent Walker recently wrote an op-ed, published by the Financial Times, highlighting the steps the company is taking to fight terrorism online.

Four steps we’re taking today to fight terrorism online (blog.google)

“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all,” said Walker.

Google and YouTube are committed to being part of the solution. There should be no place for terrorist content on our services.

“We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online,” he added.

But Walker also stresses what he calls “the uncomfortable truth” — the fact that identifying and removing content that violates platforms’ policies is not enough and that the industry “must acknowledge that more needs to be done.”

Google and YouTube pledged to:

  • increase the use of technology to help identify extremist and terrorism-related videos;
  • increase the number of independent experts in YouTube’s Trusted Flagger program;
  • take a tougher stance on videos that do not clearly violate the platform’s policies;
  • expand YouTube’s role in counter-radicalisation efforts.

Collectively, these changes will make a difference.

“And we’ll keep working on the problem until we get the balance right,” he said, explaining the company’s commitment to working with industry colleagues like Facebook, Microsoft, and Twitter. “Extremists and terrorists seek to attack and erode not just our security, but also our values: the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge.”

Just like Facebook, Google, and YouTube, Twitter’s Public Policy team is working diligently to remove terrorist-related content. And similarly to Facebook, it relies heavily on technology to identify harmful users and posts.

Tech giants launch the Global Internet Forum to Counter Terrorism (hackernoon.com)

In its tenth transparency report, released on the anniversary of the first-ever tweet, the company said it had suspended a total of 636,248 terror-related accounts between August 1, 2015 and December 31, 2016.

During the reporting period included in the latest transparency report — July 1, 2016 through December 31, 2016 — a total of 376,890 accounts were suspended for violations related to promotion of terrorism.

About 74 percent of the suspended accounts were surfaced by Twitter’s own internal, proprietary spam-fighting tools.

The report includes a new section covering government terms of service (TOS) reports.

“This new section is limited to data about government reports to remove content in violation of Twitter’s terms of service (TOS) against the promotion of terrorism,” the company said while stating that government TOS reports represent less than 2% of all suspensions.

“It also includes an update on the company’s continued work to remove terrorist content from our platform beyond government reports.”

Back in August last year, Twitter strongly condemned the use of the platform by terror-related organizations and stressed its commitment “to eliminating the promotion of violence or terrorism on our platform.”

Our efforts continue to drive meaningful results, including a significant shift in this type of activity off of Twitter.

And as Twitter pointed out in a 2016 blog post, “there is no ‘magic algorithm’ for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgement calls based on very limited information and guidance.”

Recently, the company hired Emily Horne, formerly with the National Security Council at the White House in the Obama administration, as Twitter’s new global policy communications director.

According to Recode, “Horne will oversee Twitter’s messaging and communications for all things related to Twitter policy, including issues of abuse, hate speech and user privacy. As part of the NSC, Horne handled messaging around anti-ISIL counterterrorism initiatives, and regularly briefed top government security officials.”

Recode points out that Horne will play a key role: “Comms deals with censorship issues with foreign governments and also walks a fine line between allowing free speech and allowing hate speech.”

In December last year, Twitter, Microsoft, Facebook, and YouTube announced a shared industry database to help identify terrorist-related content spreading across their platforms and to speed up takedowns and suspensions.
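
The companies did not describe the database’s internals, but the general shape is a pooled index of content fingerprints that each participant contributes to and checks its own uploads against. Here is a minimal in-memory sketch of that idea; the class, field names, and hash values are hypothetical, not the consortium’s actual design.

```python
# Minimal sketch of a shared hash database across platforms; the structure
# and field names are illustrative assumptions, not the consortium's design.
from dataclasses import dataclass, field


@dataclass
class SharedHashDB:
    """Fingerprints of removed terrorist content, pooled across companies."""
    # Maps a content fingerprint to the platform that first contributed it.
    entries: dict = field(default_factory=dict)

    def contribute(self, content_hash: str, platform: str) -> None:
        """A platform adds the fingerprint of content it has removed."""
        self.entries.setdefault(content_hash, platform)

    def lookup(self, content_hash: str):
        """Return the contributing platform if the content is known, else None."""
        return self.entries.get(content_hash)


if __name__ == "__main__":
    db = SharedHashDB()
    db.contribute("f2ca1bb6c7e907d0", platform="PlatformA")  # hypothetical hash

    # Another platform checks an incoming upload against the shared pool.
    source = db.lookup("f2ca1bb6c7e907d0")
    if source:
        print(f"Match with content first removed by {source}: queue for review")
```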

“But it’s fair to say that the issue of terrorist takedowns is just the tip of the political iceberg that has crashed into social media giants in recent times,” TechCrunch reports.

Trolls, fake news, and hate speech have become huge problems for many online platforms, and in particular for Twitter and Facebook in the wake of last year’s U.S. election, with many criticizing the two social media companies for skewing political discourse by incentivizing the sharing of misinformation and enabling the propagation of far-right extremist views.

Fake accounts, propaganda, and the fake news phenomenon are only a few of the many facets of the fight against ISIS and terrorism online.

“We have to take measures to keep these tools from being misused,” explained Yasmin Green, head of research and development at Jigsaw, at WIRED’s 2017 Business Conference in New York. “We look at censorship, cybersecurity, cyberattacks, ISIS — everything the creators of the Internet did not imagine the Internet would be used for.”

Jigsaw, a think tank within Google’s parent company Alphabet Inc., is intensely focused on combatting online pro-terror propaganda.

In 2016, Green travelled to Iraq to speak directly to ex-ISIS recruits. “The conversations led to a tool called the Redirect Method, which uses machine learning to detect extremist sympathies based on search patterns,” WIRED writes. “Once detected, the Redirect Method serves these users videos that show the ugly side of ISIS — a counter-narrative to the allure of the ideology. At the point that they are buying a ticket to join the caliphate, she said, it was too late.”
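
WIRED’s description stops at the concept, but the core mechanic resembles keyword-targeted advertising: search queries that match curated lists of “sympathetic but not sold” terms trigger counter-narrative video playlists. A rough sketch of that targeting logic follows; the term lists and video IDs are placeholders, not Jigsaw’s actual data.

```python
# Rough sketch of the Redirect Method's targeting idea.
# Keyword lists and video IDs are placeholders, not Jigsaw's data.

# Curated search terms believed to indicate sympathy rather than firm intent.
SYMPATHETIC_QUERY_TERMS = {
    "placeholder term 1",
    "placeholder term 2",
}

# Counter-narrative playlist (e.g., testimony from defectors and ex-recruits).
COUNTER_NARRATIVE_PLAYLIST = ["video_id_001", "video_id_002"]


def redirect_candidate(search_query: str) -> bool:
    """Return True when a query matches the curated sympathetic-term list."""
    normalized = search_query.lower()
    return any(term in normalized for term in SYMPATHETIC_QUERY_TERMS)


def serve_results(search_query: str) -> dict:
    """Attach counter-narrative videos to matching queries; leave others alone."""
    if redirect_candidate(search_query):
        return {"promoted_videos": COUNTER_NARRATIVE_PLAYLIST}
    return {"promoted_videos": []}


if __name__ == "__main__":
    print(serve_results("placeholder term 1 near me"))
```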

“It’s mostly good people making bad decisions who join violent extremist groups,” Green stressed. “So the job was: let’s respect that these people are not evil and they are buying into something, and let’s use the power of targeted advertising to reach them, the people who are sympathetic but not sold.”

In addition to the Redirect Method, which has so far served its counter-narrative videos to around 300,000 people, Jigsaw has also created Perspective, a machine-learning tool that uses context and sentiment training to target toxic speech in the comment sections of news organizations’ sites; its beta version is already being used by the likes of The New York Times.
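
Perspective is exposed to publishers as a scoring API: a caller submits a comment and receives a probability-like toxicity score it can use for ranking or moderation queues. The sketch below shows what such a call can look like, based on the publicly documented Comment Analyzer endpoint; the API key and threshold are placeholders, and field names should be verified against current documentation.

```python
# Sketch of scoring a comment with Jigsaw's Perspective (Comment Analyzer) API.
# Endpoint and field names follow the public beta documentation; verify them
# against current docs before relying on this. Requires the requests package.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: issued via the Google API Console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def toxicity_score(comment_text: str) -> float:
    """Return a 0..1 toxicity score for a single comment."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    # A moderation queue might hold comments above a chosen threshold
    # (the 0.8 cutoff is an assumption, not a documented recommendation).
    if toxicity_score("example comment text") > 0.8:
        print("Hold for human moderation")
```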

But the issue goes beyond counter-messaging and removing harmful posts.

According to Slate, some “argue that social media companies are liable not for allowing terrorists to use their platforms but for profiting from that use.”

Citing recent lawsuits, Slate explains that because digital ads target specific viewers based on the content of the pages they visit, when it comes to terrorist posts, ads target “those who might be most sympathetic to terrorist messages.”

“If the connection between a terrorist’s tweet and an attack intuitively seems too attenuated, what happens when social media profits on that content?” the online magazine writes.

The issue of combating ISIS and violent extremism online was also a recent focus of the G7 at its latest summit in Taormina, Italy.

“While being one of the most important technological achievements in the last decades, the Internet has also proven to be a powerful tool for terrorist purposes,” reads the G7 joint statement on the fight against terrorism and violent extremism that came out of the first day of the G7 Summit in Taormina, Sicily.

The G7 leaders — Canadian PM Justin Trudeau; Emmanuel Macron of France; Angela Merkel of Germany; Paolo Gentiloni of Italy; Shinzo Abe of Japan; Theresa May of the United Kingdom; Donald Trump of the United States; along with Jean-Claude Juncker of the European Commission and Donald Tusk of the European Council — called upon technology groups and social media platforms “to substantially increase their efforts to address terrorist content.”

We encourage industry to act urgently in developing and sharing new technology and tools to improve the automatic detection of content promoting incitement to violence, and we commit to supporting industry efforts in this vein including the proposed industry-led forum for combatting online extremism.

G7 statement on the fight against terrorism and violent extremism (medium.com)

The G7 leaders highlighted the need “to support the promotion of alternative and positive narratives rooted in our common values and with due respect to the principle of freedom of expression.”

Countering propaganda, they said, is key to fighting terrorism and violent extremism, online recruitment by extremists, and radicalization and incitement to violence.

The leaders agreed to increase their engagement with civil society, youth and religious leaders, detention facilities, and educational institutions. They also decided to task Interior Ministers of the G7 countries “to meet, as soon as possible, to focus on implementation of the following commitments and to work collectively with the private sector and civil society to defeat terrorism.”

In March, representatives from 68 countries gathered in Washington, D.C. for the Meeting of Ministers of the Global Coalition on the Defeat of ISIS, hosted by Secretary of State Rex Tillerson and the U.S. Department of State. Countering extremism and terrorism online was high on the agenda.

In his introductory remarks at the Coalition meeting, Secretary Tillerson highlighted the role of social media in countering terrorist messages and in the fight against ISIS. He stressed how the cooperation with Silicon Valley has contributed to a 75% reduction of ISIS content online in one year.

“We all should deepen our cooperation with the tech industry to prevent encrypted technologies from serving as tools that enable extremist collaboration,” Tillerson said.

He added: “We need the global tech industry to develop new advancements in the fight and we thank those companies which are already responding to this challenge. We must capitalize on the strong advancements in data analytics and algorithmic technologies to build tools that discover ISIS’ propaganda and identify imminent attacks.”


Written by asandre | Comms + policy. Author of #digitaldiplomacy (2015), Twitter for Diplomats (2013). My views here.
Published by HackerNoon on 2017/06/22