“He accessed the material and was using it to self-radicalise. Online played a major role in what happened.” These words of Commander Dean Haydon, head of counterterrorism at London’s Metropolitan Police, explain what motivated Darren Osborne to drive a van into a crowd of people outside a mosque in Finsbury Park last June. Darren Osborne’s conviction and the insights into his radicalisation are timely: just last month Theresa May called on tech companies to intensify their efforts to tackle extremism online.
The use of the virtual world by extremists, and by violent Islamist extremists such as the so-called Islamic State (IS) in particular, has been extensively documented and debated by journalists, academics and politicians. But the growing presence of far-right extremist groups online has so far received less attention. While the issue was addressed by the UK’s Home Affairs Select Committee, it still lags behind in the wider counterterrorism debate. Meanwhile, far-right groups have become increasingly active in using digital platforms to disseminate propaganda and hate speech. Are tech companies doing enough to tackle extremist material online, and are different forms of extremism treated with equal seriousness?
A group of top intelligence officials travelled to Silicon Valley in 2016 to discuss the role of tech in counterterrorism. As a senior official in the Obama administration stated, “countering the vile ideology of ISIL and similar groups in the digital sphere is a priority for both government and private sector”.
This meeting appeared to set the tone for what followed. Tech companies slowly began to allocate more attention and resources to the matter, though they focused mainly on Islamist groups. In a drastic move, Twitter announced in early 2016 that it had suspended 125,000 accounts associated with IS. In the months that followed, Facebook appointed Brian Fishman, an expert in the online strategies used by IS and al-Qaeda, to head its counterterrorism department. Google soon followed suit, announcing new software designed to identify and divert potential IS recruits by combining Google’s search advertising algorithms with YouTube’s platform.
Then, ending 2016 with a bang, Google, Facebook, Twitter and Microsoft joined forces to create a shared database for tracking terrorism-promoting content. While there is no explicit indication that the focus is on violent Islamist extremism, comments from Hany Farid, the computer scientist behind the software, offer a good idea of its main targets: “what we want is to eliminate this global megaphone that social media gives to groups like ISIS”.
Research from the Institute for Strategic Dialogue (ISD) and from J.M. Berger, a senior researcher at George Washington University’s Program on Extremism, is among a handful of works documenting the growth of far-right extremism online. Berger’s study found that the white supremacist presence on Twitter grew by 600% over the last four years, outstripping Islamist extremists in both tweets and followers. Speaking on BBC Radio 4’s Today programme, Rebecca Skellett, Senior Programme Manager at the ISD, explained that there has been a “huge proliferation of the far right’s use of the online space”, and that the rate at which Islamist content is taken down has left far-right extremist content disproportionately available.
As the ISD’s report ‘The Fringe Insurgency’ puts it, this “strategic, tactical and operational convergence has allowed the extreme right to translate large-scale online mobilisation into real-world impact”. As a result, these groups are using the virtual world to organise grassroots activities, influence elections and intimidate minorities.
Examining the real-life outcomes of online activities is key – it represents the meeting point of the virtual and real worlds, where content turns violent. Darren Osborne’s attack is one obvious example; the attack following the white supremacist rally in Charlottesville last summer is another. White supremacists used Facebook to organise and recruit for the rally, which, whether directly or indirectly, also inspired the terrorist attack that followed: a member of a far-right group drove his car into a crowd of peaceful protesters, killing one person and injuring many more.
There is very little information available about what tech companies are doing to tackle far-right extremism. But YouTube officially announced last summer that it was expanding its team of experts to include the Anti-Defamation League, the No Hate Speech Movement and the ISD. This could be read as a move towards treating far-right extremism with the same seriousness as Islamist extremism. Yet the big tech companies have still not taken as public a stand against far-right extremists as they have against Islamist extremists.
That said, an inquiry by the UK Home Affairs Select Committee offers the most comprehensive review to date of how tech companies are failing to deal with far-right extremism online, highlighting their inadequacies in removing supremacist content. One key finding stressed “the weakness and delays in Google’s response to our reports of illegal Neo-Nazi propaganda on YouTube”.
While the report does not ignore the issue of Islamist extremism online, it serves as a critical reminder to tech companies that more must be done elsewhere. Considering the rise of far-right extremism online, and the high levels of violence linked to its activities, it seems about time that tech companies recalibrated their efforts to tackle these very real threats, not just perceived ones.