Google CEO Sundar Pichai appears before the US House of Representatives’ Judiciary Committee on December 11. Photo: Melina Mara/Washington Post

YouTube still struggling to remove hate videos, and to avoid recommending them to users

  • Racists, anti-Semites and other extremists continue to use YouTube to spread their ideas. Critics say platform is slow to identify such content
  • YouTube doesn’t ban conspiracy theories or false news stories, but has made efforts to reduce the reach of such content this year

A year after YouTube’s chief executive promised to curb “problematic” videos, it continues to harbour and even recommend hateful, conspiratorial videos, allowing racists, anti-Semites and proponents of other extremist views to use the platform as an online library for spreading their ideas.

YouTube is particularly valuable to users of social media sites that are popular among hate groups but have scant video capacity of their own. Users on these sites link to YouTube more than to any other website, thousands of times a day, according to the recent work of Data and Society and the Network Contagion Research Institute, both of which track the spread of hate speech.

The platform routinely serves videos espousing neo-Nazi propaganda, phoney reports portraying dark-skinned people as violent savages and conspiracy theories claiming that large numbers of leading politicians and celebrities molested children. Critics say that even though YouTube removes millions of videos on average each month, it is slow to identify troubling content and, when it does, is too permissive in what it allows to remain.

The struggle to control the spread of such content poses ethical and political challenges to YouTube and its embattled parent company, Google, whose chief executive, Sundar Pichai, testified before the US House of Representatives’ Judiciary Committee this month amid several controversies.

With 400 hours of video uploaded to YouTube every minute, the platform struggles to deal quickly with hate videos.

YouTube has focused its clean-up efforts on what chief executive Susan Wojcicki in a blog post last year called “violent extremism”. But she also signalled the urgency of tackling other categories of content that allow “bad actors” to take advantage of the platform, which 1.8 billion people log on to each month.

“I’ve also seen up-close that there can be another, more troubling, side of YouTube’s openness. I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” Wojcicki wrote. But a large share of videos that researchers and critics regard as hateful don’t necessarily violate YouTube’s policies.

The recommendation engine for YouTube, which queues up an endless succession of clips once users start watching, recently suggested videos claiming that politicians, celebrities and other elite figures were sexually abusing or consuming the remains of children, often in satanic rituals, according to watchdog group AlgoTransparency.

Google is still struggling with aspects of the hate video problem on YouTube. “How do you draw lines in a way that is right,” asked CEO Sundar Pichai after testifying to a US House of Representatives committee this month. Photo: AFP

YouTube does not have a policy against falsehoods, but it does remove videos that violate its guidelines against hateful, graphic and violent content directed at minorities and other protected groups. It also seeks to give wide latitude to users who upload videos, out of respect for speech freedoms and the free flow of political discourse.

“YouTube is a platform for free speech where anyone can choose to post videos, subject to our Community Guidelines, which we enforce rigorously,” the company said in response to questions from The Washington Post.

In an attempt to counter the huge volumes of conspiratorial content, the company has also worked to direct users to more-reliable sources – especially after major news events such as mass shootings.

But critics say YouTube and Google generally have faced less scrutiny than Twitter and Facebook – which have been blasted for the hate and disinformation spreading on their platforms during the 2016 US election and its aftermath – and, as a result, YouTube has not moved as aggressively as its rivals to address such problems.

Researchers are increasingly detailing the role YouTube plays in the spread of extremist ideologies, showing how those who push such content maximise the benefits of using various social media platforms while seeking to evade the particular restrictions on each.

“The centre of the vortex of all this stuff is often YouTube,” says Jonathan Albright, research director at Columbia University’s Tow Centre for Digital Journalism.

YouTube does not ban conspiracy theories or false news stories outright, but like Facebook and Twitter it has made efforts this year to reduce the reach of such content.

YouTube’s community guidelines define hate speech as content that promotes “violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes”.

Moderators evaluate each post based on a strike system, with three strikes in a three-month period resulting in termination of an account.
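
In effect, the rule described above is a rolling three-month window check. The sketch below is a hypothetical Python illustration of such a three-strikes rule, assuming a 90-day window and using invented names (Strike, should_terminate); it is not YouTube’s actual enforcement code.

```python
# Hypothetical illustration of a three-strikes rule, not YouTube's actual code.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Strike:
    issued_at: datetime  # when the moderation strike was recorded


def should_terminate(strikes: list[Strike], now: datetime) -> bool:
    """Return True if three or more strikes fall within roughly the last three months."""
    window_start = now - timedelta(days=90)  # assumption: "three months" approximated as 90 days
    recent_strikes = [s for s in strikes if s.issued_at >= window_start]
    return len(recent_strikes) >= 3
```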

YouTube does not publish statistics describing its effectiveness at detecting hate speech, which the company concedes is among its biggest challenges.

Facebook, by contrast, recently began publishing such data, and the results highlight the challenge: between July and September, its systems caught about half of the posts it categorised as hate speech before they were reported by users, compared with more than 90 per cent of posts it determined to be terrorism-related. Artificial intelligence (AI) systems are even less capable of finding hate when it appears only in video rather than text.

Google overall now has more than 10,000 people working on maintaining its community standards. The company declined to release a number for YouTube alone. But YouTube officials acknowledge that finding and removing hateful videos remains difficult, in part because of technical limitations of analysing such a vast and fast-growing repository of video content.

Users upload 400 hours of video to YouTube each minute, according to the company.

YouTube reported that 6.8 million of the 7.8 million videos it removed in the second quarter of this year for violating standards were first flagged by computerised systems. But detecting terrorists waving identifiable flags or committing violence is comparatively easy, according to experts, both because the imagery is more consistent and because government officials keep lists of known or suspected terrorist groups and individuals whose content is monitored with particular care.

There is no equivalent list of hate groups or creators of hateful content. YouTube and other social media companies routinely face accusations from conservatives of acting too aggressively against videos that – while treading close to restrictions against hateful or violent content – also carry political messages.

Former YouTube engineer Guillaume Chaslot, an AI expert who once worked to develop the platform’s recommendation algorithm, says he discovered the severity of the problem, which he believes he helped create, on a long bus ride through his native France in 2014, the year after he left the company.

A man sitting on the seat next to him was watching a succession of videos claiming that the government had a secret plan to kill one-quarter of the population. Right after one video finished, another started automatically, making roughly the same claim.

Chaslot tried to explain to the man that the conspiracy was obviously untrue and that YouTube’s recommendation engine was simply serving up more of what it thought he wanted. The man at first appeared to understand, Chaslot says, but then concluded: “But there are so many of them.”

“The big problem is people trust way too much what’s on YouTube – in part because it’s Google’s brand,” Chaslot says.

After testifying before Congress, Pichai acknowledged that Google still had more work to do in crafting and enforcing policies on hate speech and other offensive content online.

“I do think we’ve definitely gotten better at areas where you’re better able to clearly define policies, where there’s less subjectivity,” said Pichai, pointing to YouTube.

But he appeared to grapple with the harder calls. “How do you draw lines in a way that is right, [and] you don’t make mistakes on either side,” he said, “and how do you do it responsibly?”

The Washington Post