YouTube CEO Susan Wojcicki announced in a December 4 blog post that the platform will bring aboard about 10,000 reviewers in 2018. Their job will be to screen YouTube videos and help the company's machine learning algorithms find troubling content more accurately.

Wojcicki pointed out that between June and December of 2017, YouTube removed more than 150,000 videos for violent extremism, and that machine learning is helping human reviewers remove nearly five times as many videos as they did previously.

“Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed,” Wojcicki wrote. “Since we started using machine learning to flag violent and extremist content…the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.”

She also announced that YouTube is taking "a new approach to advertising," featuring more content curation, stricter criteria for which videos are eligible to show ads, and a greater number of ad reviewers.

“We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values,” Wojcicki said. “Equally, we want to give creators confidence that their revenue won’t be hurt by the actions of bad actors.”

This development isn’t surprising, considering that YouTube and its parent company, Google, have endured a great deal of controversy over ads appearing on videos featuring violent or extremist content, and have lost several major advertisers as a result.

As part of its attempt to address the issue, YouTube set a 10,000-view threshold that channels must reach before they are allowed to display advertising.

According to The Guardian, YouTube has in recent weeks used machine learning technology to help its human moderators shut down hundreds of accounts and remove hundreds of thousands of comments.

But that doesn’t mean YouTube has freed itself from the specter of controversy.

In November, there were reports that YouTube was allowing violent content to get past the YouTube Kids filter, which is supposed to block content inappropriate for young viewers. Other reports indicated that “verified” channels were featuring child exploitation videos, including webcam videos of young girls in revealing clothing.

The company is working to address these issues as well. Building on the success of its machine learning and human moderation to deal with violent and extremist videos, “we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” Wojcicki wrote.

Photo by Alexey Boldin / Shutterstock.com