Posted on April 3, 2017

Google’s AI Hasn’t Passed Its Biggest Test Yet: Hunting Hate

Mark Bergen, Bloomberg, March 31, 2017


An advertiser boycott of YouTube is testing a critical and much-hyped part of Google’s future: its prowess in artificial intelligence. Some experts in the field say the technology isn’t up to scratch yet, but that if any company can solve the problem at hand, it’s the online search giant.

Some of the world’s biggest marketers halted YouTube spending this month after ads from large brands were found running alongside hateful and extremist videos. Google parent Alphabet Inc. risks losing $750 million in revenue this year from the debacle, analysts at Nomura Instinet estimated this week.

{snip}

To detect and police content across YouTube’s sprawling library, and ensure ads don’t run against questionable material, Google must solve an AI problem no one has cracked yet: automatically understanding everything that’s going on in videos, including gesticulations and other human nuances.

{snip}

A potential solution lies in machine learning, a powerful AI technique for automatically recognizing patterns across reams of data — a Google specialty. Chief Executive Officer Sundar Pichai has pledged to infuse the technology across all of Google’s products.
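As a rough illustration of the kind of pattern recognition the article is describing, the sketch below trains a tiny text classifier with scikit-learn. The example phrases, labels, and workflow are invented for illustration only; this is not how Google’s systems work, and video understanding would have to operate on frames and audio rather than short strings of text.

```python
# A minimal, hypothetical sketch of supervised pattern recognition:
# learn from labeled examples which word patterns should be flagged.
# The handful of phrases and labels below are invented placeholders;
# real systems train on vastly larger corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "we must wipe them out",              # example of text to flag (label 1)
    "great tutorial on baking bread",     # harmless (label 0)
    "they deserve to be destroyed",       # flag (label 1)
    "highlights from last night's game",  # harmless (label 0)
]
labels = [1, 0, 1, 0]

# TF-IDF turns each text into a weighted bag-of-words vector;
# logistic regression learns which word patterns predict the flag.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new, unseen text. With so little training data the output is
# meaningless in practice; the point is only the shape of the workflow.
print(model.predict_proba(["destroy them all", "new bread recipe"]))
```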

{snip}

Computer scientists doubt technology alone can expunge offensive videos. “We’re not there yet where we can, say, find all extremist content,” said Hany Farid, a Dartmouth professor and senior adviser to the Counter Extremism Project, which has repeatedly called on YouTube to tackle this problem. He recommends companies like Google and Facebook Inc. deploy more human editors to filter content. “Machine learning, AI is nowhere near that yet,” he said. “Don’t believe the hype.”

The AI hype machine is running at full speed in Silicon Valley right now, as startups and technology giants like Google, Amazon and Microsoft compete to recruit engineers and scientists skilled in the field.

{snip}

“A video is three dimensional, with two dimensions in space and an additional dimension in time,” said Jiebo Luo, an AI expert at the University of Rochester, who has created text-based hate speech filters for social media like Twitter. “It’s a challenge.” Particularly so for something as big as YouTube. It said in 2015 that people uploaded 300 hours of content per minute to the service, making it impossible to screen all videos as they appear.
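The scale Luo alludes to is easy to check with back-of-the-envelope arithmetic. The short sketch below is purely illustrative; the only figure taken from the article is the 300 hours of video uploaded per minute.

```python
# Back-of-the-envelope arithmetic on YouTube's 2015 upload rate
# (300 hours of new video per minute, the figure cited above).
# Everything else here is illustrative, not from the article.

HOURS_UPLOADED_PER_MINUTE = 300

# Minutes of new video arriving during each minute of real time.
minutes_per_realtime_minute = HOURS_UPLOADED_PER_MINUTE * 60   # 18,000

# Hours of new video arriving per day.
hours_per_day = HOURS_UPLOADED_PER_MINUTE * 60 * 24            # 432,000

# Watching everything in real time would take at least this many people
# viewing nonstop, ignoring shifts, breaks, and double-checking.
reviewers_watching_nonstop = minutes_per_realtime_minute

print(f"{minutes_per_realtime_minute:,} minutes of video arrive every minute")
print(f"{hours_per_day:,} hours of video arrive every day")
print(f"at least {reviewers_watching_nonstop:,} people would have to watch nonstop to keep up")
```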

Two years on, could software evolve to the point where it dissects every video uploaded online, distinguishing the vile from the rest? “It’s possible,” said Luo, “especially given the massive computational and people resources that Google has. No other company is in a better position to do that.”

{snip}

In a memo to aggrieved YouTube advertisers last week, the company said its machine-learning algorithms would improve the precision of its video classification. It warned, however, that given the volume of content involved, this can never be 100 percent guaranteed.

{snip}