
Monday, April 15, 2019

YouTube Fact-Checking Tool Goes Wrong: In Flaming Notre Dame, It Somehow Sees 9/11 Tragedy

A new YouTube tool for battling misinformation failed in a highly public way on Monday, wrongly linking video of the flaming collapse of the spire at Notre Dame Cathedral in Paris to the September 11, 2001, terrorist attacks.

As images of the iconic spire falling to the streets played on newscasts around the world - and on the YouTube channels mirroring those newscasts - "information panels" appeared in boxes below the videos, providing details about the collapses of New York's World Trade Center towers after the terrorist attack, which killed thousands of people.

The 9/11 tragedy is a frequent subject of hoaxes, and the information panels were posted automatically, likely because computer algorithms detected visual similarities between the two incidents. YouTube began rolling out the panels, which provide factual information about subjects that attract frequent hoaxes, over the past few months.

The misfire underscored the ongoing limits of computerised tools for detecting and combating misinformation - as well as their potential for inadvertently fueling it. While major technology companies have hired tens of thousands of human moderators in recent years, Silicon Valley executives have said that computers are faster and more efficient at detecting problems.

But Monday's incident shows the weaknesses of computerised systems. It comes just a month after YouTube and Facebook struggled for hours to detect and block video of a mass shooting at a New Zealand mosque that Internet users were posting and reposting.

"At this point nothing beats humans," said David Carroll, an associate professor of media design at the New School in New York and a critic of social media companies. "Here's a case where you'd be hard pressed to misclassify this particular example, while the best machines on the planet failed."

YouTube acknowledged the failure, which BuzzFeed reported finding on three different news channels on the site.

The appearance of the information panels fed a wave of baseless speculation on social media that the fire was a terrorist attack. On Twitter, some users falsely asserted that the fire was sparked by Muslim terrorists. Authorities in Paris instead blamed ongoing renovations at the cathedral and cited no evidence of terrorism.

The panels were one of the central ideas YouTube proposed last year in the aftermath of the school shooting in Parkland, Florida, during which a video suggesting one of the teenage survivors was a "crisis actor" rose to the top of YouTube's "trending" videos.

The video giant's algorithms automatically place the "information panels" below controversial or conspiracy-prone videos, with short descriptions and links to sources such as Wikipedia and Encyclopaedia Britannica. Videos suggesting the moon landing was faked, for instance, include links to information about the Apollo space program.
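As a rough sketch of how such a pipeline might be wired together - all names below are hypothetical, since YouTube has not published its implementation - a classifier's predicted topic label can simply be looked up in a table of curated panels:

```python
# Hypothetical sketch of an info-panel pipeline; YouTube's real system
# is not public, and every name here is invented for illustration.

# Curated panels: topic label -> (short description, reference link).
PANELS = {
    "september-11-attacks": (
        "The September 11 attacks were coordinated terrorist attacks in 2001.",
        "https://en.wikipedia.org/wiki/September_11_attacks",
    ),
    "moon-landing": (
        "Apollo 11 landed the first humans on the Moon in July 1969.",
        "https://www.britannica.com/event/Apollo-11",
    ),
}

def classify_topic(video_frames):
    """Stand-in for a learned video classifier returning a topic label."""
    raise NotImplementedError  # assume a trained model runs here

def panel_for(video_frames):
    # The lookup itself has no notion of context: if the classifier
    # mislabels the footage, the wrong panel is attached automatically.
    return PANELS.get(classify_topic(video_frames))
```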

YouTube said in a statement, "We are deeply saddened by the ongoing fire at the Notre Dame cathedral. Last year, we launched information panels with links to third-party sources like Encyclopaedia Britannica and Wikipedia for subjects subject to misinformation. These panels are triggered algorithmically and our systems sometimes make the wrong call. We are disabling these panels for live streams related to the fire."

A Twitter spokeswoman said the company is "reviewing and taking action in line with our rules."

YouTube and other technology companies have reported successes in using artificial intelligence to detect some types of common images that users upload to their platforms. These include child pornography and also, increasingly, images from extremist terrorist groups, which rely on familiar flags, logos and certain violent images, such as beheadings.

But automated systems have struggled with the unexpected, such as the visual similarity between the collapse of Notre Dame's spire and the Twin Towers. They also have struggled with video that relies on context, including hateful conspiracy theories, sexualized images that stop short of explicit pornography and, in one recent case, clips encouraging children to commit suicide.
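One plausible mechanism behind such a collision - a minimal sketch with invented numbers, not YouTube's actual method - is similarity matching on visual feature vectors, where two scenes of a tall, burning structure land close together in feature space even though the events are unrelated:

```python
import math

# Toy four-dimensional "embeddings" standing in for the features a vision
# model might extract: [tall structure, smoke/fire, urban skyline, crowd].
# The numbers are invented purely for illustration.
notre_dame_frame = [0.90, 0.95, 0.80, 0.70]
wtc_archive_frame = [0.95, 0.90, 0.90, 0.75]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# High similarity on surface features says nothing about whether the
# underlying events are related - which is exactly how a mismatch arises.
print(cosine_similarity(notre_dame_frame, wtc_archive_frame))  # ≈ 0.998
```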

YouTube, based in San Bruno, Calif., is a subsidiary of Google, one of the world's wealthiest and most advanced corporate developers of artificial intelligence and machine learning.

Pedro Domingos, a machine-learning researcher and University of Washington professor, said the algorithm's failure on Monday "doesn't surprise me at all."

If the algorithm saw a video of tall structures engulfed in smoke and inferred that it was related to the attack on the World Trade Center, he said, "that speaks well of the state-of-the-art in video system understanding, that it would see the similarity to 9/11. There was a point where that would have been impossible."

But the algorithms lack comprehension of human context or common sense, making them woefully unprepared for news events. YouTube, he said, is poorly equipped to fix such problems now and likely will remain so for years to come.

"They have to depend on these algorithms, but they all have sorts of failure modes. And they can't fly under the radar anymore," Domingos said. "It's not just Whac-a-Mole. It's a losing game."
