Why mass shooting videos keep spreading online even as tech giants try to stop them
If the aftermath of the horrific mass shooting three years ago in New Zealand is any indication, social media may be haunted for years by graphic video of Saturday’s attack on a grocery store in Buffalo, New York.
In March 2019, a gunman in Christchurch, New Zealand, killed 51 people at two mosques and livestreamed the attack on Facebook. To this day, major social media platforms and tech companies are still fighting off attempts to upload and share slightly altered versions of the shooter’s video, according to the Global Internet Forum to Counter Terrorism, a tech industry group that works to curb violent extremist content online.
There’s plenty of work left to do.
The attack in Buffalo, which killed 10 people, was livestreamed on Twitch. President Joe Biden has called it an act of terrorism and white supremacy. The attack, which targeted Black people, was accompanied by a hate-filled document.
Despite years of work to control the spread of such material, videos from the Buffalo attack were still readily available on major platforms like Facebook and Twitter in the days after the shooting. The continued struggle shows how big a challenge it is for YouTube, Twitter, Facebook and other online services to stamp out the glorification of violence, which most apps’ terms of service prohibit.
“Our work operates in an extremely adversarial and dynamic environment,” Sarah Pollack, a spokesperson for the anti-extremist tech forum, said in an interview Tuesday.
Twitch said it stopped the broadcast in less than two minutes. But in the hours and days afterward, those terrifying moments spread at a speed that called into question the effectiveness of the tech industry’s post-Christchurch strategy.
One way promoters of the video made end-runs around the system was to turn to lesser-known video hosting sites, such as Streamable. On that site alone, the video was viewed 3 million times before it was removed Sunday, The New York Times reported, and a link to the Streamable video was shared hundreds of times across Facebook and Twitter.
Streamable isn’t among the 18 online services that are members of the Global Internet Forum to Counter Terrorism, so it didn’t have access to the forum’s full set of tools, which major apps use to detect and remove videos of mass shootings. Amazon, which owns Twitch, is a member of the forum.
The ominous lesson, experts said, is that as long as there are small services unable or unwilling to thoroughly scrub out mass shooting videos, the industry will have a difficult time stamping them out.
“There’s always going to be some sketchier and sketchier company that is happy to make a buck off the dregs of the internet,” said Hany Farid, a computer science professor at the University of California, Berkeley, and an expert in countering online extremism.
“I don’t think there’s an easy answer,” he said.
Facebook and Twitter said Tuesday that their internal teams were working with the industry anti-extremism forum to cut off dissemination of the Buffalo video, in line with their terms of service.
At the center of the companies’ strategy is a shared database of information about extremist videos. As mass shootings and other violent attacks occur, the 18 services participating in the industry forum add more information to the database, allowing all of the participating companies to know the digital fingerprints of violent videos, detect matches and speed up the takedown process.
Companies continue to share more digital fingerprints, known as “hashes,” as they discover altered versions of banned videos.
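As a rough sketch of how such a shared fingerprint pool can work (the names and structure below are hypothetical, and real systems use perceptual rather than cryptographic hashes):

```python
import hashlib

# Hypothetical sketch of a shared hash database. Member platforms
# fingerprint known violating videos and check new uploads against
# the pool. The exact cryptographic hash here is a simplification;
# production systems rely on perceptual hashes instead.

shared_hash_db: set[str] = set()  # stands in for the forum's shared database

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for a video file (simplified to SHA-256)."""
    return hashlib.sha256(video_bytes).hexdigest()

def report_violating_video(video_bytes: bytes) -> None:
    """A member platform adds a newly discovered video to the shared pool."""
    shared_hash_db.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """Any member platform can check an incoming upload against the pool."""
    return fingerprint(video_bytes) in shared_hash_db
```

An exact hash like this matches only byte-identical copies, which is one reason the database keeps growing as altered versions surface.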
Three years after the Christchurch shooting, “we continue to get hashes,” Pollack said. She said Christchurch data makes up 5 percent of the entire shared database, which also includes data about videos from the Islamic State terrorist group, better known as ISIS, and other extremist groups.
“Members will continue to be able to add new hashes of the perpetrator-produced content for as long as they find new versions,” she said.
“We have to expect there to be these continued attempts to manipulate the content,” she added. “We never say, ‘All right, we’ve got enough hashes.’”
Common ways of altering videos include overlaying text or adding banners, Pollack said. Users can sometimes alter a video enough that it evades the matching software on established platforms, which then report the near-duplicates back to the industry forum.
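This cat-and-mouse dynamic is why matching systems favor perceptual hashes, which change only slightly when a video is lightly edited. A minimal sketch, assuming hypothetical 64-bit perceptual hashes compared by Hamming distance (the threshold is an illustrative choice, not any platform’s real setting):

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits that differ between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def is_near_duplicate(h1: int, h2: int, threshold: int = 10) -> bool:
    """Treat two videos as matching when their hashes are close enough.

    A light edit (overlaid text, an added banner) flips only a few
    bits, so the altered copy still matches; a heavier edit can push
    the distance past the threshold and evade detection, which is why
    new hashes keep being reported back to the shared database.
    """
    return hamming_distance(h1, h2) <= threshold

# Example: a lightly altered copy whose hash differs in only 3 bits
original = 0b1011_0110_0001_1101
altered = original ^ 0b0000_0100_1000_0001  # flip 3 bits
assert is_near_duplicate(original, altered)
```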
By Sunday night, the forum’s 18 member companies had identified 130 “visually distinct videos” related to the Buffalo shooting and 740 “visually distinct images,” the forum said Tuesday.
Pollack said the industry forum welcomes new member companies if they’re willing to meet certain criteria. She declined to comment on whether Streamable or any other newer tech companies were in talks to join, but she said the forum wants to bring in more members from different parts of the world and with different types of services.
“In order to make progress to meet this mission, we have to be able to bring more companies to the table,” she said. The forum was founded in 2017 after lobbying by European leaders. The most recent member to join was Zoom, in December.
Another industry organization, Tech Against Terrorism, has said it would begin alerting individual tech companies when it finds copies of the Buffalo livestream or the document on their platforms.
Streamable, a startup in Delaware, was acquired last year by a larger, London-based tech company called Hopin. The Wilmington address mentioned on its website is listed as “for sale,” and the building at that address was locked Monday, The Washington Post reported.
Hopin said in a statement Monday that it was trying to take down all videos from the Buffalo suspect.
“These types of videos violate our community guidelines and our terms of service and we are working diligently to remove them expeditiously as well as terminate accounts of those who upload them,” Hopin said.
“We are deeply disturbed by this senseless, racially motivated act of gun violence and are deeply saddened for the innocent lives lost and for their families,” it said.
Hopin didn’t respond to questions about what resources it is devoting to anti-extremism efforts or to what extent it works with other tech companies on the subject. Streamable didn’t immediately respond to a separate request for comment.
Brian Fishman, a former Facebook employee who oversaw efforts there against “dangerous organizations,” said companies and others need to keep looking for “choke points” in the distribution of mass shooting videos.
“It’s time to start thinking about this distribution model with the same rigor as we’ve collectively analyzed ISIS distribution. It’s not as planned, but there are clear stages of distribution,” Fishman said on Twitter.
Brittan Heller, a fellow in democracy and technology at the Atlantic Council, an international affairs think tank, said tech companies that allow livestreaming should consider limits on the service — such as time delays or requiring minimum follower counts — if they don’t already have them.
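A minimal sketch of the kind of gating Heller describes; the eligibility criteria, thresholds and delay values below are illustrative assumptions, not any platform’s actual policy:

```python
from dataclasses import dataclass

@dataclass
class Streamer:
    follower_count: int
    account_age_days: int
    prior_violations: int

# Illustrative thresholds only; not any real platform's policy.
MIN_FOLLOWERS = 50
MIN_ACCOUNT_AGE_DAYS = 30

def may_livestream(user: Streamer) -> bool:
    """Gate livestream access on account history and follower count."""
    return (
        user.follower_count >= MIN_FOLLOWERS
        and user.account_age_days >= MIN_ACCOUNT_AGE_DAYS
        and user.prior_violations == 0
    )

def broadcast_delay_seconds(user: Streamer) -> int:
    """Give moderators a longer buffer for newer, lightly followed accounts."""
    return 60 if user.follower_count < 1000 else 10
```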
“It’s not just having terms of service. It’s having strong, enforceable and transparent enforcement regimes around content moderation,” said Heller, who’s also a lawyer in private practice advising tech companies on human rights and related subjects.
She said unmoderated social media and video hosting sites will persist as a kind of alternative business model, fueled in part by white supremacy and other violent ideologies.
“The tech sector can’t stop this from happening. That is a uniquely American political problem,” Heller said.
While the biggest tech companies, such as Google, Microsoft and Facebook’s parent company, Meta, have stepped up enforcement of their terms of service, other large platforms, such as Twitch, Discord and Telegram, haven’t faced as much scrutiny from the public, regulators or lawmakers.
Farid said that makes a difference.
“They have yet to have a reckoning, where their CEOs are dragged in front of Congress week after week after week to answer for their crimes against humanity,” he said.
He said the most effective way to force change at tech companies would be to impose legal liability, because under current U.S. law, platforms can’t be held liable for publishing or even promoting extremist content. That may soon change in Europe and the United Kingdom, putting pressure on the companies to make changes worldwide.