Hours after a gunman livestreamed himself during a shooting spree on Facebook, the social media platform, as well as Twitter and Google's YouTube, were still struggling to contain the spread of videos and other material related to the deadliest attack in New Zealand history.
On Friday, at least 49 people were killed and dozens more injured by at least one gunman at two mosques in Christchurch, New Zealand.
According to authorities, the 28-year-old suspect appeared to livestream video of the attack, documenting the drive to the Al Noor Mosque from a first-person perspective and showing the shooter walking into the mosque and opening fire.
Following the broadcast of the attacks, critics questioned why it had taken as long as 90 minutes to take down the video and prevent it from spreading throughout the digital ecosystem. The answer? It's harder than it looks.
On YouTube, the video lingered for at least eight hours after the attack as different individuals republished it.
"Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube. As with any major tragedy, we will work cooperatively with the authorities," a company spokeswoman told ABC News in a statement.
The company removed thousands of videos related to the shooting.
Facebook also issued a statement saying it had taken down the suspected shooter's Facebook and Instagram accounts and removed the video he posted of the attack.
"Our hearts go out to the victims, their families and the community affected by the horrendous shootings in New Zealand. Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video. We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware. We will continue working directly with New Zealand Police as their response and investigation continues," Facebook New Zealand spokeswoman Mia Garlick wrote in a statement.
Facebook issued a statement Saturday saying it had taken down 1.5 million videos of the attack in the first 24 hours, including 1.2 million that were blocked at upload. The social network also said it was removing edited versions of the shooting not showing any graphic content "out of respect for the people affected by this tragedy and the concerns of local authorities."
Portions of the video were also spread by individuals on Twitter, which said it, too, was working to remove the content and had suspended the shooter's account.
Experts said the shooter's decision to broadcast the attack on those particular platforms and online message boards was deliberate, engineered for maximum exposure.
"The way in which this murder rolled out was a full press kit, complete with viral marketing through trolls who would know to download and re-upload the content, shows that we are very far from the moderation we desperately need," Joan Donovan, the director of the Technology and Social Change Research Project at Harvard's Kennedy School, told ABC News.
"Facebook was used to livestream because of how easy it is to capture an audience and circulate the material. Facebook has not taken time to moderate white supremacist and Islamophobic content. That has to be addressed, not just in policy, but in content moderation applications," Donovan said.
But Alex Stamos, Facebook's former chief security officer, took issue with calling the video "viral."
"This isn't about the video 'going viral' in the traditional sense, where a piece of content explodes on social media *because* of engagement on that platform," Stamos tweeted. "It isn't going to get a lot better than this."
Stamos pointed out that the verboten nature of the offensive material made it a popular search on Google. He also noted that the "shooter was an active member of a rather horrible online community (which I will not amplify) that encourages this kind of behavior. He posted the FB Live link and mirrors to his manifesto right before, so thousands of people got copies in real-time," he tweeted.
"So now we have tens of millions of consumers wanting something and tens of thousands of people willing to supply it, with the tech companies in between. YouTube and Facebook/Instagram have perceptual hashing [digital fingerprints] built during the ISIS crisis to deal with this and teams looking," Stamos tweeted. "Perceptual hashes and audio fingerprinting are both fragile, and a lot of these same kinds of people have experience beating them to upload copyrighted content. Each time this happens, the companies have to spot it and create a new fingerprint."
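The perceptual hashing Stamos describes can be illustrated with a toy "average hash": each bit records whether a pixel is brighter than the image's mean, and near-duplicate uploads produce hashes with a small Hamming distance. This is a simplified sketch for illustration only; production systems like the ones the platforms use are far more sophisticated.

```python
def average_hash(pixels):
    """Toy perceptual hash of a grayscale image.

    pixels: 2-D list of brightness values (0-255).
    Each bit is 1 if that pixel is brighter than the image's mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance signals a near-duplicate."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

original = [[10, 200], [30, 220]]
# A re-encoded copy with slight brightness shifts still hashes the same...
near_copy = [[12, 198], [28, 223]]
print(hamming_distance(average_hash(original), average_hash(near_copy)))  # 0
```

This is also why such hashes are "fragile," as Stamos notes: cropping, overlays or re-filming the screen can flip enough bits that the copy no longer matches the stored fingerprint, forcing the companies to spot it again and fingerprint the new variant.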
Comparing the livestreamed New Zealand shooting video to content from ISIS, Stamos told ABC News, "The ISIS problem was partially cracked because the [tech] companies infiltrated all their [messaging app] Telegram channels. So you could grab a video and block it before the first upload attempt. No equivalent chokepoint here."
The other problem in policing the broadcast of murder is that the technology relies on artificial intelligence to spot problem content in the first place, and algorithms are mostly reactive, not predictive. When the programs detect disturbing content, they can flag it and create a digital fingerprint, or hash.
"You can create a content fingerprint of something that has been previously recorded (even if it ended just moments ago). But live content is problematic as there is no way to fingerprint something that occurs in the future," Ashkan Soltani, the former chief technologist at the Federal Trade Commission, wrote to ABC News in an email.
"There are AI techniques that can combine attributes of a scene along with metadata (i.e., where it's being uploaded from, who the uploader is, what words might occur in the title, etc.) but they're not as robust," Soltani wrote.
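The metadata-based approach Soltani describes can be sketched as a simple risk score over upload context. The signals, weights and thresholds below are invented for illustration; real systems combine many more signals with learned models rather than hand-set rules.

```python
def risk_score(uploader_age_days, title, flagged_keywords):
    """Hypothetical upload-risk heuristic combining metadata signals.

    uploader_age_days: age of the uploading account, in days.
    title: the video's title text.
    flagged_keywords: set of lowercase words that raise suspicion.
    All weights below are made up for this example.
    """
    score = 0.0
    if uploader_age_days < 7:  # brand-new accounts are treated as riskier
        score += 0.4
    title_words = set(title.lower().split())
    if title_words & flagged_keywords:  # any flagged word in the title
        score += 0.5
    return score

keywords = {"shooting", "livestream"}
print(risk_score(2, "mosque shooting full video", keywords))  # 0.9
```

As Soltani notes, such signals are "not as robust" as a fingerprint match: they can only estimate risk and will both miss offending uploads and flag innocent ones.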
Soltani also echoed Donovan in saying tech companies have prioritized copyright infringement over problem content.
"This has more to do with priorities," he said. "If the same level of scrutiny was placed on these platforms for hosting controversial content as there was copyrighted content (for which they can be sued) -- they would dramatically increase their investment in technical solutions to deal with some of these problems."
The other problem is that the shooter was so adroit at deploying the coded language and familiar tropes of white supremacists that his intention to commit an actual attack was obscured. He had been posting on online message boards popular with hate groups just before the attack.
"So much reads like alt-right trolling you wouldn't know, just by reading it, that he'd go out and shoot 49 people. Until he'd done it. You can tell it took the channel a while to figure out he was really shooting people. Some were asking if it was live-action role-playing. And then, as they realized, the community split. Some took the position: delete everything, we've got to get out of here, and others were cheering him on," Ben Nimmo, of the Atlantic Council's Digital Forensic Research Lab, told ABC News.
"The thing is hindsight is no good, you've got to have foresight," Nimmo added.