Facebook blames AI for failing to catch New Zealand attack video

Facebook said its artificial intelligence tools failed to detect the 17-minute video showing the terrorist attack in Christchurch, New Zealand, adding that AI is "not perfect."

In a blog post published Wednesday, Guy Rosen, Facebook's vice president of integrity, wrote that the company's artificial intelligence tools rely on "training data" -- thousands of examples of a particular type of content -- to discern whether new content is problematic.

"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems," Rosen wrote. "However, this particular video did not trigger our automatic detection systems."

He added, "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."

Facebook is under fire after the alleged terrorist used the platform to livestream his massacre at a mosque in Christchurch, with some New Zealand businesses vowing to boycott the service. Increasingly, critics are pointing to what they say is a lack of investment and controls from Facebook to safeguard against issues like misinformation and violent content, which can spread quickly on the service. 

Livestream: A lethal lesson

The video was viewed fewer than 200 times while it was being live-streamed, Rosen said, and during the broadcast the service "did not get a single user report." "This matters because reports we get while a video is broadcasting live are prioritized for accelerated review," he wrote.

The video was viewed about 4,000 times before Facebook blocked it from the service, he added. The first user report came 12 minutes after the live broadcast ended. Rosen noted that Facebook accelerates its review of any video flagged for suicide, but the livestream was not reported under that category.

"In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures," he noted. "As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review."

Within the first 24 hours of the attack, Facebook blocked 1.2 million uploads of the video but said its efforts were thwarted in part by "bad actors" who coordinated to repost it. Others, including news media outlets, distributed the video as a way to document the attack, he added.
