Facebook Auto-Generates Videos Celebrating Extremist Images
Pages from a confidential whistleblower's report obtained by The Associated Press are photographed Tuesday, May 7, 2019, in Washington. Facebook likes to give the impression that it's stopping the vast majority of extremist posts before users ever see them, but the confidential whistleblower's complaint to the Securities and Exchange Commission alleges the social media company has exaggerated its success. Even worse, it shows that the company is making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists. (AP Photo/Jon Elswick)
BY DESMOND BUTLER, BARBARA ORTUTAY
WASHINGTON (AP) - The animated video begins with a photo of the black flags of jihad. Seconds later, it flashes highlights of a year of social media posts: plaques of anti-Semitic verses, talk of retribution and a photo of two men carrying more jihadi flags while they burn the Stars and Stripes.
It wasn't produced by extremists; it was created by Facebook. In a clever bit of self-promotion, the social media giant takes a year of a user's content and auto-generates a celebratory video. In this case, the user called himself "Abdel-Rahim Moussa, the Caliphate."
"Thanks for being here, from Facebook," the video concludes in a cartoon bubble before flashing the company's famous "thumbs up."
Facebook likes to give the impression that it's staying ahead of extremists by taking down their posts, often before users even see them. But a confidential whistleblower's complaint to the Securities and Exchange Commission obtained by The Associated Press alleges the social media company has exaggerated its success. Even worse, it shows that the company is inadvertently making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists.
A banner reading "The Islamic State" is displayed on the Facebook page of a user identifying himself as Nawan Al-Farancsa. The page was still live Tuesday, May 7, 2019, when the screen grab was made. Facebook says it has robust systems in place to remove content from extremist groups, but a sealed whistleblower's complaint reviewed by the AP says banned content remains on the web and is easy to find. (Facebook via AP)
According to the complaint, over a five-month period last year, researchers monitored pages by users who affiliated themselves with groups the U.S. State Department has designated as terrorist organizations. In that period, 38% of the posts with prominent symbols of extremist groups were removed. In its own review, the AP found that as of this month, much of the banned content cited in the study (an execution video, images of severed heads, propaganda honoring martyred militants) slipped through the algorithmic web and remained easy to find on Facebook.
The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service. In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company's ability to weed out violent posts automatically through artificial intelligence. During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.
"In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it," he said. Then he added: "That's what really good looks like."
Pages from a confidential whistleblower's report obtained by The Associated Press are photographed Tuesday, May 7, 2019, in Washington. Facebook likes to give the impression that it's stopping the vast majority of extremist posts before users ever see them, but the confidential whistleblower's complaint to the Securities and Exchange Commission alleges the social media company has exaggerated its success. Even worse, it shows that the company is making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists. The researchers in the SEC complaint identified over 30 auto-generated pages for white supremacist groups, whose content Facebook prohibits. (AP Photo/Jon Elswick)
The research behind the SEC complaint is aimed at spotlighting glaring flaws in the company's approach. Last year, researchers began monitoring users who explicitly identified themselves as members of extremist groups. It wasn't hard to document. Some of these people even list the extremist groups as their employers. One profile, topped by the black flag of an al-Qaida-affiliated group, listed its owner's employer, perhaps facetiously, as Facebook. The profile that included the auto-generated video with the flag burning also had a video of al-Qaida leader Ayman al-Zawahiri urging jihadi groups not to fight among themselves.
While the study is far from comprehensive, in part because Facebook rarely makes much of its data publicly available, researchers involved in the project say the ease of identifying these profiles using a basic keyword search and the fact that so few of them have been removed suggest that Facebook's claims that its systems catch most extremist content are not accurate.
"I mean, that's just stretching the imagination to beyond incredulity," says Amr Al Azm, one of the researchers involved in the project. "If a small group of researchers can find hundreds of pages of content by simple searches, why can't a giant company with all its resources do it?"
Al Azm, a professor of history and anthropology at Shawnee State University in Ohio, has also directed a group in Syria documenting the looting and smuggling of antiquities.
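What a "basic keyword search" over self-reported profile fields might look like can be sketched in a few lines. This is purely illustrative: the profile records and watch-list terms below are invented stand-ins, not the researchers' actual data or tooling.

```python
# Illustrative sketch of keyword matching over self-reported profile
# fields. The watch list and profiles are invented stand-ins.

WATCH_LIST = {"example militant brigade", "example banned group"}

profiles = [
    {"name": "user_a", "employer": "Example Banned Group", "bio": ""},
    {"name": "user_b", "employer": "a bakery", "bio": "posts about cats"},
]

def flag_profiles(profiles, watch_list):
    """Return profiles whose free-text fields contain a watch-list term."""
    flagged = []
    for profile in profiles:
        text = " ".join(str(value) for value in profile.values()).lower()
        if any(term in text for term in watch_list):
            flagged.append(profile)
    return flagged

for hit in flag_profiles(profiles, WATCH_LIST):
    print("flagged:", hit["name"])  # prints: flagged: user_a
```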
Facebook concedes that its systems are not perfect, but says it's making improvements.
"After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago," the company said in a statement. "We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world."
But as a stark indication of how easily users can evade Facebook, one page from a user called "Nawan al-Farancsa" has a header whose white lettering against a black background says in English "The Islamic State." The banner is punctuated with a photo of an explosive mushroom cloud rising from a city.
A Facebook page for a user whose name translates into English as "Lights of bitterness" lists the user as a doctor at the Islamic State. The page was still live as of Tuesday, May 7, 2019, when the screen grab was made. Facebook says it has robust systems in place to remove content from extremist groups, but a whistleblower's complaint reviewed by the AP says banned content remains on the web and is easy to find. (Facebook via AP)
The profile should have caught the attention of Facebook, as well as counter-intelligence agencies. It was created in June 2018 and lists the user as coming from Chechnya, once a militant hotspot. It says he lived in Heidelberg, Germany, and studied at a university in Indonesia. Some of the user's friends also posted militant content.
The page, still up in recent days, apparently escaped Facebook's systems because of an obvious and long-running evasion of moderation that Facebook should be adept at recognizing: the letters were not searchable text but embedded in a graphic block. But the company says its technology scans audio, video and text, including when it is embedded, for images that reflect violence, weapons or logos of prohibited groups.
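Text rendered into an image, as on this banner, defeats a plain text search, but extracting it is a routine optical character recognition task. The following is a minimal sketch of such a screening step, not Facebook's actual system; it assumes the Pillow and pytesseract libraries, and the banned-phrase list and file name are placeholders.

```python
# Hypothetical OCR screening step for text embedded in images, the
# evasion described above. Requires Pillow and pytesseract (which in
# turn needs the Tesseract binary installed). The phrase list and
# file name are placeholders.

from PIL import Image
import pytesseract

BANNED_PHRASES = {"the islamic state"}  # illustrative stand-in

def banner_violates_policy(image_path: str) -> bool:
    """OCR the image, then check the extracted text against the list."""
    extracted = pytesseract.image_to_string(Image.open(image_path))
    extracted = extracted.lower()
    return any(phrase in extracted for phrase in BANNED_PHRASES)

if banner_violates_policy("banner.png"):  # placeholder path
    print("flag for human review")
```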
The social networking giant has endured a rough two years beginning in 2016, when Russia's use of social media to meddle in the U.S. presidential election came into focus. Zuckerberg initially downplayed the role Facebook played in the influence operation by Russian intelligence, but the company later apologized.
Facebook says it now employs 30,000 people who work on its safety and security practices, reviewing potentially harmful material and anything else that might not belong on the site. Still, the company is putting a lot of its faith in artificial intelligence and its systems' ability to eventually weed out bad stuff without the help of humans. The new research suggests that goal is a long way away, and some critics allege that the company is not making a sincere effort.
When the material isn't removed, it's treated the same as anything else posted by Facebook's 2.4 billion users: celebrated in animated videos, linked, categorized and recommended by algorithms.
But it's not just the algorithms that are to blame. The researchers found that some extremists are using Facebook's "Frame Studio" to post militant propaganda. The tool lets people decorate their profile photos within graphic frames, to support causes or celebrate birthdays, for instance. Facebook says those framed images must be approved by the company before they are posted.
Hany Farid, a digital forensics expert at the University of California, Berkeley, who advises the Counter-Extremism Project, a New York- and London-based group focused on combatting extremist messaging, says that Facebook's artificial intelligence system is failing. He says the company is not motivated to tackle the problem because it would be expensive.
"The whole infrastructure is fundamentally flawed," he said. "And there's very little appetite to fix it because what Facebook and the other social media companies know is that once they start being responsible for material on their platforms it opens up a whole can of worms."
Another Facebook auto-generation function gone awry scrapes employment information from users' pages to create business pages. The function is supposed to produce pages that help companies network, but in many cases they are serving as a branded landing space for extremist groups. The function allows Facebook users to like pages for extremist organizations, including al-Qaida, the Islamic State group and the Somali-based al-Shabab, effectively providing a list of sympathizers for recruiters.
At the top of an auto-generated page for al-Qaida in the Arabian Peninsula, the AP found a photo of the damaged hull of the USS Cole, which was bombed by al-Qaida in a 2000 attack off the coast of Yemen that killed 17 U.S. Navy sailors. It's the defining image in AQAP's own propaganda. The page includes the Wikipedia entry for the group and had been liked by 277 people when last viewed this week.
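The underlying failure mode is easy to sketch: if a page is auto-created whenever enough users type the same employer string, a banned organization's name becomes a branded landing page unless that string is screened first. The threshold, data and deny list below are assumptions for illustration, not Facebook's actual pipeline.

```python
# Hypothetical sketch of employer-field auto-generation and the check
# that is evidently missing or incomplete: screen the employer string
# against a deny list before creating a page. Not Facebook's code.

from collections import Counter

DENY_LIST = {"example banned group"}  # illustrative stand-in

def auto_generate_pages(employer_fields, min_users=2):
    """Create a page stub for any employer name typed by enough users."""
    counts = Counter(field.strip().lower() for field in employer_fields)
    return [
        {"title": employer, "members": count}
        for employer, count in counts.items()
        if count >= min_users and employer not in DENY_LIST
    ]

fields = ["Acme Bakery", "acme bakery",
          "Example Banned Group", "example banned group"]
print(auto_generate_pages(fields))
# Without the DENY_LIST check, the banned name would also get a page,
# with a ready-made list of sympathizers attached.
```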
As part of the investigation for the complaint, Al Azm's researchers in Syria looked closely at the profiles of 63 accounts that liked the auto-generated page for Hay'at Tahrir al-Sham, a group formed from a merger of militant groups in Syria, including the al-Qaida-affiliated al-Nusra Front. The researchers were able to confirm that 31 of the profiles matched real people in Syria. Some of them turned out to be the same individuals Al Azm's team was monitoring in a separate project to document the financing of militant groups through antiquities smuggling.
Facebook also faces a challenge with U.S. hate groups. In March, the company announced that it was expanding its prohibited content to also include white nationalist and white separatist content; previously it took action only against white supremacist content. It says that it has banned more than 200 white supremacist groups. But it's still easy to find symbols of supremacy and racial hatred.
The researchers in the SEC complaint identified over 30 auto-generated pages for white supremacist groups, whose content Facebook prohibits. They include "The American Nazi Party" and the "New Aryan Empire." A page created for the "Aryan Brotherhood Headquarters" marks the office on a map and asks whether users recommend it. One endorser posted a question: "How can a brother get in the house."
Even supremacists flagged by law enforcement are slipping through the net. Following a sweep of arrests beginning in October, federal prosecutors in Arkansas indicted dozens of members of a drug trafficking ring linked to the New Aryan Empire. A legal document from February paints a brutal picture of the group, alleging murder, kidnapping and intimidation of witnesses that in one instance involved using a searing-hot knife to scar someone's face. It also alleges the group used Facebook to discuss New Aryan Empire business.
But many of the individuals named in the indictment have Facebook pages that were still up in recent days. They leave no doubt of the users' white supremacist affiliation, posting images of Hitler, swastikas and a numerical symbol of the New Aryan Empire slogan, "To The Dirt," the members' pledge to remain loyal to the end. One of the group's indicted leaders, Jeffrey Knox, listed his job as "stomp down Honky." Facebook then auto-generated a "stomp down Honky" business page.
Social media companies have broad protection in U.S. law from liability stemming from the content that users post on their sites. But Facebook's role in generating videos and pages from extremist content raises questions about exposure. Legal analysts contacted by the AP differed on whether the discovery could open the company up to lawsuits.
At a minimum, the research behind the SEC complaint illustrates the company's limited approach to combatting online extremism. The U.S. State Department lists dozens of groups as "designated foreign terrorist organizations," but Facebook in its public statements says it focuses its efforts on two: the Islamic State group and al-Qaida. But even with those two targets, Facebook's algorithms often miss the names of affiliated groups. Al Azm says Facebook's method seems to be less effective with Arabic script.
For instance, a search in Arabic for "Al-Qaida in the Arabian Peninsula" turns up not only posts, but an auto-generated business page. One user listed his occupation as "Former Sniper" at "Al-Qaida in the Arabian Peninsula," written in Arabic. Another user evaded Facebook's cull by reversing the order of the countries in the Arabic for ISIS, or "Islamic State of Iraq and Syria."
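Reversing the word order defeats any filter that matches the name as a single exact string; comparing sorted tokens instead still catches the permutation. A small sketch, with English placeholders standing in for the Arabic names:

```python
# Sketch of the reordering evasion described above. An exact-phrase
# filter misses the permuted name; comparing sorted word tokens does
# not. English placeholders stand in for the Arabic group names.

BANNED = "islamic state of iraq and syria"

def exact_match(text: str) -> bool:
    return BANNED in text.lower()

def token_match(text: str) -> bool:
    return sorted(BANNED.split()) == sorted(text.lower().split())

evasion = "Islamic State of Syria and Iraq"  # countries reversed

print(exact_match(evasion))  # False: the exact-phrase filter is evaded
print(token_match(evasion))  # True: token comparison still matches
```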
John Kostyack, a lawyer with the National Whistleblower Center in Washington who represents the anonymous plaintiff behind the complaint, said the goal is to make Facebook take a more robust approach to counteracting extremist propaganda.
"Right now we're hearing stories of what happened in New Zealand and Sri Lanka: just heartbreaking massacres where the groups that came forward were clearly openly recruiting and networking on Facebook and other social media," he said. "That's not going to stop unless we develop a public policy to deal with it, unless we create some kind of sense of corporate social responsibility."
Farid, the digital forensics expert, says that Facebook built its infrastructure without thinking through the dangers stemming from content and is now trying to retrofit solutions.
"The policy of this platform has been: 'Move fast and break things.' I actually think that for once their motto was actually accurate," he says. "The strategy was grow, grow, grow, profit, profit, profit and then go back and try to deal with whatever problems there are."
Barbara Ortutay reported from San Francisco. Associated Press writer Maggie Michael contributed to this report.
Follow the authors on Twitter at https://twitter.com/desmondbutler and https://twitter.com/BarbaraOrtutay
Have a tip? Contact the authors securely at https://www.ap.org/tips