Facebook is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognise as extremist, according to a complaint made public on Thursday.
The National Whistleblowers Centre in Washington carried out a five-month study of the pages of 3,000 Facebook members who had liked or connected with organisations proscribed as terrorist by the US government.
Researchers found that the Islamic State group and al-Qaeda were "openly" active on the social network.
More worryingly, Facebook's own software was automatically creating "celebration" and "memories" videos for extremist pages that had amassed enough views or likes.
The National Whistleblowers Centre said it had filed a complaint with the US Securities and Exchange Commission on behalf of a source who preferred to remain anonymous.
"Facebook's efforts to stamp out terror content have been weak and ineffectual," an executive summary of the 48-page document shared by the center read.
"Of even greater concern, Facebook itself has been creating and promoting terror content with its auto-generate technology."
The study's results, shared in the complaint, indicated that Facebook was not delivering on its claims about eliminating extremist posts or accounts.
The company said it had been removing terror-linked content "at a far higher success rate than even two years ago" since making heavy investments in technology.
"We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world," a Facebook spokesman said.