Facebook served as an “echo chamber for anti-Rohingya content”, Amnesty says

The Rohingya, a predominantly Muslim community in Myanmar, fled by the thousands to Bangladesh in 2017 in the face of a bloody military crackdown in their majority-Buddhist country.

Posted Sep 29, 2022, 7:01 AM

Once again, Facebook stands accused of fueling online hatred. This time it is Amnesty International denouncing the platform’s role in the bloody military crackdown on the Rohingya in 2017. “Meta — which owns Facebook — through its dangerous algorithms and its relentless pursuit of profit, substantially contributed to the atrocities perpetrated by the Myanmar military,” the NGO says in a report released Thursday.

Five years ago, while thousands of members of this predominantly Muslim community were “killed, tortured, raped and displaced” as part of a “campaign of ethnic cleansing”, Facebook in Myanmar became “an echo chamber for virulent anti-Rohingya content,” Amnesty charges.

At the time, the highly popular social network held a “dominant position” in the country, which made the impact of the content that “flooded the platform” all the more dramatic. This incitement to violence and discrimination also came from the army’s most senior figures, the NGO notes, citing a post by General Min Aung Hlaing, who was eventually banned from Facebook in 2018.

Despite “repeated” warnings from activists, some of whom even traveled to Meta’s headquarters in Menlo Park, California, “Meta disregarded these warnings” and the platform’s “report” function proved useless, the organization maintains. The reason, according to the report, was that moderation staff were far too few: it speaks of “a single content moderator dedicated to Myanmar who spoke Burmese” in 2014.

Algorithms designed to divide

In subsequent years, Meta admitted that it had “not done enough to prevent [its] platform from being used to foment division and incite violence” and reported some improvements in its community engagement and content moderation practices in Myanmar (“dozens” of moderators have reportedly been hired).

An insufficient response, according to Amnesty International, which accuses Meta of failing to “take into account the crucial role played by its algorithms” in amplifying anti-Rohingya content. The NGO draws on internal Facebook documents recently made public by whistleblower Frances Haugen, which show that the company’s advertising-based business model systematically promotes polarizing content because it generates more engagement.

“This report reveals that Meta had long been aware of the risks associated with its algorithms, but never acted on them”, the report insists. Having established the tech giant’s responsibility from a human rights perspective, the report concludes that Meta “has a responsibility to provide survivors with an effective remedy.”

Amnesty International particularly deplores the refusal of Mark Zuckerberg’s group to fund an education project in a Rohingya refugee camp in Bangladesh. That refusal gave rise to a complaint against the company, still pending in the United States.

With Big Tech having “proven unable to deal with this problem” of moderation, the NGO says, it calls on States to intervene by “enacting and enforcing legislation to effectively rein in surveillance-based business models”. This recommendation echoes the Digital Services Act (DSA), a text on content moderation currently being finalized in Brussels — and already sharply criticized by the major platforms.
