Study finds Meta, X approved ads containing violent antisemitic, anti-Muslim hate speech before German election

Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and anti-Jewish hate speech ahead of the country’s federal election, according to new research from Eko, a corporate responsibility campaign group.

The group’s researchers tested whether the two platforms’ ad review systems would approve or reject submissions containing hateful, violent messaging targeting minorities ahead of an election in which immigration has become a major topic in mainstream political discourse. The test ads included anti-Muslim slurs; calls for immigrants to be imprisoned or gassed; and AI-generated imagery of mosques and synagogues burning.

The majority of the test ads were approved within hours of being submitted for review in mid-February. Germany’s federal election is set to take place on Sunday, February 23, 2025.

Hate speech advertisements scheduled

Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election, while Meta approved half of them (five ads) to run on Facebook (and potentially Instagram) but rejected the other five. Meta’s explanation for the rejections indicated it believed the ads could pose political or social sensitivity risks that might influence voting.

The five ads Meta approved, however, included violent hate speech likening Muslim refugees to “vermin,” “rodents,” or “viruses,” branding Muslim immigrants as rapists, and calling for them to be sterilized or gassed. Meta also approved an ad calling for synagogues to be burned in order to “stop the globalist Jewish rat agenda” — despite having a policy that requires advertisers to disclose the use of AI imagery in ads relating to social issues, elections, or politics.

X, meanwhile, approved all five of these hateful ads — along with a further five containing similarly violent hate speech targeting Muslims and Jews.

The additional approved ads included messaging attacking “rodent immigrants” who were “flooding the country” to “steal our democracy,” and an antisemitic slur suggesting that Jews lie about climate change in order to destroy European industry and accumulate economic power. The latter ad ran with an AI-generated image depicting a group of men sitting at a table with stacks of gold bars and a Star of David above them.

Another ad X approved contained a direct attack on the SPD, the center-left party that currently leads Germany’s coalition government, falsely claiming the party wants to take in 60 million Muslim refugees, before attempting to incite a violent response. X also scheduled an ad claiming that “leftists” want “open borders” and calling for the extermination of Muslim “rapists.” Elon Musk, X’s owner, has personally intervened in the German election: in a December post he urged German voters to back the far-right AfD party to “save Germany,” and he has hosted a livestream on X with the AfD’s leader, Alice Weidel.

Eko’s researchers disabled all the test ads before any that had been approved were scheduled to run, ensuring that no platform users were exposed to the violent hate speech. The group says the tests expose glaring flaws in the platforms’ approach to ad content moderation. In the case of X, it is unclear whether the platform moderates ads at all, given that all 10 violent hate speech ads were approved for display.

The findings also suggest that the ad platforms could be earning revenue from distributing violent hate speech.

EU’s Digital Services Act

Eko’s tests suggest that neither platform is properly enforcing the bans on hate speech that both claim to apply to ad content in their policies. Eko reached the same conclusion about Meta after conducting a similar test in 2023, before the EU’s new online governance rules came into effect — suggesting the regime has had no impact on how the company operates. “Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally flawed, despite the Digital Services Act being fully implemented,” an Eko spokesperson told TechCrunch.

“Rather than strengthening its ad-review process or hate-speech policies, Meta appears to be backtracking across the board,” they added, pointing to the company’s recent announcements rolling back moderation and fact-checking policies as a “sign of active regression” that puts it on a direct collision course with DSA rules on systemic risks.

Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the two social media giants. The group said it also shared the results with both companies, but neither responded.

The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. In April, it said it suspected Meta of inadequate moderation of political ads.

A preliminary decision announced in July on a portion of its DSA investigation into X raised suspicions that the platform is failing to comply with the regulation’s ad transparency rules. The bulk of that investigation, which began in December 2023, remains unresolved.

Confirmed violations of the DSA can attract penalties of up to 6% of global annual turnover, and systemic non-compliance could even lead to regional access to violating platforms being temporarily blocked. But the EU is taking its time over the Meta and X investigations, so any DSA sanctions remain up in the air.

In the meantime, with Germany’s election now just hours away, a growing body of civil society research indicates that the EU’s flagship online governance regulation has failed to shield the democratic process of the bloc’s largest economy from a range of tech-fueled threats. Earlier this week, Global Witness released the results of tests of X’s and TikTok’s algorithmic “For You” feeds in Germany, which suggested the platforms are biased toward promoting AfD content over content from other political parties. Civil society researchers have also accused X of blocking their access to data in the run-up to the German election — access the DSA is supposed to enable so that researchers can study election security risks.

“The European Commission has taken an important step by opening DSA investigations into both Meta and X. Now we need to see the Commission take strong action to address the concerns raised as part of these investigations,” the Eko spokesperson also told us.

“Our findings, alongside mounting evidence from other civil society groups, show that Big Tech won’t clean up its platforms voluntarily,” the spokesperson said, adding that Meta and X continue to allow hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA. (We have withheld the spokesperson’s name to prevent harassment.)

“Regulators must take strong action — both in enforcing the DSA and by implementing pre-election mitigation measures,” they argued. This could include switching off profiling-based recommender systems immediately before elections and deploying other “break-glass” measures to prevent the algorithmic amplification of content such as hateful material in the run-up to the vote. They also suggested that, in the current political climate, there is a real risk the Commission will not fully enforce these laws as a concession to the U.S.
