Social media giants criticised for failing to remove nearly 90 per cent of Islamophobic content

A new report has called out major social media platforms for failing to honour the commitment they made after the 2019 Christchurch terror attack to remove online hate directed at Muslims.

Muslims congregate for prayer at the Auburn Gallipoli Mosque in Sydney. Source: Getty / Anadolu Agency

A new global report has revealed that 89 per cent of Islamophobic content online has not been removed after being reported to social media giants, fuelling concerns over safety for Muslims on social media and in the real world.

In new findings released this week, US-based not-for-profit organisation the Center for Countering Digital Hate (CCDH) said it had reported 530 posts it deemed to preach hate towards Muslims over three weeks between February and March 2022.

The posts received 25 million views worldwide, but 89 per cent were not taken down across Facebook, Instagram, Twitter, YouTube and TikTok.

Instagram, Twitter and TikTok failed to remove hashtags such as #deathtoislam, #islamiscancer, #raghead and #antiislam.

Of the 105 posts reported on Twitter, only three were removed. Facebook was the next worst performer, removing just seven of the 125 posts reported.

Instagram received 227 reports of anti-Muslim content, with its moderators removing 12 posts and deleting 20 accounts that were responsible for the posts.

The report stated that YouTube failed to remove any of the 23 videos reported.

When SBS News contacted YouTube for comment, a spokesperson confirmed it had now removed five videos and placed age restrictions on eight.

"Much of the hateful content we uncovered was blatant and easy to find - with even overtly Islamophobic hashtags circulating openly, and hundreds of thousands of users belonging to groups dedicated to preaching anti-Muslim hatred," CCDH chief executive Imran Ahmad said.

The report comes three years after the Christchurch terror attack, in which 51 Muslim worshippers were gunned down at two mosques during Friday prayers by a right-wing extremist.

Friday prayers spilled out onto the street at Sydney's Lakemba Mosque on 22 March 2019 after 51 people were killed and dozens more injured in the Christchurch attack. Credit: Lisa Maree Williams/Getty Images

Following the fatal attack, the New Zealand government launched the Christchurch Call - an open pledge to better monitor the internet and prevent harmful content from being posted and spilling over into real-world violence.

The CCDH harshly criticised the social media giants that made the commitment - namely Meta (formerly Facebook), Twitter and Google - for failing to act on their pledge to remove Islamophobic content.

"When social media companies fail to act on hateful and violent content, it normalises these opinions, gives offenders a sense of impunity, and can inspire offline violence."

In a statement to SBS News, a YouTube spokesperson said: "YouTube’s hate speech and harassment policies outline clear guidelines prohibiting content that promotes violence or hatred against individuals or groups where religion, ethnicity or other protected attributes are being targeted."

YouTube said that in its most recent quarter it removed more than 410,000 videos for violating its hate speech policies.

According to a Twitter spokesperson, the platform does "not tolerate the abuse or harassment of people on the basis of religion."

"Today, more than 60 per cent of violative content is surfaced by our automated systems, further reducing the burden on individuals to report abuse," they said.

"While we have made recent strides in giving people greater control to manage their safety, we know there is still work to be done."

Twitter has banned the referenced hashtags from appearing in the trends tab on the platform and has expanded its hateful conduct policy to prohibit the dehumanisation of a group of people based on their religion.

SBS News has contacted Meta and TikTok for comment, but so far there has been no response.

Controversial theory maintains online presence

The report also analysed 100 posts referencing the "Great Replacement" - a radical conspiracy theory, adopted by white supremacists, which claims non-white people will displace white populations and come to dominate Western culture.

According to the findings, YouTube failed to remove any of the eight reported videos promoting the theory. Together, these videos collected nearly 19 million views.

According to YouTube, many claims made under this theory contravene its hate speech policy. Its systems work to prioritise content from authoritative sources at the top of its search results.

'Disheartened': The risk in Australia

Of the 125 posts reported on Facebook, many were found on public pages that regularly post anti-Muslim hate.

Eight of the 23 pages outlined in the report are Australian-based and have amassed more than 200,000 followers.

Rita Jabri Markwell, a legal adviser to the Australian Muslim Advocacy Network, said the report's findings validated concerns that Muslims have long held about their treatment online.

"We're more than disappointed. We're disheartened. It's a very powerless place to be," she said.
In a statement to SBS News, Australia's eSafety Commissioner Julie Inman Grant said she was "very concerned" about the proliferation of hate speech online targeting Muslim communities.

"Hate speech violates most of the major platforms’ community standards and it’s clear the platforms need to be doing more to enforce their own policies and be more vigilant and proactive in identifying and removing this harmful content quickly," she said.

According to 2020 research by the eSafety Commissioner, Australians from culturally and linguistically diverse backgrounds are up to three times more likely to experience online abuse and hate.

"This abuse disproportionately targets their religion, race and ethnicity."

This year, the federal government's Online Safety Act came into effect, giving Australians who face serious online abuse formal avenues for help.

"With the commencement of the Online Safety Act in January, eSafety now has powers to formally help Australian adults experiencing serious and harmful abuse," Ms Inman Grant said.

But while Ms Jabri Markwell commends the strengthening of online safety laws, she says more work needs to be done "upstream" to place the onus on social media platforms, rather than requiring users to report the harmful behaviour they see online.

She says, so far, there is no appropriate recourse under the current law for a community member who is vilified on the basis of religion, and it's something she wants to see legislated from the top down.

"We recommended that they actually act more upstream to target those bad actors that are out there just pushing out one story after another, one hateful post after another to demonise and dehumanise us as Muslims," she said.

"Why allow these communities and this environment to ferment and to grow to the point where people are so manipulated, that they go about harassing and victimising other Australians?

"It's more effective to stop the manipulation from happening in the first place."

Published 30 April 2022 3:47pm
By Rayane Tamer
Source: SBS News

