Crackdown on online hate: Social media giants told action needed now

Social media companies told they must ensure online safety Source: AAP / JOEL CARRETT

Six social media companies have been issued with legal notices by Australia's online safety regulator to outline their strategies to stop the spread of harmful content by terrorists and violent extremists. Meta, Google, X, Telegram, WhatsApp and Reddit have 49 days to explain the steps they're taking to protect Australians or face tens of millions of dollars in fines.


The spread of extremist content online - and its impact on Australians - is a growing problem that Australia's independent regulator for online safety is determined to stop.
Julie Inman Grant is the eSafety Commissioner.
"Unfortunately, it is a growing problem. And what we're seeing is so many of these platforms are enabling themselves to be weaponised by violent extremists because attention and conflict sells. And the business model is to keep things sticky and for the algorithms to continue serving up more and more conflict to keep people engaged. Now, depending on how harmful the content is, that can obviously have harmful impacts on individuals and by extension society."
Australia's eSafety Commissioner has recorded an increase in reports of harmful content re-circulating on social media platforms, including footage of the 2019 terrorist attack in Christchurch.
It also continues to receive reports of online hate incidents. Those from culturally and linguistically diverse backgrounds experience online hate speech at higher levels than the national average in Australia.
An estimated 1 in 7 Australian adults (aged 18–65) were the target of online hate speech over the 12-month period ending August 2019. That's 2 million people, or 14 per cent of the adult population.
Ms Inman Grant says when it comes to online harm, the time for self-regulation is over.
Using powers under the Online Safety Act, the regulator has issued legal notices to six social media companies: Google, Meta, X - formerly known as Twitter, WhatsApp, Telegram and Reddit.
"We need to know that they're investing and innovating in their trust and safety infrastructure; that they're putting protections in at the front end to prevent their platforms from being weaponised, and to prevent terrorist and violent extremist content from being hosted, shared, and then amplified."
They have 49 days to comply with the legal notice - or face the prospect of tens of millions of dollars in penalties.
Earlier this month, the UN-backed Tech against Terrorism said it had identified users of an IS (Islamic State) forum discussing ways of using generative AI to further spread their message.
Ms Inman Grant says transparency and accountability is key to tackling online harm.
"Radicalisation can happen very slowly and very subtly, or it can cause intense fear and happen very quickly. But we are really concerned about the really extreme content: the beheadings, the kidnappings, the really violent shootings; the imagery that our young people are seeing, that they cannot un-see. So very harmful. But it's the corrosive impact that some of this imagery, some of this ideology has over time that can really tear at the social fabric of democracy and social cohesion."
A recent OECD report ranked the social media platforms with the highest prevalence of terrorist and violent extremist material.
Telegram was first on the list, followed by YouTube, and then the platform X.
The Meta-owned Facebook and Instagram were ranked fourth and fifth respectively.
Greg Barton, Professor of Global Islamic Politics at Deakin University, studies ways of countering violent extremism and terrorism.
He says generative AI has only accelerated the challenges of responding to online harm.
"One of the real challenges is we're going to see content produced that we won't know whether it's real or it's not real. And part of the danger there is we stop trusting anything and anyone; that we just lose our sense of confidence in what's real. We've already seen this with shades of fake news and anti-science conspiracy theory during the COVID-19 pandemic that was exacerbated by social media. Generative AI is going to have the capacity to pump out a much larger volume of more convincing material, whether it's text or whether it's images and video, and that's going to make the current challenges we face seem relatively trivial, compared with what's coming."
Professor Barton says the presence of extremist and terrorist material online makes it harder for people to engage in online spaces that are safe. It also creates more division and can become a breeding ground for radicalisation.
He says extremists and terrorists exploit human vulnerabilities - and countering them requires building more positive community connections.
"The thing to understand is that this is driven largely by social and psychological needs. So where there's a vulnerability, where there's an absence, where somebody needs something, then they're at risk of somebody stepping in who doesn't have their best interest at heart offering to provide that. It's tempting to think that we can solve this by arguing with them if they have ideas which strike us as extreme, by just sort of trying to rebut the ideas. That's not the best place to start - or even proceed with. The main thing is to keep connection, keep the channels of communication open, try as best we can to say that we're not being judgmental. We just care about you and we want to talk. Tell me about what's happening. And if you can keep those connections open, hopefully that can be a balancing force against other voices they're hearing. And in time, if they become disillusioned with those other voices, which typically is the case, it's easy for them to come back to family and friends and community. If we close them out and they've got nowhere to go, they're much more vulnerable, and it's harder for them to come back."
The country's terrorism threat level remains at POSSIBLE, but in the last 18 months Australia's national security agency, ASIO, has recorded an increase in activity by extremists spreading violence and hate.
Delivering the latest annual threat assessment last month, ASIO's Director-General of Security, Mike Burgess, said the agency is monitoring the issue closely.
Ms Inman Grant says everyone has a role to play in responding to the rise in online hate.
She says tackling it is a global challenge - and more agencies internationally are working together to find solutions.
"Well, the eSafety Commissioner was the first online safety regulator set up in the world, in 2015. And for the first seven years, we were largely the only one, writing the playbook as we went along. And that thankfully is changing. We just had the Home Affairs Commissioner - from the European Commission - in the office yesterday, talking about their terrorism legislation, where they have a 100 per cent compliance rate. So working together with the European Union, where they are a very powerful regional bloc with a lot of expertise in regulating, is going to make us stronger. We may go faster on our own; we will go further together."
