How Australia Is Cracking Down on Internet Trolls
March 25, 2021

“Everyone in Karachi gets held up at gunpoint. If you haven’t been held up at gunpoint, it’s because you’re the guy holding the gun,” Pakistani-Australian comedian Sami Shah recently joked in one of his shows at a Melbourne comedy club. The live audience laughed, but not everyone finds Shah’s jokes so funny online.
“It’s like being held at gunpoint in Karachi. If you haven’t been abused on Twitter by someone with an egg for an icon, you’re the guy doing the abusing at this point,” Shah said in an interview with VICE World News about his experiences of receiving hate speech online.
In recent years, Australia has seen a rise in online hate speech, much of it anti-Muslim or anti-Semitic and overwhelmingly coming from far-right extremists. With social media platforms failing to act, the Australian government is now considering stepping in with a proposed bill that would hold those companies accountable for hosting harmful content.
If passed, the Online Safety Bill 2021 will enable the government to take down harmful posts when platforms fail to act on legitimate complaints. Platforms that ignore requests to remove posts can be fined up to AU$550,000 ($422,748), while individuals who post harmful content online face fines of up to AU$111,000 ($85,318) per post.
Shah has lived in Australia since 2012. In 2017, he replaced an English-born Australian as co-host of the long-running radio show Breakfast on ABC Radio Melbourne, a change he said was immediately obvious to listeners because he does not have a typical Australian accent.
“If you have an online presence, which — because of the work I do, I kind of need because otherwise, I wouldn’t get any ticket sales at my next comedy show — you just have to accept that you’re going to cop abuse,” Shah said. “For two years, I was called a curry muncher and I had messages coming in saying ‘What is this? Radio Bangladesh?’”
Social media platforms have become part of daily life, but they have also proven to be a potent tool for amplifying and weaponizing people’s worst instincts. This was evident in the 2019 Christchurch mosque shootings in New Zealand, which were preceded by an anonymous post on 8chan, a controversial message board with a history of accommodating extremist content. The post linked to an 87-page manifesto filled with anti-immigrant and anti-Muslim sentiment, and to a Facebook page that hosted a live-stream. The Australian white nationalist who posted the message then live-streamed the massacre, which killed more than 50 people in two mosques in Christchurch.
Experts say the shootings have inspired far-right nationalists and anti-immigration campaigners around the world to become more active both online and off. In January this year, over the Australia Day weekend, the National Socialist Network, an Australian neo-Nazi organization, burnt a cross and performed Nazi salutes at the foot of the Grampians in western Victoria.
A report by online safety agency Netsafe found that 14 percent of the more than 3,700 adults surveyed in Australia felt they’d been targeted by online hate speech between August 2018 and 2019. People who are Black, brown, or from minority groups are more likely to be abused, harassed, and criticized.
“You Muslim whore, nobody invited you to Australia. Leave now before we behead your mother and bury you all with pigs,” reads a tweet lawyer Mariam Veiszadeh said she received when she began doing advocacy work around xenophobia in 2015.
Hate speech can seriously harm a person’s mental health: it lowers self-esteem, causes psychological distress, and reinforces negative stereotypes. More dangerous still, online hate has a history of spilling over into offline violence.
“It was very hard, my mental health took a huge battering. The sense of guilt that I felt at the time thinking, ‘Have I potentially put, not just myself at risk, but my children and my husband?’ I think in some ways that actually hurt more,” Veiszadeh told VICE World News.
Despite the damage online hate speech does to both individuals and communities, social media platforms have done little to control it.
“Conflict sells. The algorithms that are being used is a way to keep stickiness, to keep us on the platform, to keep us from going somewhere else. So there aren’t really economic incentives for them to clean up their act, to be honest,” Julie Inman Grant, Australia’s eSafety Commissioner and former social media executive, told VICE World News.
To minimize the impact and longevity of harmful content, the Online Safety Bill 2021 seeks to cut the time online platforms have to take down content flagged by eSafety, the public online safety agency, from 48 hours to 24. The stakes are high for these platforms: just two instances of non-compliance could see a company banned from Australia.
“That’s certainly an incentive not to troll with impunity. Companies can either step up, raise their standards, and put safety at the forefront of what they do, or they can expect governments to tell them what and how to do it,” Inman Grant said.
While the bill is meant to empower minorities, some fear it could end up silencing other groups. That is because the bill gives the eSafety Commissioner a range of new powers, including the ability to impose “restricted access systems” that regulate who can access certain content. The commissioner could, for example, require individuals to upload identity documents before accessing sexual content.
The bill also creates an “online content scheme,” which lets users complain about content that is not under a restricted access system. From there, the commissioner can conduct an investigation and issue removal notices as they see fit.
Several groups are alarmed that the bill could censor adult content in Australia, potentially pushing sex workers off the internet. Online platforms have also raised concerns over the proposed legislation. Google says the bill is too broad and that the proposed 24-hour turnaround time for taking down harmful content is too short, which could lead companies to block unrelated content or even shut down entire websites. Twitter and Twitch pointed out that the bill does not account for different business models and content types, and that imposing requirements only large, mature companies can meet could hamper Australia’s digital growth.
Some groups believe the bill is being rushed through. The Online Safety Bill 2021 entered Parliament on Feb. 24, just eight business days after consultation on the draft legislation closed. The consultation drew 370 submissions from various organizations, including Google and Scarlet Alliance, presenting views on and objections to the bill; the submissions have not yet been released to the public.
“The other big issue we really need to address, that this legislation does not really tackle, is the broader systems of power that impact this space,” Venessa Paech, director of Australian Community Managers, told VICE World News.
The problem with the bill, some experts believe, is that it only addresses the surface-level manifestations of bigotry and extremism.
“We spend a lot of time putting policy at what we see above the water, and then forgetting that we’ve had the hull of our boat ripped out because we’ve hit the iceberg well before we got anywhere near that tip,” Dr. Jennifer Beckett, a lecturer in media and communications at the University of Melbourne, told VICE World News.
She added that social media reflects the world we live in, so solving online hate speech also means addressing the systemic bias and lack of diversity that persist in real life.