If Australia genuinely wants to minimise online harm, we need to have more nuanced narratives than "won't somebody please think of the children".
The federal government's Online Safety Bill focuses mainly on creating pathways of redress for children and adults suffering online bullying, abuse and non-consensual sharing of intimate images. These goals are vitally important, as these online issues can translate to significant real-life harms. But wrapped up inside the bill's noble cape of protecting children is a collection of provisions that stand to infringe upon our digital rights, harm vulnerable groups, and change the way the majority of Australians experience the internet - and not necessarily for the better.
The "Online Content Scheme" under part 9 gives the eSafety Commissioner expanded "take-down powers" for content on social media, messaging services or internet services (let's call it "the internet" for short). This means the commissioner can order the removal of online content that is class 1 or class 2 material. There is much to be said about the outdated nature of the classification system in Australia, but essentially this captures any sexual content, violent or not, or content that is "unsuitable for a minor to see". So, if you watch porn online (don't worry, you are not alone - research shows the majority of Australians do), then it will be within the remit of the eSafety Commissioner to tell you what you can and cannot see.
People tend to lament or laugh about Australia being a nanny state. Well, this bill would make the eSafety Commissioner the disapproving parent of the internet.
Jokes aside, this is a huge issue for those who work in the sex industry. Sex workers, pornography creators, online sex-positive educators and activists are just some of those who will struggle to work online under the scheme. The controversial SESTA/FOSTA laws in the US similarly created a hostile online environment for anyone loosely connected to the sex industry, while also propelling the problematic misconception that trafficking and sex work are equivalent. That experience has shown that crackdowns such as these incentivise online platforms to remove or censor sexual content altogether to avoid penalty, rather than undertake the much harder task of determining the difference between content that is and isn't harmful (currently impossible for even the most impressive algorithms). This forces sex workers offline and often into unsafe working environments, in turn causing even more harm.
Just as abstinence-only teaching in schools has been shown to be ineffective compared with comprehensive and inclusive sex-ed programs, you cannot sanitise the internet and expect it to solve complex issues related to sex. While we are on the topic, studies have shown that young LGBTQ+ people in particular rely on the internet and pornography to gain sexual health information and a counternarrative to the heteronormative experiences of school programs. Obviously gaining sexual health information from porn is not ideal, but does censoring sexual content on the internet reduce harm for these kids? Or do these kids' needs not matter? The solution to many of the problems associated with kids accessing sexual content is not censorship but education.
Fundamentally, in seeking to shield the eyes of children from all the harms of the online world, the bill assumes sexual content and sex work to be inherently harmful and "offensive." It conflates sex, sex work, and porn with cyber bullying, online abuse, and online material that promotes violence.
Speaking of violence, part 8 of the bill proposes an "Abhorrent Violent Material Blocking Scheme". Framed as a response to the Christchurch mosque shooting in 2019, which was livestreamed by its perpetrator, the scheme would empower the eSafety Commissioner to issue a request or non-negotiable notice that internet service providers block access to sites hosting "seriously harmful content".
It's hard to argue with that sort of intention. After all, who would be in favour of abhorrent violent content and the harm it causes? But there are complex underlying issues that remain unaddressed.
In some circumstances, violence captured and shared online can be an immensely valuable tool to hold those with power to account, to shine the light on otherwise hidden human rights violations, and to be the catalyst for social change. The virality of the video of the murder of George Floyd by an American police officer was a key part of the reinvigoration of the Black Lives Matter movement last year. Closer to home, a viral video of a NSW Police officer using excessive force against an Indigenous teenager prompted more questions about racism in Australian law enforcement. Footage of the conditions and abuse against young people in Don Dale Youth Detention Centre in Darwin awoke the nation to the atrocities occurring out of sight.
Violent material circulating online is an issue worth grappling with - especially given the virality and universality of internet platforms. However, sometimes such material is important and necessary for society to confront. Simply blocking people from seeing it not only does not solve the underlying issues causing the violence in the first place, but it can also lead to the continuation of violence behind closed doors, out of sight from those who might seek accountability.
Another standout issue with the bill is that many of the eSafety Commissioner's powers extend to "relevant electronic services", which includes instant messaging and SMS, including encrypted messaging apps. The eSafety Commissioner has already argued against end-to-end encryption, saying it "will make investigations into online child sexual abuse more difficult". The claim that encryption facilitates harm to children is unproven, and it strengthens a regressive surveillance agenda at the expense of our digital security. Given that the bill includes investigation powers, it is not hard to see how this could become yet another method for the government to undermine encryption, which is an essential tool for activists, whistleblowers and many marginalised groups.
The question of how best to mitigate online harm is a complex one, but we should be wary of isolating online occurrences from real-world events. We also cannot substitute one kind of harm for another by applying a technical "solution" on top of a complex social issue and calling it harm-reduction. While the goal to reduce harm that children experience online is important, we should not assume that policing the internet in broad and simplistic ways will provide safety.
- Samantha Floreani is a campaigns officer at Digital Rights Watch.