The Prime Minister's plans to go troll hunting won't fix social media's out-of-control hate speech problem.
In fact, it may end up only giving unregulated social media giants more control and influence over our lives.
With social media's voracious data-harvesting practices, no one - trolls included - is truly anonymous anyway.
Meta (formerly Facebook) already knows your phone's unique ID number, the IP addresses most associated with your logins, and potentially your precise GPS location as well.
Besides, anonymity online isn't synonymous with trolling.
Research suggests that people can be more aggressive online when using their real names than when posting anonymously.
We can see this in action for ourselves. Look at all the verified Twitter accounts that have been suspended, or even the comments on Nextdoor, a hyper-local app that verifies your address and connects you with neighbours, where people are nevertheless prepared to publicly share extreme views.
Conversely, there are also many legitimate reasons why people choose to participate in online discourse anonymously.
Women, people of colour, and people with disabilities are all afforded valuable protection when they can move online anonymously.
In fact, Digital Rights Watch has warned that requiring these sorts of additional identification systems would disproportionately harm marginalised groups. Real name policies can lead to real world harms.
When it comes to tackling harmful content online, whether it is defamatory or discriminatory, the solution isn't to unmask the trolls.
It is to properly investigate how and why social media's engagement algorithms consistently provide disproportionate amplification to these extreme positions.
Big tech companies know that content which elicits an extreme reaction is more likely to get a click, a comment or a reshare, and is therefore more profitable.
In this way, social media platforms hand AI-enhanced megaphones to trolls.
Downstream interventions, such as ending anonymity, won't fix this systemic problem.
It'll leave regulators playing a never-ending game of "whack-a-mole" with individual trolls.
Internationally, governments have started to realise this too.
When it comes to online hate - and the related issue of misinformation, particularly given the proliferation of COVID misinformation - the EU's proposed Digital Services Act and the UK's draft online safety bill are shifting the focus away from content moderation and takedown approaches towards regulating the systems and processes of big tech platforms.
Clearly there is a growing appetite in Australia to rein in the excesses of big tech too.
But if we want to get to the root of the problem, we need to regulate a business model which profits from hate, misinformation and conspiracy.
This starts with increased transparency, so evidence-based solutions can be found. The introduction of "live lists" of the top trending issues during contentious periods - such as pandemics and elections - would be an excellent start. This way we could begin to get a clearer picture of what kinds of content the algorithms are amplifying.
We also need to shift away from the self-regulation model.
Industry must be consulted on incoming regulation, but we shouldn't leave it to them to write new codes and legislation.
We need to compel social media platforms to operate in line with public expectations. To do this we must hold them accountable for the harm they cause, not the anonymous users who take advantage of the unregulated space.