Sunday marked one year since the horrific mosque shootings in Christchurch, in which 51 people were brutally murdered.
On that day, 12 months ago, New Zealand, and indeed the world, was further shocked to learn that footage of the killings had been streamed live on Facebook, the world's most ubiquitous social media platform.
The video stayed up for hours, replaying in the feeds of Facebook users as they trawled through advertisements and autoplaying viral videos to connect with their friends and families.
Today, the memories are raw; Christchurch, and New Zealand as a whole, are still coming to grips with the effects of the horror on their people and their way of life.
The massacre, New Zealand's worst peacetime shooting, shocked the world, and spurred calls for tech companies to do more to combat extremism on their services.
Weeks after the massacre, Facebook announced it would tighten the rules around its livestreaming feature and join a number of other tech companies under the "Christchurch Call to Action" to take stronger action against far-right hate groups and racism.
"It is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence," a May 2019 statement from Facebook, Microsoft, Twitter, Google and Amazon said.
The Christchurch Call action plan committed the tech giants to stronger rules around hate speech, better tools to report and remove extreme content, and working with anti-extremist groups to "challenge hate and promote pluralism and respect online".
But while Facebook says it has since taken measures to prevent the possibility of such events being livestreamed, there is plenty more still to be done when it comes to the proliferation of hateful content on social media.
New Zealand media last week reported on claims by a Muslim group that Facebook had failed to live up to its stated aims, and had not yet resolved the issue of Islamophobia and other extremist views being peddled online. The group maintains there is evidence that Facebook is still "allowing extreme and bigotry-based views to be normalised ... and emboldening ordinary people to commit public acts of hatred."
In the weeks after the massacre, Facebook executives maintained the company had learned from its mistakes, and called on other tech companies to do more to prevent the internet from becoming a hub and a haven for extreme hate. But Facebook's statements and updates since then have remained ambiguous about its own community standards, and have often seemed to fall short of the zero-tolerance approach it should have adopted by now.
The anniversary is an opportunity to take another look at how social media enables extremist communities to thrive. The fact that millions of people were able to, inadvertently or otherwise, view footage of a massacre as it unfolded, and again weeks later as it continued to pop up in their feeds, is an indication of the scale of the problem, and of the need to be ever-vigilant. Companies like Facebook now have more power than we could have imagined possible a decade ago, and they must wield that power as a force for good. Facebook also needs to demonstrate unequivocally that it takes the problem of online extremism seriously, and is taking measures to deal with it proactively.