"I'll believe it when I see it" is an expression that suggests our own eyes can tell the real thing from a forgery.
But humans have always been flawed when it comes to spotting a fake, and the playing field is getting tougher by the day.
Deepfakes, convincing replicas of images and video created by deep-learning neural networks, can be almost impossible to distinguish from the real thing. These pictures and videos, often created with malicious intent, are even beginning to trick computers.
So how on earth can we actually trust what we see?
The Australian government is aiming to be a world leader in the responsible development and adoption of AI, but currently there's no AI-specific legislation.
The Department of Industry, Science and Resources has just finished a consultation period, asking the public what they believe would be appropriate artificial intelligence policy and regulation.
It's a tentative balancing act at best and an impossible task at worst: ensuring innovation continues to thrive while bad actors are unable to operate on our shores. The government wants to ensure Australians can trust AI will be used ethically, safely and responsibly.
And so they should.
The technology that underpins deepfakes does have benign, even entertaining uses. It allows people to swap their faces somewhat convincingly with celebrities or insert themselves into blockbuster movies.
We've seen an AI-generated Pope in a puffer jacket, Elon Musk protesting in New York and Donald Trump resisting arrest. But there are far more sinister uses too: a new reality show is using deepfake technology to create "evidence" that contestants' partners are cheating on them with very attractive people. A cruel twist on a trusted dating show format.
Deepfakes have also opened the door for almost anyone to fall victim to revenge porn.
Celebrities already have their faces digitally stitched onto the bodies of porn actors. Some, like Scarlett Johansson, reluctantly accept it as the dark side of fame. But everyday people are falling victim too, and the effects can be catastrophic: destroyed reputations, ruined careers, families torn apart and a potential downward spiral into depression and anxiety.
AI-generated content is stripping away trust in traditional media too.
Conspiracy theorists once confined to the fringes of the internet came into their own during the pandemic, discovering the disruptive power of AI-generated content. The cascade of misinformation grew into a conspiracy movement that spilled out of the screen and into the streets in protests. These same techniques are now being employed in Australia, creating noise for the "no" campaign ahead of the referendum on the Voice.
It also completely changes how international propaganda could be accelerated or even exploited, like a fake Volodymyr Zelenskyy telling troops to lay down their weapons. The Chinese government's latest spokesperson is a realistic AI-generated news anchor, ensuring 24/7 instant reporting.
Venezuela has employed another strategy, fabricating international news reports to validate their government's activities to its domestic audience.
This high degree of realism can blur the boundaries between what's real and what's fake, and it has the potential to erode our shared sense of reality. But the reality is that any new technology can be weaponised, not just deepfakes.
Australia is not the only jurisdiction grappling with this rapidly evolving technology.
In America, just a handful of states have passed legislation cracking down on deepfakes relating to elections and child exploitation material. While there is no policy at the federal level, the US government does monitor the use of deepfakes by other governments. The European Union is proposing a risk-based approach: the more dangerous an AI system is deemed to be, the more heavily it will be regulated.
In Australia, policymakers are now deciding the way forward following the closure of the consultation period.
There's no doubt regulation is needed, but any legislation must be carefully considered and balanced. It must not stifle innovation, and it must not stop Australia keeping pace with the rest of the world, particularly with countries that are actively investing in advancing the technology.
To fall behind would pose a significant threat to Australians' safety. Any policy would need to focus on the misuse of deepfakes and deceptive content while recognising the vast opportunities the technology presents.
Whenever a major technological disruptor hits the scene, it is met with catastrophising, and AI is no different.
The introduction of computers came with predictions of entire careers being wiped out. But these doomsday predictions never materialised. Instead, entirely new occupations, ones that could never have been imagined, grew from the revolution.
The same will happen as we learn to harness the vast potential of artificial intelligence, safely and bravely.
Now is the time to advance our forgery-detection technologies, bolster our identity-verification systems and strengthen digital authentication standards.
Because while pretending is powerful ... there's nothing like the real thing.
- Ches Rafferty is the chief executive officer of ScanTek, a technology company providing software solutions to verify identities in real-time.