Hollywood actress Scarlett Johansson became the unexpected champion of holding AI companies to account when she forced OpenAI to take down a chatbot that sounded so much like her that her "closest friends and news outlets could not tell the difference".
It turns out OpenAI did approach her to voice its products, an offer she declined. That didn't stop OpenAI from replicating her voice anyway; the company pulled the product down only after Johansson's legal team forced the issue.
This cavalier attitude towards consent, ownership and creative licence typifies the "just do it anyway" approach that has informed this generation of AI products - products that have hoovered up vast amounts of content from news websites, videos, forums and book manuscripts without permission.
When even popular celebrities can't escape artificial intelligence's gluttonous expansion, what hope do everyday Australians have?
Just in the last month, both Google and Meta have rolled out trials adding AI-powered assistants to their core products: Google to its ubiquitous search engine, Meta across Facebook, Instagram and WhatsApp.
These are significant moves, with far-reaching implications that may not be immediately clear. For instance, changing how Google's search works upends the entire search engine optimisation industry, and the way every website owner will need to adjust and optimise their web properties.
Similarly, Meta's use of AI as a filter across its products raises questions about what best-practice content creation will look like for the millions of people and organisations who use its platforms for promotion and advertising, and what communicating with your personal network will mean when a chatbot sits between you and the people you contact.
It's early days still, but so far the trials have been met with confusion and worry, for businesses and the general public alike.
Not only because the trials have produced errors and bizarre fabrications from the AI products, but also because of uncertainty about what these changes will mean in the long run.
Over the last decade, we've allowed these large digital platforms to infiltrate much of our lives, not just in a personal capacity, but as a country. This has included critical areas such as news dissemination, information sharing during disasters, public service provision and even public trials.
Slowly, we transitioned our public communications infrastructure from publicly managed platforms to privately owned digital products like Google and Facebook. We appear to be on track to do the same with AI.
A Senate inquiry into how AI will impact Australia has unearthed some startling revelations, including the admission that the Australian Electoral Commission lacks the resources and capability to properly defend elections against AI-based interference, such as deepfakes.
The scope and impact of AI is vast: the election concerns already raised, impacts on privacy, worries around copyright (with many legal battles underway) and the overall question of worker displacement and effects on jobs.
These are large, difficult topics, and our current public offices, including those whose areas of oversight will be touched by AI, like the Privacy Commissioner and the eSafety Commissioner, already have their hands full.
AI will add extra layers of complexity and harms to these issues, and the disruptions and impacts across multiple areas of our society will only get larger.
What we need, therefore, is a centralised government body dedicated to tackling these existing and emerging AI risks - one with the technical expertise to support other public offices and help coordinate regulatory efforts.
This should be seen as additional and complementary to any current initiatives.
We need a dedicated AI Commissioner for Australia who can lead our AI efforts.
Given how consequential the AI industry and its impacts are shaping up to be, we're going to need all the help we can get.
- Jordan Guiao is director of responsible technology at Per Capita's Centre of the Public Square and author of Disconnect: Why we get pushed to extremes online and how to stop it.