With the emergence of the chatbot ChatGPT, artificial intelligence has once again become dinner table conversation. But while the technology promises to change our lives, how can we be sure it will be for the better?
Well, according to former Google CEO Eric Schmidt, governments should chill out on AI regulation and let the industry show them how it is done.
"There's no way a non-industry person can understand what's possible," Schmidt said in an interview on American television. The technology is "too new, too hard", meaning that "no one in the government can get it right, but the industry can roughly get it right".
Schmidt's comments seemed pointed, given that just two weeks earlier, the European Union had moved another step closer to concluding the world's first major AI regulation, known as "the AI Act".
The EU has a trailblazing record in technology regulation. Its 2016 General Data Protection Regulation forced industry and other governments to take personal data seriously and strengthen data protection laws.
Now, the EU is poised to become a global trendsetter in AI governance.
Tech companies have fought against the EU legislation's risk-based approach, echoing Schmidt's sentiments that policymakers should not mess with technology they "do not understand". But deep technical know-how doesn't mean the industry is fit to self-regulate AI.
Many opponents of the regulation fail to see it for what it truly is: a forward-looking, public-oriented, social project. Its fundamental purpose is normative - to steer "what is" towards "what should be".
The tech industry's approach of creating regulatory guardrails based solely on "how the technology works", by contrast, would render regulation a glorified depiction of the status quo.
Granted, the supposedly irreverent Silicon Valley sub-culture dominated by male founders (or tech bros) may have improved somewhat in recent years. We've seen an explosion of self-imposed ethics boards or charters in tech companies, ostensibly committed to aligning industry practice with public values such as responsibility, safety, transparency and fairness.
But there is a gulf between these values and the industry's incentives and actions, making it difficult to trust that the tech industry's vision for a world brimming with advanced robots is a safe or a fair one.
Firstly, the industry is addicted to technical breakthroughs and the profits they bestow. When the commercial stakes are high, the disgraceful operating principle of "move fast and break things" takes hold.
Take ChatGPT, the versatile chatbot that's poised to disrupt the knowledge industry as we know it.
For all the understandable excitement around the technology, it comes as a package deal with well-documented failure points that make it capable of producing believable misinformation at scale.
Nevertheless, its rollout has continued apace, leaving society ill-prepared for its limitations and potential misuses.
In fact, the competitive pressure towards the next breakthrough is driving the industry further away from the open-source, public spirit of its origins. Since ChatGPT's stunning debut, other tech giants, including Alphabet's AI teams, have been rushing to test more powerful AI systems behind tightly closed doors.
This secrecy is reducing the transparency of training datasets and methods, making ethical scrutiny and public oversight all but impossible.
Secondly, the priorities of the tech industry are not always consistent with fundamental human rights.
In March, alarmed by the rapid race towards artificial general intelligence (AGI) - a system that could think and problem-solve across all domains of human intelligence - hundreds of prominent AI industry leaders and experts signed an open letter that declared: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Calling for a moratorium on the AGI race, the open letter briefly appeals to some of the same human rights values that underlie the EU's AI Act: safety, transparency and human centricity.
But that doesn't mean the signatories are all thinking like the EU regulators. Many of the letter's signatories, such as the Tesla and Twitter CEO Elon Musk, are admirers of a controversial ideology known as "longtermism".
To the "longtermists", the existential risk posed by a malevolent AGI is arguably far more worrying than the immediate human suffering AI is already causing: the discrimination, misinformation and even deaths associated with today's systems.
Leaving aside the ongoing debate over whether AGI is anywhere close to becoming a reality, longtermist thinking is perilous because it engenders a cavalier attitude towards human lives and suffering, observable in comments from many in the industry. To those hyper-focused on an AGI-induced apocalypse, current human suffering appears negligible.
For Nick Bostrom, longtermism's thought leader, our current priority should be to technologically enhance our species' chance of future survival. And while he has denied supporting eugenics "as the term is commonly understood", he has written about processes such as embryo selection and how they could provide cognitive enhancement and improve global productivity.
To be fit to self-regulate, the tech industry must give us cause to trust that its vision for AI is aligned with fundamental human rights - not in an imagined cyborg-filled future, but in the messy reality of today.
It is true that policymakers may never crack open the black box of AI, but even if they fail to get it right this time, they have a solid normative vision to guide future iterations of the law, based on principles of transparency, fairness and safety.
As for the tech industry, there is so far very little evidence to suggest that it can do the same.