Since its release, the large language model ChatGPT has quickly risen to public prominence.
The tool's ability to generate high-quality, human-like text has impressed experts and the general public alike, spurring a broad range of uses for the technology: brainstorming new ideas, writing essays, creating stories, checking computer code, and expanding knowledge.
One potential use for ChatGPT that has not received much attention is to augment human moral reasoning.
On the face of it, this may seem like an odd idea, given that humans seem to have a natural disinclination to take guidance from AI. This disinclination is known as "algorithmic aversion": people prefer human advice over AI advice even when the technology demonstrably outperforms its human counterpart.
Algorithmic aversion has been shown to be particularly acute in morally laden situations where potential errors can have dire consequences, such as legal, medical, or military decisions.
One reason is that we perceive AI systems as fundamentally different to us because they lack agency and the ability to feel emotions and sensations.
However, a new study indicates that ChatGPT may not be that different from humans.
In the study, researchers compared human and ChatGPT evaluations of hundreds of moral scenarios (e.g., "someone tells a dirty joke at a Holocaust memorial") and found a correlation of over 0.93 between the two sets of judgements. That is, human and AI moral evaluations were almost identical.
This striking finding suggests that we could fruitfully leverage ChatGPT's computational power and vast knowledge base to aid our moral reasoning.
This is particularly true for evaluating public policy decisions, given their large-scale impact and inherent complexity: they must cater to a wide range of stakeholders with diverse interests, values, and needs; they often have unintended consequences that confound their implementation and effectiveness; they require managing trade-offs between multiple practical considerations and moral judgements; and they often involve a tension between addressing immediate needs and planning for long-term sustainability.
First, policymakers can use ChatGPT to calibrate their moral position on the matter at hand. Given that the tool is capable of gauging human moral stances on a variety of topics, they can use it to generate multiple policies representing different positions along a moral spectrum.
For instance, we could prompt ChatGPT to articulate three positions on the issue of immigration: two extreme positions (strict controls at one end, open borders at the other) and one moderate position that balances the requirement for security with recognition of the rights and needs of immigrants.
Second, policymakers can use ChatGPT to play devil's advocate in moral discussions of public policies.
A devil's advocate can challenge prevailing perspectives and expose potential flaws in reasoning.
This process encourages a more rigorous evaluation of moral arguments and prompts the refinement of initial positions.

For instance, in discussing the moral consequences of a proposal to switch from mandatory to voluntary voting, a devil's advocate might emphasise the proposal's possible drawbacks, such as lower voter turnout and reduced representation of socially and economically disadvantaged groups in the electoral process. More affluent and educated citizens could then have their interests overrepresented, leading to policies that further marginalise disadvantaged groups.
In this way, ChatGPT in a devil's advocate role could highlight that while a switch to voluntary voting could be seen as respecting personal freedom, it could also inadvertently exacerbate inequality and hamper social justice.
Third, when considering a moral scenario, we could ask ChatGPT to examine the situation from different ethical perspectives, such as consequentialism and deontology. A multi-lens ethical framework can support rigorous moral analyses because various ethical perspectives illuminate diverse facets of a given situation, thereby encouraging distinct forms of inquiry.
Consider a proposal to decriminalise drugs. From a consequentialist perspective, the ethicality of the proposal depends on the balance of its outcomes. If decriminalisation is judged to lead to an increase in drug use, addiction rates, and societal costs associated with healthcare and rehabilitation, a consequentialist would conclude that the policy is ethically indefensible because its negative consequences outweigh the positive ones.
From a deontological perspective, the morality of an action is determined by its nature, not its consequences. This perspective enshrines a set of duties, such as honesty, fairness, and justice, that must be adhered to. A deontologist could argue that individuals possess a fundamental right to personal autonomy, which includes the liberty to make decisions about one's own body. If the decriminalisation policy is seen as respecting this right, even if it results in negative outcomes, a deontologist would conclude that the policy is morally sound.
Despite the potential benefits of ChatGPT, we need to consider the risks involved in its deployment. These include privacy violations, the increased ease of generating and spreading misinformation, and the possibility that the technology might evolve to possess smarter-than-human intelligence, putting the human race at risk.
These concerns have stimulated robust global debates around regulating advanced AI systems.
In the EU, a legislative proposal is taking shape that is likely to ban controversial uses of AI, such as social scoring and real-time facial recognition, and to require companies to declare whether they are using copyrighted material to train their AI systems.
Appearing at a Senate hearing in the US, the CEO of OpenAI, the company behind ChatGPT, called for the creation of an agency that would grant licenses to develop powerful AI systems.
Australian lawmakers should closely observe these developments and use them as a guide when crafting the required legislation.
- Uri Gal is a professor at the University of Sydney Business School. His research focuses on the organisational and ethical aspects of digital technologies.