We writers of words should be worried. Machines can now write as well as most of us. The recent leap in the ability of artificial intelligence has big implications for journalists (no tears, please) but also for teachers and public servants.
Don't take my word for it. I asked the computer: "What are the advantages of artificial intelligence for public servants?"
It responded immediately:
- Increased efficiency: AI can help public servants process and analyse data faster, freeing up time for more important tasks.
- Improved decision making: AI can provide insights and recommendations based on data analysis, helping public servants make more informed decisions.
- Enhanced customer service: AI can be used to automate routine tasks, such as answering customer inquiries, allowing public servants to focus on more complex issues.
- Increased accuracy: AI can help reduce errors and improve the accuracy of tasks such as data entry, reducing the risk of costly mistakes.
- Greater transparency: AI can help public servants track and monitor their work, providing a clear record of their actions and helping to ensure accountability.
All of that was written by a machine - just like that, with no rewriting by a mere human. The days of garbled, clunky responses have gone. A new program - ChatGPT - produces answers which are indistinguishable from those a human might give.
It's a chatbot (like the dialogue boxes companies use to interact with you on screen) but the newest versions are jaw-droppingly good. They "understand" the question you ask and respond in good, clear language. They respond to complicated questions with complicated answers. They could be human.
You - or a student doing an essay or a public servant tasked with writing a report - might ask a question and, in a second or so, paragraphs of well-written, credible answers come back.
So what does ChatGPT think the disadvantages of AI are for public servants? It responded in clear, succinct English:
- Job displacement: AI may replace some public servants in certain tasks, leading to job displacement and unemployment.
- Lack of empathy: AI may not have the ability to understand or feel empathy, which can be important in certain public service roles where empathy is needed to effectively deal with the public.
- Limited decision-making abilities: AI may not have the ability to make complex or nuanced decisions that require human judgment and critical thinking skills.
- Dependence on technology: Public servants may become overly reliant on AI, which could lead to a lack of critical thinking skills and problem-solving abilities.
- Cost: Implementing and maintaining AI systems can be expensive, which could be a burden for public service agencies with limited budgets.
- Ethical concerns: There are also ethical concerns surrounding AI, such as bias in algorithms, which could lead to unfair treatment of certain groups of people.
But stop, let's not be gloomy. "Write a poem about public servants." The robot is a poet:
"Public servants, oh how we rely / On your tireless work and your shining eyes / You serve the people day and night / Guiding us through the dark and the light."
And so on until: "You are the ones who make our world go round / You are the ones who make our community sound."
It's not Wordsworth exactly but it might do to get a teenager through a school assessment.
Universities were already finding it hard to distinguish work actually written by humans from that plagiarised off the web, but these new programs take the ability to use others' work without thought to a whole new level.
"Finding authenticity is a big challenge for us," Dr Abu Barkat Ullah of the University of Canberra's faculty of science and technology said.
"When we are assessing people, that will be the big challenge: is it done by a human or by AI?"
He and his colleagues already use software to detect plagiarism. Alarm bells ring if the program indicates that more than 10 per cent of an essay was copied and pasted directly from other sources. He encourages students to use the same software.
But the new generation of ultra-clever AI writing programs - smarter robots, if you like - make it much harder to detect original sources.
For some writing in the public service (or journalism) that might not matter very much. A routine report, say, on the importance of washing hands in a pandemic may be done faster by a robot than a human being, freeing up the human for other more taxing tasks (or for redundancy, as is the way with technology).
And it may well be better written.
But original thought might be different. Robots don't have imagination. They think within the algorithm.
And the danger is that we humans place too much faith in what the machine tells us, according to Edward Santow, who was the country's human rights commissioner until 2021 (and who is now a professor in this area at the University of Technology, Sydney).
The danger, he feels, is that we defer to them as being the authority on something. We assume they have no biases when they do.
He accepts that the new chatbots are "amazing and hugely impressive", but they have serious, potentially dangerous limitations.
He likens them to a brilliant actor who can play the part of a doctor. The actor may seem completely credible - but doesn't actually have a doctor's skills.
Public servants, for example, have to analyse information as well as gather it. They have to make sense of it.
Machines find patterns in masses and masses of data but they can draw false conclusions.
For example, an artificial intelligence program might look at the relationship between gender and scientific achievement and decide, on the basis of past patterns in which men dominated science, that men are better scientists.
If an employer seeking a scientist used that tool, the robot might say: "employ the man". But human beings might assess the candidates more sensibly on current abilities.
Racial or gender or any other bias can be there, hidden in the computer's algorithm, but the danger is that the robot is perceived as bias-free. The danger is that we defer to machines.
With this in mind, the government has set out its "Artificial Intelligence Principles" (though they do seem a bit motherhood and apple pie):
- "Achieve safer, more reliable and fairer outcomes for all Australians;
- Reduce the risk of negative impact on those affected by AI applications;
- Businesses and governments to practice the highest ethical standards when designing, developing and implementing AI."
But the principles do emphasise that AI should be a tool to help humans and not a substitute for human insight and thought: "People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled."
But the question of how to prevent humans under pressure of time and money from over-relying on reports generated by robots is not addressed.
Amazing though ChatGPT is, it is fallible. I asked it for the recipe for a happy marriage. It said:
"1 cup of love and affection; 1 cup of communication and understanding; 1 cup of patience and forgiveness; 1 cup of trust and loyalty; 1 cup of honesty and transparency; 1 cup of respect and appreciation; 1 cup of compromise and flexibility; 1 cup of quality time and attention"
"Instructions:
- Start with a clean slate, leaving any past mistakes or grudges behind.
- Mix together the ingredients of love and affection, communication and understanding, patience and forgiveness, trust and loyalty, honesty and transparency, respect and appreciation, compromise and flexibility, and quality time and attention.
- Stir well and continue to add these ingredients regularly throughout the marriage."
Actually, the robot may be onto something.