Demystifying artificial intelligence

Can governments use AI to deliver services if they don't understand how it is working?

A recent Accenture survey found that, as Australians accessed more artificial intelligence-driven solutions in their daily lives, they expected the same from their government services. Only one in five Australians said they were opposed to governments using AI. More than half (52 per cent) actively supported it.

The survey of more than 6000 citizens from Australia, France, Germany, Singapore, Britain and the United States found Australians were keen for state and federal governments to use AI to improve public programs and services. It revealed Australians were getting more comfortable with AI as they were exposed to it. An overwhelming majority (88 per cent) were equally or more comfortable using AI today than 12 months ago.

Australians are more likely than Americans to trust government use of AI. Credit: Shutterstock

More than 50 per cent of respondents supported governments using AI to deliver new or improved services more efficiently. Nearly half said the personalisation of government services was a top priority.

More than half of respondents said they would like to use an AI-enabled chatbot to learn about visa requirements for international travel, and supported the use of AI to streamline applications for government services like job applications, tax returns and student loans. They also welcomed human services agencies using AI to speed up eligibility decisions and better identify what people needed.

Unlike in the US, where 53 per cent of respondents believed the private sector was better qualified than government to deliver AI-enabled services, 64 per cent of Australians believed the government was as well qualified as, or better qualified than, the private sector to use AI.

Other jurisdictions are reaping the benefits

Our awareness of and support for AI-enabled services is likely to rise quickly as Australians experience their quality, speed and convenience first-hand. Around the world, AI is delivering more personalised, faster and more accurate public services, and citizen satisfaction is rising as a result.

The Italian Ministry of Economy and Finance has implemented an AI-driven help desk to handle citizen calls. The system can deal with greater volumes than human operators, giving citizens faster services and making the ministry more productive. Since the AI help desk was deployed, customer satisfaction has risen by 85 per cent.

AI is also making government agencies more efficient by taking over the tedious and time-consuming work that machines complete better than humans.

The Singaporean government is using AI to answer the public's queries. AI processes incoming correspondence for Britain's Department for Work and Pensions and helped the US Health Department process thousands of public comments on regulatory proposals during a successful pilot.

In many of these applications, AI is not replacing public sector employees, but enabling them to take on higher-value work, involving more human-to-human interaction, problem-solving or creativity.

This is likely to make government jobs more desirable. In fact, our research found that 80 per cent of public service leaders believe that implementing intelligent technologies would improve their employees' job satisfaction.

Can we make AI transparent?

As Australian government agencies begin to use AI to make decisions, they will be called on to defend their judgment. In January, British Prime Minister Theresa May used her World Economic Forum address to highlight just this point. She said we need to ensure that "algorithms don't perpetuate the human biases of their developers".

This is a strong point. How do governments prove that their AI systems are unbiased? Can we make that assessment from external outcomes alone, or do we need to know exactly what logic is applied internally? For a simple model, such an assessment is straightforward. But for a neural network made up of many interdependent models, it becomes very complex.

Like many AI challenges, the issue of AI "black boxes" is novel but surmountable. Part of the solution will be technological: for instance, we can train AI systems to produce reports that allow for greater transparency. But the focus will need to be on how we augment people with AI capabilities to get the best of both worlds.

Then we need to set sensible policies and implement ethical checks. Importantly, we need to determine, in practical terms, how much understanding is really needed to ensure decisions are fair.

Backed by citizens and offering the potential for more efficient, consistent and personalised services, AI is an important opportunity for government in Australia. But we need to ensure we have the right level of insight and transparency to oversee AI decisions – just as we would for a human decision-maker.

Catherine Garner leads Accenture's government and health business across Australia and New Zealand.
