First, an admission of bias. We are fervent believers in the importance of evidence in informing policy and implementation. The lack of evidence behind many policies and programs is shameful.
Apart from the car park and sports grants rorts, there are others we could point to, such as the cashless welfare card and the 2007 Northern Territory intervention.
These types of programs make the work of public servants disheartening, promote a deep-seated cynicism and are demonstrably not in the national interest.
Review after review of the APS has bemoaned the weak use of evidence and the decline in analytical capability, and called for evaluation and research skills to be rebuilt. Most recently, the 2019 Thodey review of the APS highlighted evaluation as a priority area for reform.
Conceptually, evaluation is straightforward: it is the assessment of a program or policy to see if it is working, or not, and why. Paired with monitoring, it is central to good public administration. Done well, "M&E" supports the design and implementation of better policies and programs and tells Parliament and the public what government activity has achieved.
Yet evaluation has struggled in the APS. The Hawke-Keating governments mandated that all agencies have evaluation plans, but the requirement was later removed, leaving evaluation to the discretion of secretaries.
And so it languished. Some agencies defied the broader trend. The Employment Department produced influential evaluations of the Job Network in the early 2000s, and the Office of Development Effectiveness in the Department of Foreign Affairs and Trade conducted award-winning evaluations between 2014 and 2018.
Even in these areas, however, evaluation has struggled. For example, with a new minister and the advent of COVID-19, DFAT abolished its ODE and stopped conducting strategic evaluations.
It is perhaps not surprising that secretaries and governments have not been wildly enthusiastic about evaluation. Like ANAO audits, evaluations, when they get into the public domain, can be trouble for governments. No sensible secretary likes creating trouble.
As a DFAT official put it to one of us a few years ago: why do evaluations when they only create trouble for the secretary and the minister?
Given our concern about the state of evaluation, we were glad to hear that Dr Andrew Leigh, the member for Fenner and Assistant Minister for Competition, Charities and Treasury, is promoting the idea of an evaluator-general. He told us recently that it would be a position in Treasury, that it would work collaboratively with agencies, and that it would focus on promoting randomised controlled trials (RCTs).
For anyone who knows the minister, this emphasis on RCTs is not surprising: he wrote a book, Randomistas, on how RCTs have been highly effective around the world in identifying good policies. RCTs are studies in which similar people (or communities, or businesses) are randomly assigned to two or more groups to test whether a program makes a difference. The "treatment group" receives the intervention; the "control group" doesn't. All sorts of things can be tested this way, from vaccines to teaching reading in schools to text messages reminding people of appointments.
Combined with behavioural economics concepts such as 'nudging', RCTs are well suited to trialling administrative practices, such as varying the letters sent to citizens, as the BETA unit in PM&C has done.
Even though there are differing views on the usefulness of RCTs (including among your authors!), there is value in promoting them. Very few are done in the APS, but, done well, they can provide a rigorous assessment of impact that is missing from most evaluations. They force public servants and stakeholders to really think about what a program is trying to achieve and how they will measure it.
But sorry, Andrew: we need an EG to do much more than RCTs. The EG must help bring evaluation back to the centre of government, and lead its embedding in administration, much as financial management and audit are embedded.
We know this is a big job. It is also not headline-grabbing. But neither were the arcane processes around ministerial appointments, until they became a headline on steroids!
We want an independent EG that can:
- advise the secretaries board on key evaluation priorities;
- advise government on how to embed evaluation in all new policy proposals;
- report on the state of evaluation practice and use across the APS; and
- help implement the Productivity Commission's 2020 Indigenous Evaluation Strategy.
Quick wins will be important, if difficult to generate. Many strategic evaluations take years to complete, and governments are impatient. Yet there are forms of evaluation which can be used to help implementation, such as developmental evaluation, which is used where there is uncertainty about the context and there is a need for early warnings and quick feedback.
And the EG needs protection. In the long term, an EG that is not embedded through legislation or regulation is vulnerable to a change of minister (who will champion this when Leigh moves on?), and to secretaries' risk aversion, despite the current government's commitment to frank and fearless advice.
So while we are fervent supporters of Dr Leigh in his quest for an evaluator-general, we need one with a legislated charter, a broader remit, and some teeth!
- Russell Ayres is an associate professor at the University of Canberra; Wendy Jarvie is an adjunct professor at the University of NSW, Canberra; Trish Mercer is a visiting fellow at the Australia and New Zealand School of Government.