COVID-19 has put computer modelling in the spotlight. From flattening the curve to economic analyses of JobKeeper, policy decisions and modelling are intertwined. Indeed, models inform almost all policy decisions on complex issues - affecting not only our health but also our environment, education, security and more.
We need to ensure that modellers are responsible in how they construct models and decision-makers are responsible in how they use them. We need guidelines and a new social contract to maintain trust in models, and the decision-making that uses them.
The challenge is that models can't be perfect - they can only ever reflect a slice of the real world. As the old saw attributed to statistician George Box goes: "All models are wrong, but some are useful."
So how can we distinguish between models that are useful and those that are misleading?
Andrea Saltelli from the University of Bergen in Norway led 22 international experts in distilling our reflections on this question into a five-point manifesto, recently published in the leading scientific journal Nature.
We are advocating that modellers be upfront about the limitations of their models, and that politicians and other decision-makers demonstrate how they have taken those limitations into account.
Good modellers and decision-makers - and there are many - already do this. But not everyone resists the temptation to offer the community certainty that goes beyond what modelling can support; certainty that many crave, especially in unsettling times.
Most of us know, intuitively, that certainty is impossible. Certainty requires perfection - perfect individuals, relationships, communities, organisations, political processes and societies. And none of these is, or can be, perfect. What we need to aim for, then, is "best possible". That lesson applies not only to our lives, but also to modelling.
Models are great for exploring. They can help us better understand complex problems by showing how parts of a problem are interconnected and exposing what we don't know and where data are missing. They can integrate different views about a problem by involving those affected in the modelling process. They can let decision-makers alter inputs and assumptions to try out different options and assess likely impacts.
But models are at their most vulnerable when they are used for prediction - to assert answers rather than to explore possibilities.
In our manifesto we describe five factors that make prediction problematic and put forward the best possible ways of handling them.
Models are based on assumptions - they have to be. Assumptions about how the world works are affected by the values of the modellers and the decision-making shortcuts they use. These are not necessarily bad things, as without them model-building could not even get started. But they are problematic when they reflect vested interests, biases against certain groups in society and dogmatic thinking. They are also problematic when they are hidden. The assumptions underlying a model have to be made clear, and how they influence the model needs to be open to scrutiny. Similarly, the assumptions that decision-makers use when assessing a model must also be transparent.
Because models are used for complex problems, it is tempting to include as much as possible in the model, rather than thinking hard about what is critical. In one real-life example, a simple HIV/AIDS model that included the number of sexual partners predicted the epidemic more accurately than a complex model that missed that critical variable. Moreover, as a model grows more complex, its uncertainties tend to multiply, to the point where there are so many uncertainties that any prediction is meaningless. Such cascading uncertainties must be understood and kept under control.
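The way uncertainties compound can be made concrete with a toy simulation (a hypothetical sketch, not a model from the manifesto): suppose a model's output is the product of several inputs, each known only to within plus or minus 10%. The relative spread of the output grows as more uncertain inputs are added.

```python
import random
import statistics

random.seed(0)

def output_spread(n_params, n_runs=10_000):
    """Relative spread (std dev / mean) of a toy model output that
    multiplies n_params inputs, each uncertain by +/-10%."""
    results = []
    for _ in range(n_runs):
        value = 1.0
        for _ in range(n_params):
            # Each input is drawn uniformly within +/-10% of its nominal value.
            value *= random.uniform(0.9, 1.1)
        results.append(value)
    return statistics.stdev(results) / statistics.mean(results)

for k in (1, 5, 20):
    print(f"{k:>2} uncertain inputs -> relative spread {output_spread(k):.3f}")
```

With one uncertain input the output spread is modest; with twenty, the spread is several times larger, even though no single input became any less certain. That is the cascade: complexity alone can make a prediction uninformative.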
No one model is useful for all aspects of a problem. There are multiple ways of constructing a model. For example, a model could focus on the actions of individuals, or it could focus on identifying vicious cycles and leverage points. The choice of model influences the outcomes. It is important for modellers and decision-makers to be clear about the issues the model needs to address. Good practice involves using multiple models and interrogating models that produce different predictions.
Many types of models accept only numerical inputs, so how such models deal with the unquantifiable (e.g. the value of a life) is important. Variables that lack a numerical value should not simply be left out, and how they are included must be made clear. Numbers are outputs as well as inputs, and the numbers models produce must also be open to scrutiny, so that quantification does not substitute for sound judgment. Once a crisp narrative built around a number takes centre stage, other possible explanations and estimates are easily ignored.
We don't and can't know everything, so how models deal with unknowns is critical. Are they ignored or taken into account? How are they taken into account? If ignored, unknowns, and especially unknown unknowns, are the source of nasty surprises. These limitations must be clear so that decision-makers can prepare as well as possible for adverse unintended consequences.
We are calling on both modellers and decision-makers to #ModelResponsibly. The media and the community also have a critical role. Let decision-makers change their policies as better models are produced. Be tolerant of the limitations that urgent action imposes. Forgive honest errors of judgment. But don't let decision-makers use models to provide false assurances that hide corruption and incompetence. Don't demand certainty. Instead, demand openness about assumptions, about what is included and left out, about the purpose of the model, about how uncertain the numbers are, and about what has happened to unknowns.