THE release of international test scores for TIMSS (maths and science at Years 4 and 8) and PIRLS (reading at Year 4) last week unleashed a wave of commentary bemoaning the state of Australian education.

Unfortunately, much of it was hyperbole and misinformation, which distorted the test results as well as the subsequent public discussion.

My concern is that once again we have eschewed the opportunity to use comparative information garnered from the tests to assist our thinking about teaching and learning. When commentators misuse the data by stripping it of its subtleties and complexities, and by making simplistic and superficial claims, education debate is dumbed down. This has happened in several ways.

First, using test results from just two year levels in only three areas of the curriculum, claims are made about the quality of Australian education. The fact is that although reading, maths and science are important, they tell us nothing about outcomes in other crucial curriculum areas, such as the arts, history, health and physical education, civics and information and communications technology, to name just a few. Nor do we get any sense of how students are faring in such critical domains as problem solving, inquiry, creativity and inter-cultural understanding. At best, the results offer a narrow picture of student progress. Certainly the information is far too limited to legitimise the kinds of sweeping judgments made last week about Australia's education quality.

Second, the commentators took the test results at face value without asking any questions about the nature of the tests.

There are several issues associated with the construction of the tests that bear closer examination. Not the least of these is how a curriculum-based test at the international level can assume that students from every participating country at Year 4, for example, have covered the same material - key concepts, skills and understandings - to the same depth and in the same sequence. This would be hard enough to engineer across Australia, with its various state curriculums, let alone across the 50 countries that participated in the tests.

More than this, given what we know about how students read texts, the question of whether test material can ever be culturally neutral - neither advantaging nor disadvantaging students from particular backgrounds - is another important consideration for an international test.

Unless all students are taking the same test under the same circumstances and with the same preparation, the results need to be treated with some caution. After all, the quality of the information is only as good as the means by which it is produced.

Third, the commentators invariably read the test results in isolation, without looking at the whole picture. In maths at Year 4, Australia's mean score was significantly higher than those of 27 countries but below those of 17; by Year 8 the mean score was below those of only six countries.

Similarly, in science at Year 4, Australia's mean score was significantly higher than those of 23 countries but below those of 18, while by Year 8 we were below nine countries.

Now, there could be any number of reasons for the improvement from Year 4 to Year 8, including that the foundations for study are being well laid in the primary years. But commentators can't cherry-pick results to make their point.

Taken together, and adding results from PISA (an international test of 15-year-old students in maths, science and reading), the international tests regularly place Australian student outcomes in reading, maths and science in the top 10 countries. Certainly there is room for improvement, but it is hardly the stuff of which educational crises are made.

Fourth, commentators have tended to accept the test outcomes as presenting a problem and immediately advocated strategies to address it. A favourite tactic is to propose following the policies of those countries in the top five of the league table. The problems with that approach include the vast differences in context between countries, and the fact that the realities inside some of those countries are ignored.

In Singapore, for example, there is a concern among educators and policymakers that although students are successful in tests, their creativity is being stifled by a narrow and straitjacketed curriculum. Clearly it is useful to share information between countries, but importing policies and practices from other countries is fraught with danger.

Another tactic is to use the problem as a springboard for advocating a predetermined position. In the past week, various commentators have proposed such disparate strategies as greater school autonomy, revamped teacher education programs, and voucher systems to enable school choice - all as means to improve Australia's standing in international tests.

The problem with these approaches is that they jump from apparent problem to solution while skipping important intermediate steps, such as gathering and assessing the evidence, clarifying the problem and explaining its causes.

These four points are not an educator's defensive response to adverse data. I am not dismissing the data, nor am I suggesting that Australian education can't improve. I am arguing that superficial readings of international test data, and knee-jerk reactions, are more likely to impede than to advance the quality of education in this country.

Rather than misinterpreting the data, our public discussions would be better served by focusing on some of the issues the test results highlight, such as the unacceptable differences in educational outcomes between students from affluent backgrounds and those who suffer educational disadvantage.

Progress in education can only be made if we respect evidence, recognise complexity and are willing to inquire and investigate, rather than manufacture crises. A quality education system can only be achieved in the presence of quality public debates about education.

Alan Reid is Professor Emeritus of Education at the University of South Australia.