
A dumbed down debate, but those tests still hold some lessons

Alan Reid

The release of the international TIMSS (maths and science at years 4 and 8) and PIRLS (reading at year 4) test scores last week unleashed a wave of commentary bemoaning the state of Australian education.

Unfortunately, much of it was hyperbole and misinformation that distorted the results as well as the subsequent public discussion.

Once again we have missed the opportunity to use comparative information garnered from the tests to assist our thinking about teaching and learning. When commentators misuse the data by removing many of its subtleties and complexities and by making simplistic and superficial claims, education debate is dumbed down. This has happened in several ways.

First, using results from just two year levels in only three areas of the curriculum, claims are made about the quality of Australian education. The fact is that although reading, maths and science are important, they tell us nothing about outcomes in other crucial curriculum areas such as the arts, history, civics, health and PE. Nor do we get any sense of how students are faring in such critical domains as problem-solving, inquiry, creativity and inter-cultural understanding. At best, the results present a narrow picture of student progress. The information is too limited to legitimise the kinds of sweeping judgments about the quality of education in Australia that have been made recently.

Second, the commentators took the test results at face value, without questioning the nature of the tests themselves. There are several issues associated with the construction of the tests, not the least of which is how a curriculum-based test can assume that students from every test country at year 4, for example, have covered the same material to the same depth and in the same sequence.

This would be hard enough to engineer across Australia let alone across the 50 countries that participated in the tests.

More than this, given what we know about how students read texts, whether any test material can be culturally neutral is another important consideration.

Unless students are taking the same test under the same set of circumstances and with the same preparation, its results must be treated with caution.

Third, the commentators invariably read the test results in isolation. In maths at year 4, Australia's mean score was significantly higher than those of 27 countries and below those of 17; but by year 8 the mean score was below those of just six countries. Similarly, in science at year 4, Australia's mean score was significantly higher than those of 23 countries and below those of 18; by year 8 we were below just nine countries.

Now, there could be any number of reasons for the improvement from year 4 to year 8, including that the foundations for study are being well laid in the primary years. But commentators can't cherry-pick results to make their point. Taken together, and adding results from PISA (an international test of 15-year-old students in maths, science and reading), the international tests regularly place Australian outcomes in reading, maths and science in the top 10 countries. This does mean there is room for improvement, but it is hardly the stuff of which educational crises are made.

Finally, commentators have tended to accept the test outcomes as presenting a problem and immediately advocate strategies to address it. A favourite tactic is to propose following the policies of those countries that are in the top five of the league table.

There are problems with such an approach, including the differences in contexts between countries. In Singapore, for example, there is a concern that although students are successful in tests, their creativity is being stifled. Clearly it is useful to share information between countries, but importing policies and practices from other countries is fraught with danger.

Another tactic is to use the "problem" as a springboard for advocating a predetermined position. In the past week, various commentators have proposed such disparate strategies as greater school autonomy, revamped teacher education programs and voucher systems to enable school choice - all as means to improve Australia's standing in international tests.

The problem with these approaches is that they jump from an apparent problem to a solution, skipping important intermediate steps such as gathering and assessing the evidence, clarifying the problem, and explaining its causes.

The test results should not be dismissed - and I am not suggesting Australian education can't improve - but I believe superficial readings of international test data are more likely to impede than advance the quality of education in this country.

Rather than misinterpreting the data, we would be better served by focusing on some of the issues the test results do highlight. These include the unacceptable differences in educational outcomes between students from affluent backgrounds and those who suffer educational disadvantage.

Progress in education can only be made if we respect evidence, recognise complexity, and are willing to inquire and investigate, rather than manufacture crises. A quality education system can only be achieved in the presence of quality public debates about education.

Professor Alan Reid is Professor Emeritus of Education at the University of South Australia.
