
What We Don’t Get From NAPLAN

Schools number-crunching published NAPLAN scores are missing the point. Moreover, principals who draw up lists of NAPLAN “competitors” demonstrate a lack of understanding of just how limited NAPLAN scores are. Comparisons with school systems that have used NAPLAN-like tests, however, are useful. Let me explain.

Tests similar to NAPLAN were introduced in Britain, where I taught for almost six years. League tables soon followed. More than 20 years of experience with league tables has shown that such tables misrepresent more than 40 per cent of secondary schools. Why?

Well, as with NAPLAN, the British tests give a picture of a school’s performance based only on the average number of students achieving a threshold target. This mode of measurement is fraught with inconsistencies, as a 2011 British Institute of Education report pointed out. It concluded that a single measure for school league tables was often misleading for parents, noting: “If able students do very well but less able students do poorly, using the average is a poor guide for parents as to what to expect of that institution for their child.”

This is precisely the issue with NAPLAN. Principals who compare their schools with others have no way of knowing individual performances but are dependent on average scores. A school where half the students score well above the benchmark and half well below can post the same average as a school where every student sits squarely in the middle. It is a bell curve with a very squishy middle.

NAPLAN does not tell you anything about a school’s culture. If a school has a significant number of students under the designated NAPLAN average, this does not necessarily mean the school is a poor achiever.

Nor does the reverse hold: if a school has a significant number of students over the average, this does not tell principals that their school is a high achiever.

But what does it say?

A high-performing school may well have stressed NAPLAN tuition at the cost of the curriculum so as to gain a better league-table placement. This was a matter raised by the Australian Curriculum, Assessment and Reporting Authority in its submission to the Senate inquiry into NAPLAN testing.

ACARA observed that principals’ and teachers’ “influence on their parent community cannot be overstated”.

This results in pressure from principals on teachers to lift their results, all the more so if parents are asking how and why schools with a similar socioeconomic profile are doing better. Schools that recast their instruction to stress NAPLAN may, ACARA suggests, be saying more about their “lack of confidence” in their normal literacy and numeracy programs.

The result: teach to the test.

The sleeper in this is that schools are now able to measure individual teacher performance by NAPLAN scores. This has begun to occur in the United States, and the impact has been seismic. When The Los Angeles Times in 2010 published individual teacher performances on similar tests under the rubric of “least effective”, “less effective”, “average”, “more effective” and “most effective”, one teacher, Rigoberto Ruelas, took his own life after he was judged “least effective”.

It is a short step to doing the same kind of assessment with NAPLAN tests. Already schools can track individual students and, by checking who taught them, use the data to decide who is a so-called better teacher. The temptation of this for school principals is obvious. It is a way of measuring performance, but it misses one essential point.

Professor Alan Smithers, director of the Centre for Education and Employment Research at the University of Buckingham in Britain, argues that no one teacher is ever accountable; rather, a student’s “attainment reflects a whole range of teachers”.

Principals producing tables and graphs of student NAPLAN performance would do well to remember this.

By Christopher Bantick. Christopher Bantick is a Melbourne writer and senior literature teacher at a Melbourne boys’ Anglican grammar school.
