The PISA 2018 results are out today.

PISA is supposed to test a representative sample of 15-year-olds in more than 70 countries across the world. However, questions sometimes arise over how representative the PISA data really is.

And it seems that there were some problems with the PISA 2018 data for the UK. This blogpost will try to explain the issue.

The sample of schools

In PISA, schools in each participating country are randomly selected to take part. However, some of these schools refuse to participate. If the refusal rate is too high, then the PISA data may no longer be representative of the population.

Across the UK, around one-in-four schools (27%) refused to take part in PISA 2018. This is quite a high figure: the OECD typically treats a refusal rate of 15% or less as acceptable.

Because of this low school response rate, the UK had to do a “non-response bias analysis”. In other words, the UK had to provide evidence to the OECD that the sample was indeed representative.[1]

The big problem with this, however, is that no details of the bias analysis have been published by the Department for Education or the OECD. The national and international PISA 2018 reports simply say that a bias analysis has been done – and that things look okay – without providing any detail.

This is odd, to say the least.

The sample of pupils

Even within participating schools, individual pupils can refuse to take part in PISA – or may be absent on the day of the test.

In PISA 2018, this also seems to have been a big problem. Around one-in-six (17%) of the 15-year-olds in sampled schools across the UK who were meant to take part in PISA were either absent or refused.

Indeed, only one country (Portugal) had a worse pupil response rate than England.

This is not good, and again calls into question the quality of the data.

Testing “issues”

Finally, since 2006, PISA has been conducted in England, Wales and Northern Ireland in November and December. (Scotland had always tested earlier in the year – around March to May – but it too moved to November/December for PISA 2018.)

However, it seems that the testing window had to be extended into January this time around.[2] This is highly unusual, and it is not clear how many schools this applied to.

The most worrying thing is what it says in the national report for England:

“A short time extension to the testing window was granted due to technical issues experienced by many schools. This was partly due to anomalies with the diagnostic assessment failing to detect issues with launching the [PISA test software].”

PISA national report for England, p186

What were these technical issues? How many schools did it affect exactly? And what impact is it likely to have upon the PISA results for the UK?

Unfortunately, no further information has been provided. All we know is that there have been “technical issues” that affected “many schools”.

Implications

As I noted in a lecture that I gave on PISA last night, methodological challenges with PISA always arise. This is a crucial reason why we should not over-interpret the results – particularly small differences across countries, and changes over time.

In fact, I think a swing of around 10 points in either direction for a country (as has been seen with England’s maths scores this year) should always be treated with great caution – it could well be due to methodological issues, rather than reflecting a substantive change in children’s learning and academic achievement.

Notes

1. I have previously discussed this topic at length in a recent paper I wrote about the PISA data for Canada, where there have been similar issues.

2. It’s unclear from the information available whether this was just the case in England, or whether it applies to other constituent countries of the UK as well.