Today’s Key Stage 4 statistical release from the Department for Education shows the gap between disadvantaged pupils and others narrowing slightly [PDF]. This is based on the DfE’s gap index, which ranks all pupils according to their English and mathematics grades and derives an overall index from the difference in average rank between disadvantaged pupils and others.
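The DfE’s published methodology gives the exact formula; as a purely hypothetical sketch of how a rank-based gap measure of this kind works (the function name and the percentile scaling here are our own, not the DfE’s):

```python
def rank_gap_index(scores_disadvantaged, scores_other):
    """Illustrative rank-based gap: difference in mean percentile rank
    between the two groups within the combined cohort. Not the DfE's
    exact formula - a sketch of the general approach only."""
    combined = sorted(scores_disadvantaged + scores_other)
    n = len(combined)

    def mean_percentile(scores):
        # average position (0-1 scale) of a group's pupils in the cohort;
        # tied scores all take the rank of the first occurrence
        ranks = [combined.index(s) / (n - 1) for s in scores]
        return sum(ranks) / len(ranks)

    return mean_percentile(scores_other) - mean_percentile(scores_disadvantaged)
```

Because the measure depends only on rank order, relabelling the grades (for example moving from A*-G to 9-1) leaves it largely unchanged, which is relevant to the DfE’s point later in this post.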
It would appear to be consistent with data comparing threshold measures – attainment of grade C or higher in 2016 compared to grade 4 or higher in 2017. Here the gap has again narrowed slightly, from 27.5% to 26.9%.
Whilst this is good news, things look a bit different if we look at Attainment 8 and Progress 8. Calculations done previously by the DfE and by FFT predicted that overall Attainment 8 scores would drop because of the change to the 9-1 grading system – mainly caused by the change in points allocated to legacy (A*-G) GCSE subjects in 2017.
Perhaps most important though, given that it is the headline accountability measure, is Progress 8.
Delving into the underlying data (downloadable from the DfE website) we can find scores for overall Progress 8 and for each of the four elements within Attainment 8 – English, mathematics, English Baccalaureate (EBacc) subjects and other (open) subjects.
The chart below shows the gap between disadvantaged pupils and others in Progress 8 scores for 2016 and for 2017:
In this chart, a gap of 0.5 indicates that, on average and taking account of Key Stage 2 attainment, disadvantaged pupils attained around half a grade lower than other pupils. This is roughly equivalent to four grades across 8 subjects.
However, the two series are not directly comparable since they use different scales. In 2016, the familiar 8-1 scale for GCSEs was used, with one point separating each grade. In 2017, the 9-1 scale was used for English and maths, whilst legacy GCSEs were scored using a revised set of points scores. We wrote about it here.
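To see why the change of scale matters, compare the two sets of points (the 2017 values for legacy A*-G subjects are, as we understand them, the DfE’s published points; the helper function is purely illustrative):

```python
# 2016: one point separated each grade from G up to A*.
points_2016 = {"A*": 8, "A": 7, "B": 6, "C": 5, "D": 4, "E": 3, "F": 2, "G": 1}

# 2017: revised points for legacy (A*-G) GCSEs, aligned to the 9-1 scale.
points_2017 = {"A*": 8.5, "A": 7.0, "B": 5.5, "C": 4.0,
               "D": 3.0, "E": 2.0, "F": 1.5, "G": 1.0}

def point_gap(higher, lower, points):
    """Points separating two grades under a given scale."""
    return points[higher] - points[lower]

# The same two-grade difference is worth more at the top of the 2017
# scale than at the bottom:
#   B vs D: 2.0 points in 2016, 2.5 in 2017
#   D vs F: 2.0 points in 2016, 1.5 in 2017
```

So two pupils separated by the same number of grades can be separated by different numbers of points depending on where on the scale they sit, which is why gaps measured in points are not directly comparable across the two years.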
Taking the data in the chart at face value, the gap overall has widened but only very slightly. It’s a different picture, however, when we look at individual elements:
- in English, the gap has widened substantially;
- in mathematics, the gap has widened;
- in EBacc subjects, the gap has narrowed;
- in other subjects, the gap has narrowed slightly.
This seems to contradict the narrowing of the gap shown in the DfE gap index and also in threshold measures. It also contradicts the modelling we did previously which suggested that the gap in EBacc and open elements would increase slightly because of the different values given to A*-G grades.
So, what is causing these apparent contradictions? In previous years the gap in these elements narrowed, but this was almost entirely due to a reduction in the entry gap – disadvantaged pupils taking more of the subjects that count, particularly in the EBacc element.
DfE data shows that the overall entry for EBacc has decreased slightly but that this is made up of an increase for disadvantaged pupils and a decrease for other pupils. It looks very likely, therefore, that a further closing of the entry gap has masked a widening of the gap caused by changes in points allocated to A*-G grades.
However, what about English and mathematics? As the DfE points out, the impact of the change to 9-1 grading on the gap index is likely to be small because it is based only on the ranking of grades. Threshold measures simply show the percentage of pupils achieving at or above a given grade, so they are not very sensitive to the overall grade distribution.
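To illustrate that last point, consider two hypothetical grade distributions (invented numbers, purely for illustration): they have exactly the same proportion of pupils at grade 4 or above, yet very different average grades, so a mean-based measure would separate them while a threshold measure would not:

```python
# Hypothetical cohorts of eight pupils each (9-1 grades, invented data).
group_a = [9, 8, 7, 4, 4, 3, 2, 1]   # spread towards the top grades
group_b = [5, 5, 4, 4, 4, 3, 2, 1]   # bunched just above the threshold

def pct_at_or_above(grades, threshold=4):
    """Share of pupils achieving at or above the threshold grade."""
    return sum(g >= threshold for g in grades) / len(grades)

def mean_grade(grades):
    """Average grade - sensitive to the whole distribution."""
    return sum(grades) / len(grades)

# Both groups have 62.5% at grade 4+, but group_a averages 4.75
# against group_b's 3.5.
```

This is why a points-based gap (as in Attainment 8 and Progress 8) can widen even while threshold gaps and rank-based gaps barely move.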
A detailed analysis of grade distributions will be needed to understand this better. But some plausible explanations are:
- relative to the A*-G scale, the new 9-1 scale gives greater value to higher grades;
- the changes in the nature of the examination and syllabus have greater impact on disadvantaged pupils;
- in English, the larger difference might be explained by the removal of the combined language and literature examination.
Does all of this matter? Should schools be concerned? If the changes seen in English and mathematics in 2017 are also seen when the 9-1 scale is used in EBacc subjects (2018 onwards) and other subjects (2019 onwards) then we might expect to see the Progress 8 gap widen further. This change might, of course, be masked if the entry gap continues to close.
For individual schools, trying to compare gaps in Progress 8 scores from one year to the next will be a minefield. Using something like the DfE gap index might give a better indication, but the methodology for that calculation – whilst appropriate for calculations at national level – can be misleading at school-level.
So, it looks like much work will need to be done to help inspectors, governors and others understand that any comparison of changes in Progress 8 gaps over time will need to be done with considerable caution!
Want to stay up-to-date with the latest research from Education Datalab? Sign up to our mailing list to get notifications about new blogposts, or to receive our half-termly newsletter.