This is the first year that results have been published for reformed, linear A-Levels in all subjects, with the exception of some ancient and modern foreign languages.

This means we can now see how pupils' attainment has changed over the five-year period from 2015. Reformed A-Levels began to be awarded in 2017, replacing the previous modular versions, in which pupils could resit modules.

Overall, as the chart above shows, attainment at A*-A and A*-C has fallen slightly since the reforms.

A note on coverage

Unless otherwise stated, figures in this post relate to England only – though results for Wales and Northern Ireland are also available on our results microsite.

As we write here, there have been changes in the subjects pupils have been entering over the last five years. Switching towards subjects that are graded more severely may explain some of the dip in attainment, as may an increase in the proportion of 18-year-olds taking A-Levels, since these additional pupils will tend to come from the lower end of the Key Stage 4 attainment distribution.

The chart below shows how the percentage of A*-A grades awarded has changed on a subject-by-subject basis since 2015.

In some subjects it has hardly changed, but it has changed markedly in others, such as the sciences, which have seen increases in entries. There has also been a large change in the grades awarded in geography – though that is less easily explained by changes in entry numbers.

For the most part, though, the (green) dot for 2019 is the lowest of the three years plotted.

Of course, the system is designed to ensure that nothing much changes when new specifications are introduced. Under an approach to awarding grades known as comparable outcomes, the percentage of pupils achieving each grade in any given subject will be much the same from year to year unless there is a change in the prior attainment of the cohort, or senior examiners have compelling evidence of a change in the standard of exam scripts. The chart above suggests that one or both of these has been the case with the sciences, but we can't say which of the two factors has had the bigger effect.
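To make the mechanics a little more concrete, here is a heavily simplified sketch in Python of how a comparable outcomes style award works: a predicted grade distribution, derived from the cohort's prior attainment, is turned into cumulative targets, and grade boundaries are then placed on the actual mark distribution so that roughly the predicted proportions of pupils achieve each grade. The predicted percentages and marks below are invented purely for illustration; the real awarding process also involves examiner judgement and is considerably more involved.

```python
import numpy as np

def set_grade_boundaries(marks, predicted_cumulative):
    """Place grade boundaries so that the percentage of entries at or above
    each boundary roughly matches a prediction based on prior attainment.

    marks: raw marks for this year's cohort.
    predicted_cumulative: grade -> predicted % achieving that grade or better.
    """
    boundaries = {}
    for grade, pct in predicted_cumulative.items():
        # The lowest mark still earning this grade is the mark below which
        # (100 - pct)% of the cohort falls.
        boundaries[grade] = round(float(np.percentile(marks, 100 - pct)))
    return boundaries

# Invented example: 2,000 entries, with a prediction that 8% should achieve
# an A*, 26% an A or better, and so on.
rng = np.random.default_rng(0)
marks = rng.normal(loc=62, scale=15, size=2000).clip(0, 100)
predicted = {"A*": 8, "A": 26, "B": 52, "C": 76, "D": 91, "E": 98}
print(set_grade_boundaries(marks, predicted))
```

If the same predicted percentages are applied to a cohort whose raw marks come out lower, as we would expect for the first sitting of a new specification, the boundaries simply fall; this is how the approach counteracts the saw-tooth effect described below.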

The case for comparable outcomes

The comparable outcomes approach ensures that the first few cohorts of students who take a new specification are not unfairly penalised due to teachers and examiners being unfamiliar with the new demands.

Without comparable outcomes, we would expect grades to fall when new specifications are introduced, and then to improve, eventually stabilising. This is known as the saw-tooth effect.

The unfairness arises from the fact that the first cohorts to take a new specification would tend to have achieved better grades had they entered the previous version.

Yesterday’s Daily Telegraph reported on an A-Level specification in which a score of 55% would secure a grade A. It would be reasonable to ask why Edexcel set such a difficult set of papers (and it seems that they intend to take action to improve matters for next year’s students). But at the same time, we would generally expect the first cohort of students to achieve lower exam marks than subsequent cohorts, and it is only fair that grade boundaries move to accommodate this.

But did the right pupils get the A*-A grades?

The comparable outcomes approach helps Ofqual and the awarding bodies decide where to set the grade boundaries for a specification.

It doesn’t help ensure that students receive the right grades.

Earlier in the year, Ofqual published some fascinating analysis of marking consistency. Briefly, it summarises the results of a double marking project in which each exam script is marked by a chief examiner as well as a standard examiner. The mark awarded by the chief examiner is considered “definitive”.

For maths and science subjects, the standard examiner and chief examiner awarded the same grade in well over 80% of cases. However, the corresponding figures were around 60% in English and history, indicating that marking in these subjects is somewhat more subjective.
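For anyone wanting to reproduce this kind of figure, the sketch below shows one way of computing an exact grade agreement rate by subject from double-marked scripts. The data frame and its column names are hypothetical, and Ofqual's study starts from marks rather than grades and covers far more scripts; this is just a minimal illustration of the calculation.

```python
import pandas as pd

# Hypothetical double-marking data: one row per script, with the grade implied
# by the standard examiner's mark and by the chief examiner's ("definitive") mark.
scripts = pd.DataFrame({
    "subject":    ["Maths", "Maths", "Maths", "English", "English", "English"],
    "standard":   ["A", "B", "C", "B", "C", "A"],
    "definitive": ["A", "B", "B", "B", "B", "A"],
})

# Exact agreement rate by subject: the share of scripts where the standard
# examiner's grade matches the definitive grade.
agreement = (
    scripts.assign(match=scripts["standard"] == scripts["definitive"])
           .groupby("subject")["match"]
           .mean()
           .mul(100)
           .round(1)
)
print(agreement)
```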

This has led the Higher Education Policy Institute to suggest that around a quarter of grades awarded may be ‘wrong’[1], although we cannot be entirely sure, since a single chief examiner’s mark is being treated as definitive. Perhaps there is a lack of consistency even among chief examiners.
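The ‘around a quarter’ figure is, in essence, an entry-weighted average of subject-level disagreement rates. As a back-of-the-envelope check, using purely illustrative entry shares and agreement rates consistent with the percentages above:

```python
# Purely illustrative entry shares and agreement rates, not Ofqual's figures.
shares    = {"high-agreement subjects": 0.4, "lower-agreement subjects": 0.6}
agreement = {"high-agreement subjects": 0.85, "lower-agreement subjects": 0.65}

overall = sum(shares[s] * agreement[s] for s in shares)
print(f"overall agreement of about {overall:.0%}, so about {1 - overall:.0%} "
      "of grades differ from the definitive grade")
# overall agreement of about 73%, so about 27% of grades differ from the definitive grade
```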

Either way, some students will be winners and some will be losers under the current approach to marking. But things haven’t necessarily been better in the past.

We’ll be back with more analysis of this year’s results shortly, but do take a look now at this year’s results data on our microsite, which allows you to explore trends in A-Level and AS-Level entries and attainment in every subject.

And sign up to our mailing list to be notified about the rest of the analysis that we’ll be publishing today.