Award of GCSEs and A-Levels in 2020

Readers of my blog based in England may know that due to COVID-19, GCSEs (typically taken at age 16) and A-Levels (age 18) are not going ahead as exams this year. Yesterday, the Office of Qualifications and Examinations Regulation (Ofqual) published a consultation on the methods to be used to ensure fairness in the award of these important qualifications. I intend to respond to this consultation, which is only open for two weeks, and have produced a draft response below. Before I submit it, I would welcome any feedback. Equally, others should feel free to borrow from my response if it helps them.

Centre Assessment Grades

To what extent do you agree or disagree that we should incorporate the requirement for exam boards to collect information from centres on centre assessment grades and their student rank order, in line with our published information document, into our exceptional regulatory requirements for this year?

Agree

To what extent do you agree or disagree that exam boards should only accept centre assessment grades and student rank orders from a centre when the Head of Centre or their nominated deputy has made a declaration as to their accuracy and integrity?

Strongly agree

To what extent do you agree or disagree that Heads of Centre should not need to make a specific declaration in relation to Equalities Law?

Disagree

To what extent do you agree or disagree that students in year 10 and below who had been entered to complete exams this summer should be issued results on the same basis as students in year 11 and above?

Strongly agree

To what extent do you agree or disagree that inappropriate disclosure of centre assessment judgements or rank order information should be investigated by exam boards as potential malpractice?

Neither agree nor disagree

Do you have any comments about our proposals for centre assessment grades?

  1. While a separate Equalities Law declaration is not necessary, the Head of Centre should be able to confirm, as part of their overall declaration, that they have taken equalities law into consideration.
  2. Ofqual should liaise with the National Governance Association and with teaching unions to provide guidance to governing bodies and staff on appropriate challenge and support to schools, in order to ensure that the processes underlying the Head of Centre declaration are appropriately evidenced.
  3. While I understand and support the motivation for labelling inappropriate disclosure of centre assessments as malpractice, care must be taken and guidance given to centres over what is deemed “inappropriate”. I would not want to be in the situation where a teacher is unable to calm a student in a way they normally would, for example by telling them that “I can’t see any way you won’t get a Grade 7”. There may be an equalities implication for those students suffering from extreme anxiety, and this should be considered when drawing up guidance for centres.
  4. While I accept that there is little time to provide detailed guidance for centres to follow when drawing up rank-order lists, the publication of examples of good practice may help centres, and I would recommend that this be considered.

Issuing Results

To what extent do you agree or disagree that we should incorporate into the regulatory framework a requirement for all exam boards to issue results in the same way this summer, in accordance with the approach we will finalise after this consultation, and not by any other means?

Strongly agree

Do you have any comments about our proposal for the issuing of results?

None

Impact on Students

To what extent do you agree or disagree that we should only allow exam boards to issue results for private candidates for whom a Head of Centre considers that centre assessment grades and a place in a rank order can properly be submitted?

Agree

To what extent do you agree or disagree that the arrangements we put in place to secure the issue of results this summer should extend to students in the rest of the UK?

Strongly agree

To what extent do you agree or disagree that the arrangements we put in place to secure the issue of results this summer should extend to all students, wherever they are taking the qualifications?

Neither agree nor disagree

Do you have any comments about the impact of our proposals on any particular groups of students?

  1. Unfortunately, I see no other option than that proposed for private candidates. However, I would urge that the definition of “properly” in this criterion be made much more explicit, and in objective terms, to Heads of Centre.
  2. I suggest that legal advice be sought on the enforceability of these arrangements in centres outside the UK, in particular on the implications of a breach of a Head of Centre’s declaration, before such centres are treated in the same way as those within the UK.
  3. I am concerned over the impact of the proposed arrangements for some groups of students who may be differentially affected by the change in routine due to lockdown, e.g. those with Autistic Spectrum Conditions (ASC). In order to be as fair as possible to these students, I suggest that explicit guidance be given to centres emphasising that centres are free to disregard any dip in attainment since lockdown when coming up with their rank-order list, and again emphasising their duties under equalities legislation.

Statistical standardisation of centre assessment grades

To what extent do you agree or disagree with the aims outlined above?

Agree

To what extent do you agree or disagree that using an approach to statistical standardisation which emphasises historical evidence of centre performance given the prior attainment of students is likely to be fairest for all students?

Agree

To what extent do you agree or disagree that the trajectory of centres’ results should NOT be included in the statistical standardisation process?

Agree

To what extent do you agree or disagree that the individual rank orders provided by centres should NOT be modified to account for bias regarding different students according to their particular protected characteristics or their socio-economic backgrounds?

Agree

To what extent do you agree or disagree that we should incorporate the standardisation approach into our regulatory framework?

Agree

Do you have any comments about our proposals for the statistical standardisation of centre assessment grades?

  1. I am unclear from the consultation whether standardisation is to occur on a per-exam-board basis or across exam boards. If it is on a per-exam-board basis, it is not clear what will happen when centres have changed exam board within the time window used to judge the prior grade distribution at the centre, especially if the change is being made for the first time this year.
  2. I have several statistical concerns over the proposed methodology, given the level of detail discussed so far. In particular,
    (i) there is a recognition that small centres or small cohorts will be difficult to deal with – this is a significant issue, and may be exacerbated depending on the definition of cohort (see #3, below), leading to significant statistical uncertainty;
    (ii) it is hugely important to avoid 2020 results being affected by outlier results in previous years. One possibility is to use median results from the previous three years – I would avoid using mean results or a single year’s results.
    Given these concerns, my view is that it would be more appropriate to award a “grade range” to students (e.g. “9-7”, which may of course include degenerate ranges such as just “7”). This allows the statistical uncertainty arising from the various measures integrated into the standardisation algorithm to be explicitly quantified and presented as a transparent per-pupil result (the first sketch after this list illustrates one way to do this). It would allow universities and sixth-forms to decide for themselves whether to admit a pupil optimistically, pessimistically, or on the basis of the interval midpoint.
  3. It is unclear from the consultation whether the past grade distributions used will be on a per-subject basis. If not, this is likely to violate proposed Aim 1 of the standardisation process; if so, it is likely to result in some very small cohorts for optional subjects at particular centres, so extreme statistical care must be taken in using these cohorts as the basis for grading in 2020. A possible solution is to produce grade ranges, as above.
  4. From a statistical perspective, estimation of grade distributions at a per-centre level (rather than estimation of a mean grade, for example) is fraught with danger and highly sensitive to cohort size. It is very important that you do not treat the empirical frequency distribution of grades in a centre over the last one, two or three years as the underlying probability distribution, but rather as a sample from it, using an appropriate statistical method to estimate the distribution from the sample (again, see the first sketch after this list). Such methods would also allow the incorporation of notions of variance, which could be factored into the “grade ranges” for students explained in #2. As an extreme example: if a centre had no Grade 6s last year, only 5s and 7s, we should surely not constrain the model to award no Grade 6s this year.
  5. There is an additional option for standardisation, not considered in the consultation document, which is less subject to the statistical problems of distribution estimation. You could extract just one or two parameters from your model (e.g. a desired mean and a desired standard deviation) and use these to normalise the distribution from each centre, rather than fitting the complete distributions; note that this does not imply fitting any particular distributional form, such as a normal distribution (the second sketch after this list illustrates this). Such aggregate statistics will be less susceptible to variation, especially for smaller cohorts.
  6. I am unclear how it is possible to award grades to students at centres without any historical outcomes and with either no prior attainment data or prior attainment data covering only a statistically insignificant portion of the cohort. For these centres, some form of moderation, or reliance on Autumn term exam results, may be required.
  7. I am concerned by the statement in the consultation that “we will evaluate the optimal span of historical centre outcomes (one, 2 or 3 years). We will select the approach that is likely to be the most accurate in standardising students’ grades.” There is no discussion of how “most accurate” can be judged; there is no data upon which to make this decision, so I would urge caution and an outlier-rejection strategy (see #2 above).
  8. While I broadly agree that there is insufficient data upon which to base rank-order modification based on protected characteristics or socio-economic backgrounds, of the three approaches discussed in the consultation document, the “second approach” is currently very vague and needs further refinement before I can offer an opinion on it. I am happy to be contacted for further comment on this in the future.
  9. I am concerned by the absence of a mechanism to flag unusual rank-order differences between subjects in a centre. It should be possible to identify, for example, pupils ranked very high in Subject A and very low in Subject B compared to the typical centile differences in rankings between these subjects, for further investigation by the exam boards (the third sketch after this list gives a flavour of such a check). The sensitivity of such a test could be set at a level appropriate to the amount of staff time available to investigate.
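
To make points #2 and #4 above concrete, the first sketch below (in Python) shows one possible approach; it is not Ofqual’s model, and the grade counts, the uniform Dirichlet prior and the 80% credible interval are all hypothetical, illustrative choices. The centre’s historical grade counts are treated as a sample from an unknown grade distribution, and the resulting estimation uncertainty is propagated into a “grade range” for each rank position.

```python
# Minimal sketch for points #2 and #4: historical grade counts are treated as a
# sample from an unknown distribution (here via a Dirichlet posterior), and the
# uncertainty is propagated into a per-rank "grade range". All numbers are
# hypothetical; this is an illustration, not a proposal for the actual model.
import numpy as np

rng = np.random.default_rng(0)

GRADES = [9, 8, 7, 6, 5, 4, 3, 2, 1]                 # GCSE grades, best first
hist_counts = np.array([1, 3, 6, 0, 8, 5, 2, 1, 0])  # hypothetical 3-year totals
cohort_size = 12                                     # size of this year's ranked cohort
prior = np.ones(len(GRADES))                         # uniform Dirichlet prior

def grade_at_rank(probs, rank, n):
    """Grade implied for rank position `rank` (1 = top) if the centre's
    grade distribution were `probs` and the cohort had `n` students."""
    cumulative = np.cumsum(probs) * n                # expected count at or above each grade
    idx = min(int(np.searchsorted(cumulative, rank)), len(GRADES) - 1)
    return GRADES[idx]

# Sample plausible grade distributions from the posterior, and record the grade
# each rank position would receive under each sampled distribution.
samples = rng.dirichlet(prior + hist_counts, size=5000)
for rank in range(1, cohort_size + 1):
    grades = [grade_at_rank(p, rank, cohort_size) for p in samples]
    worst, best = np.percentile(grades, [10, 90])    # 80% credible grade range
    print(f"rank {rank:2d}: grade range {int(best)}-{int(worst)}")
```

For a large historical cohort the ranges collapse towards single grades, but for small cohorts they widen, making the statistical uncertainty visible to students and to admitting institutions.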
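
The aggregate-statistics alternative in #5 (the second sketch) could look something like the following, again with hypothetical numbers: the submitted grades are shifted and scaled so that their mean and standard deviation match the centre’s historical values, then rounded back onto the grade scale. No distributional form is assumed; only two moments are matched.

```python
# Minimal sketch for point #5: match only the centre's historical mean and
# standard deviation, rather than fitting a full grade distribution. No normal
# (or other) distributional form is assumed. All numbers are hypothetical.
import numpy as np

centre_grades_2020 = np.array([8, 8, 7, 7, 7, 6, 5, 5, 4, 3])  # submitted centre assessment grades
hist_mean, hist_sd = 5.4, 1.6                                   # aggregate statistics from 2017-2019

mu, sd = centre_grades_2020.mean(), centre_grades_2020.std()
standardised = hist_mean + (centre_grades_2020 - mu) * (hist_sd / sd)
standardised = np.clip(np.rint(standardised), 1, 9).astype(int)

# The submitted rank order is preserved; only the overall level and spread are
# pulled towards the centre's historical profile.
print(list(zip(centre_grades_2020.tolist(), standardised.tolist())))
```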
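
Finally, the cross-subject consistency check suggested in #9 (the third sketch) could be as simple as the following; the pupil identifiers, rank orders and the 40-percentile-point threshold are invented purely for illustration, with the threshold playing the role of the sensitivity parameter mentioned above.

```python
# Sketch for point #9: flag pupils whose rank percentile differs unusually
# between two subjects at the same centre. Pupil identifiers, rank orders and
# the flagging threshold are all made up for illustration.
import numpy as np

# Rank orders as submitted: position 0 is the top-ranked pupil.
rank_orders = {
    "Subject A": ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8"],
    "Subject B": ["p2", "p1", "p4", "p8", "p5", "p6", "p7", "p3"],
}
THRESHOLD = 40.0  # percentile points; tune to available investigation capacity

def rank_percentiles(order):
    """Map each pupil to a percentile, with the top-ranked pupil near 100."""
    n = len(order)
    return {pupil: 100.0 * (1.0 - (i + 0.5) / n) for i, pupil in enumerate(order)}

pa = rank_percentiles(rank_orders["Subject A"])
pb = rank_percentiles(rank_orders["Subject B"])
common = sorted(set(pa) & set(pb))
diffs = np.array([pa[p] - pb[p] for p in common])

# Compare each pupil's gap with the centre's typical gap between the subjects.
for pupil, diff in zip(common, diffs):
    if abs(diff - diffs.mean()) > THRESHOLD:
        print(f"flag {pupil}: {diff:+.1f} percentile points between subjects")
```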


Appealing the results

To what extent do you agree or disagree that we should not provide for a review or appeals process premised on scrutiny of the professional judgements on which a centre’s assessment grades are determined?

Agree

To what extent do you agree or disagree that we should not provide for a student to challenge their position in a centre’s rank order?

Agree

To what extent do you agree or disagree that we should not provide for an appeal in respect of the process or procedure used by a centre?

Strongly disagree

To what extent do you agree or disagree that we should provide for a centre to appeal to an exam board on the grounds that the exam board used the wrong data when calculating a grade, and/or incorrectly allocated or communicated the grades calculated?

Strongly agree

To what extent do you agree or disagree that for results issued this summer, exam boards should only consider appeals submitted by centres and not those submitted by individual students?

Strongly disagree

To what extent do you agree or disagree that we should not require an exam board to ensure consent has been obtained from all students who might be affected by the outcome of an appeal before that appeal is considered?

Agree

To what extent do you agree or disagree that exam boards should not put down grades of other students as a result of an appeal submitted on behalf of another student?

Strongly agree

To what extent do you agree or disagree that exam boards should be permitted to ask persons who were involved in the calculation of results to be involved in the evaluation of appeals in relation to those results?

Disagree

To what extent do you agree or disagree that exam boards should be able to run a simplified appeals process?

Neither agree nor disagree

To what extent do you agree or disagree that we should not provide for appeals in respect of the operation or outcome of the statistical standardisation model?

Strongly agree

To what extent do you agree or disagree with our proposal to make the Exam Procedures Review Service (EPRS) available to centres for results issued this summer?

Strongly agree

Do you have any comments about our proposals for appealing results?

  1. I disagree with the absence of an appeal procedure against centre procedure. While recognising the difficulties faced by centres and the exceptional circumstances, there is an element of natural justice that must be maintained. Without such an appeal process, there is no safeguard against centres using completely inappropriate mechanisms to derive grades and rank orders, beyond the signed statement from the Head of Centre. While the consultation suggests that detailed guidance will not be sent to centres on the procedures they should follow, it is reasonable to expect a centre – if challenged by a sufficient number of candidates – to explain the procedure it did follow, and for an appeal body to find this to be reasonable or unreasonable in the circumstances. The outcome of any successful appeal may have to be the cancellation of all grades in a certain subject at a certain centre, requiring a fall-back to the Autumn 2020 exams, but the mere existence of such a mechanism may help focus centres on ensuring justifiable procedures are in place.
  2. The consultation document leaves open the question of what role staff of exam boards who were involved in the calculation of results would have in appeals. It appears proper for them to be involved in providing evidence to an independent appeals committee, but not to form such a committee.


An Autumn exam series

To what extent do you agree or disagree that entries to the autumn series should be limited to those who were entered for the summer series, or those who the exam board believes have made a compelling case about their intention to have entered for the summer series (as well as to students who would normally be permitted to take GCSEs in English language and mathematics in November)?

Agree

To which qualifications the emergency regulations will apply

To what extent do you agree or disagree that we should apply the same provisions as GCSE, AS and A level qualifications to all Extended Project Qualifications and to the Advanced Extension Award qualification?

Strongly agree

Do you have any comments about the qualifications to which the exceptional regulatory measures will apply?

None

Building the arrangements into our regulatory framework

To what extent do you agree or disagree that we should confirm that exam boards will not be permitted to offer opportunities for students to take exams in May and June 2020?

Disagree

To what extent do you agree or disagree with our proposals that exam boards will not be permitted to offer exams for the AEA qualification or to moderate Extended Project Qualifications this summer?

Disagree

Do you have any comments about our proposals for building our arrangements into our regulatory framework?

I have sympathy with the proposals in this section, but they need to be balanced against the harm done to those candidates who will be unable to use centre-based assessments, and against Ofqual’s duties under equalities legislation, given that this may disproportionately affect disabled students (see pp. 51-52 of the consultation document). On balance, it may be better to leave this as an operational decision between exam boards and exam centres, allowing exams in May and June, if possible, only for these students.


Equality impact assessment

Are there other potential equality impacts that we have not explored? What are they?

As previously noted, I am concerned over the impact of the proposed arrangements for some groups of students who may be differentially affected by the change in routine due to lockdown, e.g. those with Autistic Spectrum Conditions (ASC). In order to be as fair as possible to these students, I suggest that explicit guidance be given to centres emphasising that centres are free to disregard any dip in attainment since lockdown when coming up with their rank-order list, and again emphasising their duties under equalities legislation.

We would welcome your views on how any potential negative impacts on particular groups of students could be mitigated:

If Ofqual were to adopt a “grade range” approach, outlined above, then the quoted research into the reliability of predicted GCSE, AS and A-level grades prior to 2015 could be used to inform the degree of uncertainty in the range, mitigating the impact on particular groups of students.

10 thoughts on “Award of GCSEs and A-Levels in 2020”

  1. Updates:

    * thanks to @Yorkshire_Steve for pointing out that Ofqual guidance currently says “where additional work has been completed after schools and colleges were closed on 20 March, Heads of Centre should exercise caution where that evidence suggests a change in performance”, which partially addresses one of my concerns (see https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/878869/Summer_2020_grades_for_GCSE_AS_A_level_EPQ_AEA_in_maths_-_guidance_for_teachers_students_parents_UPDATED_09APR2020.pdf, p. 5).

    * thanks to @robotmaths for pointing out that my emphasis on mean and standard deviation might be taken as a suggestion to fit to a normal distribution, which I would *not* suggest. I will rephrase this section before submission.


  2. Ofqual published their consultation response yesterday at https://www.gov.uk/government/consultations/exceptional-arrangements-for-exam-grading-and-assessment-in-2020. Unfortunately, most of the concerns I raise in my original article – while often acknowledged in their response – are not addressed through any concrete mechanism to solve them.

    In particular, I am disappointed that details of the standardisation process to be followed have not yet been released.

    Some key points:

    – Ofqual recognises the concerns raised over small cohorts (small numbers of pupils taking a given subject in a given centre) and the impact on statistical standardisation, but offers no solution. They state that “we will ensure that the standardisation model is sensitive to the size of error in the statistical predictions for small centres”, but this is a vague formulation. A statistically appropriate way to do this is to award “grade ranges” rather than grades, as I outlined in my original article. However, there is no hint of this approach. The closest hint we get to Ofqual’s intentions is the statement “in certain circumstances (such as for small centres and low entry subjects), it may be appropriate to place more weight on centre assessment grades than previous centre performance.” This suggests they may be considering an approach where grade uncertainty modulates standardisation correction. Such an approach would have no basis in improving fairness for individual students, though it may be politically expedient in reducing the level of disquiet from exam centres.

    – I am pleased to see Ofqual re-emphasising several times the importance of an Autumn term exam series, after the Guardian article suggesting this might not go ahead (https://www.theguardian.com/education/2020/apr/22/schools-and-exam-boards-undermine-promise-to-pupils-of-september-tests). However, the detailed decisions on the Autumn term series have been postponed, so we cannot yet be sure.

    – Despite more respondents disagreeing than agreeing with the consultation proposal that there should be no appeal in respect of the process or procedure used by centres, Ofqual are proposing to go ahead without such an appeal process. My view on this is explained in my original article.

    – The one significant change to Ofqual’s views on appeal appears to be that they are now considering further (but not yet announcing) a possibility for centres to appeal on the basis of significant demographic shifts.


  3. Indeed, Ofqual are being somewhat coy about the detail: I was surprised to read on page 8 “Nonetheless, we have decided to adopt the proposed aims [of the standardisation process] but we have decided to reorder them such that aim iii, regarding the method’s transparency and simplicity, appears at the end of the list so as to not overstate its importance.” I find it odd that “the method’s transparency” could ever be “overstated”.

    And their statement on page 10 that “…the statistical standardisation model should place more weight on historical evidence of centre performance…than the submitted centre assessment grades…” is, I think, bureaucratic code for “if a school submits grades which don’t match our statistical model, then the school’s grades will be ignored, and the grades determined by our model will be the grades that are awarded”.

    That their model is quite likely to be simple, rather than complex, is hinted at by the statements on page 11 “…we have decided … that the trajectory of centre’s results should not be included in the statistical standardisation process…” and “…rank orders provided by centres should not be modified for potential bias regarding different students according to their particular protected characteristic or their socio-economic backgrounds”.

    So some people will certainly be disadvantaged, which is of course a great pity for those individuals. “Total fairness”, though, is probably impossible. And so the pragmatist in me leads me to think that what would be good for this year would be for the grades to be manifestly fairer than the grades given for the exams over recent previous years, which were notoriously unreliable: on average, 1 grade in 4 was wrong (https://www.hepi.ac.uk/2019/02/25/1-school-exam-grade-in-4-is-wrong-thats-the-good-news/). So how could we measure the reliability of this year’s outcomes to prove, or indeed disprove, this?

    Having said what they’re not going to do, it would, I think, be extremely helpful to everyone if they were to be explicit about what they are going to do. That would enable schools to ‘second guess’ the outcome, and so – if they choose – submit grades that they know, in advance, would have a high likelihood of being confirmed rather than over-ruled. That, in my view, would be a ‘good thing’.

    In England, schools can make some sensible guesses, for there are many references to comparisons with “previous years”, suggesting that the key comparator will be a three-year average. That said, there is no guidance as to what to do with any variability. If all schools go for their three-year-average-plus-a-bit-but-not-too-much, the aggregate will blow grade inflation sky high, and the boards will intervene heavily, revising all those plus-a-bits downwards.

    In Scotland, the situation is even more murky. A letter dated 21 May 2020 from Scotland’s Chief Examiner to the Scottish Parliament’s Education and Skills Committee (https://www.sqa.org.uk/sqa/94257.html) states that their “methodology for moderation” will indeed be published, but only AFTER the results are awarded, and NOT before teachers have to submit their assessments. So Scottish teachers will be throwing darts at a moving dartboard, in the dark. And a dartboard with 13 (yes, 13!) different grades. That, to my mind, is asking for big trouble. Teachers will be upset if their submitted grades are over-ruled; students will be dismayed, and could probably (unfairly) blame the teachers; parents will be angry because the grounds for appeal are highly technical and very limited. Everyone’s a loser.

    Which makes me wonder why – in both England and Scotland – teachers are being asked to submit grades at all. They have no validity, for they can all be over-ruled unilaterally, without consultation, without appeal. And if they are over-ruled, the boards will use the submitted rank orders – which they have undertaken not to change – and determine the grades by superposing grade boundaries wherever they choose to get what they and the regulators believe to be “the right answer”.

    The boards need the rank orders, but not the grades. So why submit grades? It’s just asking for trouble.


  4. Absolutely agree with your concerns about the problems with small cohorts. These problems are even worse for the top and bottom grades, which are at the tail ends of the bell-like grade distributions. 2017-2019 A-level results at my son’s school, which has 1100 students from year 7 to year 13:
    Chemistry (10-20 entries): the A* and A rates fluctuate from 0% to 9% and 20% to 36% respectively,
    Physics (10-20 entries): the A* and A rates fluctuate from 10% to 33% and 20% to 32% respectively,
    Further Maths (4-5 entries): the A* rates fluctuate from 0% to 40%.

