I hope your results have been kind to you and your students; I hope you are still standing after the annual tightrope walk. However well things went, whether you are a senior leader, head of department or classroom teacher, you’re going to want to extract as much information as possible from the results so that you can celebrate the positives and learn from the negatives. But it’s complicated. Heads will have to balance providing simple headlines for governors, the parents’ newsletter and the local press (they love a good set of twins with the same grades) with the more forensic analysis that tells the real story and reveals where you need to improve.
Here are some thoughts about dealing with the post-mortem.
Did we do well?
I think it’s a mistake to do anything that reinforces the tendency to tell this story in one data measure. Always look at multiple data-points and then decide if that constitutes success. No one measure is the story. Ever. You might want to look at the following:
- % of grades at 4+, 5+/C: get a feel for your standing in the new murky pass/fail zone.
- % of grades at 7+, 8+, 9+/A, A*: look at your top end. You might be doing really well – or not very well – at the top end independent of your success with 4/5/C grades.
- Look at Attainment 8 as a measure of overall achievement, but also look at the profile across various thresholds: what % of students gained 40+, 50+, 60+, 70+ points? This could be a useful trend measure.
- Look for multiple successes: What % of students got 3+ A/A*/7+, or 5+ A/A*/7+, or 8 A/A*/7+?
- Individual success stories. Whatever the global picture is, you’ll have some personal triumphs and they always matter.
- Comparison of all the above against projections using FFT20 and/or FFT50. There’s no point celebrating great raw outcomes or beating yourself up over low raw outcomes until you’ve compared them with your students’ starting points.
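The threshold-profile idea above can be sketched in a few lines. This is a toy illustration – the Attainment 8 scores below are invented, not real results:

```python
# Sketch: % of students at or above several Attainment 8 thresholds.
# The scores are invented illustrative data, not real results.
a8_scores = [32, 41, 44, 48, 52, 55, 58, 61, 64, 71]

def pct_at_or_above(scores, threshold):
    """Percentage of students scoring at least `threshold` points."""
    return 100 * sum(s >= threshold for s in scores) / len(scores)

profile = {t: pct_at_or_above(a8_scores, t) for t in (40, 50, 60, 70)}
print(profile)  # {40: 90.0, 50: 60.0, 60: 30.0, 70: 10.0}
```

Tracked year on year, a profile like this shows where the distribution is shifting, which a single average hides.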
Don’t do a P8 calculation prematurely – wait until the official figures come out (he says, knowing everyone will do this anyway). It’s a mug’s game because the A8 median profile that determines P8 will shift and you won’t know what it is yet. As schools get better at playing the A8 game, scores are likely to rise, so if you start off using last year’s profile, adjusted for the new GCSE scores, the likelihood is that your P8 scores will fall in the end.
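A toy illustration of why the early calculation is a mug’s game, using the usual per-pupil shape of the measure (roughly, actual A8 minus the nationally estimated A8 for that prior-attainment band, divided by 10). All the numbers here are invented:

```python
def progress8(actual_a8, estimated_a8):
    """Per-pupil Progress 8: (actual A8 - estimated A8) / 10."""
    return (actual_a8 - estimated_a8) / 10

# Same pupil result, two guesses at the unknown national estimate
# for that prior-attainment band (both figures invented):
print(progress8(52, 48))  # 0.4 using last year's profile
print(progress8(52, 52))  # 0.0 if national A8 rises as schools play the A8 game
```

The pupil’s grades haven’t changed; only the yet-to-be-published national estimate has – which is exactly why a premature P8 figure can evaporate.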
Are we getting better?
This is a key question. If you can generate the same data in successive years, you can see whether things got better. But make sure things are genuinely comparable – otherwise it’s not meaningful. Your headline A8/P8 results might go down, yet you might still find that more students have gained, say, 5 or more A/A*/7 grades – which is a cause for celebration. Look for anything that’s improved.
However, where things are not the same, be very clear that you can’t say that things are better or worse. Is the % of grades at 4+ broadly equivalent to last year’s C+? It might be. But 5 and 6 are new – they don’t translate directly to C and B – so beware false comparisons. The new exams will be very different; there will be volatility – so whilst looking at how things have changed, don’t rush to judgement about cause and effect.
After the next two years, we’ll get some stability and trends will return to some sense of normality. Even then, though, you need to be aware of the near zero-sum effect of our anti-grade-inflation system. You should not expect continual year-on-year improvement: the system does not allow every school to improve its maths and English results, for example. A saw-tooth pattern is quite likely, especially if you have a lot of students in the 3–5 zone.
How did different groups do?
It’s important to look at whether any group is underperforming – but be very careful about overstating the degree of difference. Divide any group into two subgroups and one will perform better than the other. You need to see a very big difference before you worry. Compare gender differences with the national picture before you start looking for internal factors to write an action plan for.
Also watch out for sub-group success/failure proliferation. In my experience, most subgroups will perform at a level close to the overall level for the school: if you have a Cake of Joy, every slice will have joy in it – each group has done well – but making more slices doesn’t mean you did any better. If you have a Cake of Doom, every slice is likely to be doom-ridden; making more slices of doom and listing them all doesn’t mean you did any worse. It’s an illusion both ways.
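A quick way to convince yourself of the slicing illusion, with simulated data rather than real results: draw every student’s score from the same distribution, split the cohort into two arbitrary halves, and the halves will still differ – by chance alone:

```python
import random

random.seed(1)
# Invented cohort: 100 grade-point scores, every student drawn
# from the same distribution (mean 5.0, sd 1.5) - no real group effect.
cohort = [random.gauss(5.0, 1.5) for _ in range(100)]

random.shuffle(cohort)
group_a, group_b = cohort[:50], cohort[50:]

def mean(xs):
    return sum(xs) / len(xs)

# Two arbitrary slices of one cake: their averages differ purely by chance.
print(round(mean(group_a) - mean(group_b), 2))
```

Any real gap between genuine subgroups has to be judged against this background noise before it deserves an action plan.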
One group I think is worth giving special attention is high-prior-attaining Pupil Premium students. These students may be getting Bs and some As instead of A/A* – and can fall slightly under the radar unless you’re looking.
How did subjects compare?
This needs some care to ensure you are comparing like for like. It’s always slightly annoying for the option subject with 30 students to be excessively lauded for getting 100% A*-C alongside the full cohort of 150 students who took Maths or Science with a more mixed picture – especially if that option subject was generally chosen by a higher-attaining cohort. It’s useful to compare subjects along several data lines: eg % at 4+ as well as % at 7+. Very often a subject has a great top end despite a relatively low ‘pass’ rate – or the other way around. It’s important not to be hasty in saying subject X has done ‘better’ than subject Y.
Be careful with Science. Where students are selected out to take separate sciences, then of course those subjects have better raw scores and of course the remaining students have lower raw scores. You need a way to combine them. My preference is to record the % of all science grades: Combined Science counts as two grades per student, the separate sciences each count one; add them all up and then work out the global pass rates at each level for comparison with English and Maths. Always put a column for ‘all sciences’ in any table so that people don’t make false comparisons between single, double, separate sciences and other subjects.
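That combining step can be sketched as follows – all the entry and grade counts here are invented for illustration:

```python
# Invented counts. Combined Science contributes two grades per student;
# each separate science contributes one grade per entry.
combined = {"students": 100, "grades_4plus": 130}  # 100 students -> 200 grades
separate = {"entries": 90, "grades_4plus": 80}     # e.g. 30 triple-science students x 3

total_grades = combined["students"] * 2 + separate["entries"]
total_4plus = combined["grades_4plus"] + separate["grades_4plus"]

# Global 'all sciences' pass rate at 4+, over every science grade awarded:
print(round(100 * total_4plus / total_grades, 1))  # 72.4
```

This single ‘all sciences’ figure is the one to set alongside English and Maths; the separate-science column on its own flatters a selected group.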
Internal comparisons that have more meaning might use FFT predictors or residuals. The problem with residuals is that they are zero-sum, so you always have winners and losers. Still, it is useful to know, before you celebrate your department’s results, whether your students’ grades were in fact on a par with what they achieved elsewhere.
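A toy illustration of why residuals are zero-sum, using one common definition (each subject’s residual measured against its students’ own average grade across their subjects); the grades are invented, and in this sketch every student takes every subject:

```python
# Invented grades for three students across three subjects.
grades = {
    "Alice": {"Maths": 8, "English": 6, "History": 7},
    "Ben":   {"Maths": 5, "English": 6, "History": 4},
    "Cara":  {"Maths": 7, "English": 4, "History": 4},
}

def residual(subject):
    """Mean of (subject grade - student's own mean grade) over its students."""
    diffs = [g[subject] - sum(g.values()) / len(g) for g in grades.values()]
    return sum(diffs) / len(diffs)

residuals = {s: residual(s) for s in ("Maths", "English", "History")}
print({s: round(r, 2) for s, r in residuals.items()})
# {'Maths': 1.0, 'English': -0.33, 'History': -0.67}

# Each student's deviations from their own mean cancel out, so (with every
# student taking every subject) the subject residuals must sum to zero:
print(round(sum(residuals.values()), 10))  # 0.0
```

Maths looks like the winner here only because English and History look like losers – someone always has to be below par on this measure.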
So what? What went well; what went wrong?
There are so many variables that feed into exam outcomes, and only a short timeframe to put changes in place for next year in light of your results. Things to consider are:
Are there particular groups or teachers who seemed to struggle? Does this triangulate with other indicators? Those teachers or groups might need some extra guidance and support.
Within each subject, are there obvious indicators of where things have gone well or badly – particular modules or papers where scores were lower? This might inform the timing of the scheme of work or the plan for revision.
When you look at papers or the online question-by-question analysis, can you see particular questions and topics where students are not scoring well? This can help with planning teaching and revision; it’s this kind of detail you want to find if you can. Often there are no neat patterns – so you just have to try to teach everything again next year as well as you can.
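A minimal sketch of that question-level trawl – the question names and marks below are invented: compute each question’s facility (the share of available marks actually gained) and flag the weak topics for reteaching:

```python
# Invented question-level data: (question, max marks, marks scored per student).
paper = [
    ("Q1 algebra",  4, [4, 3, 4, 2]),
    ("Q2 ratio",    5, [2, 1, 3, 1]),
    ("Q3 geometry", 6, [5, 5, 6, 4]),
]

for name, max_marks, scores in paper:
    # Facility: proportion of the available marks the cohort gained.
    facility = sum(scores) / (max_marks * len(scores))
    flag = "  <- review topic" if facility < 0.5 else ""
    print(f"{name}: {facility:.0%}{flag}")
```

Run over a whole paper, a table like this points revision planning at specific topics rather than at the subject in general.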
Beyond the subject-specific stuff, there might be whole-school culture issues to address – but I would say that you would know this well ahead of results day and your plans are unlikely to change as a response. The agenda for improving behaviour and teaching and learning with quality assurance systems in support might need to change a bit. You might also want to look at your assessment regime – and the paradigm shift I’ve talked about elsewhere might be the way to go.
Good luck! I hope you’ve been treated fairly and kindly by the system. Keep going. That’s all you can do. Keep moving forward, the best you can and hope that you get the support and the rewards you deserve.