As someone who has raised multiple concerns about the validity of inspection processes over several years, I often receive messages from people on the receiving end. This year alone, at least 10 different people have shared their bad experiences – their feelings of injustice, insecurity, demoralisation and disillusionment after an Ofsted inspection that has taken them to the brink. It’s common to hear that a lead inspector forms a view that, once voiced, never shifts; from then on, all they do is line up evidence to support their case. To me, this is systemic – it’s not a matter of a few rotten apples who won’t fall in line with the current framework and its myth-busting.
Very recently I was contacted out of the blue by a serving inspector who wanted to express their concerns from the perspective of conducting inspections for Ofsted in its current form, based on the latest training. The inspector told me the following:
- “It’s fair to say at this point that nothing that I’ve seen in the last 12 months gives me faith in the system. It’s bonkers that a team of inspectors is expected to reach complex judgements in such a short space of time.”
- “The fact that the lead inspector must write a report where the overall judgement is aligned to the 4 key judgements, and then these 4 judgements must be aligned to the evidence forms, encourages the inspectors to make sure their evidence forms fit what they think might be the final judgement, rather than gather the evidence without prejudice and see how the evidence stacks up at the end.”
- “The whole process provides the tiniest snapshot of school life, with sweeping judgements made from this snapshot. Activities like book scrutiny and pupil panels are particularly vulnerable to this.”
The inspector suggested the following as the biggest vulnerabilities of the inspection process – I have fleshed out the headlines they provided:
‘Inspection management’ by schools: Some schools get their arguments and evidence lined up nicely, others let the inspection happen to them. Some schools get support from executives at their academy trust HQ, others don’t. This means inspection can’t get a ‘true’ picture that is consistent from school to school; it’s as much about the performance around the inspection process as the quality of education.
Composition of inspection teams: Team members bring different biases, backgrounds and experiences, all of which have a major impact on how they judge what is seen; their personal values come into play. It’s much more subjective than is reasonable to expect.
Ridiculously small sample sizes: For example, inspectors are forced into judging a whole sixth form on a few lessons and meetings; across a whole school they have to extrapolate a great deal from limited pupil panels and book scrutinies. This has always been a major concern of mine – so it is interesting to hear that an experienced inspector also feels that the sampling is inadequate as a basis for making the judgements.
Variation in ownership of basic processes: Some inspectors choose the books/pupils, some let the school choose. This sways things massively! If it makes such a difference to what is seen, it can’t be consistent with a valid, reliable system for basic inspection processes to be so open to this variability.
The final fit driving the evidence: There is a need for the final judgement to fit with the evidence while not deviating too much from the judgement indicated by the public data, which often means that people compile evidence in light of the expected final judgement, rather than reach a conclusion based on evidence gathered. (This reinforces the case made by Ross McGill in sharing the retrospectively amended evidence forms he obtained from an FOI request.) The rules, written and unwritten, about consistency between the key judgements – e.g. Teaching, Learning and Assessment and Outcomes are usually the same, and will usually determine the overall effectiveness – mean these grades are tied together, yet there’s no reason why they should be.
So – there you have it. The voice of someone with many years’ experience working in challenging schools and MATs, serving as an inspector and finding it hard to justify the process and outcome of the work that is done. I’ve heard similar stories from others too – the enormity of trying to capture what a school does well and what it needs to improve within a short-form inspection is often described as especially hard to manage. Corners are cut; hunches relied on; major extrapolations made.
We know the framework is under review but already we’ve been told that grading is going to remain. We also know that notions of ‘culture’ and of the ‘human element’ of inspection are being given greater emphasis. My interpretation of this is that Ofsted officials realise that they can’t ever demonstrate reliability so they have to resort to giving greater value to their very fuzzy processes. That would be fine if all we had were reports with strengths and areas to improve that people actually had to read. But with hard-edged judgements, we’re meant to believe that Requires Improvement anywhere is definitely worse than Good anywhere, with all the consequences that fall on schools in that category. Fuzzy-edged feels and all the flawed processes described by my inspector contact aren’t acceptable in that context. Not when we have teachers and Headteachers fleeing the profession in droves.
One day Ofsted folk will accept some responsibility for that… one day. Meanwhile, they’d better not blow the opportunity to make meaningful changes in this new framework. Right now, sorry to say, I’m not optimistic.