Evaluating Teaching and Learning: The Departmental Review

One of the key issues for us, as it is in any school, is to ensure that the quality of teaching and learning is as good as it can be.  This requires us to engage every teacher and every department in a continuing cycle of evaluation, feedback and planned improvement. Over the last two years the main vehicle for this process has been our Departmental Review, as I described in this post.  The key aspect of this process is that individual observations and scrutiny processes are conducted under the umbrella of a whole departmental review so that collective learning is undertaken in parallel with the development of individuals:

The Departmental Review concept

Originally, our main aim was to ensure that we captured as much developmental value as we could from the formal observation process. We even ditched giving OfSTED grades for a year to reclaim the core purpose of observation as a feature of CPD rather than primarily an accountability mechanism. However, with the framework changing so many times since our last full inspection in 2006 (yes… that’s right!), we invited a team of external observers last summer to help keep us up-to-date, and we’ve returned to grading this year as part of the second cycle of the process. We’ve learned a great deal about the value and impact of external visitors making snap-shot judgements, and about our self-belief in the quality of what we’re doing.

In thinking about the next cycle, next year, two key observations have been important to me:

  •  Alongside an observer, I watched a lesson taught by someone who I believe is a cast-iron teaching expert, who year on year secures extraordinary outcomes and who I feel knows their subject so well that if they think teaching a certain way is appropriate, no-one, bar none (and certainly no inspector), could really argue. So how on Earth did we end up accepting that this lesson segment was judged ‘Good’ without running the observer out of town? I’m ashamed of myself for allowing that to happen. Not enough differentiation? Get away… Nothing about the overall, long-term experience of learning in this teacher’s lessons is less than outstanding; it was the snap-shot observation process that was flawed.
  • In the second year of our Departmental Review cycle we have kept the Line Managers the same as in the first. I was involved with Art (as featured here), History, Maths and DT – a good cross-section. After two years I now feel I know the teachers in these departments in a well-rounded sense; some I know so well that a one-off lesson observation couldn’t really change my view of their overall impact on learning outcomes. It could be dazzling or it could have some weaknesses, but I know enough context to put that in the right perspective. I also now understand in some detail how progress and feedback are monitored over time, what excellent work looks like by the end of the course in each subject across the ability and age range, and what the key areas of concern are around matching pedagogical developments to measurable outcomes. The point is that it has taken me two years to develop this knowledge… and now the observations just slot into a big picture without being overly important in themselves.

I have also been keen to embed the thinking that underpinned my post ‘How do I know how good my teachers are?’ into our formal processes more explicitly.

With these ideas in mind, we’ve been looking to develop what I call a ‘longitudinal’ process, one that moves us far, far away from the limits of snap-shot observations. Our most recent Middle Leaders meeting explored this issue and there seemed to be a few key tensions to absorb:

1.  We want a highly developmental process where lesson feedback helps us to improve… but we can’t meaningfully separate that entirely from accountability responsibilities.

2.  Snap-shot processes are inherently limited, but adding other elements and taking a more longitudinal view also adds to the level of scrutiny (albeit scrutiny that already exists). It’s an unavoidable double edge: a chance to demonstrate the good work you are doing is also another moment of scrutiny.

3.  Middle leaders are primarily interested in, and skilled at, supporting colleagues in a collegial style, but are also responsible for maintaining standards in their area – which must include securing improvement and tackling underperformance whenever issues are identified.

4. Student work, with the organic record of feedback, represents the best evidence of the routine practices of individual teachers and of the department in securing progress over time but work sampling can be cumbersome and is, de facto, a source of scrutiny pressure.

5. Subject-specific processes allow for more subtle fine-tuning, but there is a need for a fair and transparent process that allows standards to be consistent.

Thinking about all of this, we’re suggesting that the best way to move forward is to develop the Departmental Review Process to include more formalised elements; here is the current proposal:

Departmental Review: work in progress.

 

Most of these things are happening already; we just need to make sure that they happen consistently.  The exact timing and sequence of these elements is flexible so departments can absorb them in a way that works for them.

There are two elements that need to be more explicit:

The Assessment, Feedback and Progress Review: Making sure the Line Manager and HoD engage in dialogue across the year about the nature of marking and feedback and how this leads to progress as shown in books, tests, pieces of work, folders – or wherever. Regular engagement with this would avoid a cumbersome one-off collection process; however it is done, line managers and HoDs need a good overview of the nature of students’ work in each class and how this is informed by the feedback dialogue. Line managers are obliged to educate themselves about the long-term outcomes in their subject areas… so that any observation has a broader context. It isn’t good enough to extrapolate from what you see in just one lesson…

Student Focus Group: This is for the HoD to organise or delegate, with a simple report-back to the department and line manager on the key strengths identified and suggestions that students make. It’s a powerful source of constructive information that HoDs can manage with appropriate ground rules and so on. I’ve suggested this is biennial because that keeps it in proportion in terms of effort and information value.

There is also the opportunity for departments to re-draft the school’s OfSTED-referenced lesson observation template into a bespoke departmental version that is more relevant for their purposes. In each of my line-management areas, I feel we’d get better value from a subject-specific set of observation criteria and that will be an area for us to develop in the coming year.

So, that’s where we are now. No pain, no gain, as they say… but I’m optimistic that this approach, enabling the scrutineers to get ever closer to the true picture of learning and achievement, will be successful. And if OfSTED ever do come… I will say, ‘Sorry, we don’t do that false snap-shot thing; it’s not good enough for us… we have a better way.’

Update January 2014: Today we decided to ditch lesson observation grades. With growing disquiet around grading, and evidence from Professor Rob Coe, amongst others, highlighting the flaws in the grading process, we’ve decided not to give them. Instead we will develop our use of feedback following lesson observations so that perceived strengths and areas for development are expressed clearly. For example, if a lesson is slightly less than ideal, the scale of improvement needs to be conveyed in a way that is different from when lessons are actually quite poor. With Lesson Study finding favour with more teachers, the snap-shot drop-in approach seems less and less satisfactory.

 

29 comments

  1. I’m beginning to think that perhaps you should run Ofsted. Your first task would be to slim it down considerably and focus purely on the regulatory role, as happened with the QCDA transition into Ofqual, though putting the ‘rump’ into DfE was always going to attract even more political intervention in curriculum and qualifications issues. I really like the student focus group idea not only as a source of valuable information for your teachers, but as a validation of your students’ expertise in subjects, the bit that I gather is missing from certain Ofsted inspections, hence your reason for re-drafting their observation template …


  2. Seems like a model approach underpinned by the reality that we must be held accountable for what we do. Thank you for sharing this information.


  3. I like the idea of the depts. using the Ofsted subject-specific guidelines [to encourage them to be familiar with them] and our own school learning and teaching priorities to create their own agreed obs criteria for one of our observations [a joint one with LM and LnT leader]. We too have done away with grades for our other obs [joint LnT leader and random/chosen colleague] after a pleasant Ofsted in October and want to experiment with different plans/approaches to decide which tactics will best develop our learning and teaching over the next couple of years [in the happy knowledge that Ofsted won’t bother us!]. Our learning walks, mixed with surveys, are aimed at gathering student views and the information gained supports individual, dept and whole school planning.

    I agree with Chris re validation and Ofsted – I guess they do this already, of sorts, when they ask for your rank order, match their views of observed teaching against yours and constantly question the rigour of your reviewing system.

    Thanks for sharing your ideas – they are great to include in the weekly learning thought booklets we issue.

    Apologies for the errors in the first post – I was typing on my knee watching cricket and pressed post by error!


  4. This is a fabulous holistic view on real school improvement. I used to treat Ofsted like a driving test: you don’t fail your driving test for not looking in the mirror; you fail because the examiner did not see you look in the mirror. I would tell them, via the students, exactly what I was doing. “I’m not going to share the exact objectives today as I don’t want to destroy the discovery…” “Obviously, I can’t teach you anything without knowing what you know so …” and 20 minutes later “I wonder if you have made progress, let’s find out …” It was a whole nonsense performance, but proved to be very effective in getting “Outstanding” grades. Never assume the inspector can see what you are doing.


  5. […] The hardest thing I find to deal with is where a person is basically competent but their attitudes or behaviours are problematic.  They annoy you, frustrate you or disappoint you.. but they’re not really doing a terrible job.  It can be hard sometimes to define objectively the issues you have with someone; and that means you need to find ways to work with them and turn them around.  You also need to develop a rounded view of someone’s capabilities;  something we’re trying to do with our Departmental Review system. […]


  6. […] Here are two blogs – here and here – from another Headteacher, Tom Sherrington, who’s abolished graded observations and argues that graded observations are only retained by other schools “for reasons of inertia or educational dogma”. He outlines his alternative approach here. […]

