How do I know how good my teachers are?

At the heart of the discourse about effective schooling is the well-evidenced view that teacher quality plays a massive role in determining student outcomes. John Hattie, Dylan Wiliam, Michael Wilshaw, Michael Gove…they’d all agree on this. We’d all agree on it. As a Headteacher it is one of my core responsibilities, no…it is THE key responsibility, to ensure that teacher quality and the quality of teaching are as good as they can possibly be. I try hard to create the conditions for great teachers to grow and to thrive...but how do I know how good they are and what impact they are having?

There are broadly three inter-related areas that combine to develop a rounded picture of a teacher’s effectiveness:

Data:

Most obviously this is about examination results and internal assessment data. If a teacher can secure good assessment outcomes, you’re inclined to be less concerned about how they achieve that. There are degrees of success too; sometimes results are good but not excellent; sometimes the rate of improvement is slower than it could be; it is a subtle business and you need to know about the ability profile of each class and other factors. Beyond the numbers, of course, there is much, much more to learning than can be measured. It is possible to grind out results from uninspiring teaching (I’ve done it myself). Conversely, teachers who shine in observations might not be quite nailing the exam preparation and results might be disappointing. So – data is only one factor and it cuts both ways. There are other metrics – such as information on behaviour incidents and referrals – that might tell you a teacher rarely uses or is over-reliant on support systems. Again, context is key – but it is all part of the picture.

Observation:

This is the headline grabber; the big focus during OfSTED inspections and a bone of contention with some unions. (‘Surveillance’? Get over yourselves…) Seeing a teacher in action first hand is a rich source of information but we need to be cautious. Whether it is a drop-in or a full-blown formal observation, it doesn’t always follow that what you see is typical…. things might not be working well or you might be seeing a one-off performance. Observations are always slightly artificial because of the observer effect; they are limited to being snap-shots in a continuum of lessons – so you never see a full learning episode – and, ultimately, what you really care about are the 99% of lessons that you don’t see. Over time, you accumulate information about a teacher over multiple observations of all kinds… but need to be careful not to fix your view of someone based on the past. People change – they might improve or they might drift. The more current your observation data-set is, the better – and of course, observations can be done by lots of different people.

Knowledge:

This is the cumulative store of micro-feedback that accrues over time around every teacher in a school. Teachers generate feedback continually – from students, via parents, via colleagues, from line managers, through conversations, snatched glimpses of lessons, comments in staff meetings, parents’ evenings, CPD events, email exchanges… drip, drip, drip. Teachers have reputations – it is unavoidable. This could be because they are inspiring, strict, funny, eccentric, know their subject, soft, talk too much, make lessons exciting….. In my experience, this knowledge store is under-estimated in the formal accountability processes. If I’m asked how I know the strengths of my teachers, there is truth in saying ‘I just do’. Students and parents will rave about some teachers and not about others – that tells you a lot. I reckon my daughter’s evaluation of her teachers would be a fair indication of what I’d see in her lessons; I know them in ways that I bet their Headteacher doesn’t. This information seeps out and around us…. And it gets back to me as Head one way or another. Again, there is context. RateMyTeacher, for example, is a disgusting disgrace – I wish we could shut it down. You obviously need to apply a filter to this noise of feedback… but it is real enough; it matters; it counts – and in many cases, it is more accurate than the one-off observations, most often in a teacher’s favour.

The important point is that all three forms of data inter-relate in a complex non-linear fashion. Ideally, a teacher will rate highly in all three areas. That is the sign of a really great teacher – when they create a virtuous circle. Their lessons are great – evidenced by any number of observations; their teaching generates excellent outcomes and both of these things create strongly positive reputational feedback – the knowledge data. But it is quite common that only two would apply. I’ve known every scenario:

  • A teacher who has a reputation as a fabulous teacher, who produces superb lessons during formal observations… but where, frustratingly, the results aren’t what we’d expect. Often this is due to some technical issue with matching the curriculum with the assessment or preparation for formal exams. But there is hope. Usually these issues can be resolved with support.
  • A teacher who gets great results and who scores highly on the reputational scale, but underperforms during formal observations. Here, you need to have confidence in the two positive data-sets and question whether the observation process has given you good information. Is it fair to over-ride the other data-points in your knowledge bank, based on a couple of lessons that didn’t impress? You need to work with the teacher but take care not to over-state the hoop-jumping aspect of formal observation.
  • Finally, a teacher who seems to get great results and can nail an Outstanding formal observation but, for one reason or another, generates negative reputational feedback; either parental or student complaints, concerns from colleagues or line managers and so on. Here, you need to be super cautious but it can indicate that day-to-day lessons may not be providing the rich learning experience that they might be. (For example, I can think of a teacher I’ve known who made students copy extensive notes off the board literally every single lesson – oh, except during the OfSTED observation. Seriously!) Of the three, this is the greatest problem. It is hardest to tackle and often suggests some attitudinal issues that are tricky to resolve.

Obviously, falling down in more than one area is where more serious support and intervention are required and ‘capability’ normally only kicks in if you’re worried about all three. You may notice that there are some omissions. Teachers need to do a lot more than meet basic professional standards and follow school policies; it doesn’t matter if they are a ‘great person’ or give a lot of time to extra-curricular activities when you are evaluating their work as a teacher. Some people work incredibly hard and give their all for the students – but that isn’t enough to make them effective. We need to be honest about that. Weak teachers are not bad people and often play an important role in the community…. And the converse is also true!

What does this tell us?

Firstly, it suggests that external accountability processes are flawed in a fundamental way. There is a place for external inspection of lesson quality – but the whole process needs to be more sophisticated, taking much more account of the school’s view of its teachers. Is it possible or meaningful to assess the quality of teaching in a school by seeing 30 or 40 or 50 half-lesson snapshots? It would certainly tell you a lot about the school but it wouldn’t be the full story. I’m confident that I know how good the teaching and teachers are in my school – and I’m not sure that inspection processes allow me to get that across.

Secondly – and here is the main point – it tells us that the 99% of non-observed lessons are the ones we should be more bothered about. So much energy is wasted on hoop-jumping for inspection – but it is all the other lessons that drive excellent assessment outcomes and generate positive feedback from students, parents and everyone else. What we should be doing is worrying less about the snapshots, the one-off showcase circuses, and worrying more about ensuring our routine practice is as strong as it can be. Securing strong assessment outcomes and having lessons that are engaging and inspiring are not mutually exclusive! We can aim at both. To stop the hoop-jumping, we should apply a more critical eye to our own practice, ensuring assessment evidence feeds back into the learning in our lessons. Doing this individually and in our teams allows us to move forward without feeling we are wasting energy on artificial external accountability.

Finally, the message for leaders at any level is that we need to generate a rounded view and be cautious in making partial judgements…. How well do you know your staff? How do you know what you know? Which bits of information do you value over others? Let’s make sure we are acting intelligently, using the most sophisticated tools we have to see things in the round so that we can create the trust culture we need for growing outstanding teachers across our schools.

UPDATE:

Soon after writing this, the Government made an announcement about pay scales being linked to performance. I wrote this article for Labour Teachers: http://www.labourteachers.org.uk/blog/2012/12/18/performance-related-pay-wrong-diagnosis-wrong-solution/

35 comments

  1. “Reputation, reputation, reputation” as some clever person once wrote! Interesting point today by Dylan Wiliam, that the impact made upon students by great teachers extends beyond the typical end of year data too – echoing throughout their education, often throughout their lives. It goes far beyond the singular hoop of the lesson observation or the data and hardens into a great reputation.

    As a subject leader I want to see that commitment from my colleagues to being better every day, because we all can be, and that level of commitment is usually my marker for how good a teacher is and will be. The immediate response to a lesson observation, good or bad, and the subsequent self-reflection is also key. Everyone can screw up a lesson – we all do it – but recognising the ‘why’ is the thing, and taking up deliberate practice to improve upon our pedagogy. That is where the leader helps – coaching, supporting risk taking, innovative and challenging pedagogy. Forget the hoop jumping, build the consistent habits. For me it always comes back to Hattie’s definition of passion and consistent deliberate practice. I re-read those passages over and over.

    Also, your daughter sounds like she could be a sage teacher-coach – a bright future awaits!

  2. Tom – Great post – I entirely agree – we need to take account of a wide range of both quantitative and qualitative data to understand great teaching … And provide staff with the time and trust to work together in their departmental teams so that they share their great ideas with each other …

  3. The problem with your favoured scenario is that, for instance, ineffective or bullying subject leaders, or a few stroppy students with vociferous parents, can skew your view. There is a simple answer. Businesses and commercial HR departments cannot do without it. Proper 360 degree feedback on everyone.

    • I think 360s can work but they are still snapshots. In my experience very negative feedback gets a lot of scrutiny; if it clashes with other information, you tend not to retain it – if it doesn’t, you’d go in and see for yourself. Conversely, positive reputational feedback can make you re-consider a negative view based on other sources of information.

    • Then this would not be a 360 degree review. I think Tom’s post is all about using a variety of sources to judge teacher effectiveness in the “round”. In my experience I have found students to be incredibly perceptive and actually quite reasonable in giving me feedback on learning. Perhaps stroppiness happens when we don’t allow any feedback?

  4. Yes measuring anything in education ain’t easy, especially performance of individuals, whether senior managers, teachers or students. High stakes school/college accountability just complicates it even more. Curiously, universities don’t have such direct pressures placed on them (yet), though some are struggling with the effect of raised tuition fees on entry numbers and the new REF exercise designed to assess the impact of research. I am moving more towards the measuring of softer indicators such as ‘how resilient are they?’ or ‘do they contribute as part of a team?’. Even the CBI is starting to recognise this. DfE will probably be last to do so …

  5. Another thought-provoking piece. The 360 degree view of a teacher’s performance is crucial. Yet Performance Management tends all too often to be a dialogue between two people rather than involving students!

  6. How right you are, Tom!

    The great intangible, unmeasurable, Ofsted-proof quality of a magical, dynamic spark that either happens between student and teacher, or it doesn’t. The frustrating one, which would be good to be able to pin down, is when the spark disappears – sometimes within a lesson – for no apparent reason, but then may return without the teacher doing anything especially different. That’s why it all goes back to a trusting, long-term relationship between everyone in the classroom that really defines the “great” teacher. My only concern is that, if Heads rely TOO heavily on the natural, instinctive or anecdotal gut-reaction of who their “great” teachers are, it becomes almost too subjective, and is then in danger of becoming an old-fashioned popularity contest – a difficult one to quantify (or, indeed, improve on) for the professional.

    GREAT session on “rainforest” at SSAT this afternoon!

    • Being able to understand the complete teacher is key . . . using a variety of data and information to challenge or support; student and parent views, formative and summative assessments, work scrutiny, learning walks, lesson observations, behaviour data, timekeeping and management of school systems all add up to a profile that can be shared with staff. This then forms the qualitative conversations that take place during the appraisal meetings, identifying training and development needs, tied into the whole school and Dept. action planning. Easy, happy Xmas.

      Headteacher in B&H.

  7. […] Know your staff and have a convincing story about them, just in case one of your best teachers has a bad lesson. You need to be able to point to evidence of previous lesson observation records and student outcome data to prove to the inspector that the relatively poor one-off lesson s/he has just observed is not representative of them generally. Tom Sherrington (@Headguruteacher) is excellent on knowing your staff. http://headguruteacher.com/2012/12/04/how-do-i-know-how-good-my-teachers-are/ […]

  8. The 99% that we don’t see is the most important bit – I totally agree. Having recently secured my first head post, to start in September, this is something I’m now constantly thinking of; how I will form an ‘opinion’ on this 99% on all staff within a reasonable time limit. A recent course, effective lesson obs, started to touch on this using the triangulation model of data, obs and other areas – I like the title of ‘knowledge’. I think I’ll use this and aim to add in all the other things that could pop up within ‘knowledge’ – interesting article, thanks.

    • Thanks. It takes time to build up the knowledge. I observed all my staff in my first term..and had a feedback discussion after each lesson. No judgements, just a chat. It really helped to contextualise data and formal observations records from before. Then knowledge builds from there. Good luck in your job.

  9. Great article. Can I ask though… We accrue lots of ‘knowledge’ over time, lots of which is fairly intangible (negative becomes more tangible as it is probably in a frustrated email from a parent or formal complaint from a student), so do you leave it as intangible and have it in the back of your mind or do you seek ways to make it tangible so that it is then used transparently for performance appraisal?

    I always think that one of the big challenges to the lesson observation element is the bias that comes along with it. (More likely to grade a lesson good if you go into the room thinking the teacher is good, for example). I wonder if more people gave more importance to the ‘knowledge’ that we had whether this would make that problem better or worse?

    • Great question. Hard to answer – although I think the evidence is that bias is a strong effect. I take parental complaints very seriously; one is usually just a minor issue easily solved or challenged, but multiple concerns raised tell you something a lesson obs never could. Conversely, it only takes one parent to rave about a teacher to tell you they must be doing something right. I suppose we need to examine our bias at all times.
