In the last week, a series of events and meetings at my school signalled collectively that we’ve turned a corner with our view of some key processes.
- How we evaluate and improve the quality of teaching overall.
- The role of lesson observations.
- The way we regard our action research activities as a feature of self-evaluation and CPD within a broader evidence-based professional culture.
The major changes in our thinking include the following:
- We recognise that conclusions from one-off lesson observations are limited and, therefore, cannot be used in isolation as a meaningful way to evaluate what we do.
- We understand that learning is not always visible or measurable, and this needs to be recognised in the gathering of evidence to inform our evaluation of the impact of the strategies we use as we seek to maximise students’ learning.
- We have the confidence to apply evidence-based thinking to a critical evaluation of the prevailing orthodoxies around ‘good practice’, as determined by OfSTED criteria or by our own ingrained biases. We don’t need to accept the letter of what OfSTED might say about good practice in our context; we might just know better than anyone what works and what doesn’t.
To some extent, these changes have been developing at KEGS for some time. They are signalled in our longitudinal Departmental Review approach, described in this post, and in the way we’ve implemented the legal requirement for PRP. We’ve been working on developing a strong research-engaged culture for several years, and increasingly we’ve been talking about methodology and evidence. Having joined NTEN, we’re finding that Lesson Study is offering us something much deeper than many lesson observations.
As I outline in this post ‘How do I know how good my teachers are?’, the need to triangulate several sources of information over time has been in our thinking for a while. However, until now, we’ve still been wedded to the compliance culture of ‘OfSTED readiness’. No-one wants to get an inspection judgement below the level they feel they deserve; the stakes are too high to mess about with that. We’ve always adapted our school criteria to keep pace with the OfSTED criteria; we’ve been grading lessons, and our lesson observation criteria have included things like evidence of differentiation and (deep breath) rapid and sustained progress – within the lesson.
Well, things are changing.
Interestingly, during the week I had a tutorial session with one of my Y13 students, discussing Thomas Kuhn’s ideas about scientific revolutions and their relevance in the 21st century. This is the subject of his Pre-U Global Perspectives and Research essay. I’d suggest that Kuhn’s Paradigm Shift concept is relevant in education; it could be that we’re on the cusp of a significant system-wide shift right now. Certainly this seems to be true at KEGS.
These were the events of last week:
Monday SLT Meeting:
We decided to abandon grading of lessons. It’s been on the agenda for a while but we’ve finally decided to take the step. Our hesitation has come from thinking that if OfSTED do come and we’re not battle-ready, sharply aware of their criteria, we might drift – or at least some teachers might. However, we’ve all found that the grading has been problematic, not only in having certainty about what grade to give but also in the psychological impact of getting anything less than Outstanding. The setback costs outweigh any benefits of ‘telling how it is’. Even people who do get Grade 1 for a lesson risk thinking they have nothing to improve. Some people are sad that they won’t get the affirmation of a resounding Outstanding for their lessons – but it’s a price worth paying.
We also discussed the issue that, without grades, it is necessary to convey the scale of improvement needed when giving feedback. Two teachers may need to improve their questioning – but one might have a significant need relative to the other, who may just need to tweak here and there to be better still. So, we have decided to work on a language of feedback that makes this clear. We will also accelerate our SDP objective to develop subject-specific lesson observation criteria.
In order to gauge the overall effectiveness of teaching in the school, we will generate data for presentation to Governors and inspectors based on a rounded triangulation of lesson observations, examination outcomes and the more nebulous reputational knowledge that is gathered continually. A teacher who doesn’t appear to turn on the style in lesson observations but has a strong reputational standing and grinds out great results will contribute to our self-evaluation as ‘Outstanding’ overall. We’ll be less impressed with the show-pony who can’t deliver in the long run.
Later on Monday, our colleague Tim Worrall attended the seminar held at Teach First HQ with Prof Rob Coe, Mary Myatt, David Didau et al. His report-back confirmed the evidence base for making this decision. Quite honestly, I don’t know how OfSTED will continue to justify their current approach in the face of this evidence.
See these posts for some of the key source material:
Professor Robert Coe: Classroom Observation: It’s harder than you think
David Didau: The Cult of Outstanding
Tuesday Teaching Staff Meeting:
One colleague described this as the best staff meeting we’ve had for years. Why? Because lots of different people spoke and we were starting to share our practice in a fresh, open, almost confessional manner. It’s no longer unacceptable to stand up and tell people where things have gone wrong; in fact, it’s become just as meaningful as sharing successes.
The meeting consisted of a short intro from me. I set out the wider context of an emerging evidence-based culture in education. I stressed that our Teaching and Learning workshops – our action research groups – will need to report the evidence and methodology as well as the conclusions when we share our work in the next carousel session in May – our fabulous Market Place CPD. I stressed that an inconclusive or negative outcome would be just as important to share.
Our Leading Edge Director, Jane Breen, then set out a range of research methodologies, pointing to the book by Gary Thomas. (I’ve found that the Amazon preview covers quite a lot of material; worth a read.)
We then heard from colleagues reporting some of their experiences with data gathering as part of their CamSTAR research projects:
Delphine reported on her work on group work in French, stressing the need for triangulation. Her conclusions were that students’ perceptions can be wrong: they rated group work above working individually but she found this was based on their sense of enjoyment, not their learning. In the tests, they did as well or better working individually. Her message was to be open about what your data might suggest and to triangulate your professional perception with student feedback and assessment data.
Alex and Jane reported on the process of gathering student insights using post-its. Jane’s had worked well: students wrote responses to key questions about a learning process on post-its during a lesson; they could ask questions and she could direct them to probe further, leading to a useful set of comments. She felt the interaction during the data-gathering process was better than a survey conducted online. Alex had over-cooked it. He’d asked 28 students to generate free-text responses to a detailed set of questions – leading to a massive 28 x 25 pile of post-its! A nightmare to avoid.
Jenny described a fascinating project in history based on the idea that students bring a complex set of misconceptions into the classroom that they then share with each other in discussions. She wanted to capture this and had recorded extended group discussions about a particular topic during a lesson. The problem was that she ended up with 7 recordings (one for each group) lasting 45 minutes each. It was simply too much information. The process had potential and could be refined; the point is to generate data that is manageable without losing value. Recording a couple of groups for a much shorter period would still yield interesting information. (She did also say that there should have been 8 recordings – but one student who had insisted on being in charge of the recording device had messed it up.)
Tim and Paul gave presentations about using Survey Monkey and our own Moodle VLE (KEGSnet) as means of generating surveys to get student feedback. Both systems generate automatic reports, are easy to use and don’t cost anything.
The final 15 minutes of the hour were given to us to gather in our research groups to consider how to use data as we continue with the work. My group is based in the physics department; we are already gathering assessment data, but we felt that a sample of student interviews would help us gain their perspective on the success of our test-retest initiative.
Thursday Departmental Review Meeting with MFL Team:
Finally, a lunchtime meeting with the MFL team was a useful opportunity to pull all of these ideas together. As the new line-manager of this team, I used the meeting to plan the detail of the annual departmental review. It’s an interesting team where each member is regarded as a strong teacher but where the range of teaching styles is quite broad. They don’t always agree on what constitutes ‘good practice’, and, as a non-specialist, my task is to help them all develop without imposing my own biases unduly. We discussed the question ‘how do we know what works and what could be done better?’ – rather than ‘are we meeting the OfSTED criteria?’ It’s a new era.
We talked about gathering a range of information: looking at students’ work, checking progress against the plan of action that follows each set of exam results, conducting a student focus group and also observing lessons. It was a great opportunity to get the spirit of the lesson observations across: they are only one part of the process; we’re not interested in show lessons; we’re not giving grades, but still we’ll need to be clear that some lessons will be better than others; we need to be aware of the limitations of lesson observations and be conscious that our biases might lead us to draw false conclusions.
Thursday: Middle Leadership Time
Instead of a large-scale meeting, we had another session of ‘Middle Leadership Time’. Essentially this allows middle leaders to collaborate with each other in a range of different ways. It’s part of a developing culture of professionalism that we’re becoming more confident with. Ultimately each person is accountable for the outcomes for students; these sessions enable people to work in ways that they feel support their work, free from external directives.
So, there you go. Paradigm shift in action. It’s early days but I’m confident that this is the right direction.