The tweet below is one of my most-liked and most-retweeted tweets ever. It clearly resonates with people, and it's worth exploring the reasons why that might be. Some of the reasons are highlighted in the thread that follows and in some of the replies.
A lot of the issues are covered in a previous post about what I suggest needs to be a paradigm shift in assessment thinking.
Some aspects are covered in my recent blog post for the Guardian Teacher Network.
I’ve also covered a lot of this ground in Chapter 5 of The Learning Rainforest.
Here, I’ll just reiterate why half-termly data drops are a terrible idea:
To be clear, I am talking about the process, common to lots of secondary schools, where teachers are required to feed progress and/or attainment information in a common format into a centralised database.
Distort curriculum flow to generate assessment info.
In order to feed the data machine, teachers are meant to have some form of assessment on which to base the data they are entering. If these come every six weeks, it means tailoring the curriculum material and the flow of lessons so that students can be assessed frequently enough to match the data collection process. This is the tail wagging the dog, especially when it is imposed on all subjects regardless of their natural assessment mode (i.e. maths tests vs English assessment criteria) or how many lessons are taught (i.e. Science vs weekly RE lessons).
Tell us nothing about actual learning needs.
If the data machine says Grade B, Grade 5, Exceeding, Working Towards, Secure, 70% – or anything else – nobody knows what any student needs to work on. So the data itself doesn’t inform teaching. That information is already there in the raw material that informed the data-drop, which means the data-drop itself doesn’t add anything. There is unlikely to be the level of moderation required to know that a 5 in History means the same as it does in Science, or that a 7/‘Secure’ in English means the student is doing better than they are in Maths, where the grade entered is 6 or ‘Emerging’ – or whatever. The macro data sets up false comparisons.
Overshadow formative assessment.
Formative assessment should be happening all the time. This is where the real action is. This is the important stuff, where the learning happens, where the real meaning lies. If we institutionalise data-drops to the level where they matter enough to be done every half-term – with all the managerial hullabaloo that ensues: chasing people up, fussing over missed grades, missed deadlines and how it all looks to parents – then we’re really losing perspective on what counts. Most of the data dropped could be erased with no impact on learning.
Inefficient way to flag concerns.
If I’m concerned about my own students, I will know already; I don’t need to flag it via the sledgehammer of a data system. I just need to act. If I am a leader worried about the performance of a teacher or class somewhere in the school, the centralised data is too processed to tell me the true picture – I need to see the raw data, so I need to keep close to that. If I want to know who is a cause for concern in Year 8 or Year 9, I could ask teachers to tell me their names. In any case, if the data machinery is the means by which a previously unknown concern is flagged (every six weeks), then things are going wrong elsewhere. Termly data would easily be enough to shine a light into the darkest corners if that is needed.
Nobody looks at most of it.
In my experience, in a typical secondary school, the scale of the data being collected is so massive as to be unwieldy and unworkable: 200 students per year group × 10 subjects = 2,000 data points. Multiply by 5 year groups = 10,000. Add an ‘attitude to learning’ grade = 20,000. Add a predicted attainment grade = 30,000. And so on. Light it all up in Red, Amber and Green and it’s a massive matrix of information, each piece of which is already stored somewhere else where it is more useful. How much time do leaders have to explore the patterns to add value to what is already known? Not enough. Most of the data is wasted.
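The back-of-envelope arithmetic above can be sketched in a few lines of code. This is purely illustrative: the figures (200 students per year group, 10 subjects, 5 year groups, one drop per half-term) are the hypothetical numbers from the paragraph, not data from any real school.

```python
# Illustrative scale of a half-termly data-drop system,
# using the hypothetical figures from the text above.
students_per_year_group = 200
subjects = 10
year_groups = 5

attainment_grades = students_per_year_group * subjects * year_groups  # 10,000
attitude_grades = attainment_grades                                   # +10,000 'attitude to learning'
predicted_grades = attainment_grades                                  # +10,000 predicted attainment

per_drop = attainment_grades + attitude_grades + predicted_grades
drops_per_year = 6  # one per half-term

print(per_drop)                   # 30,000 data points per drop
print(per_drop * drops_per_year)  # 180,000 data points per school year
```

Run over a whole school year, that is 180,000 individual entries, each requiring a teacher’s thought, time and mouse clicks.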
Let’s not pretend that data drops are quick and easy. Each entry point requires thought, time and multiple mouse clicks, and engaging with the data takes time too. If you halve the number of data drops, you halve the workload for almost exactly the same meaning. It’s a no-brainer.
And lots of you seem to agree. So, come on people, get it in perspective. If you think your school’s success is driven by high frequency data drops, I’m going to challenge you and say, no, it’s not. That’s an illusion/delusion you have created and your previous assertions of the value of it may be the reason you feel you can’t now change. But saving face isn’t a good reason. Don’t defend it – test it out and see what happens.