With the publication of the Education Endowment Foundation report on Cognitive Science Approaches in the Classroom, there’s been a flurry of discussion about what it says and what the implications are for teachers. This is great to see – once you filter out the inevitable over-simplifications and the weird, misplaced triumphalism (‘You see – I told you!’).
I think the report does a really good job of explaining what it set out to do and what it found, with caveats and boundaries aplenty. It’s actually incredibly difficult (and foolhardy) to summarise and simplify because essentially it’s a round-up of a very wide range of specific studies into a selection of specific strategies each of which varies in the extent of the evidence base and the degree of clarity around the conclusions that could be drawn. Even in cases where the results appear more positive, there’s a follow-up list of caveats such as:
- The number of studies was limited.
- Studies were mainly at primary, or mainly at secondary – not across both phases.
- Studies were mainly in maths and science
- Studies were done by researchers rather than regular teachers.
And so on.
This matters. One possible overarching conclusion is that, so far, there aren’t actually that many studies that explore the impact of specific ideas in real classroom contexts. The report summary says as much:
Even approaches with indicative evidence of promise like retrieval practice, spaced practice, and the use of worked examples are, as yet, only supported by a few studies that examine their impact in everyday classroom conditions—delivered by teachers over long periods of time.
There are a lot of reasons for that. One eternal paradox is that, in theory and in practice, strategies are difficult to isolate – they vary in nature and intensity over time; they interact with each other; and multiple contextual factors mediate how any strategy is implemented. There are basically too many complex interacting variables to tie down – and yet this is exactly what a study tries to do: isolate a definably distinct strategy and design a test that provides a meaningful control condition for comparison. In doing so, the interactivity is weakened and/or the impacts are hard to measure. In quite a lot of studies, the extent of the control processes reduces their legitimacy as ‘in practice’ studies because they are performed by researchers in time-limited trials, not by teachers who know their classes and grind out learning gains in the long run.
What it doesn’t mean:
There have already been some rather lazy takes on the report, including taking ‘limited evidence’ to imply that studies were done but weren’t successful. As the report summary states: “It is important to note that a lack of evidence is not the same as evidence that an approach is not successful.” This echoes the comments Hattie made here about Visible Learning: “Visible Learning is a literature review, therefore it says what HAS happened not what COULD happen.”
Another bad take is the idea that the report shows ‘cognitive science doesn’t always work’. I think that indicates a basic lack of research/science literacy, as it lazily conflates a whole framework for understanding learning with a selection of specific strategies. Cognitive science is always at play in any and every learning scenario – it’s just a case of understanding it and harnessing it effectively. What these people really mean is that the strategies some people have developed in response to cognitive science might not always work. It might also indicate that in some scenarios, we’re less clear how cognitive science applies. But it’s fairly nonsensical to talk about cognitive science as if it only applies here and there, somehow outside of certain learning processes.
On the flip-side, as any reading of this report will emphasise loud and clear, there is no support in the evidence for the rigid formulaic check-listing of tightly defined strategies that everyone must do every lesson. Many cogsci sceptics are really just reacting against the imposition of specific protocols that are foisted on teachers under the banner of ‘evidence says’. For example, if you insist all lessons have 3, 4 or 5 parts, always start with a five-a-day knowledge quiz or that all medium-term curriculum plans must provide evidence of interleaving (seen them all) – you’re not doing it right! There is really no evidence that anything must happen every lesson in the realm of teaching strategies. Even if we suggest things should happen frequently or regularly – it still doesn’t mean every lesson, whatever your lesson observation checklist might say. Any leader moving rapidly to impose school-wide mandates about specific forms of retrieval practice, for example, is running on fumes of validity and is only adding fuel to the sceptics’ fire.
What does it mean?
In nearly all my CPD sessions I spend some time talking about the concept of evidence-informed wisdom. Research isn’t telling us what to do. It can’t. However, it can inform our decisions. Teachers have wisdom in bucketloads – from their experience of previous scenarios accumulated over time; from their interactions with the specific students they are teaching now; from their engagement with the curriculum and its modes of assessment. Into that mix come ideas derived from studies into learning, memory and observed classroom practices… and all of that has to be filtered, assimilated and woven into the fabric of our understanding of what we should do in order to secure our students’ learning, today and tomorrow.
In that context, being evidence-informed should probably include the following elements – ideally, all five:
Awareness: Being aware of some key findings from the range of research that exists, including its limits and boundaries. We’re not plunging into our practice based solely on our hunches or some ancient CPD session – we’re aware of the ever-evolving evidence base that might give us additional guidance and insight.
Model: Engaging with a reasonably coherent and communicable model for learning that explains what you’re doing and how learning then happens in your context. We can give a logical rationale for why our students are likely to learn from the activities we engage them in and the resources we provide, without resorting to folk science and reliance on assertions of our powers of intuition.
Intentionality: Approaching the process of teaching with some intentionality, harnessing ideas from cognitive science and other forms of research, taking limitations into account, to eliminate or emphasise practices according to those ideas – supported by the underpinning model or evidence from studies or both.
Ecology: Being conscious that no part of the matrix of ideas and factors we’re engaging with exists in isolation. Ecology suggests an understanding that we’re continually and responsively blending inter-connected elements – the students’ motivation and prior knowledge; their degree of success; their responses to set-backs; the complexity of the material; the intensity and frequency of specific practice tasks; the extent to which we chunk ideas; the precise sequence of instructional elements.
Enquiry: At all times, we embrace an underlying spirit of enquiry, such that we’re asking whether all of our students are learning in the way that we’d hope and, if not, why not. We’re not merely delivering a strategy – we’re always checking to see if it’s working, to the extent that that is possible to do. Importantly, some students learning something isn’t good enough. We need them all learning, so let’s work hard to find out.
Of course there is then a debate about which research we’re aware of and what the model we’re using looks like. However, I think this is where being ‘evidence-informed’ really sits. If you are taking ideas from researchers such as Coe, Shimamura, Willingham, Wiliam, Rosenshine et al and blending them into your own schema for the teaching and learning process, your subsequent decisions are going to be evidence-informed; you don’t need to wait for a large-scale formal study to test out exactly what you’re doing in similar conditions. That could be a very long wait. If, for example, I make a logical case for using strategies for generative learning and the Generate-Evaluate cycle that forms part of Shimamura’s MARGE, checking how well they work, then I’m on secure ground.
Other things you might be deploying could include:
- A good blend of questioning techniques that make everyone think and practise whilst also allowing you to make responsive decisions about the selection of practice tasks and explanatory inputs.
- A varied diet of retrieval practice strategies so that knowledge is tested in a range of ways for a range of purposes, building confidence, fluency and supporting problem-solving, ensuring elements of knowledge are always woven into a wider frame of interconnected ideas.
- Explaining ideas with close attention to students’ prior knowledge and preconceptions and concrete experience so that new ideas can take hold, reshaping their own schema, allowing them to explain concepts in their own terms.
- Modelling and scaffolding writing, weaving together knowledge of various kinds so that students develop ever more complex schema for writing – recognising what excellence looks like and building a repertoire of vocabulary, structure and elements of style – along with the stamina and fluency needed.
- Giving feedback in a way that takes account of the way students respond to it, being positive and specific but attending to students’ motivational responses and building their capacity, their agency, for self-assessment and self-generated feedback.
As a general rule, it’s always highly instructive to run through any learning sequence for any curriculum area, examining our understanding of how, exactly, students will learn from it. What assumptions are we making about their thinking processes, their prior knowledge, the need for practice and consolidation, and so on? What implications does this have for the sequence and structure of lesson elements? This is rich material for team meetings.
Of course, another feature of evidence-informed practice will be the things you don’t do:
- We’ve removed mention of learning styles, recognising that VAK/VARK has been soundly debunked.
- We don’t waste time putting key facts on display in the students’ top-right of the board – because ‘left brain, right brain’ blah blah blah.
- We don’t do lazy group work where two students talk and the others hang on their shirt-tails; we structure collaborative learning intentionally so that goals for the group explicitly take account of every individual student’s learning; roles and goals are clear.
- We don’t do spurious engagement or ‘for the sake of it’ movement activities that divert thinking away from the meaning and flow of the core material in hand; we ensure students have hands-on experiential elements in the curriculum where they are necessary to build secure rich schema for the ideas they’re exploring.
- We don’t allow ourselves and our students to fall for the illusion of knowing that stems from prioritising neat task completion, providing excessive prompts and ever-present scaffolds. We check that students know and can do things themselves – books shut, scaffolds away.
- We don’t bang on about Bloom’s pyramid or ‘regurgitating facts’. We understand that deep knowledge is the goal, that understanding is remembering in disguise and that Bloom never made a hierarchy where ‘mere knowledge’ was at the bottom. We see knowing things as the platform for knowing more things, reading more fluently and opening the doors to new worlds, sparking imagination and curiosity. And we won’t be pleading that boring old cogsci stifles creativity.