The exams debate is far too narrow… as usual. What are the alternatives?

I always think that if you want to criticise an idea, you really should be prepared to offer a better one. So far in the debate stirred up by Michael Gove’s back-door announcement of a return to O Levels, there has been very little in the way of serious evaluation of competing alternatives – and quite a lot of shouting! (And of course all of this is a world away from the discussions we could be having about the features of an outstanding 21st Century education system…in which exams would only play a small part.)

However, let’s just focus on exams for now. Firstly, what is good about GCSEs? They allow students to be assessed in many different subjects, and the A*–G scale covers a wide range of abilities. A collection of GCSEs is certainly a reasonable measure of overall educational success. Importantly, all students can be included in one system. Also, the system is highly flexible and allows for significant breadth and student choice. So far so good.

So what are the problems? Already, the range of qualifications on offer is stretching the one-system credentials of GCSEs. Frankly, it is a bit of a dog’s breakfast. At KEGS we have been drawn to IGCSEs in various areas, not because they are more difficult but because they are better courses. Science Certificates are free from artificial assessments like the ISAs, which bear no relation to any real scientific process. IGCSEs are also free from Controlled Assessment, which kills the love of learning in several areas. They also have content that is often a much better preparation for A Level. I don’t like the rhetoric of “dumbing down”, which is poorly evidenced, unhelpful and generally insulting to the endeavours of teachers and students. However, there is a structural flaw in GCSE exams: in attempting to assess students across a wide ability range, they limit the opportunity for students to demonstrate the full extent of their ability; there is a definite ceiling. When a student gains an A* it is impossible to know whether they could have been challenged still further. Worse than this by far is the fact that, after years of stressing C grades as the standard to reach, grades E–G have been severely devalued. They are ‘fail’ grades – and that has to be wrong.

Is it possible to keep the elements we like whilst addressing the flaws? Various models spring to mind that might help to think about this:

‘Throwing a Ball’: Assessing the ability to throw a ball as far as possible is simple; you learn, practise and then throw – maybe a few times to get a ‘personal best’. The result is simple: the exact distance thrown. There is no limit, no artificial ceiling and, for the least able throwers, no barrier to entering the same measuring system as the best. Hmmm. Can exams be as open-ended as this?

Piano Exams: You learn the piano and, when you are ready, irrespective of age, you enter for an exam at the appropriate level. If the entry level has been judged well you can pass it – but you can also gain a Merit or Distinction for doing especially well. You also get the raw mark for total clarity and, helpfully, some personalised feedback from the examiner. It really is a good system. You can go through the learning process, passing lots of grades and reaching your best level. You need never fail – although, of course, there are standards to reach; it is not a freebie! Is this what exams should be like? This concept is akin to the computer-game experience of working within a level in order to move up to the next one; these ideas are built into the Khan Academy maths programme, which provides incremental challenge and reward on a continuum.
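
To make the mechanics of that concrete, here is a minimal sketch in Python of how a piano-style entry might be reported; the band thresholds are assumed, music-board-style numbers used purely for illustration, not anything official.

```python
# A minimal sketch of the 'piano exam' idea, purely illustrative: the raw mark
# is always reported, and hypothetical thresholds turn it into a band for the
# level entered. Nothing is branded with a public 'fail' grade; a student
# below the pass mark simply re-enters the level when ready.

def grade_entry(raw_mark: int, max_mark: int = 150) -> dict:
    """Return the raw mark plus a band for one level entry (assumed cut-offs)."""
    thresholds = {"Distinction": 130, "Merit": 120, "Pass": 100}  # illustrative only
    band = next((name for name, cut in thresholds.items() if raw_mark >= cut),
                "Not yet passed")
    return {"raw_mark": raw_mark, "max_mark": max_mark, "band": band}

print(grade_entry(124))  # {'raw_mark': 124, 'max_mark': 150, 'band': 'Merit'}
```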

With well-planned measurement points at various stages of schooling, students could list their achievements to date: I have passed Grade 4 Maths and Grade 5 English, etc. For further details, here are my scores… Wouldn’t this be better than getting an E or F at GCSE? Obviously the nature of these tests would need to be developed to make the system manageable; perhaps computer-adaptive tests could be developed for many subjects, giving students a personalised test that adjusts according to the quality of their responses. (The MidYIS/YELLIS baseline tests from the CEM centre in Durham work like this.)
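
For illustration, here is a toy sketch of how a computer-adaptive test might work, assuming a simple ‘staircase’ rule; it is not the actual MidYIS/YELLIS method, just the basic principle of a test that personalises itself as it goes.

```python
import random

# A toy 'staircase' version of an adaptive test, for illustration only; the
# real MidYIS/YELLIS tests are far more sophisticated. The next question's
# difficulty depends on how the student answered the last one, so each
# student ends up with a personalised route through the test.

def adaptive_test(answers_correctly, num_questions: int = 20,
                  min_level: int = 1, max_level: int = 10) -> int:
    level = (min_level + max_level) // 2        # start in the middle
    for _ in range(num_questions):
        if answers_correctly(level):            # ask one question at this level
            level = min(level + 1, max_level)   # step up after a correct answer
        else:
            level = max(level - 1, min_level)   # step down after a wrong one
    return level                                # crude estimate of attainment

# Simulate a student whose 'true' level is about 7.
student = lambda level: random.random() < (0.9 if level <= 7 else 0.2)
print(adaptive_test(student))                   # typically settles around 7-8
```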

A Baccalaureate or ‘Graduation Diploma’: If we assessed students with open-ended examinations and gained a range of scores, it would be possible for students to have an overall score allowing them to meet the standard for graduating at various levels. So, rather than failing, a student might simply graduate at Foundation Level with a score of 326; another might graduate at Distinction Level with a score of 872. Ideally there would be no upper limit to the score. Alongside this we could generate a standard Diploma transcript showing how well the student performed in all the components; this would give the details employers would look at. As with the IB, a total score only really has meaning if you know how it is made up from its components. With this system, all students would be on a continuum; there would be no ceilings and no ‘sheep and goats’ factor.

Another feature of an over-arching structure like this is that students have the opportunity to gain credit for areas of learning beyond traditional knowledge-driven curricula. In the IB, Theory of Knowledge and the Extended Essay are key components, as is the requirement to complete a number of hours of ‘creativity, action and service’. Similarly, in the IB Middle Years Programme, students can engage in an extended personal project that is given value in the system. I like that idea. Coursework became overwhelmingly cumbersome, but a strand of open-ended learning with an extended response within Key Stage 4 sounds good to me. With this kind of structure we could also deal with the difficult issue of parity between academic and vocational learning; students could gain points for different kinds of activities or courses so that they contributed to the final score, without creating a crude apples-equals-oranges equivalence between terminal exams.
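
Here is a rough sketch of how the diploma scoring might work, with entirely invented cut-offs: components feed an open-ended total, the total maps onto a graduation level, and the transcript of components travels with the headline number.

```python
# A rough sketch of the 'graduation diploma' scoring, with invented cut-offs:
# every component feeds an open-ended total, the total maps onto a graduation
# level, and the full transcript of components travels with the headline score.

GRADUATION_LEVELS = [("Distinction", 800), ("Merit", 600),
                     ("Standard", 450), ("Foundation", 250)]  # assumed, highest first

def diploma(components: dict) -> dict:
    total = sum(components.values())             # no upper limit on the total
    level = next((name for name, cut in GRADUATION_LEVELS if total >= cut),
                 "Not yet graduated")
    return {"total": total, "level": level, "transcript": dict(components)}

print(diploma({"Maths": 92, "English": 74, "Science": 81,
               "Personal project": 45, "Service hours": 34}))
# -> total 326, graduating at Foundation Level, with the component breakdown attached
```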

If we are serious about improving our system, we need to work through the details of what a better system might look like. It might be worth considering the idea of constructing something that is really a post-18 diploma, building on A Level and BTEC blocks and absorbing post-16 terminal exams altogether. There is an important debate to be had here too, given that the standard three-A-Level offer is far inferior to the breadth of the IB – and yet schools that offer 4–5+ A Levels as standard, as we do, face significant financial pressure in sustaining the breadth that students thrive on at this level. In the short term, if GCSEs need a quick fix, it might be worth looking at extension questions within the existing exams (at both Higher and Foundation tiers) to ensure we are not placing limits on anyone; I’d also cancel all Controlled Assessment and not replace it with anything. What we don’t want is a simplistic solution that locks in narrowly defined academic elitism for the highest attainers at the expense of everything and everyone else.

This could all be nonsense…but, if so, what would be a better idea?

10 comments

  1. Extremely interesting post. However, I think that the fundamental problem is the way that simple ‘benchmarks’ are created for league tables that then become the de facto pass/fail.

    If we take your piano exam as an example, it would be easy to imagine a league table category being ‘the number of pupils achieving a level 4 or above in the following subjects…’. Ofsted judgements would then be based on how many pupils reached the benchmark or floor standard in relation to national averages, schools would focus on borderline level 3/4 pupils, teachers would focus on formulas and strategies to achieve level 4 etc.

    In my opinion it is more important to change how we assess the effectiveness of schools. We need to find some way of doing it that makes sure we are providing the best for our young people whilst also recognising a much wider range of provision and achievement.

  2. Thanks for the comment. I agree re league tables – any assessment method will lead to rankings and over-simplified evaluations of school effectiveness. Exams don’t define schools or students but I still feel a pass culture could be established; a Grade 1 pass is better than a low-mark Grade 3 fail even if a de facto pass is re-established over time. New algorithms for reliable value-added measures also need to emerge if NC levels are going.

  3. I wholeheartedly concur with MRCJDEAN’s comment. I come from New Zealand, where the primary method of evaluating and comparing schools is the ERO (the Ofsted equivalent). Their inspections are at least as wide-ranging as those here. They don’t ‘grade’ a school; they report in detail on its performance in a wide range of areas (primarily its delivery of the National Curriculum, but also its engagement with the community, the tone of the school, etc.). Like here, those reports are publicly available and widely read. They matter, and are a lot less reductive than the absurdity of 5xA*-C.

    Then again, New Zealand’s national assessment system, the NCEA, left behind norm referencing a long time ago and is a fully articulated standards-based assessment scheme. It works for the schools and the students, offering them tools to provide a nationally recognised qualification for their locally developed learning programmes – rather than working to the de facto syllabus asserted by an examining board, which is essentially what I observe here.

    When it comes to ranking – New Zealand lets PISA take care of that.

    I often wonder why New Zealand, with all its commonalities with the UK, is not cited as an example of a better way to do things. I assume the reason is that New Zealand’s schools and communities have the say. I guess, in the end, the politicians don’t really want that!

    Chris Waugh
    (London Nautical School)

    • Chris, thanks for the comment. I agree with you. Exams should focus on allowing students to gain qualifications and norm referencing is massively flawed. The process of evaluating schools through a narrow analysis of exam outcomes has a distorting effect. If we had more confidence in the fairness and objectivity of the inspection regime we’d be in a better place. Too much of all of this is politically motivated and exam boards have far too much power without the accountability to match.
