National Funding Formula for Schools: A Critique

England’s Department for Education released its long-awaited Phase 2 consultation on a national funding formula on 14 December 2016. I have been heavily involved in determining the funding formula for schools in one of the most diverse English local authorities, so I have some detailed thoughts on this process. As a prelude to my own response to the funding formula consultation, I thought it might be helpful to others to lay out my comments against the paragraphs of the Government’s consultation document, as a “guide to reading”. I have focused on the areas I know best, which relate to funding arriving at schools, rather than funding that would still be distributed to LAs, such as funding for growth, central school services, etc.

The DfE seems to be considering two quite distinct drivers for the decisions being proposed. Many decisions use LA formulae and averages between LAs to drive appropriate funding formulae. Elsewhere, clearly politically driven approaches come through – the drive to increase the proportion of funding going into pupil-led factors, etc. These have been presented in a jumbled fashion that makes it hard to understand the relative impact of each consideration. It would be a relatively straightforward mathematical task to set up and solve an optimisation problem minimising school funding turbulence when moving to a national formula built from these formula elements. It is disappointing that the DfE has not done this, if only to provide an element of transparency in the proposals: deviation from any such minimal-turbulence formula should indicate the presence of evidence being used to drive a decision. Put plainly: changes to school funding should serve either to even up funding between LAs or to achieve a specific outcome.
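
To make this concrete, here is a minimal sketch of what such a minimal-turbulence computation could look like, on entirely made-up data and with just three formula factors (per-pupil, FSM, lump sum); the real exercise would use the DfE’s full factor set and every school’s actual budget:

```python
# A sketch of the minimal-turbulence optimisation, on synthetic data.
# Real inputs would be each school's factor counts and current budget.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n = 500                                             # number of schools
pupils = rng.integers(100, 1500, n).astype(float)   # pupil rolls
fsm = np.round(pupils * rng.uniform(0.05, 0.5, n))  # FSM-eligible pupils
X = np.column_stack([pupils, fsm, np.ones(n)])      # factor matrix

# Pretend current budgets: an old formula plus historical noise.
current = 4_000 * pupils + 1_500 * fsm + 120_000 + rng.normal(0, 50_000, n)

# Choose non-negative factor rates w to minimise ||X w - current||^2,
# i.e. the turbulence of moving every school onto the single formula X w.
res = lsq_linear(X, current, bounds=(0, np.inf))
per_pupil, per_fsm, lump = res.x
print(f"per-pupil £{per_pupil:,.0f}, per-FSM £{per_fsm:,.0f}, lump sum £{lump:,.0f}")
```

Any factor value in the final formula differing substantially from the rates such a computation produces would then need an explicit, evidenced justification.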

I have chosen to blog here about the nuts and bolts, and save a formal consultation response, or any overall conclusions, for a future post. I hope my fellow consultation readers and I can have a conversation about these points in the meantime.

As a result of this decision, the remainder of this post is fairly lengthy, and will only make sense if you read it alongside the DfE’s paper. Happy reading!

The Gory Details

1.12 and again in 2.20. This is flawed reasoning. The DfE is correct that if pupils tend to share deprivation (or any other) characteristics, then allocation of funding through these factors achieves the same end result as allocation through basic per-pupil funding. But this is true both in areas of uniformly high deprivation and in areas of uniformly low deprivation. The appropriate methodology for using LA formulae to determine the desirable size of a deprivation factor would therefore be to look specifically at the formulae of LAs with wide variations in deprivation from school to school, giving low weight to the formulae of LAs where deprivation varies little – not simply to assume that deprivation funding needs to increase. (Which, incidentally, I am not against; I just want to see evidence before making decisions. Typically such evidence comes from boundary discontinuity studies between schools near LA borders. We therefore have a once-in-a-generation opportunity to grasp the nettle and do this properly, before a national funding formula arrives and the discontinuities – and hence the evidence – disappear.)

1.16. The lump sum is a critically important factor in school funding, especially in areas with schools of widely varying size. The DfE claim that they “cannot see any clear patterns in the specific lump sum values.” Yet it is unclear what analysis was conducted to look for one. I would not expect any pattern to emerge from the analysis published, because no correlation between lump sum and school size variability is looked for, nor can such a correlation be extracted from the published LA pro-forma summaries. The DfE does note in this paragraph that a majority of LAs set the same lump sum for secondaries as for primaries, but this could well be only because it was a requirement in the first year of the recent reforms to funding formulae!

2.7 – 2.9 and 2.51-2.56. It is very clear that the DfE has set the maximisation of funding allocated through pupil-led factors as an objective, as evidenced by the title of this section and the explicit references to the aim within these paragraphs. The claim in paragraph 2.8 is that this ensures “funding is matched transparently to need”. I do not believe that maximising funding through pupil-led factors is consistent with matching funding to need. If the Government truly wishes to be fair in its distribution of funding, then schools with similar population characteristics should receive the same disposable per-pupil funding. Unless lump sums are set to reflect the genuine fixed costs of running a school, the Government will in practice create significant inequality of access to education, because pupils attending large schools will attract significantly greater disposable per-pupil funding.
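
A toy calculation illustrates the disposable-funding point. The figures below are purely illustrative – a £110k lump sum, £4k per pupil, and true fixed costs of £150k per school regardless of size:

```python
# Two schools with identical pupil profiles but different sizes,
# under illustrative (assumed) funding rates and fixed costs.
for pupils in (100, 800):
    budget = 110_000 + 4_000 * pupils          # lump sum + per-pupil funding
    disposable = (budget - 150_000) / pupils   # per pupil, after true fixed costs
    print(f"{pupils} pupils: £{disposable:,.0f} disposable per pupil")
```

On these assumptions, the small school’s pupils each attract £350 a year less disposable funding than the large school’s, despite identical characteristics.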

2.13. While I recognise the potential need for an increase in funding when moving from KS1/2 to KS3 and KS4, reception classes are also generally more expensive to run than KS1/2 classes, due to the nature of the curriculum in Reception. By setting a single rate across the primary sector, the funding formula will disproportionately penalise infant schools, where reception classes make up a greater proportion of the children.

2.16. The consultation document claims that “reception uplift” has “a very small impact on schools’ budgets.” I would like to see the evidence behind this conclusion. No doubt the impact on overall school budgets nationally is very small, but I expect that for small schools it could be considerable. Maintained schools have to wait about 7 months before their census data results in funding changes; academies nearly a year. In a school with 100 pupils, having 5 more pupils than expected should rightly result in a significant “reception uplift.”
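
To put a rough number on it – assuming, purely for illustration, a £3,500 basic per-pupil rate:

```python
# Five unfunded extra pupils in a 100-pupil school, at an assumed
# (illustrative) £3,500 per-pupil rate.
per_pupil, roll, extra = 3_500, 100, 5
shortfall = extra * per_pupil
print(f"£{shortfall:,} unfunded – {100 * extra / roll:.0f}% of the pupil-led budget")
```

A £17,500 hole is far from “very small” for a school of that size.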

2.21. No justification is given for the 18% figure for additional needs factors. The text implies that this goes beyond LA averages and is the result of a conscious Government decision to increase AEN funding – such a decision should be evidence-based.

2.26. Some “magic numbers” appear here too: 5.4% for pupil-level deprivation (FSM/FSM6) versus 3.9% for area-level deprivation (IDACI). These numbers appear to have been plucked out of the air. Presumably there is some statistical evidence to support them – it would have been useful to publish it with the consultation.

2.28. This is confused. The claim seems to be that the Ever6 FSM rate should be higher at secondary schools than at primary schools because (i) the overall primary:secondary ratio is less than 1 (so what?) and (ii) the Pupil Premium is weighted the other way round. But the DfE also sets the Pupil Premium rate (and why are these two not combined anyway, since they’re both Ever6-based?) It seems that those setting the Pupil Premium rate want to tug the ratio one way while those setting the funding formula want to pull it back the other. Most odd.

2.33. The IDACI index is being used in a questionable way here. An IDACI index is a probability that a given child, chosen at random from a geographic area, lives in an income-deprived household. It is not a measure of the severity of deprivation. Thus I can see no justification for funding being allocated by IDACI score in anything other than a purely proportional way, e.g. a child living in an area with IDACI score 0.8 should surely attract twice the IDACI funding of a child living in an area with score 0.4. Yet looking at Figure 5, we can see that children in Band C (IDACI 0.35 to 0.4) attract the same funding as those in Band D (IDACI 0.3 to 0.35). This makes no sense to me. As an aside, the banding also makes very little sense – why classify pupils into bands when the IDACI score of each pupil’s address is already known? Just use it directly, avoiding cliff edges of over- and under-funding around the band boundaries.
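
The contrast between the two schemes is easy to see in a toy comparison; the band boundaries and rates below are invented for illustration, not taken from Figure 5:

```python
# A toy comparison of banded versus purely proportional IDACI funding.
# All rates and band boundaries are illustrative, not the DfE's.
def banded(idaci: float) -> float:
    """Band-based allocation: a cliff edge at every boundary."""
    for lower, rate in [(0.4, 600.0), (0.3, 500.0), (0.2, 400.0)]:
        if idaci >= lower:
            return rate
    return 0.0

def proportional(idaci: float, rate_per_unit: float = 1500.0) -> float:
    """Proportional allocation: funding scales with the probability itself."""
    return rate_per_unit * idaci

for idaci in (0.19, 0.20, 0.29, 0.30, 0.39, 0.40):
    print(f"IDACI {idaci:.2f}: banded £{banded(idaci):.0f}, "
          f"proportional £{proportional(idaci):.0f}")
```

The banded scheme jumps by £400 as a pupil’s IDACI crosses 0.20 and is flat within each band; the proportional scheme has no cliff edges at all.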

2.34. In line with my comments on 2.21 and 2.26, the “magic number” here is even more alarming. The DfE have looked at how much LAs allocate to low prior attainment (4.3%) and decided to nearly double this to 7.5%. The only justification given for this radical shift is that KS2 attainment is a good predictor of attainment at secondary school. There are several holes in this argument. Firstly, what is “prior attainment”? For primary schools, this used to be the EYFS points score. Then it became whether a child achieved a Good Level of Development in EYFS. Now it is likely to be based on a totally different on-entry baseline assessment in Reception. None of these are comparable, and the baseline Reception assessments are highly questionable and under review at the moment. Secondly, for secondary schools prior attainment means KS2 results – the same KS2 results that changed so radically in 2016 that we have no idea whether they are likely to be good predictors of secondary school performance. Thirdly, even if we ignore these serious methodological concerns, a correlation between low prior attainment and additional need (which is what this factor really proxies) justifies a factor greater than zero; it does not tell us that the factor should nearly double. Perhaps it should, perhaps it shouldn’t. Why?

2.42. The move to EAL3, i.e. funding attracted by children with English as an Additional Language for the first three years of their education, is an interesting one. Current LA practice varies here. For a fixed pot of EAL funding, there is an argument to be had over whether children would benefit more from considerable funding in year 1, for intensive English tuition to allow them to access the curriculum, than from the same funding “smeared out” over three years at a lower level per year. Once again, it would be useful to see the research suggesting which approach actually reaps the greater benefit before mandating EAL3.

2.43. More magic numbers here: uplift from 0.9% to 1.2%. Why? Evidence?

2.52. This paragraph makes it clear that the proposal is explicitly to starve small schools of funding, by purposely under-funding the lump sum, in order to ensure that they “grow, form partnerships and find efficiencies.” Rather than starving schools of funds, it might be better to fund the lump sum properly while providing time-limited financial incentives for schools to merge where that is possible, as is currently the case.

2.53. There is a methodological error in this paragraph. The DfE state that they looked for a correlation between average school size and lump sum value and found none. Nor should they expect to find one. Imagine LA1, where every school has 100 pupils, and LA2, where every school has 1000 pupils. In each, the allocation of funding between schools is identical whatever lump sum value is used. But now imagine LA3, where half the schools have 100 pupils and half have 1000: here the impact of lump sum changes is dramatic. The correlation to look for is therefore with the variation in school size, not with the average school size.
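
A quick simulation of these three imaginary LAs, each with a fixed pot, makes the point; all numbers are invented:

```python
# The LA1/LA2/LA3 argument: lump sum changes only move money around
# when school sizes vary within an LA. All figures are illustrative.
import numpy as np

def allocate(sizes, total, lump):
    """Split a fixed LA pot: lump sum per school, remainder per pupil."""
    sizes = np.asarray(sizes, dtype=float)
    per_pupil = (total - lump * len(sizes)) / sizes.sum()
    return lump + per_pupil * sizes

for name, sizes in [("LA1", [100] * 10), ("LA2", [1000] * 10),
                    ("LA3", [100] * 5 + [1000] * 5)]:
    total = 5_000 * sum(sizes)   # fixed pot: £5k per pupil on average
    low = allocate(sizes, total, 50_000)
    high = allocate(sizes, total, 150_000)
    shift = np.abs(high - low).max()
    print(f"{name}: max budget shift as lump sum rises £50k to £150k: £{shift:,.0f}")
```

LA1 and LA2 show zero turbulence whatever the lump sum, while LA3’s school budgets shift by tens of thousands of pounds – so any correlation with lump sum choices lives in the size variation, exactly as argued above.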

2.57. A sparsity factor is only a sensible option given the choice to under-fund fixed costs in a lump sum. If these were properly funded, a sparsity factor would be unnecessary.

2.59. The detailed calculations behind the sparsity factor are omitted from the consultation document – instead a link is provided to another document. Its functioning leaves a lot to be desired. For example, primary schools are eligible if they have an average of fewer than 21.4 children per year group and their pupils’ average distance to the next-nearest school is at least two miles. The first criterion is essentially an admission that schools with less than one form entry are under-funded by the national funding formula. The second is more complex but equally serious, especially for small village schools sitting on the edges of towns. Imagine two schools separated by a little more than two miles. It may well be that the area between the two schools is densely populated, while following the line connecting them out into the countryside leads to very sparsely populated areas. The distance for the children at the countryside end might be much more than two miles, yet the average will be less than two, and the school will not attract funding. If distance thresholds must be used, why apply them to the average distance rather than to the number of pupils for whom the distance exceeds the threshold? Finally, these thresholds necessarily create unfairness across the two sides of the threshold. If the lump sum were set to a value reflecting the fixed costs of running a school, none of this tinkering would be necessary.
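
A toy example of the average-distance problem, with made-up distances: suppose 80 of a school’s 100 pupils live in the dense area half a mile from an alternative school, while 20 live six miles out in the countryside:

```python
# Why an *average* distance test can miss a genuinely sparse catchment.
import numpy as np

# Miles from each pupil's home to their next-nearest school (invented).
distances = np.array([0.5] * 80 + [6.0] * 20)

print(f"average distance: {distances.mean():.1f} miles")    # 1.6 -> fails the 2-mile test
print(f"pupils beyond 2 miles: {(distances > 2).sum()}")    # 20 genuinely remote pupils
```

The school fails the two-mile average test despite a fifth of its pupils being genuinely remote; counting pupils over the threshold would capture them.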

2.60. The steep tapering proposed for the primary sparsity factor is grossly unfair to schools with average year group sizes around 25 – they get none of the benefit enjoyed by colleagues with smaller classes, yet they feel the full impact of an under-funded lump sum, which large primaries can safely ignore.

2.61. Even if we accepted the sparsity factor, the maximum value of £25k for primaries, on top of the £110k lump sum, still under-represents the fixed costs of running a school. Meanwhile, the greater £65k value for secondaries seems inconsistent with the proposed simplification of using a single lump sum across all phases.

2.77 – 2.79. This part of the consultation, on the area cost adjustment, refers to a technical note that does not yet appear to have been published on the consultation website. I reserve judgement on this issue, noting that the devil is likely to be in the detail, and that any methodology for taking labour market costs into account needs to avoid cliff edges where schools on one side of an artificial geographical boundary benefit significantly compared to those on the other – an issue the national funding formula was supposed to address.

2.81-2.82. It is of course welcome that any reduction in school budgets is phased in over time so that schools are able to cope with “the pace […] of those reductions.” However, it is not very clear what this means in practice. What does it mean for a school to “cope” with its reduction in funding – does it mean a reduction in expenditure with negligible loss in educational outcomes, or with “acceptable” loss in educational outcomes? If the latter, what is acceptable? If the former, what evidence do we have that the current MFG of -1.5% per annum has negligible impact on educational outcomes?

2.83-2.85. It is far less clear that any kind of “floor” is an equitable way of smoothing change; indeed, it goes against the very aim of an equal funding formula for all. Some schools will receive more funding simply because they historically did, and – from any fixed education funding pot – others will therefore receive less as a result. If a floor is required in order not to damage school performance in the long run, this suggests that the funding reductions implied by the national funding formula are simply unsustainable in those schools. Therefore, instead of clamping the maximum loss at 3%, the DfE should be asking why some schools lose more than 3% and whether this is justifiable for those schools. If not, the formula is at fault and should be changed for all schools, not just those below -3%.
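
The redistribution implicit in a floor can be sketched directly: with a fixed pot, clamping losses at -3% has to be paid for by the schools the formula says should gain. The figures below are invented:

```python
# Toy fixed-pot illustration of a -3% floor. Budgets in £m, invented.
import numpy as np

old = np.array([1.00, 1.00, 1.00, 1.00])   # current budgets
new = np.array([1.20, 1.10, 1.05, 0.80])   # pure-formula budgets

floored = np.maximum(new, 0.97 * old)      # clamp each school's loss at -3%
excess = floored.sum() - new.sum()         # cost of the floor against a fixed pot
payers = floored > 0.97 * old              # recover it pro rata from the rest
floored[payers] -= excess * floored[payers] / floored[payers].sum()
print(np.round(floored, 3), f"pot: {floored.sum():.2f} vs {new.sum():.2f}")
```

The fourth school’s protection is funded by shaving the budgets of the other three – precisely the inequity described above.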

2.86. By maintaining the floor as a per-pupil funding guarantee, the Government could introduce severe differentials between schools. In particular, in areas with high existing lump sums, if a small school grows to a size comparable with a large neighbour, the formerly small school will be very significantly over-funded compared to that neighbour, for no good reason.

3.11. The consultation states here that “we have considered carefully the potential impact on pupil attainment in schools likely to face reductions as a result of these reforms,” yet this analysis is not presented. Instead we are simply told that “we see excellent schools at all points of the funding spectrum,” which is no doubt true but fairly meaningless when it comes to assessing the educational impact. A good starting point would be to look at what correlation exists between disposable income per pupil, i.e. per pupil funding once realistic fixed costs are subtracted, and progress measures at the school.
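
The shape of that analysis is simple to set out. Here is a sketch on synthetic data – real budgets, an evidenced fixed-cost figure, and published progress scores would replace the random numbers:

```python
# Sketch of the suggested analysis: correlate disposable per-pupil
# funding with school progress measures. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
pupils = rng.integers(100, 1500, 300).astype(float)
budgets = 4_200 * pupils + 130_000 + rng.normal(0, 40_000, 300)
FIXED_COSTS = 130_000                           # would need an evidenced estimate

disposable = (budgets - FIXED_COSTS) / pupils   # per pupil, after fixed costs
progress = rng.normal(0, 1, 300)                # e.g. KS2 progress measures

r = np.corrcoef(disposable, progress)[0, 1]
print(f"correlation(disposable funding, progress) = {r:+.2f}")
```

On real data, a materially positive correlation would undermine the claim that funding levels and outcomes are unrelated.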

2016 SATS: Scaled Scores

On the 3rd of June, England’s Department for Education released information about how to turn children’s answers in the KS1 tests into a scaled score. I am profoundly disappointed by the inconsistency between this document and the fanfare surrounding the abolition of levels.

By introducing these scaled scores, the DfE has produced a new level of achievement known in their paper as “the expected standard on the test”. Note that this is quite a different thing to “the expected standard” defined in the Interim Teacher Assessment Frameworks at the End of KS1. Confused? You should be.

When moving from a level-based “best fit” assessment to the new assessment framework (see my earlier blog post for my concerns on this), a key element of the new framework was that a pupil is only assessed as having met the expected standard if they have attained “all of the statements within that standard and all the statements in the preceding standards” (boldface in original). As schools up and down the country struggle to produce systems capable of tracking pupil progress, I’ve been waiting to see how the Department intends to square this assessment approach with testing. Now the answer is in: they don’t.

Let me explain why. To simplify matters, let’s look at a stripped down version of the “expected standard” for KS1 mathematics. Let’s imagine it just consists of the first two statements:

  • The pupil can partition two-digit numbers into different combinations of tens and ones. This may include using apparatus (e.g. 23 is the same as 2 tens and 3 ones which is the same as 1 ten and 13 ones).
  • The pupil can add 2 two-digit numbers within 100 (e.g. 48 + 35) and can demonstrate their method using concrete apparatus or pictorial representations.

Leaving aside the apparatus question (guidance here states that children were not allowed apparatus in the test, so quite how the test is supposed to measure the expected standard is a mystery), the question remains: how do you convert an assessment of each individual strand into an assessment of whether the expected standard as a whole has been met? Let’s assume our test has one question testing each statement. The teacher assessment guidance is straightforward, if flawed: assess each strand individually, and only if all strands have been reached has the “expected standard” been reached. Translated into our imaginary test, this would mean: mark each question individually, and only if every question is above its individual pass mark has the standard been met. Is this the approach taken? Not at all. The approach taken is exactly that used under levels: add up all the marks for all the questions, and if the total is above a threshold then the “expected standard on the test” has been met, i.e. it is a best fit judgement. Yes, that’s right – exactly the kind of judgement railed against by the Department for Education and the Commission on Assessment without Levels. We are back to levels, for better or for worse.
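
The two decision rules are worth stating side by side. A minimal sketch, with invented scores and pass marks:

```python
# "Secure fit" versus compensatory scoring on a toy two-question test.
# Scores, pass marks, and the threshold are all invented.
def secure_fit(scores, pass_marks):
    """Teacher-assessment style: every strand must meet its own bar."""
    return all(s >= p for s, p in zip(scores, pass_marks))

def compensatory(scores, threshold):
    """Test style: only the total matters; strengths offset weaknesses."""
    return sum(scores) >= threshold

pupil = [9, 1]                        # strong on partitioning, weak on addition
print(secure_fit(pupil, [5, 5]))      # False: one strand below its bar
print(compensatory(pupil, 10))        # True: the total reaches the threshold
```

Under the teacher-assessment (“secure fit”) rule the pupil fails on the weak strand; under the test’s compensatory rule the strong strand carries them over the line.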

The total mismatch between the approach enforced in testing and the approach enforced in teacher assessment has obviously been spotted by the DfE because they themselves say:

The tests are also compensatory: pupils score marks from any part of the test and pupils with the same total score can achieve their marks in different ways. The interim teacher assessment frameworks are different.

Frankly, this is a mess.

Key Stage 2 test results are out on the 5th of July. I expect a similar approach then, except that this time the results will form the basis of the school accountability system.

The NAHT is quite right to call for school level data from the flawed 2016 assessments not to be used for external purposes and to question the whole approach of “secure fit”.

Book Review: Out of the Labyrinth: Setting Mathematics Free

This book, by Kaplan and Kaplan, a husband and wife team, discusses the authors’ experience running “The Math Circle”. Given my own experience setting up and running a math circle with my wife, I was very interested in digging into this.

The introductory chapters make the authors’ perspective clear: mathematics is something for kids to enjoy and create independently, together, with guides but not with instructors. The following quote gets across their view on the difference between this approach and their perception of “school maths”:

Now math serves that purpose in many schools: your task is to try to follow rules that make sense, perhaps, to some higher beings; and in the end to accept your failure with humbled pride. As you limp off with your aching mind and bruised soul, you know that nothing in later life will ever be as difficult.

What a perverse fate for one of our kind’s greatest triumphs! Think how absurd it would be were music treated this way (for math and music are both excursions into sensuous structure): suffer through playing your scales, and when you’re an adult you’ll never have to listen to music again.

I find the authors’ perspective on mathematics education, and their anti-competitive emphasis, appealing. Later in the book, when discussing competition, Math Olympiads, etc., they note two standard arguments in favour of competition: that mathematics provides an outlet for the adolescent competitive instinct and – more perniciously – that mathematics is not enjoyable, but since competition is enjoyable, competition is a way to gain a hearing for mathematics. Both perspectives are roundly rejected by the authors, and in any case both are very far removed from the reality of mathematics research. I find the latter perspective arises sometimes in primary school education in England, and I find it equally distressing. There is a third argument, though: some children who don’t naturally excel at competitive sports do excel in mathematics, and competitions provide a route for them to be winners. There is a tension here which is not really explored in the book; my inclination is that mathematics-as-competition diminishes mathematics, and that should competition be needed for self-esteem, one can always find a competitive strategy game in which mathematical thought processes can be used to good effect. However, exogenous reward structures, I am told by my teacher friends, can sometimes be a prelude to endogenous rewards in less mature pupils. This is an area of psychology that interests me, and I’d be very keen to read any papers readers could suggest on the topic.

The first part of the book offers the reader a detailed (and sometimes repetitive) view of the authors’ understanding of what it means to do and to learn mathematics, peppered with very useful and interesting anecdotes from their math circle. The authors take the reader through the process of doing mathematics – analysing a problem, breaking it down, generalising, gaining insight – and describe the mathematical process hidden behind theorems on a page. They are insistent that the only requirement for being a mathematician is to be human, and that by developing analytical thinking skills anyone can tackle mathematical problems: a mathematics for The Growth Mindset, if you will. In the math circles run by the authors, children create and use their own definitions and theorems – you can see some examples of this from my math circle here, and from their math circles here.

I can’t say I share the authors’ view that common mathematical notation lacks beauty, a theme explored in Chapter 5. As a child, I fell in love with the square root symbol, and later with the integral, as extremely elegant forms of notation – I can even remember practising them so they looked particularly beautiful. This is clearly not a view held by the authors! However, their main point – that notation invented by the children will be owned and understood by the children – is well made. One anecdote made me laugh out loud: a child invented the symbol “w” to stand for the unknown in an equation, because the letter ‘w’ looks like a person shrugging, as if to say “I don’t know!”

In Chapter 6, the authors begin to explore the very different ways that mathematics has been taught in schools: ‘learning stuff’ versus ‘doing stuff’, emphasis on theorem or emphasis on proof, math circles in the Soviet Union, competitive versus collaborative, etc. In England, in my view the Government has been slowly shifting the emphasis of primary school mathematics towards ‘learning stuff,’ which cuts against the approach taken by the authors. The recent announcement by the Government on times tables is a case in point. To quote the authors, “in math, the need to memorize testifies to not understanding.”

Chapter 7 is devoted to trying to understand how mathematicians think, with the idea that everyone should be exposed to this interesting thought process. An understanding of how mathematicians think (generally accepted to be quite different to the way they write) is a very interesting topic. Unfortunately, I found the language overblown here, for example:

Instead of evoking an “unconscious,” with its inaccessible turnings, this explanation calls up a layered consciousness, the old arena of thought made into a stable locale that the newer one surrounds with a relational, dynamic context – which in its turn will contract and be netted into higher-order relations.

I think this is simply arguing for mathematical epistemology as akin – in programming terms – to summarising functions by their pre- and post-conditions. I think. Though I can’t be sure what a “stable locale” or a “dynamic context” would be, what “contraction” means, or how “higher-order relations” might differ from first-order ones in this context. Despite the writing not being to my taste, interesting questions are still raised regarding the nature of mathematical thought and how the human mind makes deductive discoveries. This is often contrasted in the text with ‘mechanical’ approaches, without ever exploring the question of artificial intelligence or automated theorem proving, either of which would seem to arise naturally in this context. But maybe I am just demonstrating the computing lens through which I tend to see the world.
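
For what it’s worth, here is the reading I have in mind, as a hypothetical sketch: a function summarised by its contract can be used by higher layers without re-examining its workings, much as a secured layer of mathematical thought is “netted into” the relations above it:

```python
# A contract-style reading of the analogy: callers rely on the pre- and
# post-conditions, never on the internal workings. Purely illustrative.
import math

def isqrt_floor(n: int) -> int:
    """Pre: n >= 0.  Post: result**2 <= n < (result + 1)**2."""
    assert n >= 0, "precondition violated"
    r = math.isqrt(n)
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(isqrt_floor(23))  # 4: the contract is all a caller needs to know
```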

The authors write best when discussing the functioning of their math circle, and their passion clearly comes across.

The authors provide, in Chapter 8, a fascinating discussion of the ages at which they have seen various forms of abstract mathematical reasoning emerge: generalising when one can move through a 5×5 grid, one step at a time, visiting each square only once, at age 5 but not 4; proof by induction at age 9 but not 8. (The topics, again, are a far cry from England’s primary national curriculum.) I have recently become interested in the question of child development in mathematics, especially with regard to number and the emergence of place-value understanding, and I’d be very interested to follow up on whether there is a difference in this between the US, where the authors work, and the UK, what kind of spread is observed in both places, and how age-of-readiness for various abstractions correlates with other aspects of a child’s life experience.
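
Incidentally, the grid-walk generalisation can be checked by brute force; a small sketch (slow, but fine at this size):

```python
# From which squares of a 5x5 grid can a walk start that visits every
# square exactly once, moving one step at a time?
def full_walk_exists(x, y, seen):
    if len(seen) == 25:
        return True
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5 and (nx, ny) not in seen:
            seen.add((nx, ny))
            if full_walk_exists(nx, ny, seen):
                return True
            seen.discard((nx, ny))
    return False

starts = [(x, y) for x in range(5) for y in range(5)
          if full_walk_exists(x, y, {(x, y)})]
print(starts)  # every workable start shares one chessboard colour
```

A walk covering all 25 squares alternates chessboard colours, so it must start on the 13-square majority colour – a pleasing parity argument.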

Other very valuable information includes their experience on the ideal size of a math circle: 5 minimum, 10 maximum, as they expect children to end up taking on various roles “doubter, conjecturer, exemplifier, prover, and critic.” If I run a math circle again, I would like to try exploring a single topic in greater depth (the authors use ten one hour sessions) rather than a topic per session as I did last time, in order to let the children explore the mathematics at their own rate.

The final chapter of the book summarises some ideas for math circle style courses, broken down by age range. Those the authors believe can appeal to any age include Cantorian set theory and knots, while those they put off until 9-13 years old include complex numbers, solution of polynomials by radicals, and convexity – heady but exciting stuff for a nine year old!

I found this book to be a frustrating read. And yet it still provided me with inspiration and a desire to restart the math circle I was running last academic year. Whatever my beef with the way the authors present their ideas, their core love – allowing children to explore and create mathematics by themselves, in their own space and time – resonates with me deeply. It turns out that the authors run a Summer School for teachers to learn their approach, practising on kids of all ages. I think this must be some of the best maths CPD going.

Assessment of Primary School Children in England

Some readers of this blog will know that I am particularly interested in the recent reform of the English National Curriculum and the way that assessment systems work.

This week the Commission on Assessment without Levels produced their long-awaited report, to which the government has published a response. Both can be read on the Government’s website. In addition, the Government has published interim statutory frameworks for Key Stage 1 and Key Stage 2 assessment. I set out below my initial thoughts on what I believe is a profoundly problematic assessment scheme.

Please let me know what you think of this initial view – I would be most interested to hear from you.

1. Aims and Objectives

The chairman of the Commission states in his foreword that we need to move from a past “too dominated by the requirements of the national assessment framework and testing regime to one where the focus needs to be on high-quality, in-depth teaching, supported by in-class formative assessment.” I have no doubt he is right, but I hope to provide an alternative view in this post – that the proposed interim assessment frameworks exacerbate this problem rather than solve it.

2. High Learning Potential

I find it extraordinary that the Commission does not provide insight into how they expect systems of assessment to cater to children who learn at a faster rate than their peers. Considerable space is given – rightly – to those children who learn at a rate slower than their peers, and the DfE response says “we announced the expert group on assessment for pupils working below the level of the national curriculum tests on 13 July 2015. We are keen to ensure that there is also continuity between this group and the Commission.” This is most welcome. Where is the balancing expert group on assessment for pupils working above the level of the national curriculum tests? How will this group be catered to? The only mention of “the most able” in the commission report says “all pupils, including the most able, do work that deepens their knowledge, understanding and skills, rather than simply undertaking more work of the same difficulty or going on to study different content.” The problem is that the statutory assessment frameworks provide no way to differentiate between schools which are working hard to “deepen the knowledge, understanding and skills,” of pupils who are already attaining at the expected standard at Key Stage 2 and schools which are not. This makes this group of pupils very vulnerable, as it dramatically reduces the statutory oversight of their progress.

3. Mastery

The Commission has attempted to grasp the nettle and come up with a definition of what they see as “mastery” – a word much used by purveyors of solutions for the new National Curriculum. The fundamental principles outlined are, I think, uncontroversial: ensuring children know their stuff before moving on. “Pupils are required to demonstrate mastery of the learning from each unit before being allowed to move on to the next” – this is just good practice in any teaching, under the new curriculum or the old, provided children are given enough opportunity to demonstrate mastery. However, they then muddy the waters with this quote about the new national curriculum: “it is about deep, secure learning for all, with extension of able students (more things on the same topic) rather than acceleration (rapidly moving on to new content)”. This all depends on the definition of “rapidly”. Of course children should not be moved on to new content until they are secure with prior content. Of course it might be possible to identify lots more content on “the same topic” without straying into content from a later key stage (though I have yet to see good examples of this – publish them, please!) But let’s be clear: the national curriculum does not say that acceleration is unacceptable. It says “Pupils who grasp concepts rapidly should be challenged through being offered rich and sophisticated problems before any acceleration through new content” and “schools can introduce key stage content during an earlier key stage, if appropriate.” There remains a difference between the national curriculum view, which I support (accelerate only if ready), and the Commission’s perception of the national curriculum (don’t accelerate). Whether this revolves around differing definitions of “accelerate” or a fundamental difference of opinion is less clear, but the issue needs to be addressed.

4. Levels: What’s The Real Issue?

The commission set out in detail what they feel are the problems with assessment by level. In summary, they are:

a. Levels caused schools to prioritise pace over depth of understanding

The Commission reports that, under the old national curriculum, despite a wider set of original purposes, the pressure generated by the use of levels in the accountability system led to a curriculum driven by Attainment Targets, levels and sub-levels, rather than the programmes of study.

This is probably quite true, and it seems it will be at least as true under the proposed new interim teacher assessments: these are dominated by a set of tick-box statements narrower than those found in the published programmes of study, recreating and entrenching the very same problem.

b. Levels used a “best fit” approach, which left serious gaps of understanding often unfilled

If any school used levels alone to pass information about pupil attainment to the next year group’s teacher, then that school was – in my view – woefully negligent in its assessment policy. Of course more information about what pupils are and are not secure in needs to pass between professionals than a best fit level alone; this is a specious argument against levels – it is actually an argument against poor assessment. And I think we can all get behind that.

Now let us consider what happens when we move from a best fit approach to the “lowest common denominator” approach appearing in the recent statutory frameworks: to demonstrate that pupils have met the standard, teachers will need evidence that a pupil demonstrates consistent attainment of all the statements within the standard. Certainly an advantage is that when a pupil moves into the next key stage, a stamp of “met the standard” should give the new teachers a meaningful baseline to work from (though still no understanding of where the pupil exceeds this baseline and by how much; simply reporting this information between key stages would still be woefully inadequate). The disadvantage is likely to fall on children with unusual learning profiles. I was surprised that the Commission’s report actually identifies autistic children: “there were additional challenges in using the best fit model to appropriately assess pupils with uneven profiles of abilities, such as children with autism.” It might certainly be easier for teachers to reach an assessment that an autistic child has “not met the standard” because he or she has a particular blind spot on one part of the curriculum, but it is certainly no more helpful for the teachers this child moves on to to be told “not met” than to be told “Level 6” – arguably much less so. Again, we can agree – I think – that a profile of what children can achieve should be produced alongside summary attainment indicators, whether those are “secondary readiness” or “Level 4b”.

I hope I have outlined above where I think Government thinking is achieving well and where it lags behind a reasonable standard for assessment of our children. This doesn’t stop me coming up with a best fit summary assessment: Requires Improvement.

How (not) to Assess Children

Last month, the UK’s Department for Education launched a formal consultation on replacing statutory assessment in primary schools throughout England. The consultation can be found at https://www.gov.uk/government/consultations/performance-descriptors-key-stages-1-and-2 and runs until the 18th December. Everyone can respond, and everyone should. In my view, this proposal has the potential to seriously damage the education of our children, especially those who are doing well at school.

Currently, English schools report a “level” at the middle of primary school and the end of primary school in reading, writing, maths and spelling, punctuation and grammar. At the end of primary school, typical levels reported range from Level 3 to Level 6, with Level 4 being average. The new proposals effectively do away with reporting a range of attainment, simply indicating whether or not a pupil has met a baseline set of criteria. In my view this is a terrible step backwards: no longer will schools have an external motivation to stretch their most able pupils. In schools with weak leadership and governance, this is bound to have an impact.

I have drafted a response to the consultation document at https://www.scribd.com/doc/246073668/Draft-Response-to-DfE-Consultation.

My response has been “doing the rounds”. Most recently, it was emailed by the Essex Primary Heads Association to all headteachers in Essex. It has also been discussed on the TES Primary Forum and has been tweeted about a number of times.

I am not the only one who has taken issue with this consultation: others include http://thelearningmachine.co.uk/ks1-2-statutory-teacher-assessment-consultation/ and http://michaelt1979.wordpress.com/2014/11/13/primary-teachers-a-call-to-arms/.

Please add your say, and feel free to reuse the text and arguments made in this document.

Review: The Learning Powered School

This book, The Learning Powered School, subtitled Pioneering 21st Century Education, by Claxton, Chambers, Powell and Lucas, is the latest in a series of books to come from the work initiated by Guy Claxton, described in more detail on the BLP website. I first became aware of BLP through an article in an education magazine, and have since found out that one of the teachers at my son’s school has experience with BLP through her own son’s education. This piqued my interest enough to try to find out more.

The key idea of the book is to reorient schools towards being the places where children develop the mental resources to enjoy challenge and cope with uncertainty and complexity. The concepts of BLP are organised around “the 4 Rs”: resilience, resourcefulness, reflectiveness, and reciprocity, which are discussed throughout the book in terms of learning, teaching, leadership, and engaging with parents.

Part I, “Background Conditions”, explains the basis for BLP in schools in terms of both the motivation and the underlying research.

Firstly, the motivation for change is discussed. The authors argue that both national economic success and individual mental health are best served by parents and schools helping children to “discover the ‘joy of the struggle’: the happiness that comes from being rapt in the process, and the quiet pride that comes from making progress on something that matters.” This is, indeed, exactly what I want for my own son. They further argue that schools are no longer the primary source of knowledge for children, who can look things up online if they need to, so schools need to reinvent themselves, not (only) as knowledge providers but as developers of learning habits. I liked the suggestion that “if we do not find things to teach children in school that cannot be learned from a machine, we should not be surprised if they come to treat their schooling as a series of irritating interruptions to their education.”

Secondly, the scientific “stable” from which BLP has emerged is discussed. The authors claim that BLP primarily synthesises themes from Dweck‘s research (showing that people who believe intelligence is fixed are less likely to be resilient in their learning), Gardner (the theory of multiple intelligences), Hattie (emphasis on reflective and evaluative practice for both teachers and pupils), Lave and Wenger (communities of practice, schools as an ‘epistemic apprenticeship’), and Perkins (the learnability of intelligence). I have no direct knowledge of any of these thinkers or their theories, except through the book currently under review. Nevertheless, the idea of school (and university!) as epistemic apprenticeship, and the emphasis on reflective practice, ring true with my everyday experience of teaching and learning. The seemingly paradoxical claim that emphasising learning rather than attainment in the classroom leads to better attainment is backed up with several references, and also agrees with a recent report I have read on the introduction of Level 6 testing in UK primary schools. The authors suggest this is because increased pressure on pupils and a greater “grade focus” lead to shallow learning.

The book then moves on to discuss BLP teaching in practice. A huge number of practical suggestions are made; some that particularly resonated with me included:

    • pupils keeping a journal of their own learning experiences
    • including focus on learning habits and attitudes in lesson planning as well as traditional focuses on subject matter and assessment
    • a “See-Think-Wonder” routine: showing children something, encouraging them to think about what they’ve seen and record what they wonder about

Those involved in school improvement will be used to checklists of “good teaching”. The book provides an interesting spin on this, summarising how traditional “good teaching” can be “turbocharged” in the BLP style: “students answer my questions confidently” becomes “I encourage students to ask curious questions of me and of each other”; “I mark regularly with supportive comments and targets” becomes “my marking poses questions about students’ progress as learners”; “I am secure and confident in my curriculum knowledge” becomes “I show students that I too am learning in lessons”. Thus, in theory, an epistemic partnership is forged.

There is some discussion of curriculum changes to support BLP, which are broadly what I would expect, and a variety of simple scales to measure pupils’ progress against the BLP objectives to complement more traditional academic attainment. The software Blaze BLP is mentioned, which looks well worth investigating further – everyone likes completing quizzes about themselves, and if this could be used to help schools reflect on pupils’ self-perception of learning, that has the potential to be very useful.

In a similar vein, but for school leadership teams, the Learning Quality Framework looks worth investigating as a methodology for schools to follow when asking themselves questions about how to engage in a philosophy such as BLP. It also provides a “Quality Mark” as evidence of process.

Finally, the book summarises ideas for engaging parents in the BLP programme, modifying homework to fit BLP objectives and improve resilience, etc.

Overall, I like the focus on:

  • an evidence-based approach to learning (though the material in this book is clearly geared towards school leaders rather than researchers, and therefore the evidence-based nature of the material is often asserted rather than demonstrated in the text)
  • the idea of creating a culture of enquiry amongst teachers, getting teachers to run their own mini research projects on their class, reporting back, and thinking about how to evidence results, e.g. “if my Year 6 students create their own ‘stuck posters’, will they become more resilient?”

I would strongly recommend this book to the leadership of good schools who already have the basics right. Whether schools choose to adopt the philosophy or not, whether they “buy in” or ultimately reject the claims made, I have no doubt that they will grow as places of learning by actively engaging with the ideas and thinking how they could be put into practice, or indeed whether – and where – they already are.