# The Growth Mindset

Over the last 5-10 years, the Growth Mindset has become a very popular feature of many schools across England. I have seen it implemented in a couple of schools, and I’m also aware that its originator, Carol Dweck, gave an interview a couple of years ago in which she criticised some implementations as “false growth mindset”.

In order to learn a bit more about the original research conducted by Dweck, I decided over the holiday to read her early book, ‘Self-theories: Their role in motivation, personality, and development’, Psychology Press, 1999. I have no background in psychology and a very limited background in educational theory, but I still want to know how much I can get from this as a parent, as an educator, and as a member of a school board.

As notes to myself, and for others who may be interested, I’m reporting the main take-away messages I got from the book in this post. I do not question the validity of any claims – I am not knowledgeable enough to do so – and I’m also very conscious that I have not had time to follow up the references to read the primary research literature. Instead, I cite below the chapters of the book in which the references can be found, should blog readers be interested in following up more deeply.

### Two Theories of Intelligence

Dweck defines the seeking of challenge, the value of effort, and persistence in the face of obstacles as ‘mastery-oriented approaches’. She aims to knock down several ‘commonly held’ beliefs about what fosters such approaches: they are not more common in students with high ability, they are not necessarily improved by success in tasks, they are not improved by praise of students’ intelligence, and they are not even typically associated with students who have a high confidence in their intelligence. So what are the best approaches to fostering such qualities?

Dweck contrasts two theories of intelligence, which I’ve heard referred to in schools as “the fixed mindset” and “the growth mindset”. In the original research in this book, she refers to these as “The Theory of Fixed Intelligence” / “The Entity Theory” and “The Theory of Malleable Intelligence” / “The Incremental Theory”. In an experimental setting, failure is reported to motivate some students and demotivate others, in an apparently fairly bimodal distribution (Chapter 2).

To my mind, what’s missing from this discussion is a shared understanding of what intelligence actually is (Dweck picks this up much later in Chapter 9, on IQ tests). Intelligence, to me, describes the ability to learn and think – this seems to be a qualitative rather than a quantitative property. We could, of course, talk about speed or depth or some other quantification, and I’m aware that there’s a huge volume of work on this topic, about which I know little (any pointers for good books on this?)

A principled definition of intelligence seems relevant because while I think nobody would say that a person’s knowledge is fixed, there is clearly a difference of opinion over the ability to gain such knowledge and skills – do people differ solely in the rate of development of knowledge / skills, or in the maximum level of knowledge / skills, or something else? And if there are such limits on the rate of change today for Person X, will those limits be different in the future for the same person? If the rate of change can change, can the rate of change of the rate of change change? And so on, ad infinitum. And should we even care?

Chapter 9 discusses pupils’ own views, with Dweck suggesting that entity theorists associate intelligence with inherent capacity or potential, while incremental theorists associate intelligence with knowledge, skills and effort. This actually surprised me – it seems that the perspective of the incremental theorists makes the very concept of intelligence – as distinct from knowledge, skills, and effort – superfluous. But it also seems somewhat inconsistent, because in Chapter 11 we learn that incremental theorists tend not to judge their classmates’ intelligence based on their performance in school. Perhaps the incremental theorists just have a hazier conception of intelligence in the first place?

What’s clear is that Dweck has no truck with those claiming that Growth Mindset means that “everyone can be an Einstein if you put in the effort” – it’s just that she strongly argues that potential cannot be readily measured based on current attainment – that there may well be undiscovered Einsteins in bottom set classes. These are not the same thing at all.

### The Impact of Theories of Intelligence

Dweck then goes on to show that students’ theories of intelligence impact their choice of goals, with students holding the entity theory more likely to choose performance goals, given an option. She shows this to be a causal link, via appropriately designed experiments to temporarily alter students’ theories of intelligence.

Dweck shows that the goals given to students impact on whether they react with a “helpless” or a “mastery” response, even for the same task. Students given a “performance goal” are much more likely to produce a helpless response than those given a “learning goal”. Performance goals are fairly ubiquitous in the English education system, as individual target grades shared with pupils. I wonder whether her observation carries forward into this setting?

Dweck argues that pupils holding an entity model can sabotage their own attainment – withholding effort so that if they do poorly, they can blame their own lack of effort whereas if they do well, they feel validated in their innate intelligence (Chapter 6).

In Chapter 12, Dweck discusses pupils’ views of the belief in the potential to change and improve, and the impact of intelligence models on this belief – which plays out unsurprisingly. I’m more interested in similar beliefs held by teaching staff and how / whether they impact on their practice (does anyone know of any studies on this topic?)

One area where I found the book less precise is whether students can simultaneously be “more of an entity-theorist” in some subjects and “more of an incremental-theorist” in others. Often this was dealt with as if these were universal theories, but my limited experience suggests that students may, for example, hold largely incremental theories in sport while largely entity theories in maths. (Again, anyone know of studies on this topic?)

### Changing Theories of Intelligence

So how do we change mindsets? One method Dweck refers to throughout is to explicitly teach pupils about theories of intelligence. Another is to focus on the type of praise given: to emphasise an incremental model, praise successful strategies used on tasks pupils have clearly found challenging; quick correct answers should be met with an apology for wasting their time, and with more appropriately challenging problems. This is subtly different advice from “praising only effort”, an approach I’ve seen some schools adopt when trying to apply the growth mindset. The best approach seems to be to ensure that the challenge level is appropriate for each pupil, ensuring alignment between effort and outcome. Unfortunately, many primary schools in England are running in exactly the opposite direction at the moment (see my blog post here); I do wonder what impact this is likely to have on the mindset of high-attaining pupils in the English education system.

In Chapter 15, Dweck looks at the kind of criticism and praise that reinforces these differing views. Criticism suggesting alternatives, e.g. “You’ve not quite done that completely. Maybe you should think of another way,” caused a reinforcement of incremental theories, whereas criticisms of the individual, e.g. “I’m disappointed in you”, tended to emphasise entity theories. More strikingly, Dweck argues strongly that positive praise targeted at inherent traits, e.g. “you’re smart!”, “you’re very good at this” or “I’m proud of you” can reinforce the entity theory, whereas praise such as “you’ve found a great way to do that – can you think of any other ways?” reinforces the incremental theory. While the former type of praise is definitely well received, and gives a temporary boost, Dweck argues that it sets pupils up for failure when they encounter difficulties and draw the inverse conclusion – “if I’ve not been successful, then I’m not smart, and you’re not proud of me”.

Finally, we only need to consider changing mindsets once mindsets are embedded. Dweck spends some time (Chapter 14) arguing that the helpless/mastery dichotomy in responses is present even in 3.5-year-olds (where she associates it with a ‘theory of badness’ held by the children, rather than a ‘theory of intelligence’), so the mindset issue seems to be an issue for all phases of education.

### Conclusions

Praise and Criticism. Students receive criticism and praise throughout their learning journey, and trying to change verbal feedback through training of staff is one thing to look at. However, it strikes me that one formalised arena for feedback, shared across parents, children and teachers, is in written “reports home”. I suspect it would be relatively easy to survey these reports for the type of language used, and compare this against the evidence Dweck presents on forms of criticism and praise. I’d be very interested in any schools that may have tried to survey or manage report language to align it with growth mindset principles. This also extends to grades: following Dweck’s results in Chapter 16 on “process praise”, it would seem far better to send home a report saying “worked on some great methods for X” rather than “Grade B”, or “could try alternative strategies for staying focussed” rather than “Grade C”.

Elective Remedial (Catch-up) Classes. Another interesting implication for schools and universities alike is the use of elective remedial classes. Several of Dweck’s studies seem to show that, among pupils who hold an entity theory of intelligence, it’s precisely those who don’t need the remedial classes who are happy to attend them. Institutions should think about how to get around this problem.

School Transitions. There are implications for managing the transition from primary to secondary school, revealed by Dweck’s study of grade school to junior-high transition in the US; perhaps secondaries – jointly with primaries, even – could explicitly teach about theories of intelligence as part of the induction process, like the study at UC Berkeley reported in Chapter 5. I wonder whether any secondaries have tried this?

Mental Health. Mental health in educational settings is a hot topic at the moment. Given Dweck’s theories about self-esteem and its link to mindset, can recent work of schools and universities on mental health be improved by engaging with these ideas? For example, can mental health issues be avoided by trying to foster a growth mindset, and has any significant evidence been collected in this regard?

Grouping by attainment. I have seen many discussions of Growth Mindset that have suggested that grouping pupils by attainment runs counter to the principles outlined here. But interestingly, this is not what Dweck says (Chapter 17). She says that within the entity framework, this might be true, but attainment grouping within the incremental framework is not inherently problematic – it’s just an acknowledgement of fact. I would note that such groups are often referred to in education as “ability groups” rather than “attainment groups” – perhaps reflective of the entity theory. This issue potentially becomes even more acute when considering streaming and/or selective entry testing.

Gifted and Talented Programmes. There appear to be several implications for gifted and talented programmes (G&T) in schools (Dweck deals explicitly with this in Chapter 16, but does not draw out all the conclusions). Firstly, and essentially, we need to ensure all students are challenged, or they will not experience difficulty and effort; at the high-attaining end, this may or may not come from a G&T programme, depending on the pupil and the school approach to differentiation, but it cannot be absent. Secondly, perhaps the name G&T is problematic – Dweck herself says that “the term ‘gifted’ conjures up an entity theory,” and it’s not hard to imagine children in G&T programmes worrying more about losing G&T status than improving their knowledge and skills.

Teacher Mindsets. Although it would seem natural for teachers to have an incremental theory / growth mindset, my observations suggest this is not always the case. I wonder whether any schools have undertaken studies of their own teaching staff in this regard – this could be very interesting.

### Beyond Intelligence

Chapter 10 shows that very similar observations apply to personal and social relationships, and Chapter 13 argues that theories of intelligence are also closely associated with the formation of stereotypes. Chapter 17 describes a link with self-esteem, suggesting that parents and teachers alike can model feeling good about effortful tasks as a route to self-esteem within the incremental model; entity models are also correlated with depression and anxiety (Chapter 7).

Overall, this book has given me plenty to think about as a parent, and a fair bit to think about as an educator too. I’d be really interested in hearing people’s suggestions for more reading on the topics above, especially if any of the studies I suggest above have already been done in the psychology or education literature.

Readers who enjoyed this post might be interested in my other educational posts.

# Teaching Curriculum Beyond Year Group

Ever since the curriculum reforms of 2014, I have lost track of the number of times parents of primary-age children in England have told me that their school claims it is “not allowed” to teach content beyond that set out for the child’s year group in the English National Curriculum.

This myth seems to be so embedded that I have heard it myself from numerous headteachers and teaching staff.

Instead of spending the time explaining the actual situation afresh each time I am asked, I have instead put it down as this brief explanatory blog post. I hope people find it helpful.

Firstly, different schools will have different policies. It may be school policy to do / not to do something with the curriculum, but this is determined by the school alone, acting in line with the statutory framework. For academies, the statutory framework is typically minimal. Maintained schools must follow the statutory National Curriculum, and – in practice – every academy I’ve come across also abides by these regulations.

Presumably, the myth started because the National Curriculum Programmes of Study [Maths, English] are set out as expectations by year group. However, the programmes very clearly state:

“Within each key stage, schools therefore have the flexibility to introduce content earlier or later than set out in the programme of study.”

“schools can introduce key stage content during an earlier key stage, if appropriate.”

(see Section “School Curriculum” in either the Maths or the English Programme of Study.)

This must be read in the context of the broader thrust of the programmes, which state:

The expectation is that the majority of pupils will move through the programmes of study at broadly the same pace. However, decisions about when to progress should always be based on the security of pupils’ understanding and their readiness to progress to the next stage. Pupils who grasp concepts rapidly should be challenged through being offered rich and sophisticated problems before any acceleration through new content. Those who are not sufficiently fluent with earlier material should consolidate their understanding, including through additional practice, before moving on.

So, put simply, schools can certainly teach children content above their year group. But only if they’re ready for it. Common sense, really.

If you really want to know more about my views on education, then please click on the “Education” link on this blog post to find related posts.

# Structures in Arithmetic Teaching Tools

Readers of this blog will know that beyond my “day job”, I am interested in early mathematics education. Partly due to my outreach work with primary schools, I became aware of several tools that are used by primary (elementary) school teachers to help children grasp the structures present in arithmetic. The first of these, Cuisenaire rods, has a long history and has recently come back into vogue in education. They consist of coloured plastic or wooden rods that can be physically manipulated by children. The second, usually known in this country as the Singapore Bar Model, is a form of drawing used to illustrate and guide the solution to “word problems”, including basic algebra. Through many discussions with my colleague, Charlotte Neale, I have come to appreciate the role these tools – and many other physical pieces of equipment, known in the education world as manipulatives – can play in helping children get to grips with arithmetic.

Cuisenaire and Bar Models have intrigued me, and I spent a considerable portion of my Easter holiday trying to nail down exactly what arithmetic formulae correspond to the juxtaposition of these concrete and pictorial representations. After many discussions with Charlotte, I’m pleased to say that we will be presenting our findings at the BSRLM Summer Conference on the 9th June in Swansea. Presenting at an education conference is a first for me, so I’m rather excited, and very much looking forward to finding out how the work is received.

In this post, I’ll give a brief overview of the main features of the approach we’ve taken from my (non educationalist!) perspective.

Firstly, to enable a formal study of these structures, we needed to formally define how such rods and diagrams are composed.

### Cuisenaire Rods

These rods come in lengths of one to ten times a single unit, and are colour-coded. To keep things simple, we’ve focused only on horizontal composition of rods (interpreted as addition) to form terms, as shown in the example below.

In early primary school, the main relationships being explored relating to horizontal composition are equality and inequality. For example, the figure below shows that black > red + purple, because of the overhanging top-right edge.

With this in mind, we can interpret any such sentence in Cuisenaire rods as an equivalent sentence in (first order) arithmetic. After having done so, we can easily prove mathematically that all such sentences are true. Expressibility and truth coincide for this Cuisenaire syntax! Note that this is very different to the usual abstract syntax for expressing number facts: although 4 = 2 + 1 is false, we can still write it down. This is one reason – we believe – they are so heavily used in early years education: truths are built through play. We only need to know syntactic rules for composition and we can make many interesting number sentences.

From an abstract algebraic perspective, closure and associativity of composition naturally arise, and so long as children are comfortable with conservation of length under translation, commutativity is also apparent. Additive inverses and identity are not so naturally expressed, resulting in an Abelian semigroup structure, which also carries over to our next tool, the bar model.

### Bar Models

Our investigations suggest that bar models – an example for $20 = x + 2$ is pictured below – are rarely precisely defined in the literature, so one of our tasks was to come up with a precise definition of bar model syntax.

We have made the observation that there seem to be a variety of practices here. The most obvious one, for small numbers drawn on squared paper, is to retain the proportionality of Cuisenaire. These ‘proportional bar models’ (our term) inherit the same expressibility / truth relationship as Cuisenaire structures, of course, but now numerals can exceed 10 – at the cost of decimal numeration being a prerequisite for their use. However, proportionality precludes the presence of ‘unknowns’ – variables – which is where bar models are heavily used in the latter stages of primary schools and in some secondary schools.

At the other extreme, we could remove the semantic content of bar length, leaving only abutment and the alignment of the right-hand edges as denoting meaning – a type of bar model we refer to as a ‘topological bar model’. These are very expressive – they correspond to Presburger arithmetic without induction. It now becomes possible to express false statements (e.g. the trivial one below, stating that 1 = 2).

As a result, we must be mathematically precise about valid rules of inference and axiom schemata for this type of model, for example the rule of inference below. Note that due to the inexpressibility of implication in the bar model, many more rules of inference are required than in a standard first-order treatment of arithmetic.

The topological bar model also opens the door to many different mistakes, arising when children apply geometric insight to a topological structure.

In practice, it seems that teachers in the classroom informally use some kind of mid-way point between these two syntaxes, which we call an ‘order-preserving’ bar model: the aim is for relative sizes of values to be represented, ensuring that larger bars are interpreted as larger numbers. However, this approach is not compositional. Issues arising from this can be seen when trying to model, for example, $x + y = 3$. The positive integral solutions are either $x = 2, y = 1$, leading to $x > y$, or $x = 1, y = 2$, leading to $y > x$ – so no single order-preserving drawing can represent the equation.
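The non-compositionality can be checked mechanically. A small sketch of my own illustrating the point:

```python
# Positive integral solutions of x + y = 3, and the ordering of the two
# bars that an order-preserving drawing of each solution would require.
solutions = [(x, 3 - x) for x in range(1, 3)]          # (1, 2) and (2, 1)
required = {"x > y" if x > y else "y > x" for (x, y) in solutions}

# The two solutions demand contradictory bar orderings, so no single
# order-preserving drawing can stand for x + y = 3.
assert required == {"x > y", "y > x"}
```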

### Other Graphical Tools and Manipulatives

As part of our work, we identify certain missing elements from first-order arithmetic in the tools studied to date. It would be great if further work could be done to consider drawings and manipulatives that could help plug these gaps. They include:

• Multiplication in bar models. While we can understand $3x$, for example, as a shorthand for $x + x + x$, there is no way to express $x^2$.
• Disjunction and negation. While placing two bar models side-by-side seems like a natural way of expressing conjunction, there is no natural way of expressing disjunction / negation. Perhaps a variation on Peirce’s notation could be of interest?
• We can consider variables in a bar model as implicitly existentially quantified. There is no way of expressing universal quantification.
• As noted above, these tools capture an Abelian semigroup structure. We’re aware of some manipulatives, such as Algebra Tiles, which aim to also capture additive inverses, though we’ve not explored these in any depth.
• We have only discussed one use of Cuisenaire rods – there are many others, as the recent ATM book by Ollerton, Williams and Gregg makes clear – and we feel many of these could also benefit from analysis using our approach.
• There are also many more manipulatives than Cuisenaire, as Griffiths, Back and Gifford describe in detail in their book, and it would be of great interest to compare and contrast these from a formal perspective.
• At this stage, we have avoided introducing a monus into our algebra of bar models, but this is a natural next step when considering the algebraic structure of so-called comparative bar models.
• My colleague Dan Ghica alerted me to the computer game DragonBox Algebra 5+, which we can consider as a sophisticated form of virtual manipulative incorporating rules of inference. It would be very interesting to study similar virtual manipulatives in a classroom setting.

### An Exciting Starting Point

Charlotte and I hope that attendees at the BSRLM conference – and readers of this blog – are as excited as we are about our idea of the potential for using the tools of mathematical logic and abstract algebra to understand more about early learning of arithmetic. We hope our work will stimulate some others to work with us to develop and broaden this research further.

### Acknowledgement

I would like to acknowledge Dan Ghica for reading this blog post from a semanticist’s perspective before it went up, for reminding me about DragonBox, and for pointing out food for further thought. Any errors remain mine.

# Primary Assessment

Readers of this blog will know that I have been critical of the Government’s assessment system for the new National Curriculum in England [1,2,3]. I therefore greet the Secretary of State’s recently launched consultation over the future of primary assessment with a cautious welcome, especially since it seems to follow well from the NAHT’s report on the topic.

### What is Statutory Assessment for?

The consultation document states the aim of statutory assessment as follows:

Statutory assessment at primary school is about measuring school performance, holding schools to account for the work they do with their pupils and identifying where pupils require more support, so that this can be provided. Primary assessment should not be about putting pressure on children.

Firstly, let me lay my cards on the table: I do think that school “performance” deserves to be measured. My experiences with various schools suggest strongly that there are under-performing schools, which are in need of additional support to develop their educational practice. There is a subtle but telling difference between these perspectives: my own emphasises support, while the Government’s emphasises accountability. While some notions of accountability in schools appear uncontroversial, the term has recently become associated with high-stakes educational disruption rather than with improving outcomes for our children. We can definitely agree that primary assessment should not be about putting pressure on children; unfortunately, I don’t believe that the consultation proposals seriously address this question.

### Consultation Questions

In this section, I focus on the questions in the Government’s consultation on which I have a strong opinion; these are by no means the only important questions.

Q2. The EYFSP currently provides an assessment as to whether a child is ‘emerging, expecting [sic] or exceeding’ the level of development in each ELG. Is this categorisation the right approach? Is it the right approach for children with SEND?

Clearly the answer here primarily depends on the use of these data. If the aim is to answer questions like “how well-aligned – on average – are children with the age-related expectations of the early-years curriculum at this school?” then this assessment scheme is perfectly reasonable. Nor does it need to be tuned for children with SEND, who may have unusual profiles – because it’s not about individual pupils – nor indeed for high-attaining children who may be accessing later years of the national curriculum during their reception year. But if it’s about understanding an individual learning profile, for example in order to judge pupil progress made later in the school, then any emerging / expected / exceeding judgement seems far too coarse. It groups together children who are “nearly expected” with those well below, and children who are “just above expected” with those working in line with the national curriculum objectives for halfway up the primary school – or beyond.

Q3. What steps could we take to reduce the workload and time burden on those involved in administering the EYFSP?

Teacher workload is clearly a key issue. But if we are talking seriously about how to control the additional workload placed on teachers by statutory assessment, then this is an indication that our education system is in the wrong place: there should always be next to no additional workload! Assessment should be about driving learning – if it’s not doing that, it shouldn’t be happening; if it is doing that, then it should be happening anyway! So the key question we should be answering is: why has the statutory framework drifted so far from the need to support pupils’ learning, and how can we fix this?

Q5. Any form of progress measure requires a starting point. Do you agree that it is best to move to a baseline assessment in reception to cover the time a child is in primary school (reception to key stage 2)? If you agree, then please tell us what you think the key characteristics of a baseline assessment in reception should be. If you do not agree, then please explain why.

[… but earlier …]

For the data to be considered robust as a baseline for a progress measure, the assessment needs to be a reliable indicator of pupils’ attainment and strongly correlate with their attainment in statutory key stage 2 assessments in English reading, writing and mathematics.

I agree wholeheartedly with the statement regarding the requirements for a solid baseline progress measure. And yet we are being offered up the possibility of baselines based on the start of EYFS. There is no existing data on whether any such assessment strongly correlates with KS2 results (and there are good reasons to doubt it). If the government intends to move the progress baseline from KS1 down the school, then a good starting point for analysis would be the end of EYFS – we should already have data on this, although from the previous (points-based) EYFS profile. So how good is the correlation between end-of-EYFS and KS2 results? Any shift earlier is likely to be worse, so at least this would provide a bound on the quality of any such metric. Why have these data not been presented?

It would, in my view, be unacceptable to even propose to shift the baseline assessment point earlier without having collected the data for long enough to understand how on-entry assessment correlates with KS2 results, i.e. no change should be proposed for another 6 years or so, even if statutory baseline assessments are introduced now. Otherwise we run the risk of meaningless progress metrics, with confidence intervals so wide that no rigorous statistical interpretation is possible.
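To see why a weak baseline correlation produces wide intervals, recall the standard regression identity: if a baseline correlates with KS2 outcomes at $r$, a fraction $\sqrt{1 - r^2}$ of the outcome spread remains unexplained. A quick sketch of the arithmetic – the $r$ values here are hypothetical, chosen purely for illustration, not measured:

```python
import math

def unexplained_sd(r):
    """Fraction of KS2 outcome SD left unexplained by a baseline that
    correlates with outcomes at r (standard regression identity)."""
    return math.sqrt(1 - r * r)

# Even a strong-looking correlation leaves much of the spread unexplained,
# which feeds directly into wide confidence intervals on progress measures.
for r in (0.9, 0.7, 0.5):
    print(f"r = {r}: {unexplained_sd(r):.0%} of outcome SD unexplained")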

Q9. If a baseline assessment is introduced in reception, in the longer term, would you favour removing the statutory requirement for all-through primary schools to administer assessments at the end of key stage 1?

The language is telling here: “to administer assessments.” If this were phrased as “to administer tests,” then I would be very happy to say “yes!” But teachers should be assessing – not examining – pupils all the time, in one form or another, because assessment is a fundamental part of learning. So really the question is the form of these assessments, and how often they should be passed up beyond the school for national comparison. Here the issue is more about the framework of support in which a school finds itself. If a school is “left to its own devices” with no local authority or other support for years (a common predicament at the moment with the abolition of the Education Services Grant by the Government!) then it may well be too long to wait six-and-a-half years before finding out that a school is seriously under-performing. Yet if the school exists within a network of supportive professionals from other schools and local authorities who have the time and resources to dig deeply into the school’s internal assessment schemes during the intervening years, these disasters should never happen. A prerequisite for a good education system is to resource it appropriately!

Q11. Do you think that the department should remove the statutory obligation to carry out teacher assessment in English reading and mathematics at key stage 2, when only test data is used in performance measures?

I think this is the wrong way round. Schools should only be required to report teacher assessment (and it should be “best fit”, not “secure fit”); tests at Key Stage 2 should be abolished. This would be fully consistent with high quality professional-led, moderated assessment, and address the very real stress placed on both children and teachers by high-stakes testing schemes. Remember the consultation document itself states “Primary assessment should not be about putting pressure on children.”

Q14. How can we ensure that the multiplication tables check is implemented in a way that balances burdens on schools with benefit to pupils?

By not having one. This is yet another situation where a tiny sliver of a curriculum (in this case tedious rote learning of multiplication tables) is picked out and elevated above other equally important elements of the curriculum. Boaler has plenty to say on this topic.

Q15. Are there additional ways, in the context of the proposed statutory assessments, that the administration of statutory assessments in primary schools could be improved to reduce burdens?

The best way to reduce the burden on schools seems to be to more closely align formative and summative assessment processes. However, schools have been explicitly encouraged to “do their own thing” when it comes to formative assessment processes. The best way the Government could help here is by commissioning an expert panel to help learn from the best of these experiments, combining what has been learnt with the best international educational research on the topic, and re-introducing a harmonised form of national in-school assessment in the primary sector.

### Best Fit or Secure Fit?

The consultation appears to repeat the Government’s support for the “secure fit” approach to assessment. The document states:

The interim teacher assessment frameworks were designed to assess whether pupils have a firm grounding in the national curriculum by requiring teachers to demonstrate that pupils can meet every ‘pupil can’ statement. This approach aims to achieve greater consistency in the judgements made by teachers and to avoid pupils moving on in their education with significant and limiting gaps in their knowledge and skills, a problem identified under the previous system of national curriculum levels.

The key word here is every. This approach has been one of the key differentiators from the previous national curriculum assessment approach. I have argued before against this approach, and I stand by that argument; moreover, there are good statistical arguments that the claim to greater consistency is questionable. We are currently in the profoundly odd situation where teacher assessments are made by this “secure fit” approach, while tests are more attuned to a “best fit” approach, referred to as “compensatory” in previous DfE missives on this topic.

However, the consultation then goes on to actually suggest a move back to “best fit” for writing assessments. By removing the requirement for teacher assessments except in English, and relying on testing in KS2 for maths and reading, I expect this to be a “victory for both sides” fudge – secure fit remains in theory, but is not used in any assessment used within the school “accountability framework”.

### High Learning Potential

The consultation notes that plans for the assessment of children working below expectation in the national curriculum are considered separately, following the result of the Rochford Review. It is sad, though not unexpected, that once again no particular mention is given to the assessment of children working well above the expectation of the national curriculum. This group of high attaining children has become invisible to statutory assessment, which bodes ill for English education. In my view, any statutory assessment scheme must find ways to avoid capping attainment metrics. This discussion is completely absent from the consultation document.

### Arithmetic or Mathematics?

Finally, it is remarkable that the consultation document – perhaps flippantly – describes the national curriculum as having been reformed “to give every child the best chance to master reading, writing and arithmetic,” reinforcing the over-emphasis of arithmetic over other important topics still hanging on in the mathematics primary curriculum. It is worth flagging that these changes of emphasis are distressing to those of us who genuinely love mathematics.

### Conclusion

I am pleased that the Government appears to be back-tracking over some of the more harmful changes introduced to primary assessment in the last few years. However, certain key requirements remain outstanding:

1. No cap on attainment
2. Baselines for progress measures to be based on good predictors for KS2 attainment
3. Replace high-stress testing on a particular day with teacher assessment
4. Alignment of summative and formative assessment and a national framework for assessment
5. Well-resourced local networks between schools for support and moderation

# Playing with L-Systems

For today’s session of the math circle I jointly run for 5-7 year-olds, we got the kids to play with Lindenmayer Systems (L-Systems for short). L-Systems can be used as compact representations of complex geometric shapes, including fractals. The aim of the session was for children to understand that simple formulae can describe complex geometric objects, building on the intuition – developed in a previous session on symmetry and algebra – that properties of shapes can be described algebraically.

I stumbled across this excellent L-System generator on the web, which was perfect for our needs as we didn’t need to install any software on the school laptops. After illustrating how the Koch Snowflake could be generated, we simply let them loose to experiment, suggesting that for each new formula they first set the number of iterations to 1 before exploring greater depths of iteration. They seemed to really enjoy it. On a one-to-one basis, we discussed why various formulae generated their corresponding shapes, trying to embed the link between the equations and the graphical representation, but the main emphasis was on generating visually pleasing images.

Here are some of the curves they produced. In each case, the caption is of the form: number of iterations, angle, axiom, production rule.

I would have liked to have had the time to discuss in more depth why the curve that appeared to fill the triangle had no visible white space.

Once we had finished, we drew the session together, and I presented a simple L-System for the Sierpinski Triangle, an object they’d seen in a previous session. There were several exclamations of awe, which are always great to hear!
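The string rewriting that drives all of these curves is simple enough to sketch in a few lines. Below is a minimal, hedged sketch – the function name is my own, and the online generator we used may differ in its details – applied to the standard Sierpinski Triangle system, with axiom F-G-G, rules F → F-G+F+G-F and G → GG, and a turn angle of 120°:

```python
# A minimal L-system string rewriter, mirroring the axiom / production-rule
# notation used by typical online generators (function name is my own).

def expand(axiom, rules, iterations):
    """Apply the production rules to every symbol, the given number of times.

    Symbols without a rule (like the turn commands + and -) copy unchanged.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Standard Sierpinski Triangle system: axiom F-G-G, F -> F-G+F+G-F, G -> GG.
sierpinski = expand("F-G-G", {"F": "F-G+F+G-F", "G": "GG"}, 1)
print(sierpinski)  # F-G+F+G-F-GG-GG
```

The resulting string is then interpreted as turtle-graphics commands (draw forward for F and G, turn by the angle for + and -), which is where the geometry appears.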

# National Funding Formula

This post contains my personal response to the DfE’s second round consultation on the national funding formula for schools. Individuals, organisations, schools, etc., should feel free to reuse my response. My response is based on my earlier – and lengthier – analysis of the DfE’s proposals, which goes into much more depth. Those interested beyond the specific questions being asked by the DfE should consult the earlier post too.

Please note that consultation responses need to be submitted by 22nd March 2017 and can be done online. Please submit a response, even a partial one. This issue is too important to be ignored!

Q1. In designing our national funding formula, we have taken careful steps to balance the principles of fairness and stability. Do you think we have struck the right balance?

No.

The introduction of a funding floor is a fundamentally unfair approach. The stated purpose of the national funding formula was to provide fairness and end the “postcode lottery”. A funding floor, as envisaged by the consultation document, entrenches the postcode lottery in the current funding environment rather than eliminating it.

Q2. Do you support our proposal to set the primary to secondary ratio in line with the current national average?

Yes.

In the absence of further evidence, this seems like a sensible approach. However, I am disappointed that further evidence has not been sought, e.g. a study of relative primary/secondary performance across local authorities with different ratios in place. Once the NFF is introduced, such studies will no longer be possible, so the Government is missing an opportunity to base these decisions on hard evidence.

Q3. Do you support our proposal to maximise pupil-led funding?

No – you should increase school-led funding compared to the current national average

This approach fundamentally ignores the different shape and makeup of schools. It would only be a reasonable approach if costs were all equally pupil-led. Instead, I support the principle of equalising disposable income per pupil. Once fixed costs are taken into account, this principle is opposed to that of maximising pupil-led funding.

Q4. Within the total pupil-led funding, do you support our proposal to increase the proportion allocated to the additional needs factors?

No – allocate a lower proportion to additional needs.

I would have said “don’t know” if given that option here, because the DfE has not presented any evidence that increasing this proportion is an appropriate decision. Equally, I do not have evidence that reducing it would be a good decision. Given local authorities’ central role in understanding local additional educational needs, in the absence of further evidence I believe LA averages should be used, which corresponds to a lower proportion than proposed.

Q5. Do you agree with the proposed weightings for each of the additional needs factors?

Q5a. Deprivation – Pupil Based

No – allocate a higher proportion.

A number of studies, e.g. [1], have shown a stronger link between pupil-based deprivation indices and educational disadvantage than between area-based deprivation indices and educational disadvantage. Once again, a boundary discontinuity study would have been helpful, and I am disappointed that this work has not been undertaken.

[1] C. Crawford and E. Greaves, “A comparison of commonly used socio-economic indicators: their relationship to educational disadvantage and relevance to Teach First,” IFS Report R79, 2013.

Q5b. Deprivation – Area Based

No – allocate a lower proportion

See answer to “Deprivation – Pupil based”

Q5c. Low Prior Attainment

No – Allocate a lower proportion

There is simply no justification given in the consultation document for nearly doubling the funding going through this factor. It is frankly shocking that such radical changes can be proposed with no evidence presented in their favour.

No matter what proportion is decided upon for the initial implementation of the NFF, the Government should commit that in the future, should prior attainment metrics rise nationally, any funding lost to schools through the low prior attainment factor should be recycled into the APWU factor.

Q5d. English as an Additional Language

No – allocate a lower proportion

LAs have chosen to allocate an average of 0.9%. The proposal is to increase this to 1.2%. No evidence is presented to support this change, therefore I cannot support it.

On a related matter, for a fixed pot of EAL funding, there is an argument to be had over whether children would benefit more from considerable funding in year 1 for intensive English tuition to allow them to access the curriculum, rather than more “smeared out” three year funding at a lower level per year. Once again, it would be useful to see the research that suggests that one or the other approach actually reaps the greatest benefit before mandating EAL3.

Q6. Do you have any suggestions about potential indicators and data sources we could use to allocate mobility funding in 2019-20 and beyond?

None

Q7. Do you agree with the proposed lump sum amount of £110,000 for all schools?

7a. Primary

No – Allocate a higher amount

In order to ensure the principle of equal disposable income per pupil, the purpose of the lump sum should not be to “contribute to the costs that do not vary with pupil numbers” but rather to “be a genuine reflection of the costs that do not vary with pupil numbers”.

7b. Secondary

No – Allocate a higher amount

See answer to “Primary”

Q8. Do you agree with the proposed amounts for sparsity funding of up to £25,000 for primary and up to £65,000 for secondary, middle and all-through schools?

Q8a. Primary

No – Allocate a lower amount

With a properly funded lump sum that reflects the costs that do not vary with pupil numbers, there is no need for a sparsity factor. The need for a sparsity factor is an admission that pupils in “non-sparse” areas are being deprived of appropriate disposable funding.

Q8b. Secondary

No – Allocate a lower amount

See answer to Primary

Q9. Do you agree that lagged pupil growth data would provide an effective basis for the growth factor in the longer term?

No comment

Q10. Do you agree with the principle of a funding floor?

No

The introduction of a funding floor defeats the stated purpose of the NFF. It reintroduces – and, worse, codifies – a postcode lottery based on historical funding rates.

Q11. Do you support our proposal to set the funding floor at minus 3%?

No

I do not support the funding floor.

Q12. Do you agree that for new or growing schools (i.e. schools that are still filling up and do not have pupils in all year groups yet) the funding floor should be applied to the per-pupil funding they would have received if they were at full capacity?

No

I do not support the funding floor.

Q13. Do you support our proposal to continue the minimum funding guarantee at minus 1.5%?

No – the minimum funding guarantee should be higher (i.e. restrict losses to less than 1.5% per pupil in any year)

I strongly support the existence of an MFG as a way of phasing in funding changes. However, in the present financial climate an MFG of -1.5% is problematic, coming on top of other pressures such as the radical reduction in the Education Services Grant to local authorities and the apprenticeship levy.

Q14. Are there further considerations we should be taking into account about the proposed schools national funding formula?

As a general principle, deviations from a move to average values across LA funding formulae should be justified and evidence-based. I have provided a lot more questions for consideration in a blog post at https://constantinides.net/2016/12/16/national-funding-formula-for-schools-a-critique/

Q15. Do you agree that we should allocate 10% of funding through a deprivation factor in the central school services block?

I am disappointed that the options in this questionnaire do not allow me to state “not sure”, because – once again – no evidence has been provided to justify the figure of 10%. What I can say confidently is that the overall level of the central school services block is far too low. In effect, schools are seeing a significant cut to their funding because they will need to fund services previously provided through the Education Services Grant.

Q16. Do you support our proposal to limit reductions on local authorities’ central school services block funding to 2.5% per pupil in 2018-19 and in 2019-20?

No – limit reductions to less than 2.5% per pupil per year

In line with my school-level comments on MFG, it is important to have transitional arrangements in place. Given the radical cut to the Education Services Grant, it is not clear why the proposed limit of 2.5% is so much greater than the school MFG rate of -1.5%.

Q17. Are there further considerations we should be taking into account about the proposed central school services block formula?

# National Funding Formula for Schools: A Critique

England’s Department for Education released its long-awaited Phase 2 of the consultation on a national funding formula on the 14th December 2016. I have been heavily involved in determining the funding formula for schools in one of the most diverse English local authorities, and so have some detailed thoughts on this process. As a prelude to my own response to the funding formula consultation, I thought it might be helpful to others to lay out my comments against the paragraphs of the Government’s consultation document as a “guide to reading”. I have focused on the areas I know best, which relate to funding arriving at schools rather than funding still proposed to be distributed via LAs, such as funding for growth, central school services, etc.

The DfE seems to be considering two quite distinct drivers for the decisions being proposed. Many decisions use LA formulae and averages between LAs to derive appropriate funding formulae. Elsewhere, clearly politically-driven approaches come through – the drive to increase the proportion of funding going into pupil-led factors, etc. These have been presented in a jumbled fashion that makes it hard to understand the relative impact of these considerations. It would be a relatively straightforward mathematical task to set up and solve an optimisation problem to minimise school funding turbulence when moving to a funding formula using these formula elements. It is disappointing that the DfE has not done this to at least provide an element of transparency in the proposals, as deviation from any such minimal-turbulence formula should indicate the presence of evidence being used to drive a decision. Put plainly: changes to school funding should be either to even up funding between LAs or to achieve a certain outcome.
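To give a flavour of such an optimisation – with entirely invented data, fitting only a single per-pupil rate while holding the lump sum fixed – a least-squares set-up has a simple closed form:

```python
# Sketch of a turbulence-minimising fit (all figures invented): choose the
# per-pupil rate so the new formula tracks current budgets as closely as
# possible in the least-squares sense. A real exercise would fit all the
# formula elements simultaneously.

schools = [(100, 560_000), (400, 1_700_000), (900, 3_800_000)]  # (pupils, current budget)
LUMP_SUM = 110_000  # formula element held fixed in this sketch

# Minimise sum_i (LUMP_SUM + rate * pupils_i - budget_i)^2 over rate;
# setting the derivative to zero gives the closed form below.
numerator = sum(p * (b - LUMP_SUM) for p, b in schools)
denominator = sum(p * p for p, _ in schools)
rate = numerator / denominator

# Total squared deviation ("turbulence") at the optimal rate.
turbulence = sum((LUMP_SUM + rate * p - b) ** 2 for p, b in schools)
```

Any proposed formula could then be compared against this minimal-turbulence baseline: the gap between the two is a measure of how much of the change is a deliberate policy choice rather than harmonisation.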

I have chosen to blog here about the nuts and bolts, and save a formal consultation response, or any overall conclusions, for a future post. I hope my fellow consultation readers and I can have a conversation about these points in the meantime.

As a result of this decision, the remainder of this post is fairly lengthy, and will only make sense if you read it alongside the DfE’s paper. Happy reading!

### The Gory Details

1.12 and again in 2.20. This is flawed reasoning. The DfE is correct that if pupils tend to share deprivation (or any other) characteristics, then allocation of funding through these factors achieves the same end result as allocation through basic per-pupil funding. But this is true either in areas of high uniform deprivation or in areas of low uniform deprivation. As a result, the appropriate methodology for using LA formulae to determine the desirable size of a deprivation factor would be to look specifically at the formulae of LAs with wide variations in deprivation from school to school, providing a low weighting to the formulae of LAs with less varying deprivation – not to simply assume that deprivation funding needs to increase. (Which, incidentally, I am not against; I just want to see evidence before making decisions. Typically such evidence comes from boundary discontinuity studies between schools near the borders of LAs. We therefore have a once-in-a-generation opportunity to grasp the nettle and do this properly, before a national funding formula arrives and discontinuities – and hence evidence – disappear.)

1.16. The lump sum is a critically important factor in school funding, especially in areas with schools of widely varying size. The DfE claim that they “cannot see any clear patterns in the specific lump sum values.” Yet it is unclear what analysis has been conducted to discern a pattern. I would not expect any pattern to emerge from the analysis published, because no correlation is looked for between lump sum and school size variability. Nor can this be extracted from the published LA pro-forma summaries. The DfE does note a pattern in this paragraph that a majority of LAs set the same lump sum for secondaries as for primaries, but this could well be only because it was a requirement for the first year of the recent reforms to funding formulae!

2.7 – 2.9 and 2.51-2.56. It is very clear that the DfE has set the maximisation of funding allocated through pupil-led factors as an objective, as evidenced by the title of this section and the explicit references to the aim within the paragraphs. The claim in Paragraph 2.8 is that this is to ensure that “funding is matched transparently to need”. I do not believe this maximisation of funding through pupil-led factors is consistent with matching funding to need. If the Government truly wishes to be fair in its distribution of funding, then with similar school population characteristics, every school should receive the same disposable per pupil funding. Unless lump sums are set to reflect the genuine fixed costs of running a school then in practice the Government will be creating significant inequality of access to education by ensuring that pupils attending large schools attract a significantly greater disposable per pupil funding.

2.13. While I recognise the potential need for an increase in funding when moving from KS1/2 to KS3 and KS4, reception classes are also generally more expensive to run than KS1/2 classes due to the nature of the curriculum in R. By setting a single rate across the primary sector, the funding formula will differentially impact negatively on infant schools, where reception classes make up a greater proportion of the children.

2.16. The consultation document claims that “reception uplift” has “a very small impact on schools’ budgets.” I would like to see what evidence has been used to come to this conclusion. No doubt it has a very small impact on overall school budgets nationally, but I expect that for small schools it could have a considerable impact. Maintained schools have to wait for about 7 months before their census data results in funding changes; academies for nearly a year. In a school with 100 pupils, having 5 more pupils than expected should rightly result in a significant “reception uplift.”

2.21. No justification is given for the figure of 18% given for additional needs factors. The text implies that this goes beyond LA averages and is a result of a conscious Government decision to increase AEN funding – such a decision should be evidence based.

2.26. Some “magic numbers” appear here also: 5.4% for pupil-level deprivation (FSM/FSM6) versus 3.9% for area level (IDACI). These numbers appear to have been plucked out of the air. Presumably there is some statistical evidence to support these figures – it would have been useful to have this sent out with the consultation.

2.28. This is confused. The claim seems to be that Ever6 FSM rate should be higher at secondary schools than primary schools because (i) the overall primary:secondary ratio is less than 1 (so what?) and (ii) the Pupil Premium is the other way round. But the DfE also sets the pupil premium rate (and why are these two not combined anyway since they’re both Ever6 based?) It seems that those setting the Pupil Premium rate want to tug the ratio one way and those setting the funding formula want to pull it back the other way. Most odd.

2.33. The IDACI index is being used in a questionable way here. An IDACI index is a probability that a given child, chosen at random from a geographic area, lives in an income-deprived household. It is not a measure of the severity of deprivation. Thus I can see no justification for funding being allocated by IDACI score in anything other than a purely proportional way, e.g. a child living in an area with IDACI score 0.8 should surely attract twice the IDACI funding of a child living in an area with IDACI score 0.4. Yet looking at Figure 5, we can see that children in Band C (IDACI 0.35 to 0.4) attract the same funding as those in Band D (IDACI 0.3 to 0.35). This makes no sense to me. As an aside, the banding also makes very little sense – why classify pupils into bands if you already know the IDACI score of that pupil’s address: just use it directly, avoiding cliff edges of over/under-funding around the band’s boundaries.
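The cliff-edge point can be made concrete with a toy calculation. This is purely my own illustration – the per-unit rate and band width below are invented, not the DfE’s Figure 5 values:

```python
# Toy comparison of proportional vs banded IDACI funding (invented numbers).

RATE = 1000  # hypothetical pounds of funding per unit of IDACI score

def proportional_funding(idaci_score):
    # Fund in direct proportion to the probability of income deprivation:
    # a child at IDACI 0.8 attracts exactly twice that of a child at 0.4.
    return RATE * idaci_score

def banded_funding(idaci_score, band_width=0.05):
    # Banding funds every child in a band at the band's floor rate,
    # creating cliff edges at the band boundaries.
    band_floor = (idaci_score // band_width) * band_width
    return RATE * band_floor

assert proportional_funding(0.8) == 2 * proportional_funding(0.4)

# Children at IDACI 0.31 and 0.34 sit in the same band and attract
# identical funding; a child at 0.36 falls the other side of the cliff edge.
assert banded_funding(0.31) == banded_funding(0.34)
assert banded_funding(0.36) > banded_funding(0.34)
```

Using the raw score directly, as in `proportional_funding`, makes the banding machinery and its boundary effects unnecessary.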

2.34. In line with my comments on 2.21 and 2.26, the “magic number” here is even more alarming. The DfE have looked at how much LAs allocate to low prior attainment (4.3%) and decided to nearly double this to 7.5%. The only justification given for this radical shift is that KS2 attainment is a good predictor for attainment at secondary school. There are several holes in this argument. Firstly, what is “prior attainment”? For primary schools, this used to be EYFS points scores. Then it became whether a child achieved a Good Level of Development in EYFS. Now it is likely to be based on a totally different on-entry baseline assessment in Reception. None of these are comparable, and the baseline Reception assessments are very much questionable and under review at the moment. Secondly, for secondary schools prior attainment means KS2 results – the same KS2 results that changed so radically in 2016 that we have no knowledge of whether they are likely to be good predictors of secondary school performance. Thirdly, even if we ignore these serious methodological concerns, a correlation between low prior attainment (arguably SEN would be the better measure) and later attainment justifies a factor greater than zero; it is not a justification for nearly doubling that factor. Perhaps it should be doubled, perhaps it shouldn’t. Why?

2.42. The move to use EAL3, i.e. funding is attracted for children with English as an Additional Language for the first three years of their education is an interesting one. Currently LA practice varies here. For a fixed pot of EAL funding, there is an argument to be had over whether children would benefit more from considerable funding in year 1 for intensive English tuition to allow them to access the curriculum, rather than more “smeared out” three year funding at a lower level per year. Once again, it would be useful to see the research that suggests that one or the other approach actually reaps the greatest benefit before mandating EAL3.

2.43. More magic numbers here: uplift from 0.9% to 1.2%. Why? Evidence?

2.52. This paragraph makes it clear that the proposal is explicitly to starve small schools of funding, by purposely under-funding the lump sum, in order to ensure that they “grow, form partnerships and find efficiencies.” Rather than starving schools of funds, it might be better to fund the lump sum properly while providing time-limited financial enticements for schools to merge where that is possible, as is currently the case.

2.53. There is a methodological error in this paragraph. They state that they looked for a correlation between average school size and lump sum size and found none. Nor should they expect to find one. Imagine LA1 with schools each of 100 pupils and LA2 with schools each of 1000 pupils. There will be no difference in allocation of funding between schools within either LA, no matter what lump sum value is used. However, if we now imagine LA3, where half the schools have 100 pupils and half have 1000 pupils, the impact of lump sum changes will be dramatic. So the correlation should be with the variation in school size, not with the average school size.
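The LA1/LA2/LA3 thought experiment can be made concrete with a toy calculation. All figures are invented for illustration; the £110,000 lump sum echoes the consultation’s proposed value, and the per-pupil rate is my own:

```python
# Toy illustration (invented figures): within an LA whose schools are all
# the same size the lump sum redistributes nothing per pupil, however large
# or small the schools are; where sizes vary, it redistributes a lot.

def per_pupil_funding(pupils, lump_sum=110_000, rate=4_000):
    """Per-pupil funding under a simple lump-sum-plus-rate formula."""
    return (lump_sum + rate * pupils) / pupils

# LA1 (all schools of 100 pupils) and LA2 (all of 1000): per-pupil funding
# is uniform within each LA, so no correlation with *average* size appears.
gap_uniform = per_pupil_funding(100) - per_pupil_funding(100)

# LA3 (half 100-pupil schools, half 1000): the lump sum opens a large
# per-pupil gap between neighbouring schools.
gap_mixed = per_pupil_funding(100) - per_pupil_funding(1000)

print(gap_uniform, gap_mixed)  # prints 0.0 990.0
```

The £990 per-pupil gap in LA3 is exactly the kind of signal that only shows up when the analysis is correlated against within-LA size variation.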

2.57. A sparsity factor is only a sensible option given the choice to under-fund fixed costs in a lump sum. If these were properly funded, a sparsity factor would be unnecessary.

2.59. The detailed calculations for the functioning of the sparsity factor are omitted from the consultation document – instead a link is provided to another document. The functioning leaves a lot to be desired. For example, primary schools are eligible if they have an average of less than 21.4 children per year group and the average distance to pupils’ next-nearest school is at least two miles. The first of these criteria is essentially an admission that schools with less than one form of entry are underfunded under the national funding formula. The second is more complex but equally serious, especially for small village schools sitting on the edges of towns. Imagine two schools, separated by a little more than two miles. It may well be that between the two schools is an area of dense population, while following the line connecting them out into the countryside leads to very sparsely populated areas. The distance for the children at the countryside end might be much more than two miles, yet the average will be less than two, and the school will not attract funding. If thresholds of distance must be used, why is the calculation based on average distance rather than the number of pupils for whom that distance exceeds the threshold? Finally, these thresholds necessarily lead to unfairness across the two sides of the boundary. If the lump sum were set to a value reflecting the fixed costs of running a school, none of this tinkering would be necessary.
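A toy calculation (with invented distances) shows how the edge-of-town scenario defeats an average-distance threshold:

```python
# Hypothetical village school on the edge of a town: 60 children live in
# the dense area close to the neighbouring school, 40 live out in the
# countryside (all distances invented for illustration).

distances = [0.5] * 60 + [4.0] * 40  # miles to each child's next-nearest school

average_distance = sum(distances) / len(distances)
children_beyond_two_miles = sum(1 for d in distances if d > 2.0)

# The average falls below the two-mile threshold, so the school attracts no
# sparsity funding, despite 40 children being four miles from an alternative.
print(average_distance, children_beyond_two_miles)  # prints 1.9 40
```

Counting the pupils beyond the threshold, rather than averaging, would capture exactly the children the factor is presumably meant to serve.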

2.60. The steep tapering proposed for the primary sparsity factor is grossly unfair to schools with average year group sizes around 25 – they get none of the benefit compared to their colleagues with smaller classes, yet they see the full impact of an under-funded lump sum which can be safely ignored by large primaries.

2.61. Even if we accepted the sparsity factor, the maximum value of £25k for primaries on top of the £110k lump sum still under-represents the fixed costs of running a school. Meanwhile, the greater maximum sparsity value of £65k for secondaries seems inconsistent with the proposed simplification of using a single lump sum across all phases.

2.77 – 2.79. This part of the consultation, on area cost adjustment, refers to a technical note that does not yet appear to have been published on the consultation website. I reserve judgement on this issue, noting that the devil is likely to be in the detail, and that any methodology for taking labour market costs into account needs to avoid cliff edges where schools on one side of an artificial geographical boundary benefit significantly compared to those on the other – an issue the national funding formula was supposed to address.

2.81-2.82. It is of course welcome that any reduction in school budgets is phased in over time so that schools are able to cope with “the pace […] of those reductions.” However, it is not very clear what this means in practice. What does it mean for a school to “cope” with its reduction in funding – does it mean a reduction in expenditure with negligible loss in educational outcomes, or with “acceptable” loss in educational outcomes? If the latter, what is acceptable? If the former, what evidence do we have that the current MFG of -1.5% per annum has negligible impact on educational outcomes?

2.83-2.85. It is far less clear that any kind of “floor” is an equitable way of smoothing change; indeed, it goes against the very aim of an equal funding formula for all. Some schools will receive more funding simply because they historically did, and others will therefore receive less as a result, from any fixed education funding pot. If a floor is required in order not to damage school performance in the long run, this suggests that the funding reductions implied by the national funding formula are simply unsustainable in those schools. Therefore, instead of clamping the maximum loss to 3%, the DfE should be asking why some schools lose more than 3% and whether this is justifiable for those schools. If not, the formula is at fault and should be changed for all schools, not just those below -3%.

2.86. By maintaining the floor as a per pupil funding guarantee, the Government could potentially introduce severe differentials between schools. In particular in areas of high existing lump sum where there are some small schools that grow to be of comparable size to large schools, the formerly small school would be very significantly over-funded compared to its neighbour, for no good reason.

3.11. The consultation states here that “we have considered carefully the potential impact on pupil attainment in schools likely to face reductions as a result of these reforms,” yet this analysis is not presented. Instead we are simply told that “we see excellent schools at all points of the funding spectrum,” which is no doubt true but fairly meaningless when it comes to assessing the educational impact. A good starting point would be to look at what correlation exists between disposable income per pupil, i.e. per pupil funding once realistic fixed costs are subtracted, and progress measures at the school.