England’s Department for Education released its long-awaited Phase 2 of the consultation on a national funding formula on 14th December 2016. I have been heavily involved in determining the funding formula for schools in one of the most diverse English local authorities, and so have some detailed thoughts on this process. As a prelude to my own response to the funding formula consultation, I thought it might be helpful to others to lay out my comments against the paragraphs of the Government’s consultation document as a “guide to reading”. I have focused on the areas I know best, which relate to funding arriving at schools, rather than funding still proposed to be distributed by LAs, such as funding for growth, central school services, etc.

The DfE seems to be considering two quite distinct drivers for the decisions being proposed. Many decisions use LA formulae and averages between LAs to drive appropriate funding formulae. Elsewhere, clear politically-driven approaches come through – the drive to increase the proportion of funding going into pupil-led factors, etc. These have been presented in a jumbled fashion that makes it hard to understand the relative impact of these considerations. It would be a relatively straightforward mathematical task to set up and solve an optimization problem to minimize school funding turbulence when moving to a funding formula using these formula elements. It is disappointing that the DfE has not done this to at least provide an element of transparency in the proposals, as deviation from any such minimal-turbulence formula should indicate the presence of evidence being used to drive a decision. Put plainly: changes to school funding should be either to even up funding between LAs or to achieve a certain outcome.
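As a sketch of what such a minimal-turbulence optimization might look like (entirely invented factor data and rates, and a simple least-squares objective of my own choosing, not anything the DfE has proposed):

```python
import numpy as np

# Hypothetical sketch: choose a single set of national formula rates to
# minimize "turbulence", measured here as the total squared change in
# per-school funding relative to current allocations. All data invented.
rng = np.random.default_rng(0)

n_schools, n_factors = 200, 4
X = rng.uniform(0, 1, (n_schools, n_factors))  # per-school factor values
X[:, 0] = 1.0                                  # column 0: basic entitlement
# Pretend current funding arose from LA-specific formulae plus noise:
y = X @ np.array([4000.0, 800.0, 500.0, 300.0]) + rng.normal(0, 100, n_schools)

# Least-squares solution: the national rates minimizing the sum of
# squared funding changes across all schools.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
turbulence = float(np.sum((X @ w - y) ** 2))

print(w.round(0))
```

Deviation of the DfE’s actual proposed rates from a minimal-turbulence solution like `w` would then quantify how much of the formula is evidence- or policy-driven rather than simply a levelling between LAs.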

I have chosen to blog here about the nuts and bolts, and save a formal consultation response, or any overall conclusions, for a future post. I hope my fellow consultation readers and I can have a conversation about these points in the meantime.

As a result of this decision, the remainder of this post is fairly lengthy, and will only make sense if you read it alongside the DfE’s paper. Happy reading!

The Gory Details

1.12 and again in 2.20. This is flawed reasoning. The DfE is correct that if pupils tend to share deprivation (or any other) characteristics, then allocation of funding through these factors achieves the same end result as allocation through basic per-pupil funding. But this is true either in areas of high uniform deprivation or in areas of low uniform deprivation. The appropriate methodology, if using LA formulae to determine the desirable size of the deprivation factor, would therefore be to look specifically at the formulae of LAs with wide variations in deprivation from school to school, giving a low weighting to the formulae of LAs where deprivation varies little – not simply to assume that deprivation funding needs to increase. (Incidentally, I am not against an increase; I just want to see evidence before decisions are made. Typically such evidence comes from boundary discontinuity studies between schools near borders of LAs. We therefore have a once-in-a-generation opportunity to grasp the nettle and do this properly, before a national funding formula arrives and discontinuities – and hence evidence – disappear.)

1.16. The lump sum is a critically important factor in school funding, especially in areas with schools of widely varying size. The DfE claim that they “cannot see any clear patterns in the specific lump sum values.” Yet it is unclear what analysis has been conducted to discern a pattern. I would not expect any pattern to emerge from the analysis published, because that analysis does not look for a correlation between lump sum and variability in school size. Nor can such a correlation be extracted from the published LA pro-forma summaries. The DfE does note a pattern in this paragraph – that a majority of LAs set the same lump sum for secondaries as for primaries – but this could well be only because it was a requirement in the first year of the recent reforms to funding formulae!

2.7 – 2.9 and 2.51-2.56. It is very clear that the DfE has set the maximisation of funding allocated through pupil-led factors as an objective, as evidenced by the title of this section and the explicit references to the aim within the paragraphs. The claim in Paragraph 2.8 is that this is to ensure that “funding is matched transparently to need”. I do not believe this maximisation of funding through pupil-led factors is consistent with matching funding to need. If the Government truly wishes to be fair in its distribution of funding, then schools with similar pupil population characteristics should receive the same disposable per-pupil funding. Unless lump sums are set to reflect the genuine fixed costs of running a school, the Government will in practice create significant inequality of access to education, because pupils attending large schools will attract significantly greater disposable per-pupil funding.
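A toy calculation illustrates the mechanism. The per-pupil rate and the “true” fixed costs below are invented for illustration; only the £110k lump sum is the consultation’s proposed figure.

```python
# Illustrative arithmetic: what is left per pupil once genuine fixed
# costs are met, if the lump sum under-funds those fixed costs?
PER_PUPIL_RATE = 4_000      # invented basic per-pupil rate
LUMP_SUM = 110_000          # the consultation's proposed lump sum
TRUE_FIXED_COSTS = 160_000  # assumed genuine fixed cost of running any school

def disposable_per_pupil(pupils: int) -> float:
    """Funding left per pupil once genuine fixed costs are met."""
    total = pupils * PER_PUPIL_RATE + LUMP_SUM
    return (total - TRUE_FIXED_COSTS) / pupils

small = disposable_per_pupil(100)   # small school
large = disposable_per_pupil(1000)  # large school, same pupil characteristics

print(small, large)  # prints "3500.0 3950.0"
```

With identical pupil characteristics, the large school retains £450 more per pupil purely because the lump sum falls short of fixed costs.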

2.13. While I recognise the potential need for an increase in funding when moving from KS1/2 to KS3 and KS4, reception classes are also generally more expensive to run than KS1/2 classes due to the nature of the curriculum in R. By setting a single rate across the primary sector, the funding formula will have a disproportionately negative impact on infant schools, where reception classes make up a greater proportion of the children.

2.16. The consultation document claims that “reception uplift” has “a very small impact on schools’ budgets.” I would like to see what evidence has been used to come to this conclusion. No doubt it has a very small impact on overall school budgets nationally, but I expect that for small schools it could have a considerable impact. Maintained schools have to wait for about 7 months before their census data results in funding changes; academies for nearly a year. In a school with 100 pupils, having 5 more pupils than expected should rightly result in a significant “reception uplift.”
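A back-of-envelope sketch (invented per-pupil rate) of why losing the uplift matters much more for a small school:

```python
# Invented figures: what fraction of a school's funded budget do
# unexpected, unfunded reception pupils represent?
PER_PUPIL = 4_000

def unfunded_share(expected_pupils: int, extra_pupils: int) -> float:
    return (extra_pupils * PER_PUPIL) / (expected_pupils * PER_PUPIL)

small = unfunded_share(100, 5)    # 5 unexpected pupils, 100-pupil school
large = unfunded_share(1000, 5)   # the same 5 pupils in a 1000-pupil school

print(f"{small:.1%} vs {large:.1%}")  # prints "5.0% vs 0.5%"
```

The national average effect may well be tiny, but a 5% unfunded cohort carried for seven months (or a year, for an academy) is not small for the school concerned.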

2.21. No justification is given for the figure of 18% given for additional needs factors. The text implies that this goes beyond LA averages and is a result of a conscious Government decision to increase AEN funding – such a decision should be evidence based.

2.26. Some “magic numbers” appear here also: 5.4% for pupil-level deprivation (FSM/FSM6) versus 3.9% for area level (IDACI). These numbers appear to have been plucked out of the air. Presumably there is some statistical evidence to support these figures – it would have been useful to have this sent out with the consultation.

2.28. This is confused. The claim seems to be that the Ever6 FSM rate should be higher at secondary schools than primary schools because (i) the overall primary:secondary ratio is less than 1 (so what?) and (ii) the Pupil Premium is the other way round. But the DfE also sets the Pupil Premium rate (and why are these two not combined anyway, since they are both Ever6-based?). It seems that those setting the Pupil Premium rate want to tug the ratio one way and those setting the funding formula want to pull it back the other way. Most odd.

2.33. The IDACI index is being used in a questionable way here. An IDACI index is a probability that a given child, chosen at random from a geographic area, lives in an income-deprived household. It is not a measure of the severity of deprivation. Thus I can see no justification for funding being allocated by IDACI score in anything other than a purely proportional way, e.g. a child living in an area with IDACI score 0.8 should surely attract twice the IDACI funding of a child living in an area with IDACI score 0.4. Yet looking at Figure 5, we can see that children in Band C (IDACI 0.35 to 0.4) attract the same funding as those in Band D (IDACI 0.3 to 0.35). This makes no sense to me. As an aside, the banding also makes very little sense – why classify pupils into bands if you already know the IDACI score of that pupil’s address? Just use it directly, avoiding the cliff edges of over- and under-funding around band boundaries.
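To make the cliff-edge problem concrete (the band boundaries and cash amounts below are invented for illustration, not the actual Figure 5 figures):

```python
# Banded vs proportional IDACI funding. All amounts are illustrative.
def banded_funding(idaci: float) -> int:
    bands = [(0.5, 575), (0.45, 420), (0.4, 420), (0.35, 390),
             (0.3, 390), (0.25, 240), (0.2, 200)]
    for threshold, amount in bands:
        if idaci >= threshold:
            return amount
    return 0

def proportional_funding(idaci: float, rate: float = 1000) -> float:
    # Purely proportional: funding scales with the probability that the
    # child lives in an income-deprived household.
    return idaci * rate

# Two near-identical addresses either side of a band boundary:
print(banded_funding(0.199), banded_funding(0.201))              # cliff edge: 0 vs 200
print(proportional_funding(0.199), proportional_funding(0.201))  # smooth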

2.34. In line with my comments on 2.21 and 2.26, the “magic number” here is even more alarming. The DfE have looked at how much LAs allocate to low prior attainment (4.3%) and decided to nearly double this to 7.5%. The only justification given for this radical shift is that KS2 attainment is a good predictor for attainment at secondary school. There are several holes in this argument. Firstly, what is “prior attainment”? For primary schools, this used to be EYFS points scores. Then it became whether a child achieved a Good Level of Development in EYFS. Now it is likely to be based on a totally different on-entry baseline assessment in Reception. None of these are comparable, and the baseline Reception assessments are highly questionable and under review at the moment. Secondly, for secondary schools prior attainment means KS2 results – the same KS2 results that changed so radically in 2016 that we have no idea whether they remain good predictors of secondary school performance. Thirdly, even if we ignore these serious methodological concerns, a correlation between low prior attainment and future low attainment (arguably the factor should be based on SEN instead) justifies a factor greater than zero; it does not justify any particular value, let alone a doubling. Perhaps the factor should be doubled, perhaps it shouldn’t. Why?

2.42. The move to use EAL3, i.e. funding is attracted for children with English as an Additional Language for the first three years of their education is an interesting one. Currently LA practice varies here. For a fixed pot of EAL funding, there is an argument to be had over whether children would benefit more from considerable funding in year 1 for intensive English tuition to allow them to access the curriculum, rather than more “smeared out” three year funding at a lower level per year. Once again, it would be useful to see the research that suggests that one or the other approach actually reaps the greatest benefit before mandating EAL3.

2.43. More magic numbers here: uplift from 0.9% to 1.2%. Why? Evidence?

2.52. This paragraph makes it clear that the proposal is explicitly to starve small schools of funding, by purposely under-funding the lump sum, in order to ensure that they “grow, form partnerships and find efficiencies.” Rather than starving schools of funds, it might be better to fund the lump sum properly while providing time-limited financial enticements for schools to merge where that is possible, as is currently the case.

2.53. There is a methodological error in this paragraph. The DfE state that they looked for a correlation between average school size and lump sum size and found none. Nor should they expect to find one. Imagine LA1 with schools each of 100 pupils and LA2 with schools each of 1000 pupils. Within each of these LAs, the allocation of funding between schools is identical no matter what lump sum value is used. However, if we now imagine LA3, where half the schools have 100 pupils and half have 1000 pupils, the impact of lump sum changes will be dramatic. So the correlation should be sought with the variation in school size, not with the average school size.
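The LA1/LA2/LA3 example can be sketched directly (basic per-pupil rate invented; £110k is the consultation’s proposed lump sum):

```python
LUMP_SUM = 110_000  # the consultation's proposed lump sum
PER_PUPIL = 4_000   # invented basic per-pupil rate

def per_pupil_funding(pupils: int) -> float:
    return (pupils * PER_PUPIL + LUMP_SUM) / pupils

# LA1 (all schools of 100) and LA2 (all of 1000): within each LA, every
# school receives identical per-pupil funding, so no choice of lump sum
# redistributes anything between them.
la1_gap = per_pupil_funding(100) - per_pupil_funding(100)
la2_gap = per_pupil_funding(1000) - per_pupil_funding(1000)

# LA3 (half of 100, half of 1000): the lump sum drives a large gap
# between small and large schools.
la3_gap = per_pupil_funding(100) - per_pupil_funding(1000)

print(la1_gap, la2_gap, la3_gap)  # prints "0.0 0.0 990.0"
```

The lump sum only redistributes funding where school sizes differ within the LA, which is exactly why the correlation test should use variation in size.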

2.57. A sparsity factor is only a sensible option given the choice to under-fund fixed costs in a lump sum. If these were properly funded, a sparsity factor would be unnecessary.

2.59. The detailed calculations for the function of the sparsity factor are omitted from the consultation document – instead a link is provided to another document. The functioning leaves a lot to be desired. For example, primary schools are eligible if they have an average of fewer than 21.4 children per year group and their pupils’ average distance to the next-nearest school is at least two miles. The first of these criteria is essentially an admission that schools with less than one form of entry are under-funded under the national funding formula. The second is more complex but equally serious, especially for small village schools sitting on the edges of towns. Imagine two schools, separated by a little more than two miles. It may well be that between the two schools is an area of dense population, while following the line connecting these two schools out into the countryside leads to very sparsely populated areas. The distance for the children at the countryside end might be much more than two miles, yet the average will be less than two, and the school will not attract funding. If distance thresholds must be used, why apply them to the average distance rather than to the number of pupils for whom the distance exceeds the threshold? Finally, these thresholds necessarily lead to unfairness across the two sides of the threshold. If the lump sum were set to a value reflecting the fixed costs of running a school, none of this tinkering would be necessary.
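The averaging problem is easy to demonstrate with invented distances for a hypothetical edge-of-town school:

```python
# 80 pupils live in dense housing near the neighbouring school; 20 live
# far out in the countryside. All distances (miles) are invented.
distances = [0.5] * 80 + [6.0] * 20

average = sum(distances) / len(distances)
far_pupils = sum(1 for d in distances if d >= 2.0)

print(average)     # 1.6 miles: below the 2-mile threshold, so ineligible
print(far_pupils)  # yet 20 pupils face journeys of 6 miles
```

A criterion based on the number of pupils beyond the threshold would capture those 20 children; the average hides them entirely.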

2.60. The steep tapering proposed for the primary sparsity factor is grossly unfair to schools with average year group sizes around 25 – they get none of the benefit compared to their colleagues with smaller classes, yet they see the full impact of an under-funded lump sum which can be safely ignored by large primaries.

2.61. Even if we accepted the sparsity factor, the maximum value of £25k for primaries on top of the £110k lump sum still under-represents the fixed costs of running a school. Meanwhile, the use of a greater lump sum of £65k for secondaries seems inconsistent with the simplification proposed to use a single lump sum across all phases.

2.77 – 2.79. This part of the consultation, on the area cost adjustment, refers to a technical note that does not yet appear to have been published on the consultation website. I reserve judgement on this issue, noting that the devil is likely to be in the detail, and that any methodology for taking into account labour market costs needs to avoid cliff edges where schools on one side of an artificial geographical boundary benefit significantly compared to those on the other – an issue the national funding formula was supposed to address.

2.81-2.82. It is of course welcome that any reduction in school budgets is phased in over time so that schools are able to cope with “the pace […] of those reductions.” However, it is not very clear what this means in practice. What does it mean for a school to “cope” with its reduction in funding – does it mean a reduction in expenditure with negligible loss in educational outcomes, or with “acceptable” loss in educational outcomes? If the latter, what is acceptable? If the former, what evidence do we have that the current MFG of -1.5% per annum has negligible impact on educational outcomes?

2.83-2.85. It is far less clear that any kind of “floor” is an equitable way of smoothing change; indeed, it goes against the very aim of an equal funding formula for all. Some schools will receive more funding simply because they historically did, and, from any fixed education funding pot, others will therefore receive less as a result. If a floor is required in order not to damage school performance in the long run, this suggests that the funding reductions implied by the national funding formula are simply unsustainable in those schools. Therefore, instead of clamping the maximum loss at 3%, the DfE should be asking why some schools lose more than 3% and whether this is justifiable for those schools. If not, the formula is at fault and should be changed for all schools, not just those below -3%.

2.86. By maintaining the floor as a per pupil funding guarantee, the Government could potentially introduce severe differentials between schools. In particular in areas of high existing lump sum where there are some small schools that grow to be of comparable size to large schools, the formerly small school would be very significantly over-funded compared to its neighbour, for no good reason.

3.11. The consultation states here that “we have considered carefully the potential impact on pupil attainment in schools likely to face reductions as a result of these reforms,” yet this analysis is not presented. Instead we are simply told that “we see excellent schools at all points of the funding spectrum,” which is no doubt true but fairly meaningless when it comes to assessing the educational impact. A good starting point would be to look at what correlation exists between disposable income per pupil, i.e. per pupil funding once realistic fixed costs are subtracted, and progress measures at the school.