Nonlinearity is Your Friend

My former PhD student Erwei Wang and I recently teamed up with some collaborators at UCL: Dovydas Joksas, Nikolaos Barmpatsalos, Wing Ng, Tony Kenyon and Adnan Mehonic. Our paper has just been published in Advanced Science (open access).

Our goal was to start to answer the question of how specific circuit and device features can be accounted for in the training of neural networks built from analogue memristive components. This is a step outside my comfort zone of digital computation, but naturally fits with the broader picture I’ve been pursuing under the auspices of the Center for Spatial Computational Learning on bringing circuit-level features into the neural network design process.

One of the really interesting aspects of deep neural networks is that the basic functional building blocks can be quite diverse and still result in excellent classification accuracy, both in theory and in practice. Typically these building blocks include linear operations and a type of nonlinear function known as an activation function, the latter being essential to the expressive power of ‘depth’ in deep neural networks. This linear / nonlinear split is something Erwei and I, together with our coauthors James Davis and Peter Cheung, challenged for FPGA-based design, where we showed that the nonlinear expressive power of Boolean lookup tables provides considerable advantages. Could we apply a similar kind of reasoning to analogue computation with memristors?

Memristive implementation of the linear part of the computation performed in neural networks has been proposed for some time. Computation comes naturally: Ohm’s law performs scalar multiplication and Kirchhoff’s Current Law performs addition, resulting in potentially energy-efficient analogue dot product computation in a physical structure known as a ‘crossbar array’. To get really high energy efficiency, though, devices should have high resistance. But high resistance brings nonlinearity in practice. So do we back away from high-resistance devices so we can be more like the mathematical abstractions used in our training algorithms? We argue not. Instead, we argue that we should make our mathematical abstractions more like our devices! After all, we need nonlinearity in deep neural networks. Why not embrace the nonlinearity we have, rather than compromise energy efficiency to minimise it in linear components, only to reintroduce it later in activation functions?
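The physics here can be sketched in a few lines of Python. This is a toy model only: the cubic term is an illustrative stand-in for high-resistance device nonlinearity, not the device model from our paper.

```python
def column_current(voltages, conductances, alpha=0.0):
    """Current flowing out of one crossbar column.

    Each device contributes I = G*V (Ohm's law), and Kirchhoff's
    Current Law sums the contributions along the column wire.
    `alpha` adds a toy odd nonlinear term alpha*V**3 standing in for
    the nonlinearity of high-resistance devices (an assumption for
    illustration, not a real device model)."""
    return sum(g * v + alpha * v**3 for g, v in zip(conductances, voltages))

# With alpha = 0 this is exactly the analogue dot product G . V:
ideal = column_current([0.1, 0.2, 0.3], [1.0, 2.0, 3.0])  # 0.1 + 0.4 + 0.9
```

The point of the paper is that, once `alpha` is nonzero, one can expose this behaviour to the training process rather than fighting it at the circuit level.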

MNIST classification error trading off against power consumption

I think our first experiments in this area are a great success. We have been able not only to capture a variety of behaviours traditionally considered ‘non-ideal’ and harness them for computation, but also to show very significant energy efficiency savings as a result. You can see an example of this in the figure above (refer to the paper for more detail). In high power consumption regimes, you can see little impact of our alternative training flow (green & blue) compared to the standard approach (orange), but when you try to reduce power consumption, a substantial gap opens up between the two precisely because our approach is aware of the impact this has on devices, and the training process learns to adapt the network accordingly.

We’ve only scratched the surface of what’s possible – I’m looking forward to lots more to come! I’m also very pleased that Dovydas has open-sourced our training code and provided a script to reproduce the results in the paper: please do experiment with it.

Deciding Theories

Every once in a while I start modelling a problem using a logical formalism and need to remind myself of the decidability of various first-order theories and the complexity results for those that are decidable. Inevitably, I end up spending a long time looking back over literature I once read, with results sprinkled in different places.

This post summarises some of this information in one place for me or others to find in the future, with a skew towards those theories of interest for arithmetic. The information here is primarily taken from Bradley and Manna, supplemented with various other sources linked. The usual definitions of \Omega and O are used.

For each theory below I give the complexity of deciding the full theory, the complexity of deciding the quantifier-free conjunctive fragment, and (where one exists) a quantifier elimination algorithm.

  • T_{\mathrm{E}} (equality; see note 1). Full theory: undecidable. Quantifier-free conjunctive fragment: O(n \log n) (Downey et al.), decided by the congruence closure algorithm. Quantifier elimination: N/A.
  • T_{\mathrm{PA}} (Peano arithmetic). Full theory: undecidable. Quantifier-free conjunctive fragment: undecidable. Quantifier elimination: N/A.
  • T_{\mathbb{N}} (Presburger arithmetic). Full theory: \Omega\left(2^{2^n}\right) (Fischer and Rabin) and O\left(2^{2^{2^{kn}}}\right) (Oppen). Quantifier-free conjunctive fragment: NP-complete (Scarpellini). Quantifier elimination: Cooper’s algorithm (see note 2).
  • T_{\mathbb{R}} (real arithmetic, with multiplication; see note 3). Full theory: O\left(2^{2^{kn}}\right). Quantifier-free conjunctive fragment: O\left(2^{2^{kn}}\right). Quantifier elimination: Collins’ Cylindrical Algebraic Decomposition.
  • T_{\mathbb{Q}} (rational arithmetic, without multiplication, a.k.a. linear arithmetic; see note 4). Full theory: \Omega\left(2^n\right) (Fischer and Rabin) and O\left(2^{2^{kn}}\right) (Ferrante and Rackoff). Quantifier-free conjunctive fragment: P (a.k.a. linear programming). Quantifier elimination: Ferrante and Rackoff’s algorithm.
  • T^=_{\mathrm{A}} (extensional arrays; see note 5). Full theory: undecidable. Quantifier-free conjunctive fragment: NP-complete (Stump et al.). Quantifier elimination: N/A.

Notes on the table:

  1. T_{\mathrm{E}} is as defined in Bradley and Manna, that is, it has a signature consisting of = (equality) and all constant, function and predicate symbols, a.k.a. ‘equality with uninterpreted functions’. Reflexivity, symmetry, transitivity, function congruence and predicate congruence are axioms for =, but all other functions and predicates are uninterpreted (except w.r.t. these congruence axioms). Note that this is not the theory of pure equality, for which the full theory is decidable and admits a weak form of quantifier elimination (pure equality doesn’t have the functions or predicates; see Bradley and Manna Section 10.4 for the definition of weak quantifier elimination).
  2. Presburger arithmetic as described on Wikipedia does not admit quantifier elimination (counter-example: \exists x. 2x = y). However, adding an additional countable set of predicates capturing divisibility (one per divisor) together with an appropriate axiom leads to a version admitting quantifier elimination as per this table.
  3. T_{\mathbb{R}} is here taken to have a signature of \{0, 1, +, -, \times, =, \geq\} (with - unary) and axioms corresponding to those of a real closed field (theory of reals in SMT-LIB).
  4. T_{\mathbb{Q}} is here taken to have the signature \{0, 1, +, -, =, \geq\} (again with - unary). Its axioms are those of an ordered torsion-free abelian group, together with an additional axiom schema asserting divisibility: \forall x. \exists y. x = ny for every positive integer n.
  5. Using the notation of Bradley and Manna, the theory of extensional arrays is designed to capture array data structures. It has the signature \{\cdot[\cdot], \cdot\langle\cdot\vartriangleleft\cdot\rangle, =\}, with the first two symbols denoting a binary and ternary function (respectively) for accessing and modifying array elements; arrays are immutable and so the ternary operator returns the modified array. ‘Extensional’ here denotes that there is an axiom capturing that arrays are equal iff all their elements are equal in all places. (ArraysEx theory in SMT-LIB).
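The counter-example in note 2 can be checked mechanically: over any finite range, \exists x.\, 2x = y coincides exactly with the divisibility predicate 2 \mid y, which is precisely what the extended signature adds. A small brute-force sketch (the range bound is an arbitrary choice of mine):

```python
def exists_double(y, bound=100):
    # Witness search for "exists x. 2x = y" over a finite range of x.
    return any(2 * x == y for x in range(-bound, bound + 1))

# The quantifier eliminates to the divisibility predicate "2 | y",
# which is not expressible quantifier-free in the basic signature:
for y in range(-50, 51):
    assert exists_double(y) == (y % 2 == 0)
```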

Further Comments

A decision procedure for the union of quantifier-free fragments can be obtained by combining the decision procedures for the individual fragments, via the Nelson-Oppen method, under the following conditions:

  1. Their signatures only share equality.
  2. Their theories must (individually) be stably infinite, i.e. every T-satisfiable quantifier-free formula is satisfied by a T-interpretation with a domain of infinite cardinality.

If deciding each individual theory is in NP, then deciding the combination theory is also in NP.
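As a concrete illustration, the first step of the Nelson-Oppen method, ‘purification’, splits a mixed formula into pure parts by naming alien subterms with fresh variables. The term encoding and symbol sets below are my own toy assumptions, not a full implementation:

```python
import itertools

ARITH = {'+', '-'}   # assumed arithmetic function symbols
EUF = {'f', 'g'}     # assumed uninterpreted function symbols

def owner(term):
    # Which theory a compound term's head symbol belongs to.
    return 'arith' if term[0] in ARITH else 'euf'

def purify(term, context, defs, fresh):
    """Replace each maximal subterm owned by the other theory with a
    fresh variable, recording its definition in `defs`.  Variables
    (strings) and integer constants are treated as shared."""
    if not isinstance(term, tuple):
        return term
    if owner(term) != context:
        v = f'v{next(fresh)}'
        defs.append((v, purify(term, owner(term), defs, fresh)))
        return v
    return (term[0],) + tuple(purify(a, context, defs, fresh) for a in term[1:])

# Purify the mixed atom  f(x) + 1 = y  into the arithmetic part
# v0 + 1 = y  plus the EUF definition  v0 = f(x):
defs = []
pure = purify(('+', ('f', 'x'), 1), 'arith', defs, itertools.count())
```

The two pure parts are then decided by their own procedures, which exchange entailed equalities between the shared variables.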

While investigating the quantifier elimination column of the table above, I came across the Ultimate Eliminator tool which looks like great fun.

Please let me know if you spot any errors or significant omissions that may be of interest to readers of this blog.

Schools White Paper 2022

Today saw the launch of the Department for Education’s long-awaited schools white paper “Opportunity for all: strong schools with great teachers for your child”, alongside some accompanying documents which are well worth reading, especially “The Case for a Fully Trust-Led System”. This post collects some initial thoughts on the content published today. I will focus here on the specific future plans laid out, what they’re trying to achieve, and some elements of what’s missing. I will not engage deeply with areas I know little about such as teacher training, and my reading will of course be biased to my interests in school improvement.

A Fully Trust-Led System by 2030

The paper sets out the Government’s plan for all schools to be in multi-academy trusts within a single regulatory system by 2030 and makes clear that LAs will be allowed to establish MATs as part of this process (a sensible idea, if we’re going down the all-MAT route). I am slightly worried by the wording that LAs will be able to establish MATs “where too few strong trusts exist” – I don’t think special conditions should be placed on this ability. A small capacity fund of £86m has been put aside to help expand MATs – it is not clear from the paper whether LAs can bid for access to this fund too, or how this budget was arrived at.

The white paper calls for a clear role for every part of the school system (LAs, MATs, ESFA, DfE, etc.) but is rather unclear on what it sees these roles as being. I blogged about my own views on this last month (here and here), and Sam Freedman gives his view in an Institute for Government report also last month.

The paper begins to flesh out the government’s idea of what a good MAT looks like. Firstly, on size: “We know that trusts typically start to develop central capacity when they have more than 10 schools … We expect that most trusts will be on a trajectory to either serve a minimum of 7,500 pupils or run at least 10 schools.” They also provide a list of features of a strong trust, as previously discussed in the CST’s useful discussion paper setting out their view.

I welcome the moves to address the isolation of some schools, especially those stuck in a single academy trust model: “Many of our best schools operate alone, and not enough attention has been paid to harnessing the expertise already in the system”. I blogged about some problems with SATs in one of my February posts and last year the government published views of SATs who had joined MATs, citing improved governance, leadership, sense of direction, and ability to direct financial resources where they are needed.

So how will all schools end up in MATs? Clearly the government would like them to choose to join. For those that don’t and are graded inadequate by Ofsted, there is already the expectation in the RSC guidance that they will be (re-)brokered into strong MATs. It seems that these powers are likely to be strengthened: “We will shortly be consulting on moving schools that have received two consecutive below ‘Good’ judgements from Ofsted into strong trusts to tackle underperformance.”

Whatever our views on the relative merits of MATs, the government’s reports make “interesting” use of statistics: “If all children did as well as pupils in a trust performing at the 90th percentile, national performance at key stage 2 would be 14 percentage points higher and 19 percentage points higher for disadvantaged pupils”. Here, “higher” seems to refer to a baseline of the national average rather than of the 90th percentile of LA maintained schools (say) – quite a bizarre use of statistics.

Curriculum and Assessment

The government is proposing to create a new arms-length curriculum body, building on Oak National Academy. This could be of value, but could also see a de-professionalisation of teachers: the devil will be in the implementation. I for one would love to see university engagement with this body, and will try to engage enthusiastically if possible.

I’m disappointed that “we will maintain our current system of primary assessment and world-class GCSEs and A levels”. If there are things the pandemic has brought into sharp relief, they certainly include some of the more unsuitable aspects of our exam system: its lack of robustness to disruption, and its reliance on norm referencing and on a huge number of GCSE grades that are of limited reliability even in the best of times.

Much is made of a “parent pledge” that schools will provide evidence-based support “if your child falls behind in English or maths”. But behind in what sense? In attainment compared to year expectations (in which case how does this address the claim to support ‘the most able’) or in progress — but if so, compared to what benchmark? And how will this be identified in any uniform way across schools? This will, apparently, be via “robust assessment” — further guidance will be issued in partnership with Ofsted in due course.

Objectives

The DfE hopes to increase to 90% the proportion of children achieving the expected standard in reading, writing and mathematics at primary, and to increase the national average GCSE grade in English and maths to grade 5, both by 2030. These are consistent with “Mission 5” outlined in the previously-published Levelling-up the UK technical annex. As they note in that annex, it’s important to try to understand the level of ambition embedded in this objective. Unfortunately, despite the annex including a section “Is this mission ambitious, specific and achievable?”, that section doesn’t actually provide an argument that the mission is at an appropriate level of ambition. The best I could find in the white paper is a citation to Slater et al. to argue that significant GCSE grade improvements can come from good teachers, but the same paper also says:

We now finally explore whether any of the few observable teacher characteristics that we have are correlated with estimated teaching effectiveness: gender, age, experience and education…. In fact, none of these variables play any statistically significant role in explaining teacher effectiveness, other than very low levels of experience showing a negative effect.

Helen Slater, Neil M. Davies, Simon Burgess, Do Teachers Matter? Measuring the Variation in Teacher Effectiveness in England

In other words, students gain significantly from good teachers, but spotting good teachers before the event — rather than as a backward-looking explanation — is hard to do. So this seems to support the idea that retention of good teachers and a focus on quality teacher CPD and mentoring is key.

What’s Missing

Quite a lot of information appears to be ‘to follow’. Some examples:

  • On governance. “So that trusts continue to be responsive to parents and local communities, all trusts should have local governance arrangements for their schools. We will discuss how to implement this with the sector.”
  • On MAT inspection. “We will launch a regulatory review in May 2022 looking at accountability and regulation – including how we will hold trusts to account through inspection in the future.”
  • On achieving full academisation. “We … will engage with the sector on how best to achieve a fully trust led system.”
  • On new collaboration expectations. “we will introduce a new collaborative standard – one of the new statutory academy trust standards – requiring that trusts work constructively with each other, their local authorities and the wider public and third sectors. We will engage with the sector, through the wider regulatory review, as we develop the detail.”

In addition, there are several other aspects that I had hoped would be better addressed within the white paper:

  • Teacher retention. There is a focus on mechanisms to attract new teachers, e.g. with the previously-announced £30,000 starting salary, but very little to retain existing good teachers. I believe this is short-sighted.
  • Teacher time. Funding for greater time ‘off timetable’ to engage in the all-important CPD, peer observation, lesson study, etc. has the potential to retain excellent staff as well as improve practice.
  • Capital. We know that many of our schools are in bad shape physically. Now we also know the huge impact that fresh air can have on the health of our staff and students alike, I would like to see a one-off significant cash injection to bring all our schools up to the high standard of ventilation we should expect for a healthy future for our school communities.

I will be following future developments with interest.

Pruning Circuits

On Tuesday, my former PhD student Erwei Wang (now at AMD) will present our recent paper “Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference” at the ACM International Symposium on FPGAs. This is joint work with our collaborator Mohamed Abdelfattah from Cornell Tech as well as James Davis, George-Ilias Stavrou and Peter Y.K. Cheung at Imperial College.

In 2019, I published a paper in Phil. Trans. Royal Soc. A, suggesting that it would be fruitful to explore the possibilities opened up when considering the graph of a Boolean circuit – known as a netlist – as a neural network topology. The same year, in a paper at FCCM 2019 (and then with a follow-up article in IEEE Transactions on Computers), Erwei, James, Peter and I showed how to put some of these ideas into practice by learning the content of the Boolean lookup tables that form the reconfigurable fabric of an FPGA, for a fixed circuit topology.

Our new paper takes these ideas significantly further and actually learns the topology itself, by pruning circuit elements. Those working on deep neural networks will be very familiar with the idea of pruning – removing certain components of a network to achieve a more efficient implementation. Our new paper shows how to apply these ideas to prune circuits made of lookup tables, leaving a simplified circuit capable of highly-efficient inference. Such pruning can consist of reducing the number of inputs of each Boolean LUT and, in the limit, removing the LUT completely from the netlist. We show that very significant savings are possible compared to binarising a neural network, pruning that network, and only then mapping it into reconfigurable logic – the standard approach to date.
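To give a flavour of the simplest form of LUT pruning, here is a minimal sketch (my own illustration, not the learned pruning criterion from the paper) that detects inputs a LUT’s truth table does not actually depend on:

```python
from itertools import product

def removable_inputs(truth_table, k):
    """Indices i such that flipping input bit i never changes the LUT
    output.  Such inputs can be pruned, shrinking a k-input LUT; in the
    limit, a LUT with no live inputs is constant and can be removed
    from the netlist entirely."""
    removable = []
    for i in range(k):
        if all(truth_table[bits] ==
               truth_table[bits[:i] + (1 - bits[i],) + bits[i + 1:]]
               for bits in product((0, 1), repeat=k)):
            removable.append(i)
    return removable

# A 3-LUT computing the AND of its first two inputs ignores input 2:
lut = {bits: bits[0] & bits[1] for bits in product((0, 1), repeat=3)}
```

The contribution of the paper is to make this kind of decision learnable during training, rather than a post-hoc structural check like the one above.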

We have open-sourced our flow, and our paper has been given a number of ACM reproducibility badges. Please try it out at https://github.com/awai54st/Logic-Shrinkage and let us know what you think. And, if you’re attending FPGA next week, reach out and say hello.

Better accuracy / silicon area tradeoffs through pruning circuits

Islands of Certainty

Readers of this blog may remember that my PhD student Jianyi Cheng (jointly supervised by John Wickerson) has been working on high-level synthesis, combining dynamic scheduling with static scheduling. His latest contribution, to be presented on 28th February at the ACM FPGA conference, is about finding islands of static control flow in a sea of dynamic behaviour.

Here’s the story so far:

So now, two years later, we are back at the same conference to present a method to do just that. We now have an automated flow to select parts of a program to statically schedule, resulting in a 4x reduction in area combined with a 13% boost in performance compared to a fully dynamic circuit, a result that is close to the best achievable — as shown by exhaustively enumerating different parts of the program to schedule statically.

The basic idea of the paper is to develop the concept of a static island — a part of a dataflow graph where making decisions on scheduling of operations once, at compile time, is likely to have minimal impact on performance (or may even improve it) while opening the door to static resource sharing. We can throw a boundary around these islands, synthesise them efficiently with commercial HLS tools (we use Xilinx Vitis HLS), and integrate the result into the overall dynamic circuit using our previous open-source compiler flow.

So what makes a good static island? Unsurprisingly, these islands should exhibit static control flow or control flow with balanced path timing, e.g. in a conditional statement the if and else branches should take the same time, and loops should have constant dependence distances (or none at all). Jianyi also shows that there is an advantage to having these islands consume their inputs at offset times, e.g. for a two-input island we may wish the static scheduler to be aware that the second input is available — on average — two cycles after the first. He shows precisely how to generate ‘wrapper’ circuits for these components, allowing them to communicate with a dynamically scheduled environment.

The overall design flow, shown below, is now fully automated – freeing the user from writing the pragmas we required two years ago.

MATs, SATs and Maintained Schools

This post forms the second in a short series of blog posts about the current governance structure of schools in England, ahead of the Government’s expected white paper this spring. The previous post was about the review of the Education and Skills Funding Agency. In this post I focus on the landscape of school structures: multi-academy trusts (MATs), single-academy trusts (SATs) and local authority (LA) maintained schools from the perspective of a governor / trustee.

I will draw heavily on Sam Freedman’s report this month released by the Institute for Government as well as the excellent discussion in the Confederation of School Trusts’ report “What is a Strong Trust?”. One does not have to completely agree with Freedman’s or the CST’s perspectives to find both reports clear, well-argued, and a very useful step towards a greater debate in the education sector.

Freedman notes what is obvious to anyone who has worked in or with the sector – that the current dual system of maintained schools and academies leads to significant duplication and inefficiency. He notes that the regulatory system for academies is ‘incoherent’, largely as a result of the ESFA/DfE split I dealt with in my last blog post. He mentions – quite rightly – that LAs’ statutory responsibility for SEND and certain safeguarding requirements further complicates the picture when they have no effective oversight or intervention powers over academy trusts.

Early Academisation Errors

My experience is that the rapid expansion of the academies programme after the Academies Act 2010 was mismanaged. We have been left with a patchwork of schools and a web of contractual agreements. Schools which converted early have often been left with legacy articles of association based on the early model articles which demonstrated little insight into how schools could evolve under poor governance (modern model articles are much improved, though not perfect). Regulators have been left with very limited powers to improve matters, and it should not have come as a surprise to government that – as Freedman states – “[By 2012/3] with things moving so fast it quickly became apparent that the Department for Education (DfE) was becoming overwhelmed and could not properly oversee that many schools.” The unfortunate reality is that many schools are still stuck with the legacy of poor decisions made during this period.

Three School Structures

There are currently three main models for schools in England: those which are part of a multi-academy trust (MAT), those which are a single-academy trust (SAT) and those maintained by local authorities. Often in these discussions the first two types of school are lumped together, and it becomes a discussion about academies versus non-academies, but I think these three situations deserve distinct analysis. In particular, oversight of an individual school’s performance in the maintained sector and in the MAT sector is, in my experience, stronger than in the SAT sector, which is the outlier in terms of sufficient oversight. I wonder how many SATs are maintaining an Ofsted grade above inadequate, and therefore not subject to intervention, but are nevertheless not performing at the level they might be if they had closer interaction with other schools.

It is clear to anyone who has worked with or for the education sector that schools benefit immensely from working together and supporting each other, and I agree with Leora Cruddas’s argument made at a governance meeting I attended last year that, in times of difficulty for a school, a compulsion to work with other schools is important. At the moment, this primarily comes through membership of a strong multi-academy trust, though I do not see why a strong, well-resourced and empowered Local Authority could not form an equally viable mechanism to drive school improvement.

Pragmatics: Control and Intervention

Unsurprisingly, Freedman’s paper seeks to advise Zahawi on an appropriate way forward to a more coherent fully academised education system, and without entering the discussion over whether academy trusts are the best structure to deliver school education in the first place, it is worth engaging with the recommendations he makes.

Freedman sees the future of LAs – as per the 2016 white paper – as ensuring that every child has a school place, ensuring the needs of vulnerable children are met, and acting as champions for all parents and families, and – quite reasonably – proposes greater powers for LAs to ensure they can actually fulfil these objectives.

There are some proposals made in the paper that I would fully support:

  • Setting a transparent framework of expectations for MATs and giving the regulator powers to intervene, close or merge MATs for either financial/compliance reasons or educational reasons, not only tied to Ofsted grades.
  • Ensuring that MATs take ownership of educational improvement and are not simply back-office consolidation bodies as is sometimes the case currently.
  • Giving local authorities the right of access to MAT data.
  • A single regulator for academies, ideally organised as an arm’s length body.

There are also some proposals that are less clear cut for me:

  • Giving LAs the power to direct a change in individual academy PANs and admissions policies. Let’s assume that we move to an “all-MAT” system with LAs still retaining the statutory duty to ensure a place for every pupil. To ensure clear lines of accountability, it seems appropriate for these to be at MAT level not individual academy level: surely mechanisms can still be put in place for intervention at MAT level to ensure they play their part in the strategic place-planning of LAs, rather than micromanaging a MAT’s academies over the head of its trustee board?
  • Moving SEND and exclusions policy monitoring / appeals from ESFA to LAs. I agree that ESFA is an odd place for this to sit at the moment in terms of ensuring joined up working between the RSC offices, ESFA and the Local Authority. But moving this to LAs rather than to the DfE again seems to introduce dual lines of accountability for MATs; might it not be better for RSCs to be required to ensure MATs meet the LA’s planning needs?
  • Giving individual academies a mechanism to argue to ‘break free’ from a MAT, involving giving schools legal status independent from the MAT. I agree that there may be very good reasons for academies to want to move to another MAT if the MAT is not functioning well, however under an all-MAT system it seems that a more appropriate approach is to provide the regulator with powers to intervene at MAT level than to provide independent legal status to individual academies.

There is an important question of democratic control, which I believe is required to balance some of these suggestions. In particular: who gets to appoint trustees (and members) of an academy trust, and what geographical area is it reasonable for a MAT to cover? On the first point, in the early days of academisation, academies needed to have some staff trustees, some (elected) parent trustees and a trustee appointed by the Local Authority. The Secretary of State was empowered to change the composition under specific conditions laid out in the academy’s funding agreement / articles of association. Government views on this composition have changed over time, with parent trustees going out of and then back into fashion, while staff trustees are definitely out of fashion at the moment. Local Authorities no longer get to appoint trustees at all in recent model articles. The situation locally will vary from trust to trust, depending on when their articles were approved — differences that cannot be justified, in my view. I would suggest that trusts’ articles are updated and that the ability (though not the requirement) for local authorities and the DfE (via RSC offices) to appoint trustees is included in the new model. This would provide LAs and the DfE with direct information on trusts, rather than having to rely on existing trust boards to provide accurate information, in addition to providing a powerful mechanism for spreading best practice across trusts.


There is a huge opportunity for development of the schools sector in England. I look forward to publication of the white paper!


Appendix: Minor Quibbles

It’s probably worth pointing out a couple of very minor inaccuracies in the Institute for Government report:

  • Financial Notices to Improve, mentioned in the report, have not existed since September 2021, precisely in recognition of ESFA’s current broader role; they are now subsumed within the wider “Notice to Improve” title.
  • A few times in the report, the ‘Special Measures’ category is cited as giving the Regional Schools Commissioners power to re-broker academies. While there may be additional powers – depending on the trust’s funding agreement – under a Special Measures Ofsted category, it’s clear in the Schools Causing Concern guidance that RSCs have the power to re-broker any inadequate school, i.e. also those judged to have ‘Serious Weaknesses’.

Restructuring Education

In the run-up to the Government’s planned white paper on education, I hope to be publishing a few brief blog posts on the landscape of education leadership and management in England. This post focuses on the summary findings from the ESFA review led by Sir David Bell and published this week.

Summary: this review is very welcome and I am supportive of all the key recommendations.

My perspective is one of governance: I have been a long-term member of Essex Schools Forum, first as an elected representative of maintained primary schools, then as an elected representative of secondary academies. Throughout this time (and beyond) I have been involved in helping Essex shape its local funding formula and respond to the national funding formula. I have governance experience both in the maintained sector and in a single academy trust, and work with a new post-16 multi-academy trust through my workplace.

From the perspective of accountability of academy trusts, I think such a review has long been overdue, in particular over clarity of roles between the offices of the regional schools commissioners and the lines of accountability through academy funding agreements.

The findings suggest retaining ESFA’s funding delivery function as an Arm’s Length Body (ALB) while moving a considerable number of responsibilities to the DfE. This seems sensible. In particular, from a governance perspective, I wholeheartedly endorse the finding that “The Regional School Commissioners (RSCs) and ESFA work together to provide oversight of the school system; the RSCs focus on educational performance, ESFA on financial management, with both contributing to governance. This sometimes creates points of friction internally and a lack of clarity externally”. The proposal, to move all governance oversight not required to be at ESFA to the new regional DfE structures, also seems entirely reasonable. The review also recommends clarifying the relationship between Ofsted, the DfE and ESFA – my experience of this is that there is already clarity over the different roles of Ofsted versus the DfE and ESFA, although admittedly this knowledge is not widespread, even amongst school leaders.

In line with this move to the DfE, the proposal to move ownership of the Academy Trust Handbook to the DfE (unless scaled back to a purely financial management document) is also to be welcomed by governors and trustees.

The final sensible proposal I would highlight from the review aims to achieve greater alignment in dealing with complaints between the maintained and academy sectors. As part of this process, I would urge the DfE to consider mandating a more uniform complaints policy at trust level for academies: although the model DfE policies are entirely reasonable, they are not statutory, and academies minimally complying with the legislation set out in the Education (Independent School Standards) (England) Regulations 2014 essentially force complaints that could be dealt with at trust level under better procedures to be dropped or escalated to Ofsted or the ESFA.

Of course there are bigger questions relating to the role of multi-academy trusts and local authorities and their interaction with the Department for Education, and I hope to cover some of these issues in future blog posts. But within the confines of our current system, these reforms seem very much worthwhile.

What’s the Rush?

At FPL 2021, my PhD student Jianyi Cheng (jointly supervised by John Wickerson) will present our short paper “Exploiting the Correlation between Dependence Distance and Latency in Loop Pipelining for HLS”. In this post, I explain the simple idea behind this paper and how it can significantly accelerate certain neglected corner cases in high-level synthesis (HLS).

By far the most significant way to extract high performance from a hardware accelerator in high-level synthesis is to use loop pipelining. Loop pipelining is the idea of starting the next iteration of a loop before the previous one finishes, allowing multiple iterations to execute simultaneously. However, some loop iterations may need a result produced by an earlier iteration, limiting the extent to which this can be done. HLS tools generally determine a ‘safe’ initiation interval (II) – the number of clock cycles between starting two adjacent loop iterations – and then schedule the iterations statically at multiples of this interval.

This limit on the initiation interval of the loop essentially derives from two properties. Firstly, if it takes a long time for the computation of a loop iteration to execute, then any iterations waiting on its result must be delayed. Secondly, if an iteration’s result is only needed many iterations later, it can afford to take a long time to compute: what’s the rush? These two factors – latency and dependence distance – together determine the safe initiation interval.
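The relationship between these two factors can be made concrete via the classic recurrence constraint II ≥ ⌈latency / distance⌉. The sketch below is my own illustration (the function name and example values are not from the paper):

```c
#include <assert.h>

/* Recurrence-constrained initiation interval: a result needed
   `distance` iterations later must be ready within distance * II
   cycles, so II >= ceil(latency / distance). */
unsigned safe_ii(unsigned latency, unsigned distance) {
    return (latency + distance - 1) / distance;   /* ceiling division */
}
```

For instance, a 10-cycle operation whose result is needed in the very next iteration forces II ≥ 10, but if the result is only read 63 iterations later, II = 1 suffices: what’s the rush?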

The simple observation of our paper is that HLS tools generally over-approximate latency and under-approximate dependence distance, independently of each other. However, in some programs there is a correlation between dependence distance and latency. Jianyi gives this nice motivating example in the paper:

double f( double a ) {
  return (((((a+0.64)*a+0.7)*a+0.21)*a+0.33)*a+0.25)*a+0.125;
}

void example( double vec[M] ) {  /* M and N are symbolic bounds from the paper */
  for (int i = 0; i < N; i++) {
    double e = vec[i];
    if (e > 0) vec[i+63] = f(e);
    else vec[i*i+9] = e * e;
  }
}

In this code snippet, you can see two control paths in the loop. The if branch has a long latency (it computes the Horner scheme polynomial f) but also writes to elements of vec that only get read many iterations later. Meanwhile the else branch has a short latency but can write – in the early stages of the loop at least – to values read in nearby iterations.

The end result is that the commercial tools Jianyi tried don’t cope very well with scheduling this loop. However, Jianyi has developed an approach that uses the formal verification tool Boogie to show that this loop can actually be scheduled very efficiently by exploiting this correlation.

He has developed an LLVM pass called iiProver that proves that it is safe to use a certain II with the commercial Vitis HLS tool from Xilinx. iiProver and our benchmarks are available – please take a look: https://github.com/JianyiCheng/iiProver. And you can hear Jianyi talking about his work on YouTube here: https://www.youtube.com/watch?v=SdQeBBc85jc.

It Probably Works!

Followers of my research will know that I’ve long been interested in rounding errors and how they can be controlled to best achieve efficient hardware designs. Going back 20 years, I published a book on this topic based on my PhD dissertation, where I addressed the question of how to choose the precision / word-length (often called ‘bit width’ in the literature) of fixed point variables in a digital signal processing algorithm, in order to achieve a controlled tradeoff between signal-to-noise ratio and implementation cost.

Fast forward several years, and my wonderful collaborators Fredrik Dahlqvist, Rocco Salvia, Zvonimir Rakamarić and I have a new paper out on this topic, to be presented by Rocco and Fredrik at CAV 2021 next week. In this post, I summarise what’s new here – after all, the topic has been studied extensively since Turing!

I would characterise the key elements of this work as: (i) probabilistic, i.e. we’re interested in showing that computation probably achieves its goal, (ii) floating point (especially of the low custom-precision variety), and (iii) small-scale computation on straight-line code, i.e. we’re interested in deep analysis of small kernels rather than very large code, code with complex control structures, or code operating on very large data structures.

Why would one be interested in showing that something probably works, rather than definitely works? In short, because worst-case behaviour is often very far from average-case behaviour of numerical algorithms, a point discussed in depth in Higham and Mary’s SIAM paper. Often, ‘probably works’ is good enough, as we’ve seen recently with the huge growth of machine learning techniques predicated on this assumption.

In recent work targeting large-scale computation, Higham and Mary and, independently, Ipsen, have considered models of rounding error that are largely / partially independent of the statistical distribution of the error induced by a specific rounding operation. Fredrik was keen to take a fresh look at the kind of distributions one might see in practice, and in our paper has derived a ‘typical distribution’ that holds under fairly common assumptions.
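To give a flavour of what ‘typical’ means here, the following sketch (my own illustration, not the paper’s derivation) empirically samples the relative error of rounding double-precision values to single precision; under such sampling the average error sits well below the worst-case unit roundoff u = 2^-24:

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>

/* Sample the relative rounding error (fl(x) - x) / x incurred when a
   double is rounded to single precision, for x drawn from [1, 2]. */
double mean_abs_rel_error(unsigned samples, unsigned seed) {
    srand(seed);
    double acc = 0.0;
    for (unsigned i = 0; i < samples; i++) {
        double x = 1.0 + (double)rand() / RAND_MAX;  /* x in [1, 2] */
        float  fl = (float)x;                        /* the rounding step */
        acc += fabs(((double)fl - x) / x);
    }
    return acc / samples;
}
```

Running this with a few hundred thousand samples gives a mean absolute relative error strictly between zero and u, consistent with the intuition that rounding errors rarely sit at their worst-case bound.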

Rocco and Fredrik then decided that a great way to approximate the probabilistic behaviour of the program is to sandwich whatever distribution is of interest between two other easy-to-compute distributions, utilising the prior idea of a p-box.
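A p-box is, at heart, a pair of pointwise bounds on an unknown cumulative distribution function. Here is a minimal discrete sketch of the idea (my own illustration; the construction in the paper is considerably richer):

```c
#include <assert.h>
#include <stdbool.h>

#define NPTS 5   /* grid points at which the unknown CDF is bounded */

/* lo[i] <= F(t_i) <= hi[i] for the unknown CDF F at grid point t_i */
typedef struct { double lo[NPTS], hi[NPTS]; } pbox;

/* true iff a candidate CDF lies inside the box at every grid point */
bool pbox_contains(const pbox *b, const double cdf[NPTS]) {
    for (int i = 0; i < NPTS; i++)
        if (cdf[i] < b->lo[i] || cdf[i] > b->hi[i]) return false;
    return true;
}
```

Any quantity monotone in the CDF (e.g. the probability of exceeding an error threshold) can then be bounded by evaluating it at the two envelopes.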

One of the core problems of automated analysis of numerical programs has always been that of ‘dependence’. Imagine adding together two variables, each in the range [-1,1]. Clearly their sum is in the range [-2,2]. But what if we knew, a priori, that these two variables were related somehow? Consider the expression X + (-X), which is clearly always zero: ideally, an automated system should be able to produce a tighter result than [-2,2] for this! Over the years, many approaches to this issue have arisen, from the very simple approach of affine arithmetic to the more complex semialgebraic techniques Magron, Donaldson and I developed using sequences of semidefinite relaxations.

In our CAV paper, we take the practical step of cutting out regions of the resulting probability space with zero probability using modern SMT solver technology. Another interesting choice made in our paper is which nonlinear dependences to keep and which to throw away for scalability reasons. Similar to my work with Magron, we keep first-order dependence on small rounding error variables but higher-order dependence on input program variables.
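To make the dependence problem concrete, here is a minimal sketch (my own illustration, not the machinery of the paper) contrasting naive interval addition with a one-noise-symbol affine form, which recognises that X + (-X) is exactly zero:

```c
#include <assert.h>
#include <math.h>

/* Naive interval: knows nothing about shared variables. */
typedef struct { double lo, hi; } interval;
interval iv_add(interval a, interval b) {
    return (interval){ a.lo + b.lo, a.hi + b.hi };
}

/* Affine form with one noise symbol: value = c + k*eps, eps in [-1,1].
   The shared symbol eps is what carries the dependence. */
typedef struct { double c, k; } affine;
affine af_add(affine a, affine b) {
    return (affine){ a.c + b.c, a.k + b.k };
}
interval af_range(affine a) {
    return (interval){ a.c - fabs(a.k), a.c + fabs(a.k) };
}
```

Adding the intervals [-1,1] and [-1,1] naively gives [-2,2]; adding the affine forms for X and -X cancels the shared noise symbol and gives exactly [0,0].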

I am really excited by the end result: not only a wonderful blend of ideas from numerical analysis, programming languages, automated reasoning and hardware, but also a practical open-source tool people can use: https://github.com/soarlab/paf. Please give it a try!

Readers interested in learning more about the deeply fascinating topic of numerical properties of floating point would be well advised to read Higham’s outstanding book on the topic. Readers interested in the proofs of the theorems presented in our CAV paper should take a look at the extended version we have on arXiv. Those interested in some of the issues arising (in the worst case setting) when moving beyond straight-line code could consult this paper with Boland. Those interested in the history of this profoundly beautiful topic, especially in its links to linear algebra, would do well to read Wilkinson.

Scheduling with Probabilities

Readers of this blog may remember that Jianyi Cheng, my PhD student jointly supervised by John Wickerson, has been investigating ways to combine dynamic and static scheduling in high-level synthesis (HLS). The basic premise has been that static scheduling, when it works well due to static control, works very well indeed. Meanwhile, for programs exhibiting highly dynamic control flow, static scheduling can be very conservative, a problem addressed by our colleagues Lana Josipović, Radhika Ghosal and Paolo Ienne at EPFL. Together with Lana and Paolo, we developed a scheme to combine the best of both worlds, which we published at FPGA 2020 (and recently extended in IEEE Transactions on CAD). I blogged about this work previously here. We provided a tool flow allowing us to stitch large efficient statically-scheduled components into a dynamic circuit.

However, when scheduling a circuit statically, there are many design choices that can be made, typically to trade off time (throughput, latency) against area. So while our previous work was useful to stitch pre-existing statically-scheduled components into a dynamically-scheduled environment, we had no way of automatically designing those components to optimally fit the dynamic environment.

Enter Jianyi’s latest contribution – to be presented at FCCM 2021 next week.

In his paper “Probabilistic Scheduling in High-Level Synthesis”, Jianyi tackles this problem. He demonstrates that the dynamic environment, including data-dependent decisions and even load-store queues, can be adequately modelled using a Petri net formalism, and uses the PRISM model checker from Kwiatkowska et al. to extract an appropriate initiation interval for each statically-scheduled component.

One of Jianyi’s Petri net models of some memory accesses.
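For readers unfamiliar with the formalism, the essence of a Petri net is the token-firing rule below – a generic sketch of the formalism itself, not of Jianyi’s actual model:

```c
#include <assert.h>
#include <stdbool.h>

#define NPLACES 3

/* A transition is enabled when every input place holds enough tokens. */
bool enabled(const int marking[NPLACES], const int in[NPLACES]) {
    for (int i = 0; i < NPLACES; i++)
        if (marking[i] < in[i]) return false;
    return true;
}

/* Firing consumes the input tokens and deposits the output tokens. */
void fire(int marking[NPLACES], const int in[NPLACES], const int out[NPLACES]) {
    for (int i = 0; i < NPLACES; i++)
        marking[i] += out[i] - in[i];
}
```

Probabilistic model checking then asks quantitative questions about such models – loosely, how often a given transition can fire in the long run – from which a suitable initiation interval can be derived.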

The initiation intervals inferred by Jianyi’s tool can then be given to a commercial HLS tool – in our case Vitis HLS – to schedule each component. The components – together with any remaining dynamically-scheduled code – are then integrated using our previously published framework, producing the complete FPGA-ready design. The whole process provides a quality of result very close to an exhaustive search of possible initiation intervals, without having to perform multiple scheduling runs, and so in a fraction of the time.