FPGA 2024

I have just returned from a wonderful trip to California with colleagues and students, into which I managed to pack: visiting AMD for a day of presentations to each other, co-running a workshop on Spatial Machine Learning, attending the ACM FPGA 2024 conference at which my student Martin Langhammer presented, and catching up with international colleagues and with Imperial College alumni and their families. In this blog post I’ll briefly summarise some of my key work-related takeaways from the trip.

A few of our alumni in the San Francisco Bay Area. Immediately behind me, with white hair, is Dr Nan Zhuang who first taught me logic synthesis when I was a first-year undergraduate and he was a PhD student at Imperial College

AMD and Imperial

At AMD’s Xilinx campus, I was able to share some of our most recent work. I chose to focus on two aspects: the work we’ve done on e-graphs as an EDA data-structure, and how we have developed these for datapath optimisation, and the development of new highly-efficient LUT-based neural networks. It was great to catch up with AMD on the latest AI Engines developments, especially given the imminent open-sourcing of a complete design flow here. My former PhD students Sam Bayliss and Erwei Wang have been working hard on this – amongst many others, of course – and it was great to get a detailed insight into their work at AMD.

SpatialML Workshop

Readers of this blog may know that I currently lead an international centre on spatial machine learning (http://spatialml.net). This year we ran an open workshop, sharing the work we have been doing in the centre and bringing in additional colleagues from outside it. The workshop was exceptionally well attended, with delegates from academia and industry around the world: A*STAR, AMD, Ciena, Cornell, DRW, Essen, Groq, Imperial College, Intel, KAUST, Mangoboost, Minnesota, Rhode Island, Seoul, Simon Fraser, Southampton, Tsinghua, Toronto, and UCLA. We structured the day around a number of excellent academic talks from Zhiru Zhang (Cornell), Vaughn Betz (University of Toronto), Paul Chow (University of Toronto), Lei Xun (University of Southampton), Atefeh Sohrabizadeh (UCLA), and Alex Montgomerie (Imperial College), combined with industrial keynotes from Satnam Singh (Groq) and Sam Bayliss (AMD). We closed with a panel discussion on working across abstraction layers, with Jason Anderson (U of T), Jason Cong (UCLA), Steve Neuendorffer (AMD), and Wayne Luk (Imperial College) on the panel and me chairing (sadly Theo Drane from Intel could not make it due to poor weather conditions in Northern California). Slides that are publicly sharable will be made available on the workshop website.

We learned about the use of machine learning (graph representation learning, GNNs) in EDA flows, about scaling LLM implementations across multiple devices in industry and academia, about dynamic DNNs and resource scaling, about programming models and flows for deep learning acceleration in both academia and industry (often using MLIR), and about FPGA architectural enhancements to support more efficient deep learning.

We received very positive feedback from workshop attendees. I would like to express particular thanks to my workshop co-organisers, especially the astoundingly productive Andrew Boutros of the University of Toronto, for all his hard work making this workshop happen.

Our SpatialML workshop begins
The panel discussion at the workshop. L-R: Jason H. Anderson, Jason Cong, George A. Constantinides, Wayne Luk, Stephen Neuendorffer

FPGA 2024

I consider the ACM FPGA conference to be my “home” conference: I’m a steering committee member and former chair, but this year my only roles were as a member of the best paper committee and as a session chair, so I could generally sit back and enjoy listening to the high-quality talks and interacting with the other delegates. This year Andrew Putnam from Microsoft was technical program chair and Zhiru Zhang from Cornell was general chair. There is of course too much presented at a conference to try to summarise it all, but here are some highlights for me.

  • Alireza Khataei and Kia Bazargan had a nice paper, “CompressedLUT: An Open Source Tool for Lossless Compression of Lookup Tables for Function Evaluation and Beyond”, on lossless compression of large lookup tables for function evaluation.
  • Ayatallah Elakhras, Andrea Guerrieri, Lana Josipović, and Paolo Ienne have done some great work, “Survival of the Fastest: Enabling More Out-of-Order Execution in Dataflow Circuits”, on enabling (and localising) out-of-order execution in dynamic high-level synthesis, extending the reach of out-of-order execution beyond the approach I took with Jianyi Cheng and John Wickerson.
  • Louis-Noël Pouchet, Emily Tucker, and coauthors have developed a specialised approach to checking equivalence of two HLS programs, “Formal Verification of Source-to-Source Transformations for HLS”, for the case where there are no dynamic control decisions (common in today’s HLS code), based on symbolic execution, rewriting, and syntactic equivalence testing. It basically does what KLEE does for corner cases of particular interest to HLS, but much faster. Their paper won the best paper award at FPGA 2024.
  • Jiahui Xu and Lana Josipović had a nice paper, “Suppressing Spurious Dynamism of Dataflow Circuits via Latency and Occupancy Balancing”, allowing for a smoother tradeoff between dynamic and static execution in high-level synthesis than the early work of my PhD student Jianyi Cheng achieved on this topic. They get this by balancing paths for latency and token occupancy in a hardware-efficient way.
  • Daniel Gerlinghoff and coauthors had a paper, “Table-Lookup MAC: Scalable Processing of Quantised Neural Networks in FPGA Soft Logic”, building on our LUTNet work and extensions thereof, introducing various approaches to scale down the logic utilisation via sequentialisation over some tensor dimensions and over bits in the linear (LogicNets) case.
  • My PhD student Martin Langhammer (also with Altera) presented his work, “A Statically and Dynamically Scalable Soft GPGPU”, on a super high clock frequency soft GPU for embedding into FPGA designs.
Martin Langhammer presenting our work on a super-high clock frequency compact soft GPU

Reflections

I’ve come away with many ideas from all three events: the AMD visit, the workshop, and the FPGA conference. In-person attendance at conferences is still the best way to generate these; I didn’t get nearly as many ideas when attending FPGA 2021 or 2022 remotely. It was also particularly satisfying to see our work on soft-logic efficient deep neural networks (starting with LUTNet, most recently PolyLUT) being cited by so many people at the conference; this work appears to have made a real long-term impact.

Finally, it is always a joy to visit Monterey. This year FPGA was held at a hotel on Cannery Row, described by John Steinbeck as “a poem, a stink, a grating noise, a quality of light, a tone, a habit, a nostalgia, a dream”.

Review: Empire of Normality

Robert Chapman, a neurodivergent philosopher, published “Empire of Normality: Neurodiversity and Capitalism” in 2023. I have been reading this thought-provoking book over the last few weeks, and have a few comments.

A Historical Materialist Approach

The general thesis of Chapman’s book, as I understand it, is that the very concept of “normal human functioning” is intimately bound up with – and emerged from – the economic demands of capitalism. Their book takes the reader on a really interesting journey from ancient to modern views of disease and abnormality, taking in the rise and fall of eugenics, the anti-psychiatry movement and its cooption by neoliberal policymakers, Fordism and post-Fordist production and their impact on mental health, the rise of the neurodiversity movement, and Chapman’s views on neurodiversity and liberation. The book aims to take a historical materialist approach to these questions, emphasising the importance of material economic conditions throughout. I learnt a lot about the history of mental illness, disability, and neurodiversity.

Chapman argues that modern capitalism is harmful to neurotypical and neurodivergent people alike, with neurotypical workers facing extreme forms of alienation while neurodivergent potential workers are often consigned to a “surplus class”. This term appears to denote a group which can be called on to re-enter the workforce during times of economic growth, akin to the traditional unemployed, but the mechanisms for this to actually happen are not fully drawn out. I’m familiar with the classical concept of a “reserve army of labour”, but not with that of a “surplus class”; late in the book, Chapman cites Adler-Bolton and Vierkant for the concept.

Bodies as Machines, Measurement & Pathology

The metaphor of the body as a machine, and of both neurodiversity and mental illness as corresponding to “broken machines”, is criticised heavily throughout the book. Yet the metaphor appears to be used in two ways by Chapman: one in which a machine is a deterministic device that can be repaired or enhanced with appropriate technical skill, and another in which a machine is a mass-produced device with interchangeable parts and interfaces to the outside world. It is not always clear to me which of these Chapman is criticising, and I think arguments against the first view are sometimes used rhetorically to support arguments against the second.

There’s a very detailed and interesting discussion on the history of measurement and its link to identifying outlying individuals. What’s less well developed, in my view, is a clear understanding of the potential of clustering of certain traits. To my mind, labels like ADHD have meaning precisely because they define one or more clusters of observable phenomena. These clusters may themselves be very far from the mean, yet identifying such clusters as clusters, rather than as isolated atypical individuals, can clearly be helpful. This point arises again much later in the book, where Chapman refers to “autistic customs” when describing the rise of the neurodiversity movement; the very idea of “customs” seems to apply only to groups of individuals, so this focus on clustering rather than on individuals seems essential to any argument made on this basis, rather than solely around the need to accommodate each unique individual. A reference is made late in the book to Sartre’s notion of ‘serial collectives’, which may hold out a basis for further analysis of this point. “Far from the mean” is also often taken implicitly to correspond to “far below the mean”, and I think the explicit thesis that “the mean is ideal”, linked by Chapman to middle-class values, is missing a fleshed-out discussion of where this leaves “far above the mean” in terms of valuation.

The “pathology paradigm” is introduced as a set of assumptions: that mental and cognitive functioning are individual (whereas, as Chapman convincingly argues, there is a clear social element), that they are based on natural abilities, and that they can be totally ordered (or, as Chapman states it, ‘ranked in relation to a statistical norm across the species’).

Chapman does an excellent job in explaining how modern production leads to separation and sorting by neurotype. The claim made that “increasingly, new forms of domination have less to do with social class, which now, to an extent, is more fluid than in the 19th century, and much more to do with where each of us falls on the new cognitive hierarchies of capitalism” seems weaker if – by domination – it is intended to mean that the separation is self-reinforcing, akin to the traditional Marxist view of capital accumulation. This conclusion is hinted at, but I don’t think the case is strongly made – nor am I sure it is correct.

Chapman’s Marxian argument is strong when utilising the concept of alienation in the “post-Fordist” service economies to argue that some traits that had been relatively benign became associated with disablement as a result of this economic shift. What is less clear to me is whether the same could be said the other way round: are there traits we can identify that were disabling under a “Fordist” economy but are now relatively benign? I hear much said elsewhere, for example, about the shift to home-working benefiting some autistic workers significantly, especially in the software engineering industries. Their linking of disability to capitalism’s historical growth is clear and well argued, though I felt it weaker when looking to the future. For example, it is stated that “capitalist logics both produce and require disablement”; here I am unsure whether “require” is meant in the sense of directly flowing from the commodification of labour or in the sense of requiring a disabled “surplus class”. The argument for the former appears much stronger to me. Similarly, the term “physician class” is used at several points in the text, and I am not completely sure how physicians would constitute a class per se.

I found the discussion of the rise, and revision, of the Diagnostic and Statistical Manual of Mental Disorders (DSM) fascinating, especially the arguments for including – and then removing – homosexuality from the manual (it appeared in DSM-I and DSM-II but was eliminated under DSM-III) on the basis that it did not meet the new criterion that “the disturbance is not only in the relationship between the individual and society”. I also did not know that, according to Chapman, some 25% of the UK population has been prescribed psychotropic medication – a statistic I find extraordinary.

Some Thoughts to be Developed

I think there is plenty that could be said about what I would describe as the scalarisation of value through the one-dimensional lens of price, and how this potentially moves to limit and devalue variety, not only in neurotype but across political economy. The point is hinted at here, via ideas around the commodification of labour, but perhaps Chapman or others could be convinced to develop it further.

Given my background in school governance, I was struck by the commentary on differences in development relative to one’s age group becoming increasingly salient during the Fordist period. There is, I suspect, a lot more to say about the shift in the education world to ‘teaching-to-the-age’ rather than ‘teaching-to-the-individual’, a trend that was markedly accelerated in England by the 2014 national curriculum.

I feel I need to read more about behaviourism. As someone who has studied the behaviour of dynamical systems, I see a difference between the philosophical approach Chapman ascribes to Watson, “science should focus only on what can objectively be observed”, which strikes me as a call to extensionality, and the conclusions claimed to follow (whether the inference is Watson’s or Chapman’s, I am not clear, due to my ignorance of the field), namely that behaviour can usefully be directly controlled (e.g. via Applied Behaviour Analysis, an approach with a very bad reputation in the autistic community). The first claim can hold without the second having any validity, in the presence of stateful behaviour.

Looking to the Future

In contrast to the history of neurodiversity, the history of the Soviet Union that Chapman provides is weak and doesn’t really follow the materialist approach set out earlier in the book. We are told that not being career politicians contributed to the Bolsheviks’ failure. While much space is given to the ‘state capitalist’ nature of the early Soviet state, the almost absolute discontinuity between that state and the Stalinist bureaucracy is not discussed; rather, we are told that Stalin “gave up on shifting beyond state capitalism [and] simply declared that communism had been achieved”. The idea that Stalin inherited the post-revolutionary state whole seems very odd, and I am also pretty sure he never proclaimed that (full) communism had been achieved.

The arguments Chapman makes for liberation I find less convincing than their materialist analysis of the emergence of neurodiversity as a form of disablement. In particular, there appears to me to be a jump from the well argued case that capitalism brings about groups of disadvantaged neurodiverse, to the less well argued case that such groups can form agents of change by organising in those groups – as Chapman puts it, “neurodivergent workers organising as neurodivergents” and to “empower the surplus as surplus”. Despite the materialist approach taken to historical analysis of neurodiversity and disability, this materialism appears somewhat missing from the calls to “turn[…] everyday comportment and behaviour into forms of resistance” or the statement that “we [disabled] have collective cognitive power … that could be harnessed no less than the collective power of the working class”, or that envisioning the future requires “mass consciousness-raising, critique, and collective imagining”.

Overall

Despite my gripes above, I would recommend this book. I learnt a lot of history, and I think it’s great to see this being approached from a materialist perspective. I also get the sense of Chapman, a neurodivergent philosopher, being determined to live their philosophy, and this is definitely to be celebrated in my book.

Nick Higham

I am devastated to be told today that my colleague Nick Higham has died. Nick was an outstanding research leader in numerical analysis, but he was also a deeply kind man, and I feel I need to say something about the limited time I spent with him.

I have always worked on the boundary of electronic engineering and computer science, often working on efficient hardware implementations of numerical algorithms. While as a research student I was aware of the early work of Jim Wilkinson, it was only after my PhD was complete in 2001/2 that I became aware of Nick’s work via his outstanding book Accuracy and Stability of Numerical Algorithms. There are only a few books I have purposely bought multiple copies of over the years, and this is one. I have at least three – one for home, one for work, one for my PhD students. It happens to be sitting on my desk at work next to my laptop as I write this. As does an early Muldivo mechanical calculator – another shared joy we bonded over: Nick’s office featured a Brunsviga model last time I was there.

I can’t remember how or when I first met Nick in person, but we hit it off very quickly. Despite being as eminent as they come, he took a deep interest in the work of a junior member of academic staff in a different discipline. He invited me to present in Manchester on a number of occasions, included me in SIAM conference mini-symposia he was organising, and involved me in a Royal Society event he co-organised. We had deep technical discussions, yes, but we also spoke about careers, about my role as a school governor (and about the school his son was then attending), about Jim Wilkinson’s work, and he encouraged me to think about fellowship of SIAM. One of my favourite conference experiences was attending SIAM CSE in Spokane, Washington, at Nick’s invitation, primarily because it was so different to the usual Electronics and Computing conferences I tend to attend.

Nick was a profoundly practical person as well as being deeply involved in theoretical analysis. After I told him about some experimental observations a PhD student and I had made some years earlier regarding finite precision effects in certain Krylov methods, he was surprised and a few days later had coded up an example, verified this behaviour, and written a SIAM News column about it! We connected over the interrelation of theory and practice, and he told me that he very much liked my suggestion that the rise of custom hardware and FPGA-based compute meant the time had come for computer science, numerical analysis, and electronic engineering to come back together, after a long period of the disciplines not talking to each other as much as I would like. He was also amused by my interest in “very pure” (his words) real algebraic geometry.

Nick taught me – whether he knew it or not – how to communicate with mathematicians, as well as how to use LaTeX for slides… even down to emailing me suggestions for LaTeX code hacks to make my slides better. His presentations were always a model of clarity and depth.

Many colleagues in the Numerical Analysis and Linear Algebra world will know Nick much better than I did, but he touched my research life more deeply than most. Somewhere in the back of my mind, I had the idea that I’d like to take a sabbatical leave with Nick in the future. I thought there would be plenty of time. 

PolyLUT: Ultra low latency inference

This week my PhD student Marta Andronic will present our paper “PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference” at the International Conference on Field Programmable Technology (FPT), in Yokohama, Japan, where it has been short-listed for the best paper prize. In this blog post I hope to provide an accessible introduction to this exciting work.

FPGAs are a great platform for high-performance (and predictable-performance) neural network inference. There’s a huge amount of work that’s gone into both the architecture of these devices and the development of really high quality logic synthesis tools. In our work we look afresh at the question of how best to leverage both of these in order to do really low-latency inference, inspired by the use of neural networks in environments like CERN, where the speed of particle physics experiments and the volume of data generated demand really fast classification decisions: “is this an interesting particle interaction we’ve just observed?”

Our colleagues at Xilinx Research Labs (now part of AMD) published a great paper called LogicNets in 2020 that aimed to pack the combination of linear computation and activation function in a neural network into lookup tables in Verilog, a hardware description language, which then get implemented using the logic synthesis tools on the underlying soft logic of the FPGA. Let’s be a bit more precise. The operation of a typical neuron in an artificial neural network is to compute the real-valued function y = \phi( w^T x + b ) for its inputs x and some learned weight vector w and bias b, where \phi is typically a fixed nonlinear function such as a ReLU. In practice, of course, we use finite-precision approximations to x and y. The Xilinx team noted that if they restrict the length of the vector x to be small enough, then the quantised version of this entire function can be implemented by simply enumerating all possible values of the input vector x to form a truth table for y, and using the back-end logic synthesis tools to implement this function efficiently, stitching together the neurons constructed in this way to form a hardware neural network. Moreover, note that in this setting one does not need to quantise the weights and bias at all – since the computation is absorbed in a truth table, arbitrary real-valued weights are just fine.
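To make the enumeration concrete, here is a minimal Python sketch of the idea – not the LogicNets code, and with a ReLU activation, unsigned input encoding, and bit-widths chosen purely for illustration – showing how a single low-fan-in neuron with full-precision weights collapses into a truth table:

import itertools
import numpy as np

def neuron_truth_table(w, b, in_bits, out_bits):
    # Tabulate y = relu(w.x + b) for every quantised input vector x.
    # Input codes are unsigned integers interpreted as values in [0, 1);
    # the weights and bias stay full-precision, since they are absorbed
    # into the table; only the output is re-quantised to out_bits.
    fan_in = len(w)
    table = {}
    for x_codes in itertools.product(range(2 ** in_bits), repeat=fan_in):
        x = np.array(x_codes) / 2 ** in_bits                      # decode inputs
        y = max(0.0, float(np.dot(w, x) + b))                     # linear + ReLU
        table[x_codes] = min(int(y * 2 ** out_bits), 2 ** out_bits - 1)
    return table

# A fan-in-3, 2-bit-input neuron needs only 2^(3*2) = 64 table entries.
table = neuron_truth_table(w=[0.7, -1.3, 0.4], b=0.1, in_bits=2, out_bits=2)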

Marta and I have generalised this approach considerably. Firstly, we note that once we’re down at the level of enumerating truth tables, there’s no particular reason to limit ourselves to functions of the form y = \phi( w^T x + b ) – why not use an arbitrary function instead? From the perspective of the enumeration process and logic synthesis, it makes no difference. But from the perspective of neural network training, it certainly does. If we really wanted to look at arbitrary functions, we’d need a number of trainable parameters that scales exponentially with the number of bits used to represent the entire vector x. This might be OK for very small vectors – indeed, with my former student Erwei Wang, I looked at something similar for the tiny physical lookup tables inside FPGA devices – but at the scale of neurons this is infeasible. So what family of functions should we use? Marta and I propose to use polynomials, where the total degree is fixed as a user-defined parameter we call D. In this way, we can tune a knob: turn it down to D=1 for minimal expressivity but the smallest number of parameters to train, recovering LogicNets as a special case; turn it up and you get much more expressive functions you can pack into your LUTs, at the cost of more parameters to train. In a classical neural network, composition of linear layers and ReLU functions gives rise to the overall computation of a continuous piecewise linear function. In our networks, we have continuous piecewise polynomial functions. The important thing, though, is that we never have to do all the expensive multiplications and additions one typically associates with evaluating a polynomial – the implementation is all just a table lookup for each neuron, just like in the linear case.
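And here is the previous sketch generalised in the PolyLUT spirit – again my own illustration rather than Marta’s code – where the tabulated function becomes a degree-D multivariate polynomial built from a monomial feature map; setting degree=1 recovers the affine case above:

from itertools import combinations_with_replacement, product
import numpy as np

def monomials(x, degree):
    # All monomials of the entries of x up to the given total degree,
    # including the constant term; degree = 1 gives the affine case.
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(float(np.prod([x[i] for i in idx])))
    return np.array(feats)

def poly_neuron_truth_table(coeffs, degree, in_bits, out_bits, fan_in):
    # Same enumeration as before, but the tabulated function is now a
    # degree-D polynomial of the inputs rather than an affine map.
    table = {}
    for x_codes in product(range(2 ** in_bits), repeat=fan_in):
        x = np.array(x_codes) / 2 ** in_bits
        y = max(0.0, float(np.dot(coeffs, monomials(x, degree))))
        table[x_codes] = min(int(y * 2 ** out_bits), 2 ** out_bits - 1)
    return table

# Degree 2, fan-in 3: 1 + 3 + 6 = 10 trainable coefficients per neuron,
# yet still only 64 table entries in hardware.
table = poly_neuron_truth_table(coeffs=np.random.randn(10), degree=2,
                                in_bits=2, out_bits=2, fan_in=3)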

So what does higher degree actually buy us? It’s well known that deep neural networks of the classical form are universal function approximators, so with enough neurons (enough depth or width), we can arbitrarily closely approximate a continuous function anyway. But by packing more functionality into each neuron, which anyway gets enumerated and implemented using lookup tables in the FPGA (just like the classical case), we can get the same accuracy with fewer layers of network. And fewer layers of network means fewer layers of Boolean logic, which means lower latency. In fact, we show in our results that one can often at least halve the latency by using our approach: we run 2x as fast! This speedup comes from the more intricate decision boundaries one can implement with polynomial compute; look at this beautiful near-separation of the two dot colours using a curved boundary, and imagine how many small linear segments you would need to do a similar job – this provides the intuition for why our networks perform so well.

PolyLUT is available for others to use and experiment with. Don’t just take our word for it, download Marta’s code and try it out for yourself! I’m also delighted that Marta’s paper at FPT is one of just three at the conference to have earned all available ACM badges, verifying that not only is her work public, but it has been independently verified as reproducible. Thank you to the artifact review team for their work on the verification!

Industry Coheres around MX?

In recent years there has been a rediscovery (sometimes multiple times!) of block floating point, a number representation used for general-purpose compute in the early days of the electronic computer, and in specialist areas such as signal processing throughout the field’s history. The main reason for this modern rediscovery is the value of block floating point, and related ideas like block minifloat, in modern machine learning implementations.

Just like in the early days of floating point, a number of different approaches and implementations have flourished in recent years, so it was interesting to see the recent launch of the Open Compute Project OCP Microscaling Formats (MX) Specification, coauthored across Microsoft, AMD, Arm, Intel, Meta, NVIDIA and Qualcomm by a number of well-known figures (including Imperial’s very own Martin Langhammer). This document will be referred to as “the specification” in the rest of this blog post. At the same time, a paper was released on arXiv, supported by a GitHub repo, to show that when used for training deep neural networks (DNNs), MX-based implementations can produce high quality results.

In this blog post, I will briefly review a few of the features of the specification that I found particularly interesting as an “old hand” in fixed point and block floating point computer arithmetic, and raise some issues that the community may wish to take forward.

Firstly, I should say that it’s great to see efforts like this to raise the profile and awareness of block floating point and related number systems. My hope is that efforts like this will drive further work on the extremely important topic of efficient numerical hardware.

Storage

  1. Building block types. The focus of the specification is primarily one of storage, although certain computations are mentioned. The specification primarily defines the construction of a type for MX computations from three parameters: a (scalar) type for scalings, a (scalar) type for elements, and a (natural) number of elements. A value inhabiting this type is a scalar scaling X together with k elements, P_i, for 1 \leq i \leq k. The numbers represented are (special values aside) v_i = X \cdot P_i; a minimal decoding sketch is given after this list. This is a very flexible starting point, which is to be welcomed, and there is much to explore under this general hood. In all the examples given, the scalar data types are either floating point or integer/fixed point, but there seems to be nothing in the specification specifically barring other representations, which could of course go far beyond traditional block floating point and more recent block exponent biases.
  2. Concrete types. In addition to specifying a generic type constructor, as above, the specification also defines certain concrete instantiations, always with a power-of-two scaling stored in a conventional 8-bit biased integer exponent, and with either floating point or integer elements. By my reading of the specification, to obtain an MX-compliant implementation, one does not seem to need to implement any of these concrete types, however.
  3. Subnormals mandated. If one of the specified concrete formats is implemented, subnormal representations must be supported.
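To make the storage model concrete, here is a minimal Python decoding sketch of the generic construction described in item 1. The 8-bit biased exponent for the scale mirrors the concrete formats, but the bias value of 127 and the element LSB weighting are assumptions of mine for illustration, not quotations from the specification:

import numpy as np

SCALE_BIAS = 127          # assumed bias for the shared 8-bit scale exponent

def decode_mx_block(scale_code, element_codes, element_lsb=1.0):
    # Decode one MX block: a shared power-of-two scale X plus k elements P_i,
    # giving v_i = X * P_i.  Special values are ignored in this toy, and
    # element_lsb stands in for however the element type weights its
    # least-significant bit.
    X = 2.0 ** (int(scale_code) - SCALE_BIAS)
    P = np.asarray(element_codes, dtype=np.float64) * element_lsb
    return X * P

# e.g. a k = 32 block of signed integer element codes sharing one scale byte
values = decode_mx_block(scale_code=121, element_codes=np.arange(-16, 16))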

Computation

1. Dot products

Apart from conversion into MX, the only MX operation discussed in the specification is the dot product of two MX vectors with an equal number of elements. This appears to leave some basic operations undefined, for example simple addition of two MX vectors. (It is, of course, possible to provide a reasonable semantics for this operation consistent with the scalar base types, but it is not in the specification.)

The definition given of the “Dot” operation is interesting. The specification says “The following semantics must be minimally supported”, and gives the following equation:

C = \text{Dot}(A,B) = X^{(A)}X^{(B)}\sum_{i=1}^k P_i^{(A)} \times P_i^{(B)}

where P_i^{(A)} denotes the ith element of vector A, X^{(A)} denotes the scaling of A, and so on.

To my mind, the problem with this definition is that \times (on elements), the (elided) product on scalings, and the sum over element products are all undefined. The intention, of course, is that we want the dot product to approximate a real dot product, in the sense that \llbracket C \rrbracket \approx \llbracket X^{(A)} \rrbracket \llbracket X^{(B)} \rrbracket \sum_{i=1}^k \llbracket P_i^{(A)} \rrbracket \times \llbracket P_i^{(B)} \rrbracket, where the semantic brackets correspond to interpretation in the base scalar types and the arithmetic operations are over the reals. But the nature of the approximation is left unstated. Note, in particular, that it is not a form of round-to-nearest defined on MX, as hinted at by the statement “The internal precision of the dot product and order of operations is implementation-defined.” This at least suggests that there’s an assumed implementation of the multi-operand summation via two-input additions executed in some (identical?) internal precision in some defined order, though this is a very specific – and rather restrictive – way of performing multi-operand addition in hardware. There’s nothing wrong with leaving this undefined, of course, but an MX-compliant dot product would need to have a very clear statement of its approximation properties – perhaps something that can be considered for the standard in due course.
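For illustration, here is one possible – and deliberately arbitrary – realisation of Dot in Python, accumulating in float64 from left to right. My reading is that the specification would permit this choice, but it equally permits many others, which is exactly the point about implementation-defined precision and order:

import numpy as np

def mx_dot(XA, PA, XB, PB):
    # One realisation of Dot(A, B): multiply the two (decoded) scales and
    # accumulate the element products.  Here everything is computed in
    # float64, summed left to right; the specification leaves both the
    # internal precision and the order of operations implementation-defined.
    assert len(PA) == len(PB)
    acc = 0.0
    for pa, pb in zip(np.asarray(PA, dtype=np.float64),
                      np.asarray(PB, dtype=np.float64)):
        acc += float(pa) * float(pb)
    return XA * XB * acc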

2. General Dot Products

The specification also defines an operation called “DotGeneral” which is a dot product of two vectors made up of MX-vector components. The specification defines it in the following way:

C = \text{DotGeneral}(A,B) = \sum_{j=1}^n \text{Dot}(A_j, B_j)

“where A_j and B_j are the jth MX-compliant subvectors of A and B.” As in the “Dot” definition, the summation notation is not defined in the specification, and this definition doesn’t carry the same wording as “Dot” on internal precision and order, though I am assuming it is intended to. It is also left undefined how to form the component subvectors. I am assuming that the authors envisaged the vector A_j consisting of the jth block of k contiguous elements of the vector A, but of course there are many other ways this could be done (e.g. strided access).
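Under my assumed contiguous-subvector reading, DotGeneral might then look like the following sketch, reusing the mx_dot function above (again just one of many choices the specification appears to allow):

def mx_dot_general(XAs, PAs, XBs, PBs):
    # DotGeneral under the assumed contiguous-subvector reading: the jth
    # MX block of A is paired with the jth MX block of B, and the per-block
    # Dot results are summed (again in an implementation-defined way).
    return sum(mx_dot(xa, pa, xb, pb)
               for xa, pa, xb, pb in zip(XAs, PAs, XBs, PBs))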

I would be interested to hear from any blog readers who think I have misinterpreted the standard, or who think other components of the specification – not discussed here – are more interesting. I will be watching for uptake of this specification: please do alert me of interesting uses.

Supercharging Formal with E-Graphs

Designing circuits is not easy. Mistakes can be made, and mistakes made are not easily fixed. Even leaving aside safety-critical applications, product recalls can cost upwards of hundreds of millions of US$, and nobody wants to be responsible for that. As a result, the industry invests a huge effort in formally verifying hardware designs, that is, coming up with computer-generated or computer-assisted proofs of hardware correctness.

My Intel-funded PhD student Sam Coward (jointly advised by Theo Drane) is about to head off to FMCAD 2023 in Ames, Iowa, to present a contribution to making this proof generation process speedier and more automated, while still relying on industry-standard, tried and trusted formal verification tools. Together with his Intel colleague Emiliano Morini and then-intern (and Imperial student) Bryan Tan, Sam noticed that the e-graph techniques we have been using in his PhD for optimising datapath [1, 2] also have a natural application to verifying datapath designs, akin to – but distinct from – their long-time use as a data structure in SMT solvers.

The basic idea is straightforward. Ever since our earliest ARITH paper, we’ve developed a toolbox of Intel-inspired rewrite rules that can be applied to optimise designs across parametric bitwidths. Our optimisation flow, called ROVER, applies these to a single expression we wish to optimise, and then we extract a minimal-cost implementation. Our new paper asks a different question: what if we start, not from a single expression we wish to optimise, but from two expressions we’re trying to prove to be equivalent?

Following the approach I blogged about here from ROVER, if the expressions are not equivalent then they should never end up in the same equivalence class after multiple rewrite iterations. If they are equivalent then they may or may not end up in the same equivalence class: we provide no guarantees of completeness. So let’s think about what happens in each of these cases.
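As a toy illustration of this asymmetry – very much not the real flow, which builds a shared e-graph using the egg library – one can saturate each expression’s set of rewritten forms under a couple of example datapath rules and check whether the two sets ever meet; an intersection witnesses equivalence, while an empty intersection proves nothing:

def rewrites(e):
    # Yield expressions reachable from e in one step, using two example
    # datapath rules: x + x -> 2*x and 2*x -> x << 1.
    if isinstance(e, tuple):
        op, *args = e
        if op == '+' and args[0] == args[1]:
            yield ('*', 2, args[0])
        if op == '*' and args[0] == 2:
            yield ('<<', args[1], 1)
        for i, a in enumerate(args):                 # rewrite inside arguments
            for a2 in rewrites(a):
                yield (op, *args[:i], a2, *args[i + 1:])

def reachable(e, steps=5):
    seen, frontier = {e}, {e}
    for _ in range(steps):
        frontier = {e2 for e1 in frontier for e2 in rewrites(e1)} - seen
        seen |= frontier
    return seen

# Two candidate implementations of the same function:
lhs, rhs = ('+', 'a', 'a'), ('<<', 'a', 1)
print(reachable(lhs) & reachable(rhs))   # non-empty: the toy 'proves' equivalence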

If they end up in the same equivalence class, then our tool thinks it has proven the two expressions to be equivalent. But no industrial design team will trust our tool to just say so – and rightly so! – tool writers make mistakes too. However, work by Sam and other colleagues from the Universities of Washington and Utah has provided a methodology by which we can use our tool’s reasoning to export hints, in the form of intermediate proof steps, to a trusted industrial formal verification tool. And so this is how we proceed: our tool generates many, sometimes hundreds, of hints which help state-of-the-art industrial formal verification tools to find proofs more quickly – sometimes much more quickly – than they were able to do beforehand.

So what if the two expressions don’t end up in the same equivalence class, despite actually being equivalent? No problem! We can still find two expressions for which their proof of equivalence, via some external tool, makes the overall proof we’re aiming for much easier… crunching down the proof to its hard essence, stripping away what we can. The main technical idea here is how to extract the expressions from the e-graph. Rather than aiming for the best possible implementation cost, as we did with ROVER, we aim to minimise the distance between the two implementations that are left for the industrial tool to verify. The overall process is shown in the figure below, where “EC” refers to an equivalence check conducted by the industrially-trusted tool and S^* and I^* are the extracted “close” expressions.

We show one example where, without the hints provided by our tool, the industrial formal verifier does not complete within a 24-hour timeout. Even when results are achievable with the industrial formal verifier alone, we still see up to a 6x speedup in proof convergence.

This work forms a really nice “power use” of e-graphs. If you are at FMCAD 2023, come along and hear Sam talk about it in person!

Discovering Special Cases

My PhD student Sam Coward (jointly advised by Theo Drane from Intel) is about to head off on a speaking tour, where he will be explaining some of our really exciting recent developments. We’ve developed an approach that allows the discovery, exploitation, and generation of datapath hardware that works well in certain special cases. We’ve developed some theory (based on abstract interpretation), some software (based on egg), and some applications (notably in floating-point hardware design). Sam will be formally presenting this work at SOAP, EGRAPHS, and DAC over the coming weeks. In this blog post, I will try to explain the essence of the work. More detail can be found in the papers, primarily [1,2], with [3] for background.

We know that sometimes we can take shortcuts in computation. As a trivial example, we know that \text{abs}(x) can just be replaced by x for non-negative values of x. Special cases abound, and are often used in complex ways to create really efficient hardware. A great example of this is the near/far-path floating-point adder. Since the publication of this idea by Oberman and Flynn in the late 1990s, designs based on this have become standard in modern hardware. These designs use the observation that there are two useful distinct regimes to consider when adding two values of differing sign. If the numbers are close in magnitude then very little work has to be done to align their mantissas, yet a lot of work might be required to renormalise the result of the addition. On the other hand, if the two numbers are far in magnitude then a lot of work might be needed to align their mantissas, yet very little is required to renormalise the result of the addition. Thus we never see both alignment and renormalisation being significant computational steps.
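As a small numerical illustration of the two regimes – a back-of-the-envelope Python sketch using frexp exponents as a crude proxy, not a hardware model – compare the alignment and renormalisation shifts for a near-path and a far-path subtraction:

import math

def fp_sub_shifts(a, b):
    # For a - b with a > b > 0: the alignment shift is the exponent
    # difference, and the renormalisation shift is the number of positions
    # the result's leading one has dropped due to cancellation.
    ea, eb = math.frexp(a)[1], math.frexp(b)[1]
    align = abs(ea - eb)
    renorm = max(ea, eb) - math.frexp(a - b)[1]
    return align, renorm

print(fp_sub_shifts(1.0, 0.9999999))   # near path: tiny alignment, big renorm
print(fp_sub_shifts(1.0, 1e-12))       # far path: big alignment, renorm <= 1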

Readers of this blog may remember that Sam, Theo and I published a paper at ARITH 2022 that demonstrated that e-graphs can be used to discover hardware that is equivalent in functionality, but better in power, performance, and area. E-graphs are built by repeatedly using rewrite rules of the form \ell \to r, e.g. \texttt{x + x} \to \texttt{2*x}. But our original ARITH paper wasn’t able to consider special cases. What we really need for that is some kind of conditional rewrite rules, e.g. x \geq 0 \Rightarrow \texttt{abs(x)} \to \texttt{x}, where I am using math script to denote the value of a variable and teletype script to denote an expression.

So we set out to answer:

  1. how can we deal with conditional rewrites in our e-graphs?
  2. how can we evaluate whether a condition is true in a certain context?
  3. how can we make use of this to discover and optimise special cases in numerical hardware?

Based on an initial suggestion from Pavel Panchekha, Sam developed an approach to conditionality by imagining augmenting the domain in which we’re working with an additional element, let’s call it *. Now let’s imagine introducing a new operator \texttt{assume} that takes two expressions, the second of which is interpreted as a Boolean. Let’s give \texttt{assume} the following semantics: \llbracket \texttt{assume(x,c)} \rrbracket = \llbracket \texttt{x} \rrbracket \text{ if } \llbracket c \rrbracket \text{ , and } * \text{, otherwise}. In this way we can ‘lift’ equivalences in a subdomain to equivalences across the whole domain, and use e-graphs without modification to reason about these equivalences. Taking the absolute value example from previously, we can write this equivalence as \texttt{assume( abs(x), x >= 0 )} \to \texttt{assume( x, x >= 0 )}. These \texttt{assume} function symbols then appear directly within the e-graph data structure. Note that the assume on the right-hand side here is important: we need both sides of the rewrite to evaluate to the same value for all possible values of x, and they do: for negative values they both evaluate to * and for non-negative values they both evaluate to x.
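A tiny executable sketch of these semantics (using a Python sentinel object for *, purely for illustration):

STAR = object()      # the extra domain element '*'

def assume(x, c):
    # The value of x when the condition c holds, and '*' otherwise.
    return x if c else STAR

# The conditional rewrite  x >= 0  =>  abs(x) -> x  becomes the unconditional
# equivalence  assume(abs(x), x >= 0) == assume(x, x >= 0): for negative x
# both sides are '*', and for non-negative x both sides are x.
assert all(assume(abs(x), x >= 0) == assume(x, x >= 0) for x in range(-5, 6))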

So how do we actually evaluate whether a condition is true in a given context? This is essentially a program analysis question. Here we make use of a variation of classical interval arithmetic. However, traditionally interval arithmetic has been a fairly weak program analysis method. As an example, if we know that x \in [-1,1], then a classical evaluation of x - x would give me [-2,2], due to the loss of information about the correlation between the left and right-hand sides of the subtraction operator. Once again, our e-graph setting comes to the rescue! Taking this example, a rewrite \mathtt{x - x} \to \mathtt{0} would likely fire, resulting in zero residing in the same e-class as \mathtt{x - x}. Since the interval associated with \mathtt{0} is [0,0], the same interval will automatically be associated with \mathtt{x - x} by our software, leading to a much more precise analysis.
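The idea can be sketched with a toy interval class – my own illustration, not the ROVER code: the naive analysis of x - x loses the correlation, but intersecting its interval with that of any other member of the same e-class, here the literal 0, recovers the precision:

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __sub__(self, other):
        # classical interval subtraction: correlations between operands are lost
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def meet(self, other):
        # both intervals soundly bound the same e-class, so their overlap does too
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

x = Interval(-1.0, 1.0)
naive = x - x                  # Interval(-2.0, 2.0)
zero = Interval(0.0, 0.0)      # interval of the literal 0 in the same e-class
refined = naive.meet(zero)     # Interval(0.0, 0.0)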

This interaction between rewrites and conditions goes even further: a more precise analysis leads to the possibility to fire more conditional rewrite rules, as more conditions will be known to hold; firing more rewrite rules results in an even more precise analysis. The two techniques reinforce each other in a virtuous cycle:

A virtuous cycle: greater precision leads to more rewrites, which in turn lead to greater precision.

Our technique is able to discover, and generate RTL for, near/far-path floating-point adders from a naive RTL implementation (left transformed to right, below), resulting in a 33% performance advantage for the hardware.

Left: A naive floating-point subtractor. Right: The subtractor produced by our software.

I’m really excited by what Sam’s been able to achieve, as I really think that this kind of approach has the potential to lead to huge leaps forward in electronic design automation for word-level designs.

Put Your Resource Where it’s Needed!

We all know that we have finite resources to accomplish a variety of tasks. Some tasks are easier than others, so we tend to spend less time and energy on them. Yet – when we survey the landscape of traditional neural networks – this observation does not apply to them. Take a convolutional neural network classifying pictures: does it do any less work (floating point operations, perhaps) if the picture is “obviously a cat” than if it’s a “cat hiding in the bushes”? Traditionally, the answer is a clear “no”, because there is no input dependence in the control-flow of the inference algorithm: every single image is processed in exactly the same way and takes exactly the same work.

But there are people who have looked to address this fundamental shortcoming by designing algorithms that put in less work if possible. Chief amongst these techniques is an approach known as “early exit neural networks”. The idea of these networks is that the network itself generates a confidence measure: if it’s confident early on, then it stops computing further and produces an early result.

My student Ben Biggs (jointly advised by Christos Bouganis) has been exploring the implications of bringing FPGAs to automatically accelerate such networks. There exist several good tool flows (e.g. fpgaConvNet, HPIPE) for automatically generating neural network implementations on FPGAs, but to the best of our knowledge, none of them support such dynamic control flow. Until now. Next week at FCCM 2023, Ben will be presenting our paper ATHEENA: A Toolflow for Hardware Early-Exit Network Automation.

The basic idea of ATHEENA is that early exit networks come in chunks, each of which is utilised a certain proportion of the time (e.g. 10% of images are hard to classify, 90% are not). So we make use of an existing neural network generator to generate each chunk, but allocate different FPGA resources to the different parts of the network, in order to maintain a steady throughput with no back pressure build-up points in the inference pipeline.

This principle is illustrated below. Imagine that the first stage of a network executes no matter what the input. Then I can create a throughput / area Pareto curve shown on the top left. Equally, I can create a throughput / area Pareto curve for the second stage, and then scale up the throughput achievable by a function of how frequently I actually need to use that stage: if it’s only needed 10% of the time, then I can support a classification rate 10x higher than the nominal throughput of the design returned by a standard DNN generator. By combining these two throughput / area curves for the same nominal throughput, I get an allocation of resources to each part of the network. Of course, if the nominal proportion p of ‘hard’ samples I used to characterise the network varies in practice from the actual proportion q, then I may end up with a somewhat different behaviour, as indicated by the purple region.
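A toy worked example of the allocation arithmetic – with entirely hypothetical Pareto points and function names of my own choosing, not the ATHEENA tool itself – might look like this:

def scaled_throughput(points, fraction_used):
    # A stage that only runs on a fraction of samples can sustain a
    # correspondingly higher classification rate than its nominal throughput.
    return [(area, tput / fraction_used) for area, tput in points]

def cheapest_at(points, target):
    # Smallest-area point on a Pareto curve meeting a target throughput.
    return min(((a, t) for a, t in points if t >= target), default=None)

stage1 = [(100, 1000), (150, 2000), (220, 4000)]   # (area, images/s), made up
stage2 = [(300, 500), (450, 1000), (700, 2000)]
p_hard = 0.1                   # fraction of samples reaching the second stage

target = 2000                  # desired end-to-end images/s
alloc1 = cheapest_at(stage1, target)                              # (150, 2000)
alloc2 = cheapest_at(scaled_throughput(stage2, p_hard), target)   # (300, 5000)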

In practice, it turns out that this works really well. The graph below from Ben’s paper shows that on a real example, running on a real FPGA board, he’s able to improve throughput by more than 2x for the same area or reduce area by more than half for the same throughput.

I’m delighted that Ben’s work has been nominated for the best paper prize at FCCM this year and that it has received all three reproducibility badges available: Open Research Objects, Research Objects Reviewed and Results Reproduced, indicating Ben’s commitment to high quality reproducible research in our field.

If you plan to be in Los Angeles next week, come and hear Ben talk about it!

On a definition of Frege

Over the last few weekends I’ve been reading Frege’s “The Foundations of Arithmetic”, a wonderful book mixing philosophy, logic and mathematics from 1884. I hope I will find time to post on the book more generally, but for now I was intrigued enough by this interesting passage to want to play a little with it.

Frege writes:


The proposition

“if every object to which x stands in the relation to \phi falls under the concept F, and if from the proposition that d falls under the concept F it follows universally, whatever d may be, that every object to which d stands in the relation to \phi falls under the concept F, then y falls under the concept F, whatever concept F may be”

is to mean the same as

y follows in the \phi-series after x

G. Frege, “The Foundations of Arithmetic”, Section 79.

Let’s write “y follows in the \phi-series after x” as (x,y) \in G. Then I read Frege’s definition as:

(x,y) \in G iff

\forall F. (\forall a. (x,a) \in \phi \to F(a)) \wedge (\forall d. F(d) \to (\forall e. (d,e) \in \phi \to F(e))) \to F(y)

Frege’s definition appears to be a characterisation of the transitive closure of \phi, that is the smallest transitive relation containing \phi. Or is it? Perhaps this is not immediately apparent. I thought it might be a fun exercise to formalise this and explore it in Coq, a formal proof assistant I’ve played with a little in the past.

I enclose the Coq code I wrote below (note I am very much an amateur with Coq), which shows that Frege’s definition, once we tie down an object universe ({\mathtt A} in the code), indeed (i) contains \phi, (ii) is transitive, and (iii) is contained in any other transitive relation containing \phi.

I hope you enjoy.

(* Coq analysis of a definition of Frege                                                  *)
(* Formalises Frege’s ‘follows’ definition and proves it equivalent to transitive closure *)
(* George A. Constantinides, 25/03/23                                                     *)

Require Import Coq.Relations.Relation_Definitions. (* for inclusion and transitive *)
Require Import Coq.Relations.Relation_Operators.

Variable A : Type.
Variable phi : A -> A -> Prop.

Definition FregeFollows (x : A) (y : A) :=
  forall (F : A -> Prop),
  (forall (a : A), (phi x a) -> (F a)) /\
  (forall (d : A), (F d) -> (forall (e : A), (phi d e) -> (F e)))
  -> (F y).

(* We will separately show:                                                    *)
(* 1. phi is included in FregeFollows                                          *)
(* 2. FregeFollows is transitive                                               *)
(* 3. Amongst all relations meeting conditions 1 & 2, FregeFollows is minimal. *)

(* 1. phi is included in Frege’s relation *)

Theorem IncludesPhi : inclusion A phi FregeFollows.
Proof.
  unfold inclusion.
  intros x y.
  unfold FregeFollows.
  intros H F.
  intros H1.
  destruct H1. (* we only need base case *)
  apply (H0 y).
  auto.
Qed.

(* 2. Frege’s relation is transitive *)

Theorem IsTransitive : transitive A FregeFollows.
Proof.
  unfold transitive.
  intros x y z.
  unfold FregeFollows.
  intros H0 H1.
  intros F.
  specialize (H0 F).
  specialize (H1 F).
  intros H2.

  apply H1.

  pose proof H2 as G. (* keep a copy of H2 to use to prove F y *)

  apply H0 in G.

  destruct H2.
  split.
  2: assumption.

  intros a.
  intros H3.

  specialize (H2 y).

  specialize (H2 G).
  specialize (H2 a).
  specialize (H2 H3).
  assumption.

Qed.

(* 3. Amongst all transitive relations including phi, Frege’s relation is minimal *)

Theorem IsMinimal : forall R : relation A,
  (inclusion A phi R) ∧ (transitive A R) → (inclusion A FregeFollows R).
Proof.
  intros R.
  intros H.
  unfold inclusion.
  intros x y.
  unfold FregeFollows.
  intros H1.

  specialize (H1 (R x)). (* Choose F to be R(x,.) *)
  apply H1.

  split.

  destruct H.
  unfold inclusion in H.

  specialize (H x).
  1: assumption.

  intros d Rxd e.

  destruct H.
  unfold transitive in H0.
  specialize (H0 x d e).
  intros.
  apply H0.
  1: assumption.

  unfold inclusion in H.
  specialize (H d e).

  apply H.
  assumption.
Qed.

Industry & Academia Unite!

I’ve just returned from a very busy week in California visiting Intel and AMD in Folsom before chairing the SpatialML annual meeting in Monterey and then attending the first in-person ACM FPGA conference since the pandemic, also in Monterey. It was a great trip, and I think the theme for me was just how well the tech industry and academia can work together, building on each other’s strengths.

Industry Visits

I was delighted to have the first chance to pay an in-person visit to the team built by my former PhD student Theo Drane at Intel in Folsom, where he has been able to gather an exceptionally strong and diverse mix of skills. Intel is sponsoring my PhD student Sam Coward (far left of photo), who is also a member of staff in the group. Together we have been working on some really exciting new ways to optimise datapath designs using word-level rewrite rules. While on the trip, we learnt that Sam’s DAC paper has been accepted, so there will be more I will want to say publicly about the development of this work soon!

With the team at Intel

Also in Folsom, I was able to meet members of the graphics team at AMD, who have recently agreed to support two new PhD students in my lab working on the interaction between microarchitecture and machine learning. I expect them to start in October 2023, with one co-supervised by Deniz Gündüz at Imperial and one co-supervised by Wayne Luk, also at Imperial. We’re planning exciting work, so watch this space.

SpatialML

In 2019 I was awarded a grant by EPSRC to bring together Imperial and Southampton (led there by Geoff Merrett) in the UK with Toronto (Anderson, Betz, Chow) and UCLA (Cong, Ercegovac), together with industry (Imagination, Xilinx, Maxeler, NVIDIA, Google, Intel, Arm & Corerain), to reimagine the full stack of machine learning hardware when computing across space, not just time. You can visit the centre website here and see an up-to-date feed of our work on Twitter. Due to COVID, the last time we were able to hold an all-hands meeting was just after kick-off, in February 2020, so it was truly a delight to be able to do so again this year. In the meantime, we have been very active and have expanded the centre to include Cornell & Cornell Tech (Abdelfattah, Zhang) as well as Sydney (Boland, Leong) and Samsung AI; since then, Maxeler has been acquired by Groq and Xilinx by AMD. Whereas in 2020 I had primarily focused the workshop on faculty and industry talks, this year I re-focused it on the progress that various PhD researchers had made since 2020, as well as very successful two-minute lightning talks from all participants to aid networking and build a solid foundation for our researcher exchange programme. In addition, the new participants were given the chance to highlight their work in extended talks.

Members of the EPSRC-funded International Centre for Spatial Computational Learning

We had the following speakers:

At the meeting dinner, we were also able to celebrate the announcement that our member, Jason Cong, has just won the EDAA Achievement Award, to be presented at DATE 2023. We were able to end the workshop with a productive session, planning collaboration and researcher exchange between our sites over the coming year.

FPGA 2023

The ACM International Symposium on FPGAs (known as ‘FPGA’) is one of the key dates in my calendar. As a former chair of the conference, I sit on its steering committee, and I have always been a great supporter of the genuine community around this conference, combining a strong academic and industrial presence. It was wonderful to be back in person after two years of virtual FPGAs.

This year two of my collaborators were chairing: Zhiru Zhang (Cornell) was Program Chair and Paolo Ienne (EPFL) was General Chair. They put together a great event – you can read the program here. There were many highlights, but I would particularly mention the paper by my collaborator Lana Josipović and her PhD student Jiahui Xu and collaborators on the use of model checking to remove overhead in dynamically-scheduled HLS, closely related to my student Jianyi’s stream of work which I’ve described in other blog posts. I also had the privilege to serve on the best paper award panel this year, alongside Deming Chen and Viktor Prasanna, an award that went to some work from Jane Li‘s group for their development of a plug-and-play approach to integrating SSD storage into high-level synthesis, presented by the lead author Linus Wong, during the session I chaired on “High-Level Abstraction and Tools”.

It was a personal delight to see so many of my former PhD students, and the definite highlight was getting to meet the first child of one of my former students, born over the period I’ve been out of the loop due to COVID. My former student Sam Bayliss was in attendance and presenting a tutorial on programming the AMD AI Engines using MLIR. My former student Kan Shi (Chinese Academy of Sciences) was also presenting his work, together with my other former student David Boland (Sydney) and my current undergraduate student Yuhan Diao (Imperial College), on on-chip debugging. Kan is also a part-time YouTuber, and his recent YouTube interview with me was mentioned to me by several attendees – I suspect it has had far wider reach than my publications!

Some of the Imperial College staff, students and alumni attending FPGA 2023

Overall, I have come away from the trip energised by the technical conversations, with more ideas for collaboration than I’ll be able to follow up on, and with an even further strengthened belief in the immense promise of industrial/academic collaboration in my field.