What’s the Rush?

At FPL 2021, my PhD student Jianyi Cheng (jointly supervised by John Wickerson) will present our short paper “Exploiting the Correlation between Dependence Distance and Latency in Loop Pipelining for HLS”. In this post, I explain the simple idea behind this paper and how it can significantly accelerate certain neglected corner cases in high-level synthesis (HLS).

By far the most significant way to extract high performance from a hardware accelerator in high-level synthesis is to use loop pipelining. Loop pipelining is the idea of starting the next iteration of a loop before the previous one finishes, allowing multiple iterations to execute simultaneously. However, some loop iterations may need a result produced by an earlier iteration, limiting the extent to which this can be done. HLS tools generally determine a ‘safe’ initiation interval (II) – the number of clock cycles between starting two adjacent loop iterations – and then schedule the iterations statically at multiples of this interval.

This limit on the initiation interval of the loop essentially derives from two properties. Firstly, if it takes a long time for the computation of a loop iteration to execute, then any iterations waiting on its result must be delayed. But secondly, if an iteration’s result is only needed many iterations later, it can afford to take a long time to compute: what’s the rush? These two factors – latency and dependence distance – together determine the safe initiation interval.
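
For readers who like a formula, the standard textbook way of capturing this trade-off (my notation, not the paper’s) bounds the initiation interval by

\[ \mathit{II} \;\ge\; \max_{d} \left\lceil \frac{\mathrm{latency}(d)}{\mathrm{distance}(d)} \right\rceil , \]

where d ranges over the loop-carried dependences: a dependence only hurts when its latency is large relative to its distance.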

The simple observation of our paper is that HLS tools generally over-approximate latency and under-approximate dependence distance independently of one another. However, there are programs in which dependence distance and latency are correlated. Jianyi gives this nice motivating example in the paper:

double f( double a ) {
  return (((((a+0.64)*a+0.7)*a+0.21)*a+0.33)*a+0.25)*a+0.125;
}

void example( double vec[M] ) {

  for (int i = 0; i < N; i++) {
    double e = vec[i];
    if (e > 0) vec[i+63] = f(e); // long latency (Horner evaluation), but not read for 63 iterations
    else vec[i*i+9] = e * e;     // short latency, but read as soon as 9 iterations later when i is small
  }

}

In this code snippet, you can see two control paths in the loop. The if branch has a long latency (it computes the Horner scheme polynomial f) but also writes to elements of vec that only get read many iterations later. Meanwhile the else branch has a short latency but can write – in the early stages of the loop at least – to values read in nearby iterations.
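
To put rough numbers on that correlation (the symbols below are mine, purely for illustration – they are not measurements from the paper): if the Horner evaluation has latency L_f and the squaring has latency L_x, then the two paths on their own only require

\[ \mathit{II}_{\mathrm{if}} \ge \left\lceil \frac{L_f}{63} \right\rceil , \qquad \mathit{II}_{\mathrm{else}} \ge \left\lceil \frac{L_x}{9} \right\rceil , \]

both of which are satisfied by II = 1 for any L_f up to 63 cycles and any L_x up to 9 cycles. The problem comes when a tool pairs the long latency of one path with the short dependence distance of the other.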

The end result is that the commercial tools Jianyi tried don’t cope very well with scheduling this loop. However, Jianyi has developed an approach that uses the formal verification tool Boogie to show that this loop can actually be scheduled very efficiently by exploiting this correlation.

He has developed an LLVM pass called iiProver that proves it is safe to use a certain II with the commercial Vitis HLS tool from Xilinx. iiProver and our benchmarks are available – please take a look: https://github.com/JianyiCheng/iiProver. And you can hear Jianyi talking about his work on YouTube here: https://www.youtube.com/watch?v=SdQeBBc85jc.

It Probably Works!

Followers of my research will know that I’ve long been interested in rounding errors and how they can be controlled to best achieve efficient hardware designs. Going back 20 years, I published a book on this topic based on my PhD dissertation, where I addressed the question of how to choose the precision or word-length (often called ‘bit width’ in the literature) of fixed-point variables in a digital signal processing algorithm, in order to achieve a controlled tradeoff between signal-to-noise ratio and implementation cost.

Fast forward several years, and my wonderful collaborators Fredrik Dahlqvist, Rocco Salvia, Zvonimir Rakamarić and I have a new paper out on this topic, to be presented by Rocco and Fredrik at CAV 2021 next week. In this post, I summarise what’s new here – after all, the topic has been studied extensively since Turing!

I would characterise the key elements of this work as: (i) probabilistic, i.e. we’re interested in showing that computation probably achieves its goal, (ii) floating point (especially of the low custom-precision variety), and (iii) small-scale computation on straight-line code, i.e. we’re interested in deep analysis of small kernels rather than very large code, code with complex control structures, or code operating on very large data structures.

Why would one be interested in showing that something probably works, rather than definitely works? In short, because the worst-case behaviour of numerical algorithms is often very far from their average-case behaviour, a point discussed in depth in Higham and Mary’s SIAM paper. Often, ‘probably works’ is good enough, as we’ve seen recently with the huge growth of machine learning techniques predicated on this assumption.

In recent work targeting large-scale computation, Higham and Mary – and, independently, Ipsen – have considered models of rounding error that are largely or partially independent of the statistical distribution of the error induced by a specific rounding operation. Fredrik was keen to take a fresh look at the kind of distributions one might see in practice, and in our paper he derives a ‘typical distribution’ that holds under fairly common assumptions.
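
If you want a feel for what such distributions look like, a few lines of simulation go a long way. The sketch below is entirely my own illustration – it is not the derivation in the paper – and simply histograms the relative error committed when doubles drawn from [1, 2) are rounded to single precision:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Crude empirical look at the distribution of relative rounding error when a
 * double is rounded to single precision. Illustrative only; this is not the
 * 'typical distribution' derived in the paper. */
int main(void) {
  enum { BINS = 20, SAMPLES = 1000000 };
  long hist[BINS] = {0};
  double u = ldexp(1.0, -24);                      /* unit roundoff for float */
  srand(42);
  for (int i = 0; i < SAMPLES; i++) {
    /* sample x uniformly in [1, 2), where the float spacing is constant */
    double x = 1.0 + (double)rand() / ((double)RAND_MAX + 1.0);
    double err = ((double)(float)x - x) / x;       /* relative rounding error */
    int bin = (int)((err / u + 1.0) / 2.0 * BINS); /* map [-u, u] onto the bins */
    if (bin < 0) bin = 0;
    if (bin >= BINS) bin = BINS - 1;
    hist[bin]++;
  }
  for (int b = 0; b < BINS; b++)
    printf("%+.3e  %ld\n", (2.0 * (b + 0.5) / BINS - 1.0) * u, hist[b]);
  return 0;
}

The point is that the error spreads across the whole interval [-u, u] rather than sitting at the worst case, and it is structure like this that a probabilistic analysis can exploit.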

Rocco and Fredrik then decided that a great way to approximate the probabilistic behaviour of the program is to sandwich whatever distribution is of interest between two other easy-to-compute distributions, utilising the prior idea of a p-box.
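
A p-box is easy to picture. The sketch below uses my own deliberately crude encoding (not the representation inside the tool): a pair of CDFs bounding the unknown CDF from below and above on a grid, so that any probability query comes back as an interval rather than a single number.

#include <stdio.h>

/* A p-box over a fixed grid: lower[i] and upper[i] bound the unknown CDF at
 * grid point x[i]. My own simplification for illustration, not the PAF tool. */
#define GRID 5

typedef struct {
  double x[GRID];      /* grid points, increasing */
  double lower[GRID];  /* lower bound on the CDF at x[i] */
  double upper[GRID];  /* upper bound on the CDF at x[i] */
} pbox;

/* Probability that the value is <= t, returned as an interval [lo, hi]. */
void prob_leq(const pbox *p, double t, double *lo, double *hi) {
  *lo = 0.0; *hi = 1.0;
  for (int i = 0; i < GRID; i++) {
    if (p->x[i] <= t) *lo = p->lower[i];              /* last grid point below t */
    if (p->x[i] >= t) { *hi = p->upper[i]; break; }   /* first grid point above t */
  }
}

int main(void) {
  /* A p-box sandwiching some unknown distribution on [0, 1]. */
  pbox p = { {0.0, 0.25, 0.5, 0.75, 1.0},
             {0.0, 0.10, 0.40, 0.70, 1.0},
             {0.0, 0.30, 0.60, 0.90, 1.0} };
  double lo, hi;
  prob_leq(&p, 0.6, &lo, &hi);
  printf("P(value <= 0.6) lies in [%.2f, %.2f]\n", lo, hi);
  return 0;
}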

One of the core problems of automated analysis of numerical programs has always been that of ‘dependence’. Imagine adding together two variables, each in the range [-1,1]. Clearly their sum is in the range [-2,2]. But what if we knew, a priori, that these two variables were related somehow? Consider the expression X + (-X), which is clearly always zero. Ideally, an automated system should be able to produce a tighter result than [-2,2] for this! Over the years, many approaches to dealing with this issue have arisen, from the very simple approach of affine arithmetic to the more complex semialgebraic techniques Magron, Donaldson and I developed using sequences of semidefinite relaxations. In our CAV paper, we take the practical step of cutting out regions of the resulting probability space that have zero probability, using modern SMT solver technology. Another interesting aspect of our paper is the decision of which nonlinear dependences to keep and which to throw away for scalability reasons. Similar to my work with Magron, we keep first-order dependence on the small rounding error variables but higher-order dependence on the input program variables.
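
Even the simple end of that spectrum already helps. Here is a toy affine-arithmetic sketch of mine (not the machinery in the CAV paper): every quantity is a centre plus coefficients on shared noise symbols, each ranging over [-1,1], so X + (-X) cancels exactly instead of widening to [-2,2].

#include <stdio.h>

/* Toy affine arithmetic with a fixed number of noise symbols, each in [-1,1].
 * An illustration of the idea only, not the analysis in the CAV paper. */
#define NSYMS 2

typedef struct {
  double c;         /* centre */
  double e[NSYMS];  /* coefficients of noise symbols eps_1..eps_NSYMS */
} aff;

aff aff_add(aff a, aff b) {
  aff r; r.c = a.c + b.c;
  for (int i = 0; i < NSYMS; i++) r.e[i] = a.e[i] + b.e[i];
  return r;
}

aff aff_neg(aff a) {
  aff r; r.c = -a.c;
  for (int i = 0; i < NSYMS; i++) r.e[i] = -a.e[i];
  return r;
}

void aff_range(aff a, double *lo, double *hi) {
  double rad = 0.0;
  for (int i = 0; i < NSYMS; i++) rad += a.e[i] < 0 ? -a.e[i] : a.e[i];
  *lo = a.c - rad; *hi = a.c + rad;
}

int main(void) {
  aff x = { 0.0, {1.0, 0.0} };  /* X in [-1,1], tracked by eps_1 */
  aff y = { 0.0, {0.0, 1.0} };  /* Y in [-1,1], independent of X */
  double lo, hi;

  aff_range(aff_add(x, aff_neg(x)), &lo, &hi);
  printf("X + (-X) in [%g, %g]\n", lo, hi);  /* [0, 0]: cancellation is exact */

  aff_range(aff_add(x, y), &lo, &hi);
  printf("X + Y    in [%g, %g]\n", lo, hi);  /* [-2, 2]: genuinely independent */
  return 0;
}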

I am really excited by the end result: not only a wonderful blend of ideas from numerical analysis, programming languages, automated reasoning and hardware, but also a practical open-source tool people can use: https://github.com/soarlab/paf. Please give it a try!

Readers interested in learning more about the deeply fascinating topic of numerical properties of floating point would be well advised to read Higham’s outstanding book on the topic. Readers interested in the proofs of the theorems presented in our CAV paper should take a look at the extended version we have on arXiv. Those interested in some of the issues arising (in the worst case setting) when moving beyond straight-line code could consult this paper with Boland. Those interested in the history of this profoundly beautiful topic, especially in its links to linear algebra, would do well to read Wilkinson.

Scheduling with Probabilities

Readers of this blog may remember that Jianyi Cheng, my PhD student jointly supervised by John Wickerson, has been investigating ways to combine dynamic and static scheduling in high-level synthesis (HLS). The basic premise has been that static scheduling, when it works well due to static control, works very well indeed. Meanwhile, for programs exhibiting highly dynamic control flow, static scheduling can be very conservative, a problem addressed by our colleagues Lana Josipović, Radhika Ghosal and Paolo Ienne at EPFL. Together with Lana and Paolo, we developed a scheme to combine the best of both worlds, which we published at FPGA 2020 (and recently extended in IEEE Transactions on CAD). I blogged about this work previously here. We provided a tool flow allowing us to stitch large efficient statically-scheduled components into a dynamic circuit.

However, when scheduling a circuit statically, there are many design choices that can be made, typically to trade off time (throughput, latency) against area. So while our previous work was useful to stitch pre-existing statically-scheduled components into a dynamically-scheduled environment, we had no way of automatically designing those components to optimally fit the dynamic environment.

Enter Jianyi’s latest contribution – to be presented at FCCM 2021 next week.

In his paper “Probabilistic Scheduling in High-Level Synthesis”, Jianyi tackles this problem. He demonstrates that the dynamic environment, including data-dependent decisions and even load-store queues, can be adequately modelled using a Petri net formalism, and uses the PRISM model checker from Kwiatkowska et al. to extract an appropriate initiation interval for each statically-scheduled component.

One of Jianyi’s Petri net models of some memory accesses.

The initiation intervals inferred by Jianyi’s tool can then be given to a commercial HLS tool – in our case Vitis HLS – to schedule each component. The components – together with any remaining dynamically-scheduled code – are then integrated using our previously published framework, producing the complete FPGA-ready design. The whole process provides a quality of result very close to an exhaustive search over possible initiation intervals, without having to perform multiple scheduling runs, and so in a fraction of the time.
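
To give a flavour of the kind of quantity a probabilistic model of the environment produces (this is a toy calculation of mine, not Jianyi’s Petri-net analysis): suppose a component could in principle accept new data every cycle, but with probability p a memory access stalls it for a further s cycles. Its long-run average initiation interval is then

\[ \mathbb{E}[\mathit{II}] \;=\; (1-p)\cdot 1 + p\,(1+s) \;=\; 1 + p\,s , \]

and it is this average behaviour, rather than the worst case 1 + s, that you would like the statically-scheduled component to be built for.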

Watch Where You’re Pointing That!

This week Nadesh Ramanathan, a member of research staff in my group, will be presenting a paper at the virtual FPL 2020 conference entitled “Precise Pointer Analysis in High Level Synthesis” (jointly with John Wickerson and myself). This blog post is intended as an accessible summary of the key message of the paper.

People are now aiming to generate hardware accelerators for more complex algorithms than classical CNNs, low-level image processing, and other bread-and-butter hardware acceleration tasks. Inevitably, this is a difficult task to get right, and the prevalence of C/C++-based high-level synthesis (HLS) tools offers a great opportunity to experiment with the design space. Sophisticated algorithms written in C/C++ often incorporate pointers, which have long been difficult for HLS tools. Previously, together with my PhD student Felix Winterstein, I proposed a relatively sophisticated analysis based on separation logic – an intensive analysis specialised to certain data structures. Nadesh’s most recent work can, in some sense, be viewed as the opposite. He is trying to make simpler, but more generally applicable, pointer analyses more widely understood and used within HLS, while quantifying how much they might bring to hardware accelerator design.

The basic idea is that since FPGA compile times are long, we can afford to spend a bit more time being precise about which variables can point to which other variables. The question is: what are the benefits of being more precise in the context of HLS? Nadesh has studied two different types of ‘sensitivity’ of pointer analyses – to flow and to context. Flow-sensitive analyses consider the ordering of memory operations; context-sensitive analyses consider the calling context of functions. The most common form of analysis in HLS is Andersen’s analysis, which is neither flow- nor context-sensitive. So how much do we gain by utilising more precise analyses?
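
To make the distinction concrete, here is a tiny example of my own (not one of the paper’s benchmarks):

/* Illustration of flow-sensitivity; not taken from the PTABen benchmarks. */
int a, b;

void kernel(void) {
  int *p = &a;
  *p = 1;    /* a flow-sensitive analysis knows p points only to a here */
  p = &b;
  *p = 2;    /* ...and only to b here */
}

A flow-insensitive analysis such as Andersen’s computes a single points-to set {a, b} for p that must hold everywhere, so an HLS tool has to assume either store might touch either variable; a flow-sensitive analysis distinguishes the two program points, which can unlock more aggressive memory partitioning and scheduling.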

Nadesh studies this question by modifying the LegUp source code, showing that over the PTABen benchmark set, area utilisation can be halved and performance doubled by using these analyses. This suggests that as we move towards greater diversity in hardware accelerators, HLS tool developers should think carefully about their pointer analyses.

When are Digits Correct?

Often, we compute with iterative algorithms. Start with some value, often an initial guess to be refined, and keep iterating until some stopping criterion is met. If we actually think about what goes on in a modern digital computer when we execute these algorithms, we soon see that – often – the same digits end up being computed time and again. As we converge to a value, it’s reasonable to expect that most of the time the most significant digits become stable. But we still compute them, time and again at each iteration, wasting computational resource.
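
Here’s a tiny, self-contained illustration (nothing to do with any particular hardware): Newton’s iteration for the square root of 2. After the first few iterations the leading digits printed never change again, yet a conventional datapath recomputes all of them, at full width, on every pass.

#include <stdio.h>

/* Newton's iteration for sqrt(2): x_{k+1} = (x_k + 2/x_k) / 2.
 * Just an illustration of leading digits stabilising across iterations. */
int main(void) {
  double x = 1.0;
  for (int k = 1; k <= 6; k++) {
    x = 0.5 * (x + 2.0 / x);
    printf("iteration %d: %.15f\n", k, x);
  }
  return 0;
}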

In general, in standard binary representations, this re-computation may not be avoidable – most-significant digits might be stable for 1000 iterations and then flip, e.g. from 0.99999 to 1.00000. As a child, I used to play with such iterations using my HP32S calculator – a gift from fred harris – it provided endless entertainment.

There is, however, a class of number representations in which these digit flips can be avoided: redundant number representations. These representations have a long history – indeed, as my friend and colleague Miloš Ercegovac has identified, they can be traced back as far as a 1727 publication in Phil. Trans. Royal Soc by John Colson FRS. Miloš developed these ideas to a mature state between the 1970s and today, in the guise of Online Arithmetic.

Together with my PhD students He Li (now research staff at Cambridge) and Ian McInerney and collaborator James Davis, I have recently done some work on methods to detect and establish exactly when digits become stable using such schemes and what the implications might be for hardware that makes use of this stability. In our recent IEEE Transactions on Computers paper, we adapt standard forward error analyses of stationary iterative methods to this setting. We mathematically derive some conditions that can be checked at run-time to determine when you don’t need to compute certain digits in any future iteration, and also present a toy hardware implementation taking advantage of this approach using a non-standard arithmetic processor design.

We hope that – in the future – only what needs to be computed will be computed.

When to Schedule?

On Tuesday, Jianyi Cheng will present our recent work Combining Dynamic and Static Scheduling in High-level Synthesis at the ACM International Symposium on FPGAs in Monterey. This is joint work between Jianyi and his two supervisors, John Wickerson and myself, as well as our collaborators from EPFL, Lana Josipović and her PhD supervisor Paolo Ienne.

As I’ve described in previous blog posts [1,2], Lana has been doing interesting work over the last few years, developing a tool flow for dynamically-scheduled high-level synthesis (HLS). Typically, in modern HLS tools like Vivado HLS or LegUp, scheduling decisions are made statically – at compile time. However, Lana’s tool flow makes these decisions dynamically, at run time, using handshaking circuitry, reminiscent of Page and Luk’s work compiling occam into FPGAs.

In our paper, we have teamed up with EPFL to build a flow that can result in the best of both worlds. Static scheduling can be very efficient, sharing resources and leading to low area designs. Dynamic scheduling can be very fast, working around actual rather than potential data dependencies. Jianyi’s paper allows the definition of statically scheduled functions within a dynamically scheduled program. He shows that over a range of benchmarks, the results are about half the area of the fully dynamically-scheduled designs while about 1.7x faster than the fully statically-scheduled designs.

Arithmetic for Neural Networks

Last month, the Royal Society Phil Trans A published my paper Rethinking Arithmetic for Deep Neural Networks, part of a special issue put together by Jack Dongarra, Laura Grigori and Nick Higham. In this blog post, I aim to briefly introduce the ideas in the paper. The paper is open access, so please read it for further detail. In addition, my slides from a talk given on an early version of this work are available from Nick Higham’s blog, and an mp3 recording of me talking to these slides has been made available by the Royal Society here.

The central theme of the paper is that hardware accelerator circuits for neural networks can actually be treated as neural networks. Consider the two graphs below. One of them represents a simple deep neural network where each node performs an inner product and a ReLU operation. The other represents the result of (i) deciding to use 4-bit fixed-point arithmetic, and then (ii) synthesising the resulting network into a circuit, technology-mapped to 2-input Boolean gates.

Although there are obvious differences (in structure and in number of nodes), there is a core commonality: a computation described as a graph of parameterisable functional nodes.

So what can we gain from this perspective?

1. Binarized neural networks are universal. The paper proves that any network you want to compute can be computed using a binarized neural network with zero loss in accuracy. It’s simply not the case that some problems inherently need high precision. But, as a corollary, you must not be tied too closely to the original network topology if you want to be guaranteed not to lose accuracy.

2. Boolean topologies are tricky things. So if we want to rethink topologies, what first principles should we use to do so? In the paper, I suggest looking to inspiration from the theory of metric spaces as one step towards producing networks that generalise well. Topology, node functionality, and input / output encoding interact in subtle, interesting, and under-explored ways.

3. This viewpoint pays practical dividends. My PhD student Erwei Wang, collaborators James Davis and Peter Cheung, and I have developed a neural network flow called LUTNet, which uses Boolean lookup tables as the main computational node in a neural network, leading to very efficient FPGA implementations.

I’m hugely excited by where this work could go, as well as the breadth of the fields it draws on for inspiration. Please do get in touch if you would like to collaborate on any of the open questions in this paper, or any other topic inspired by this work.

Machine Learning at FPT 2019

Next week, the IEEE International Conference on Field-Programmable Technology (FPT) will take place in Tianjin, China. I’m proud that my former PhD student Qiang Liu will be General Chair of the conference.

I am a coauthor of two papers to be presented at FPT, one led by my former BEng student Aaron Zhao, now a PhD student at Cambridge supervised by my colleague Rob Mullins, and one led by my former postdoc, Ameer Abdelhadi, now with COHESA / UofT. The paper by Aaron is also in collaboration with two of my former PhD students, Xitong Gao, now with the Chinese Academy of Sciences, and Junyi Liu, now with Microsoft Research.

The first paper, led by Aaron, is entitled ‘Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs’, and can be found on arXiv here. This paper is a serious look at getting an automated CNN flow for FPGAs that makes good use of some of the arithmetic flexibility available on these devices. Powers-of-two (“free” multiplication) and fixed-point (“cheap” multiplication) are both leveraged.
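
To see why those two choices are the cheap ones, here’s a rough sketch (mine, not the paper’s code generator): a power-of-two weight turns a multiplication into a bare shift, and a fixed-point weight into a single integer multiply followed by a shift.

#include <stdint.h>

/* Illustrative only; not the code emitted by the paper's flow. */

/* Weight = 2^k applied to a non-negative (e.g. post-ReLU) activation:
 * no multiplier at all, just shifting/wiring. */
static inline uint32_t mul_pow2(uint32_t x, int k) {
  return x << k;
}

/* Weight = w / 2^FRAC in fixed point: one integer multiply and a shift. */
#define FRAC 8
static inline int32_t mul_fixed(int32_t x, int32_t w) {
  return (int32_t)(((int64_t)x * w) >> FRAC);
}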

The second paper, led by Ameer, looks at the computation of a set of approximate nearest neighbours. This is useful in a number of machine learning settings, both directly as a non-neural machine learning inference algorithm and indirectly within sophisticated deep learning algorithms like Neural Turing Machines. Ameer has shown that this task can be successfully accelerated in an FPGA design, and explores some interesting ways to parameterise the algorithm to make the most of the hardware, leading to tradeoffs between algorithm accuracy and performance.

If you’re at FPT this year, please go and say hello to Aaron, Ameer and Qiang.

Approximating Circuits

Next week, Ilaria Scarabottolo, currently a visiting research student in my research group at Imperial, will present her paper “Partition and Propagate” at DAC 2019 in Las Vegas. In this post, I will provide a brief preview of her work (joint work with Giovanni Ansaloni and Laura Pozzi from Lugano, and me).

I’ve been interested in approximation, and how it can be used to save resources, ever since my PhD 20 years ago, where I coined the term “lossy synthesis” to mean the synthesis of a circuit / program where error can be judiciously introduced in order to effect an improvement in performance or silicon area. Recently, this area of research has become known as “approximate computing”, and a bewildering number of ways of approximating behaviour – at the circuit and software level – have been introduced.

Some of the existing approaches for approximate circuit synthesis are point solutions for particular IP cores (e.g. our approximate multiplier work) or involve moving beyond standard digital design methodologies (e.g. our overclocking work.) However, a few pieces of work develop a systematic method for arbitrary circuits, and Ilaria’s work falls into this category.

Essentially, she studies the class of approximations that can be induced solely by removing chunks of a logic circuit and replacing dangling nets with constant values – a technique my co-authors referred to as Circuit Carving in their DATE 2018 paper.

Our DAC paper presents a methodology for bounding the error that can be induced by performing such an operation. Such error can be bounded by exhaustive simulation or SAT, but not for large circuits with many inputs due to scalability concerns. On the other hand, coarse bounds for the error can be derived very quickly. Ilaria’s work neatly explores the space between these two extremes, allowing analysis execution time to be traded for bound quality in a natural way.
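
To picture the exhaustive end of that spectrum, here is a toy example of mine (not Ilaria’s method): carve a 4-bit adder by tying its two least-significant sum bits to zero, then measure the worst-case error over all inputs – exactly the computation that stops scaling once a circuit has many inputs.

#include <stdio.h>

/* Toy example of bounding the error of a 'carved' circuit by exhaustive
 * simulation: a 4-bit adder whose two least-significant output bits have
 * been removed and tied to constant zero. Illustrative only; this is not
 * the Partition and Propagate algorithm. */
int main(void) {
  int worst = 0;
  for (int a = 0; a < 16; a++) {
    for (int b = 0; b < 16; b++) {
      int exact = a + b;
      int approx = (a + b) & ~0x3;  /* dangling low-order nets tied to 0 */
      int err = exact - approx;
      if (err > worst) worst = err;
    }
  }
  printf("worst-case error = %d\n", worst);  /* prints 3 */
  return 0;
}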

Approximation’s time has definitely come, with acceptance in the current era often driven by machine-learning applications, as I explore in a previous blog post. Ilaria’s paper is an interesting and general approach to the circuit-level problem.

Boolean Circuits are Neural Networks

On Monday, my PhD student Erwei Wang will present our work (joint also with James Davis and Peter Cheung) called LUTNet: Rethinking Inference in FPGA Soft Logic at the IEEE International Symposium on Field-Programmable Custom Computing Machines in San Diego, California.

In this paper, we take a very unusual approach to the design of a deep neural network accelerator in hardware: for us, the nodes in the neural network are Boolean lookup tables.

We were motivated initially by the fact that in very low-precision FPGA neural network architectures, lookup tables are often used for arithmetic, but only for very specific functions: while a K-LUT is capable of implementing any Boolean function of K inputs, it ends up being used for only a tiny fraction of these 2^(2^K) functions. A good example is binarised neural networks (BNNs) such as FINN, where LUTs end up being used to implement XNOR gates (multiplication over {-1,+1}) and popcount functions. Our research question is therefore: rather than restricting ourselves to these functions, can we make better use of the LUTs by embracing the nonlinearity and the K-input support they give us?
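
To make “very specific functions” concrete, here’s a sketch of the standard BNN trick (not LUTNet itself): with weights and activations in {-1,+1} packed one element per bit, a whole dot product collapses to an XNOR followed by a popcount.

#include <stdint.h>

/* Standard BNN dot product over {-1,+1}, one element per bit of a 64-bit word.
 * This is the restricted LUT usage described above, not LUTNet itself.
 * __builtin_popcountll is a GCC/Clang builtin. */
static inline int bnn_dot(uint64_t w, uint64_t x, int n) {
  uint64_t agree = ~(w ^ x);               /* XNOR: 1 wherever the signs match */
  if (n < 64) agree &= (1ULL << n) - 1;    /* keep only the n valid positions */
  int pop = __builtin_popcountll(agree);   /* number of agreeing positions */
  return 2 * pop - n;                      /* sum of n products drawn from {-1,+1} */
}

Synthesised for an FPGA, the XNOR and the slices of the popcount land in K-LUTs, each of which is capable of far richer behaviour than this – which is exactly the headroom the question above asks whether we can exploit.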

We show that this is indeed possible. Our basic approach is to start with a weight-binarised neural network, add inputs to each node to bring it up to K inputs, and then retrain the Boolean function implemented by that node. Retraining Boolean functions is a bit tricky, of course, because neural network training algorithms are not designed for this purpose. We generate a smooth interpolating function over the LUT entries, allowing us to use standard neural network training software (we use TensorFlow).
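
Here is one way to picture that smooth surrogate (a simplified sketch under my own assumptions, not Erwei’s exact formulation): relax the inputs from {-1,+1} to the interval [-1,1] and interpolate the 2^K stored entries multilinearly, so the output is differentiable in both the inputs and the entries.

/* Multilinear interpolation of a K-input LUT, with inputs relaxed from
 * {-1,+1} to [-1,1]. A simplified sketch of training LUT entries with
 * gradient methods; not the exact LUTNet formulation. */
#define K 2

double lut_interp(const double lut[1 << K], const double x[K]) {
  double y = 0.0;
  for (int entry = 0; entry < (1 << K); entry++) {
    double weight = 1.0;
    for (int i = 0; i < K; i++) {
      double corner = ((entry >> i) & 1) ? 1.0 : -1.0;  /* corner of the input cube */
      weight *= 0.5 * (1.0 + x[i] * corner);            /* 1 at that corner, 0 at the opposite one */
    }
    y += weight * lut[entry];  /* blend the trainable LUT entries */
  }
  return y;
}

At the corners of the cube this reproduces the stored truth table exactly, while in between it gives the smooth function that off-the-shelf training software can differentiate.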

The end result is that the re-trained neural network is far more prunable than the original, because the extra inputs to the K-LUTs compensate for the removal of other nodes. Thus we end up with a much sparser neural network for the same classification accuracy. The sparsity improves our area by a factor of two or more, yet the more complex inference functions at each node are effectively provided “for free” by the FPGA architecture.

Circuit netlist? Neural network? Same thing!