DATE 2019: Some Highlights

This week, I attended the Design, Automation & Test in Europe (DATE) conference in Florence, Italy. DATE is a large conference, which I have attended irregularly since I was a PhD student. This year, the general chair was a long-standing colleague from the FPGA community, Jürgen Teich.

Readers can find a summary of some of the talks I found most interesting below.

On Tuesday, my colleague Martin Trefzer chaired a session on Computational and Resource-efficiency in Quantum and Approximate Computing. The work by Sekanina was interesting, using information on data distribution to drive the construction of approximate circuits. Circuits are constructed from the baseline using a technique called Cartesian Genetic Programming. I have recently been collaborating with Ilaria Scarabottolo and others from Laura Pozzi’s group on a related problem – see our DAC 2019 paper (to appear) for details – so this was of particular interest.

On Wednesday, I chaired a session entitled When Approximation Meets Dependability, together with my colleague Rishad Shafik. Ioannis Tsiokanos from Queen’s University Belfast presented an interesting approach that dynamically truncates precision in order to avoid timing violations. This is interestingly complementary to the approach I developed with my former PhD student Kan Shi, where we first simply allowed timing violations [FCCM 2013] and then redesigned the data representation based on Ercegovac’s online arithmetic, so that timing violations caused only low-magnitude errors [DAC 2014].

David Pellerin from Amazon gave an interesting keynote address which very heavily emphasised Amazon’s F1 FPGA offering, which was – of course – music to my ears.

On Thursday morning, I attended the session Architectures for Emerging Machine Learning Techniques. Interestingly, there was a paper there making use of Gustafson’s posits within hardware-accelerated deep learning, a technique they dub Deep Positron.

The highlight talk for me was Ed Lee’s Thursday keynote, A Fundamental Look at Models and Intelligence. Although I’ve been aware of Lee’s work, especially on Ptolemy, since I did my own PhD, I don’t think I’ve ever had the pleasure of hearing him lecture before. It was insightful and entertaining. A central theme of the talk was that models mean two different things for scientists and engineers: a scientist builds a model to correspond closely to a ‘thing’; an engineer builds a ‘thing’ to correspond closely to a model. He used this dichotomy to illuminate some of the differences between neuroscience-inspired artificial intelligence and the kind of AI that is so popular at the moment, such as deep learning. Lee’s general-readership book Plato and the Nerd – which has been on my “to read” list since my colleague and friend Steve Neuendorffer mentioned it to me a few years ago – has just climbed several notches up that list!

On Thursday afternoon, I attended the session on The Art of Synthesizing Logic. My favourite talk in this session was from Heinz Riener, who presented a collaboration between EPFL and UC Berkeley on Boolean rewriting for logic synthesis, in which exact synthesis methods are used to replace circuit cuts, rather than resorting to a pre-computed database of optimal function implementations. During the talk, Riener also pointed the audience to an impressive-looking GitHub repo, featuring what looks like some very useful tools.

Friday is always workshop day at DATE, featuring a number of satellite workshops. I attended the workshop entitled Quo Vadis, Logic Synthesis?, organised by Tiziano Villa and Luca Carloni. This was a one-off workshop in celebration of the 35th anniversary of the publication of the influential Espresso book on two-level logic minimisation.

Villa talked the audience through the history of logic synthesis, starting with the Quine-McCluskey method.

My favourite talk in this workshop was from Jordi Cortadella, who spoke about a method for synthesising Boolean relations. This is the problem of synthesising the cheapest implementation of a function f : {\mathbb B}^n \to {\mathbb B}^m, which one is free to choose from amongst a given relation R: viewing f as a relation f \subset {\mathbb B}^n \times {\mathbb B}^m, one is free to choose f – and its implementation – subject to the requirement that f \subseteq R, where R is a given relation (not necessarily a function). This is a strict generalisation of the well-known problem of Boolean ‘don’t care’ conditions, a.k.a. incompletely specified functions. Cortadella presented a method leveraging the known approaches to this latter problem, by exploring the semi-lattice of relations between these sets, ordered by \subseteq, in a structured way, using a form of branch-and-bound.
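
To make the problem concrete, here is a deliberately tiny brute-force sketch of my own (nothing to do with Cortadella’s branch-and-bound method): the relation R assigns a set of permitted outputs to each input, and we exhaustively enumerate the compatible functions, scoring them with a crude on-set-size proxy for implementation cost.

```python
# A toy, brute-force illustration of the Boolean relation synthesis problem
# described above (NOT Cortadella's method). For each input x in B^n, the
# relation R gives the set of permitted outputs in B^m; we enumerate every
# function f with f(x) in R(x) and keep the cheapest under a crude proxy cost.
from itertools import product

n, m = 2, 1  # hypothetical sizes, small enough for exhaustive search

# R maps each input pattern to its set of allowed output patterns.
# Inputs 00 and 11 are fully specified; 01 and 10 are genuinely relational.
R = {
    (0, 0): {(0,)},
    (0, 1): {(0,), (1,)},
    (1, 0): {(0,), (1,)},
    (1, 1): {(1,)},
}

def cost(f):
    # Crude proxy for implementation cost: size of the on-set.
    return sum(out == (1,) for out in f.values())

inputs = list(product((0, 1), repeat=n))
best = None
for choice in product(*(sorted(R[x]) for x in inputs)):
    f = dict(zip(inputs, choice))          # one compatible function f ⊆ R
    if best is None or cost(f) < cost(best):
        best = f

print(best)  # here: the AND-like choice that maps 01 and 10 to 0
```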

Soeken also presented a very interesting summary of three uses of SAT within logic synthesis, namely Schmitt’s ASP-DAC paper on SAT-based LUT mapping, Eén’s paper on using logic synthesis for efficient SAT (rather than SAT for efficient logic synthesis) and Haaswijk’s recent PhD on making exact logic synthesis more scalable by providing partial topological information – a topic that interestingly has some echoes in work I’m soon to present at the Royal Society.

The workshop was an enjoyable way to end DATE, and I was disappointed to have to leave half-way through – there may well have been other interesting talks presented in the afternoon.

Highlights of CSE 2019

Over the second half of this week, I’ve been attending the SIAM Computational Science and Engineering conference in Spokane, Washington – a short flight north (and a radical change in weather) from my earlier conference in California this week.

Spokane, WA in February. Temperatures were as low as -12℃.

This was my first SIAM conference. I was kindly invited to speak on the topic of floating-point error analysis by Pierre Blanchard, Nick Higham and Theo Mary. I very much enjoyed the sessions they organised and indeed the CSE conference, which I hope to be able to attend more regularly from now on.

My own talk was entitled Approximate Arithmetic – A Hardware Perspective. I spoke about the rise of architecture specialisation as driving the need for closer collaboration between computer architects and numerical analysts, about some of our work on automatic error bounds by Boland and Constantinides (2011) and Magron, Constantinides and Donaldson (2017), on code refactoring by Gao and Constantinides (2015), as well as some of our most recent work on machine learning (I will blog separately about this latter topic over the next couple of months).

The CSE conference is very large – with 30-40 small parallel sessions happening at any given moment – so I cannot begin to summarise the conference. However, I include some notes below on other talks I found particularly interesting.

Plenary Sessions

I very much enjoyed the plenary presentation by Rachel Ward on Stochastic Gradient Descent (SGD) in Theory and Practice. She introduced the SGD method very nicely, and looked at various assumptions for convergence. She took a particularly illuminating approach, by looking at applying SGD to the simple special case of solving a system of linear equations by minimising F(w) = \frac{1}{2}||Aw-b||^2 in the case where \exists w^*. Aw^* = b. She showed that if the system is under-determined, then SGD converges to the solution of minimum 2-norm, and therefore has an inherent regularising effect. I was surprised by some of the results on overparameterised neural networks, showing that SGD finds global minimisers and that there really doesn’t tend to be much overfitting despite the huge number of parameters, pointing to the implicit regularisation caused by the SGD algorithm itself. I learnt a lot from this talk, and have several new papers on my “to read” list as a result.
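
To see the special case she discussed in action, here is a little numpy sketch of my own (not from the talk): running SGD on F(w) = \frac{1}{2}||Aw-b||^2 for an under-determined but consistent system, starting from zero, drifts towards the minimum 2-norm solution – the implicit regularisation effect.

```python
# A small illustration (not from the talk) of SGD on F(w) = 0.5*||Aw - b||^2
# for an under-determined, consistent system: started from w = 0, SGD
# converges to the minimum 2-norm solution, i.e. the pseudoinverse solution.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 100))        # 20 equations, 100 unknowns
w_true = rng.standard_normal(100)
b = A @ w_true                            # consistent by construction

w = np.zeros(100)
step = 0.005
for it in range(200000):
    i = rng.integers(20)                  # sample one row (one "data point")
    residual = A[i] @ w - b[i]
    w -= step * residual * A[i]           # stochastic gradient step

w_min_norm = np.linalg.pinv(A) @ b        # minimum 2-norm solution
print(np.linalg.norm(w - w_min_norm))     # tiny: SGD's implicit regularisation
print(np.linalg.norm(w), np.linalg.norm(w_true))  # ||w|| <= ||w_true||
```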

There was also an interesting plenary from Anima Anandkumar on the role of tensors in machine learning. The mathematical structure of tensors and multi-linear algebra are topics I’ve not explored before – mainly because I’ve not seen the need to spend time on them. Anandkumar certainly provided me with motivation to do that!

Floating-Point Error Analysis

Theo Mary from the University of Manchester gave a very good presentation of his work with Nick Higham on probabilistic rounding error analysis, treating numerical roundoff errors as zero-mean independent random variables of arbitrary distribution, making use of Hoeffding’s inequality to produce a backward error analysis. Their work is described in more detail in their own blog post and – in more depth – in their very interesting paper. It’s a really exciting and useful direction, I think, given the greater emphasis on average-case performance from modern applications, together with both very large data sets and very low precision computation, the combination of which renders many worst-case analyses meaningless. In a similar vein, Ilse Ipsen also presented a very interesting approach: a forward error analysis, more specialised in that she only looked at inner products, but also without the assumption of independence, making use of Azuma’s inequality. The paper on this topic has not yet been finished, but I certainly look forward to reading it in due course!
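
To get a feel for why the probabilistic viewpoint matters, here is a quick numpy experiment of my own (not taken from either paper): for long inner products in single precision, the observed rounding error sits far below the classic worst-case bound, which grows like nu, and much closer to the \sqrt{n}u growth that probabilistic analyses predict.

```python
# A quick experiment (mine, not from the papers discussed) contrasting the
# classic worst-case rounding error bound for an inner product, which grows
# like n*u, with the sqrt(n)*u growth predicted by probabilistic analyses.
import numpy as np

rng = np.random.default_rng(1)
u = np.finfo(np.float32).eps / 2          # unit roundoff for single precision

for n in (10**2, 10**4, 10**6):
    x = rng.uniform(0, 1, n)
    y = rng.uniform(0, 1, n)
    exact = np.dot(x, y)                   # double precision as reference
    computed = np.dot(x.astype(np.float32), y.astype(np.float32))
    rel_err = abs(float(computed) - exact) / exact
    print(f"n={n:>8}  rel err={rel_err:.2e}  "
          f"n*u={n*u:.2e}  sqrt(n)*u={np.sqrt(n)*u:.2e}")
```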

Reducing Communication Costs

There were a number of interesting talks on mitigating communication costs. Lawrence Livermore National Labs presented several papers relating to the ZFP format they’ve recently proposed for (lossily) compressed floating-point vectors, at a mini-symposium organised by Alyson Fox, Jeffrey Hittinger, and James Diffenderfer. Diffenderfer’s talk developed a bound on the norm-wise relative error of vectors reconstructed from ZFP; Alyson Fox’s talk then extended this to the setting of iterative methods, noting as future work their interest in probabilistic analyses. In the same session, Nick Higham gave a crystal clear and well-motivated talk on his recent work with Srikara Pranesh and Mawussi Zounon – slides and paper are available. This work extends the applicability of Nick’s earlier work with Erin Carson to cases that would have over- or under-flowed, or led to subnormal numbers, without the scaling technique developed and analysed here. They use matrix equilibration – this reminded me of some work I did with my former PhD student Juan Jerez and colleague Eric Kerrigan, but in our case for a different algorithm kernel and targeting fixed-point arithmetic, where making use of the full dynamic range is particularly important. The Higham, Pranesh and Zounon results are both interesting and practically very useful.

In a different session, Hartwig Anzt spoke about the work he and others have been doing to explicitly decouple storage precision from compute precision in sparse linear algebra. The idea is simple but effective: take the high-order bits of the mantissa (and the sign / exponent) and store them in one chunk of data and – separately – store the low-order bits in another chunk. Perform all arithmetic in high precision (because it’s not the computation that’s the bottleneck), but convert the stored data to high precision on the fly at data load (e.g. by padding the missing low-order bits with zeros). Then, at run-time, decide whether to load both chunks (full precision) or only the high-order chunk (low precision), based on current estimates of convergence. This approach could also make a good case study application for the run-time adaptation methodology we developed with U. Southampton in the PRiME project.
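
Concretely, the bitwise splitting might look something like the following rough sketch (my own illustration, not the authors’ code), which chops each double into a head and a tail chunk and reassembles either a truncated or a full-precision value at load time.

```python
# A rough sketch (not the authors' code) of decoupling storage precision from
# compute precision: each double is split bitwise into a "head" chunk holding
# the sign, exponent and upper significand bits, and a "tail" chunk holding
# the remaining low-order significand bits. At load time we either reassemble
# the full double or pad the head with zeros, then compute in double either way.
import numpy as np

def split(v):
    bits = v.view(np.uint64)
    head = (bits >> np.uint64(32)).astype(np.uint32)       # sign + exponent + high bits
    tail = (bits & np.uint64(0xFFFFFFFF)).astype(np.uint32)
    return head, tail

def load(head, tail=None):
    hi = head.astype(np.uint64) << np.uint64(32)
    if tail is not None:
        hi |= tail.astype(np.uint64)                        # full precision
    return hi.view(np.float64)                              # tail omitted => zero padding

rng = np.random.default_rng(2)
v = rng.standard_normal(5)
head, tail = split(v)

print(v - load(head, tail))     # exact reconstruction: all zeros
print(v - load(head))           # truncated load: relative error around 2^-20
```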

A Reflection

Beyond the technical talks, two things stood out for me as a newcomer to the conference. Firstly, there were many more women than at the typical engineering conferences I attend. I don’t know whether the statistics on maths versus engineering are in line with this observation, but clearly maths is doing something right from which we could learn. Secondly, there were clear sessions devoted to community building: mentoring sessions, tutorials for new research students, SIAM student chapter presentations, early career panels, presentations on funding programmes, diversity and inclusion sessions, a session on helping people improve their CV, an explicit careers fair, etc. Partly this may simply reflect the size of the conference, but even so, this seems to be something SIAM does particularly well.

Highlights of FPGA 2019

Current and Former Imperial Staff, Students, and Sabbatical Visitors at FPGA 2019

This week, I attended the ACM FPGA 2019 conference in Seaside (nr. Monterey), California, the annual premier ACM event on FPGAs and associated technology. I’ve been involved in this conference for many years, as author, TPC member, TPC and general chair, and now steering committee member. Fashions have come and gone over this time, including in the applications of FPGA technology, but the programme at FPGA is always interesting and high quality. This year particular thanks should go to Steve Neuendorffer for organising the conference programme and to Kia Bazargan in his role as General Chair.

Below, I summarise my personal highlights of the conference. These are by no means my view of the “best” papers – they are all good – but rather those that interested me the most.

Efficient and Effective Sparse LSTM on FPGA with Bank-Balanced Sparsity, a collaboration between Tsinghua, Beihang, Harbin Institute of Technology, and Microsoft Research, tackled the problem of ensuring that an inference implementation, when sparsified, gets sparsified in a way that leads to balanced load across the various memory banks. The idea is simple but effective, and leads to an interesting tradeoff between the quality of LSTM output and performance. I think it would be interesting to try to design a training method / regulariser that encourages this kind of structured sparsity in the first place.
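
To illustrate the kind of structure involved, here is a minimal sketch of bank-balanced pruning as I understand it (my own toy code, not the authors’): each row of a weight matrix is divided into equal-width banks and the largest-magnitude weights are kept per bank, so every bank carries exactly the same number of non-zeros.

```python
# A minimal sketch of bank-balanced pruning (my own illustration, not the
# authors' code): each weight-matrix row is split into equal-width banks and
# the k largest-magnitude weights are kept per bank, so every memory bank
# ends up with the same number of non-zeros and hence a balanced load.
import numpy as np

def bank_balanced_prune(W, num_banks, k):
    rows, cols = W.shape
    assert cols % num_banks == 0
    banks = W.reshape(rows, num_banks, cols // num_banks)
    # Threshold independently within each bank: keep the top-k magnitudes.
    kth = np.sort(np.abs(banks), axis=-1)[..., -k]          # k-th largest per bank
    mask = np.abs(banks) >= kth[..., None]
    return (banks * mask).reshape(rows, cols)

rng = np.random.default_rng(3)
W = rng.standard_normal((4, 16))
W_sparse = bank_balanced_prune(W, num_banks=4, k=1)          # 75% sparsity
print((W_sparse.reshape(4, 4, 4) != 0).sum(axis=-1))         # exactly 1 nonzero per bank
```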

Kees Vissers from Xilinx presented a keynote talk summarising their new Versal architecture, which the Imperial team had previously had the pleasure of hearing about from our alumnus Sam Bayliss. This is a radically different architecture from standard FPGA fare, and readers might well be interested in taking a look at Kees’s slides to learn more.

Vaughn Betz presented a paper from the University of Toronto, Math Doesn’t Have to be Hard: Logic Block Architectures to Enhance Low Precision Multiply-Accumulate on FPGAs. This work proposed a number of relatively minor tweaks to Intel FPGA architectures which might have a significant impact on low-precision MAC performance. Vaughn began by pointing out that in this application, very general LUTs often get wasted by being used as very simple gates – he gave the example of AND gates in partial product generation, and even as buffers. A number of architectural proposals were made to avoid this issue. I find this particularly interesting at the moment, because together with my PhD student Erwei Wang and others, I have proposed a new neural network architecture called LUTNet, motivated by exactly the same concern. However, our approach is the dual of that presented by Vaughn – we keep the FPGA architecture constant but modify the basic computations performed by the neural network to be more well-tuned to the underlying architecture. Expect a future blog post on our approach!

Lana Josipović presented the most recent work on the dynamically scheduled HLS tool from Paolo Ienne’s group at EPFL, which they first presented at last year’s conference – see my blog post from last year. This time they have added speculative execution to their armoury. This is a very interesting line of work as HLS moves to encompass more and more complex algorithms, and Lana did a great job illustrating how it works.

Yi-Hsiang Lai presented HeteroCL: A Multi-Paradigm Programming Infrastructure for Software-Defined Reconfigurable Computing, an interesting collaboration between Zhiru Zhang’s group at Cornell and Jason Cong’s group. This work proposed separating functionality from implementation / optimisation concerns, such as datapath, precision and memory customisation, providing a cleaner level of abstraction. The approach seems very interesting, and reminded me of the aspect-oriented HLS work I contributed to in the REFLECT European project, about which João Cardoso and others have since written a book. I think it’s a promising approach, and I’d be interested to explore the potential and challenges of their tool-flow. This paper won the best paper prize of the conference – congratulations to the authors!

My PhD student Jianyi Cheng presented our own paper, EASY: Efficient Arbiter SYnthesis from Multi-Threaded Code, and did an excellent job. Our paper is described in more detail in an earlier blog post.

Jianyi Cheng presenting our paper

Other papers I found particularly interesting include Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs, Microsemi’s contribution on analytic placement, ETH Zürich’s paper on an FPGA implementation of an approximate maximum graph matching algorithm, and U. Waterloo’s paper on a lightweight NoC making use of traffic injection regulation to avoid stalls. Unfortunately I had to miss the talks after noon on Tuesday, so there may well be more of interest in that part of the programme too.

The panel discussion – chaired by Deming Chen – was on the topic of whether FPGAs have a role to play in Supercomputing. As I pointed out in the discussion, to answer this question scientifically we need to have a working definition of “FPGA” and of “Supercomputing” – both seem to be on shifting sands at the moment, and we need to resist reducing a question like this to “does LINPACK run well on a Virtex or Stratix device.”

We also had the pleasure of congratulating Deming Chen and Paul Chow on their recently awarded fellowships, awarding a best paper prize, recognising several historical FPGA papers of significance, and last but by no means least welcoming the new baby of two of the stalwarts of the FPGA community – baby complete with “I am into FPGA” T-shirt! All this led to an excellent community feeling, which we should continue to nurture.

Efficient Memory via Formal Verification

My new PhD student Jianyi Cheng is presenting a very exciting paper at the ACM International Symposium on FPGAs (FPGA 2019). This is work he did for his Masters degree, and is a collaboration with Joy Chen and Jason Anderson at the University of Toronto, as well as Shane Fleming and myself at Imperial. In this blog post, I aim to summarise the main idea.

Multi-threaded programming is now a fairly mainstream activity, and has found its way into high-level synthesis tools, both through OpenCL and through LegUp’s pthreads support. We focus here on the latter.

At FPL 2017, Joy and Jason had a paper that automatically decided how to partition shared arrays for multi-threaded code, aiming to reduce the amount of arbitration required between hardware units and chunks of memory. Their approach used a simulation trace to identify candidate partitions, and designed the arbiters so that, for example, if accesses to partition P were only observed in that trace to come from thread T, then there is very low latency access to P from T at execution time. In this way, they were able to significantly speed up synthesised multi-threaded code making use of shared memories.

However, the arbiters were still there. They were necessary because while no access by some other thread T’ was observed during simulation, there was no guarantee that such an access could not occur at run-time. So the arbiters sat there, taking up FPGA area and – for large enough numbers of ports – hitting the critical path of the design.

Enter our work.

In our paper, we show – building on the excellent PhD thesis by Nathan Chong that I examined a few years back – how the original multi-threaded code can be translated into single-threaded code in a verification language developed by Microsoft Research called Boogie. We then show how to automatically construct assertions in Boogie that, if passed, correspond to a formal proof that a particular thread can never access a particular partition. This lets us strip out the arbiters, gaining back the area and significantly boosting the clock frequency.

I think it’s a really neat approach. Please come and hear Jianyi give his talk and/or read the paper!


Neural Networks, Approximation and Hardware

My PhD student Erwei Wang, various collaborators and I have recently published a detailed survey article on this topic: Deep Neural Network Approximation for Custom Hardware: Where We’ve Been, Where We’re Going, to appear in ACM CSUR. In this post, I will informally explain my personal view of the role of approximation in supervised learning (classification), and how this links to the very active topic of DNN accelerator design in hardware.

We can think of a DNN as a graph G, where nodes perform computations and edges carry data. This graph can be interpreted (executed) as a function \llbracket G \rrbracket mapping input data to output data. The quality of this DNN is typically judged by a loss function \ell. Let’s think about the supervised learning case: we typically evaluate the DNN on a set of n test input data points x_i and their corresponding desired outputs y_i, and compute the mean loss:

L(G) = \frac{1}{n} \sum_{i=1}^n {\ell\left( \llbracket G \rrbracket(x_i), y_i \right)}

Now let’s think about approximation. We can define the approximation problem as – starting with G – coming up with a new graph G', such that G' can be somehow much more efficiently implemented than G, and yet L(G') is not significantly greater than L(G) – if at all. All the main methods for approximating NNs such as quantisation of activations and weights and sparsity – structured and unstructured – can be viewed in this way.
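
As a toy illustration of this framing (entirely hypothetical, nothing to do with the survey’s experiments), the sketch below builds a tiny random network G, derives G' by rounding its weights to a few fixed-point fractional bits, and compares L(G) with L(G') on the same labelled test set.

```python
# A toy illustration of the framing above (hypothetical, not from the survey):
# G is a tiny random two-layer network, G' is obtained by quantising its
# weights to fixed point, and we compare the mean losses L(G) and L(G') on
# the same labelled test set.
import numpy as np

rng = np.random.default_rng(4)

def forward(weights, x):
    W1, W2 = weights
    return np.maximum(W1 @ x, 0.0) @ W2          # ReLU MLP: [[G]](x)

def mean_loss(weights, xs, ys):                   # L(G), squared-error loss
    return np.mean([(forward(weights, x) - y) ** 2 for x, y in zip(xs, ys)])

def quantise(weights, frac_bits):                 # derive G' from G
    scale = 2.0 ** frac_bits
    return [np.round(W * scale) / scale for W in weights]

G = [rng.standard_normal((8, 4)), rng.standard_normal(8)]
xs = rng.standard_normal((100, 4))
ys = np.array([forward(G, x) for x in xs]) + 0.1 * rng.standard_normal(100)

print("L(G)  =", mean_loss(G, xs, ys))
print("L(G') =", mean_loss(quantise(G, 4), xs, ys))   # 4 fractional bits
```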

There are a couple of interesting differences here compared to the problem – often studied in approximate computing, or lossy synthesis – of approximating the original function \llbracket G \rrbracket. In this latter setting, we can define a distance d(G',G) between G and G' (perhaps worst case or average case difference over the input data set), and our goal is to find a G' that keeps this distance bounded while improving the performance, power consumption, or area of the implementation. But in the deep learning setting, even the original network G is imperfect, i.e. L(G) > 0. In fact, we’re not really interested in keeping the distance between G and G' bounded – we’re actually interested in bounding the distance between \llbracket G' \rrbracket and some oracle function defining the perfect classification behaviour. This means that there is a lot more room for approximation techniques. It also means that L(G') may even improve compared to L(G), as sometimes seen – for example – through the implicit regularisation behaviour of rounding error in quantised networks. Secondly, we don’t even have access to the oracle function, only to a sample (the training set). These features combine to make the DNN setting an ideal playground for novel approximation techniques, and I expect to see many such ideas emerging over the next few years, driven by the push to embed deep learning into edge devices.

I hope that the paper we’ve just published in ACM CSUR serves as a useful reference point for where we are at the moment with techniques that simultaneously affect classification performance (accuracy / loss) and computational performance (energy, throughput, area). These are currently mainly based around quantisation of the datatypes in G (fixed point, binarisation, ternarisation, block floating point, etc.), topological changes to the network (pruning) and re-parametrisation of the network (weight sharing, low-rank factorisation, circulant matrices), as well as approximation of nonlinear activation functions. My view is that this is scratching the surface of the problem – expect to see many more developments in this area and consequent rapid changes in hardware architectures for neural networks!

Approximation of Boolean Functions

Approximate Computing has been a buzzphrase for a while. The idea, generally, is to trade off quality of result / solution, for something else – performance, power consumption, silicon area. This is not a new topic, of course, because in numerical computation people have generally always worked with finite precision number representations. In my early work in 2001, before the phrase “Approximate Computing” was in circulation, I introduced this as “Lossy Synthesis” – the idea that circuit synthesis can be broadened to incorporate the automated control of loss of numerical quality in exchange for reduction in area and increase in performance.

Most approximate computing frameworks focus on domains where numerical error is tolerable. Perhaps we don’t care if our answer is 1% wrong, for example, or perhaps we don’t even care if it’s out by 100%, so long as that happens very infrequently.

However, there is another interesting class of computation. Consider a function producing a Boolean output f : \chi \to {\mathbb B}, where {\mathbb B} = \{T, F\}. An interesting challenge is to produce another function \tilde{f} : \chi \to {\mathbb T} with a ternary output {\mathbb T} = \{T, F, -\} bearing a close resemblance to f. We can make the idea of bearing a close resemblance precise in the following way: if \tilde{f} declares a value true (false), then so must f. We can think of this as a relation between fibres:

\tilde{f}^{-1}(\{T\}) \subseteq f^{-1}(\{T\}) and \tilde{f}^{-1}(\{F\}) \subseteq f^{-1}(\{F\})            (1)

We can then think of the function \tilde{f} as approximating f if the fibre of the ‘don’t know’ element, -, is small in some sense, e.g. if |\tilde{f}^{-1}(\{-\})| is small.
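
Here is a toy illustration of condition (1) in code (my own example, not drawn from any paper): f is a three-input majority function, and \tilde{f} is a cheaper approximation that is allowed to answer ‘don’t know’ but must never contradict f.

```python
# A toy illustration of condition (1): f is a 3-input Boolean function, and
# f_tilde is a cheaper ternary-output approximation that may answer '-'
# ("don't know") but must never contradict f on T or F.
from itertools import product

def f(a, b, c):                      # the exact predicate: majority
    return (a and b) or (b and c) or (a and c)

def f_tilde(a, b, c):                # a cruder approximation with a '-' output
    if a and b:
        return True
    if not a and not b:
        return False
    return '-'                       # undecided without looking at c

inputs = list(product((False, True), repeat=3))
# Check the fibre inclusions: f_tilde^{-1}({T}) ⊆ f^{-1}({T}), likewise for F.
assert all(f(*x) is True for x in inputs if f_tilde(*x) is True)
assert all(f(*x) is False for x in inputs if f_tilde(*x) is False)
# The size of the 'don't know' fibre measures approximation quality.
print(sum(f_tilde(*x) == '-' for x in inputs))   # 4 of the 8 inputs
```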

In the context of approximate computing, we can pose the following optimisation problem:

\min_{\tilde{f}}: \mbox{Cost}(\tilde{f}) subject to |\tilde{f}^{-1}(\{-\})| < \tau and (1),

where \mbox{Cost} represents the cost (energy, area, latency) of implementing a function. One application area for this kind of investigation is in computer graphics. It is often the case that, when rendering a scene, an algorithm first needs to decide which components of the scene will definitely not be visible, and therefore need not be considered further. Should this part of the graphics pipeline make a mistake by deciding a component may be visible when it is actually invisible, little harm is done – more computation is required downstream in the graphics pipeline, costing energy and time, but not reducing the quality of the rendering. On the other hand, if it makes a mistake by deciding that a component is invisible when it is actually visible, this may cause a significant visual artefact in the rendered scene.

Last year, I had a bright Masters student, Georgios Chatzianastasiou, who decided to explore this problem in the context of f being the Slab Method in computer graphics and \tilde{f} being one of a family of approximations \tilde{f}_p, each produced by using interval arithmetic approximations to f computed in floating-point with precision p. In this way we get a family of approximate computing hardware IP blocks, all of which guarantee that, when given a ray and a bounding box, if the IP reports no intersection between the two, then there is provably no intersection. Yet each family member operates at a different precision, requiring different circuit area, trading off against the rate of ‘false positives’. Georgios wrote a paper on the implementation, which was accepted by FPL 2018 – he presents it next Wednesday.
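
The following is a caricature of the underlying guarantee (my own sketch, certainly not Georgios’s implementation): if every computed slab interval is widened outwards to swallow the arithmetic error, then a ‘no intersection’ answer is always safe, and working at lower precision merely increases the rate of ‘maybe’ answers passed downstream.

```python
# A caricature (not the FPL 2018 implementation) of a conservative slab test:
# each computed slab interval is widened outwards to swallow rounding error,
# so the test can only ever answer "no intersection" when that is provably
# true; lowering the assumed precision just produces more "maybe" answers.
EPS = 2.0 ** -10     # stand-in for the relative error of a reduced-precision unit

def widen(x):
    d = EPS * abs(x)
    return x - d, x + d

def maybe_intersects(origin, direction, box_lo, box_hi):
    # Assumes non-zero direction components, for brevity.
    t_lo, t_hi = float('-inf'), float('inf')   # outer bounds on the overlap of slabs
    for o, d, lo, hi in zip(origin, direction, box_lo, box_hi):
        a = widen((lo - o) / d)
        b = widen((hi - o) / d)
        slab_lo, slab_hi = min(a[0], b[0]), max(a[1], b[1])
        t_lo, t_hi = max(t_lo, slab_lo), min(t_hi, slab_hi)
    if t_hi < max(t_lo, 0.0):
        return False      # provably no intersection, even allowing for error
    return True           # "maybe": hand on to the exact downstream test

print(maybe_intersects((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))         # True
print(maybe_intersects((0, 0, 0), (1, 0.01, 0.01), (2, 2, 2), (3, 3, 3)))   # False
```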

If you’re at the FPL conference, please go and say hello to Georgios. If you’re interested in working with me to deepen and broaden the scope of this work, please get in touch!

Throwaway Digits

Tomorrow, my PhD student He Li will present our paper Digit Elision for Arbitrary-accuracy Iterative Computation (joint work with James Davis and John Wickerson) at the IEEE Symposium on Computer Arithmetic in Amherst, MA.

Readers of this blog may remember that we previously came up with a neat way of computing arbitrarily precise values of arbitrarily deep iterations of an iterative real-number computation, while only using constant-area compute hardware. This latest paper extends our previous work in the following way.

In our previous work, we computed every digit of every iteration of the computation. While for any computable real function this will give a correct result, it tends to be wasteful in practice. There are two reasons it’s wasteful. Firstly, the reason we’re computing the iteration at all is usually that it converges. Convergence can be seen as agreement in most-significant digits – after a while they don’t change. So why do we recompute them? We see this again and again in standard numerical computing – each iteration might add just a couple of new correct digits, but we still end up wasting time and energy computing all of the digits in each iteration, even the stable ones. Secondly, not all iterations may contribute equally to the overall error resulting from early termination. This paper addresses these two issues.

The first, and more general, issue is the wastefulness of computing stabilised digits. But just because they look stable, are they really stable? Maybe we’ve stabilised to 0.9, 0.99, 0.999, 0.999, and then one more iteration might kick us over to 1.0001. So can we really afford not to recompute most-significant digits? Ercegovac’s Online Arithmetic comes to our rescue again! If we compute in an appropriate redundant number representation, then we can prove that stability of digits means we don’t need to consider them any more. This is our first contribution – to recognise this and utilise it within an appropriately modified computational architecture.

The second, and more specific, issue is that some digits are effectively ‘don’t care’. In this paper, we only analyse the specific case of stationary iterative methods (Jacobi, SOR, etc.) for this kind of digit. We show that, in these cases, for a fixed digit budget (e.g. “compute at most D digits across all iterations”), you should allocate these digits by computing a constant number of additional digits each iteration. This constant can be estimated from the infinity norm of a certain matrix involved in the computation. Again, we modify our hardware architecture to take advantage of this pattern.
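
The following software sketch (mine, and no substitute for the hardware architecture in the paper) conveys the digit-allocation idea: run Jacobi with a working precision that grows by a constant number of digits per iteration, with that constant estimated conservatively from the infinity norm of the Jacobi iteration matrix.

```python
# A software caricature (not the paper's hardware architecture) of the digit
# allocation idea: run Jacobi with the working precision growing by a constant
# number of digits per iteration, that constant being estimated from the
# infinity norm of the Jacobi iteration matrix.
from decimal import Decimal, getcontext
import math

A = [[100, 1, 1],
     [1, 100, 2],
     [1, 2, 100]]
b = [102, 103, 103]        # chosen so the exact solution is [1, 1, 1]

# Jacobi iteration matrix M = D^{-1}(L+U); digits gained per iteration is
# roughly -log10 ||M||_inf, so budgeting floor of that per iteration is safe.
norm_M = max(sum(abs(A[i][j]) for j in range(3) if j != i) / abs(A[i][i])
             for i in range(3))
digits_per_iter = max(1, math.floor(-math.log10(norm_M)))

x = [Decimal(0)] * 3
for k in range(40):
    getcontext().prec = 5 + digits_per_iter * (k + 1)   # linearly growing digit budget
    x = [(Decimal(b[i]) - sum(Decimal(A[i][j]) * x[j] for j in range(3) if j != i))
         / Decimal(A[i][i]) for i in range(3)]

print(x)   # each component agrees with 1 to roughly the final working precision
```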

The end result is that we end up tracing out a corridor of digits, shown in the figure below, where the vertical axis is iteration and the horizontal axis is precision / digit number. Some digits have provably stabilised and no longer need computation (marked “), some are irrelevant don’t cares (marked X). This corridor radically improves the storage requirements of the original ARCHITECT scheme.

[Figure: the corridor of computed digits – iteration number (vertical axis) against digit position (horizontal axis)]

Hardware for Rational Functions

Next Tuesday, my collaborator Silviu-Ioan Filip will present some of our recent work with Nicolas Brisebarre, Miloš Ercegovac, Matei Istoan and Jean-Michel Muller at the IEEE International Symposium on Computer Arithmetic.

In the 1970s, Miloš invented a rather nice method called the E-method for evaluating rational functions, i.e. ratios of two polynomials. The basic idea of his method is as follows. We may solve a system of linear equations Ay = b where A is a matrix of a special structure formed from constants q_i together with variable x:

A = \begin{bmatrix}  1 & -x  & 0 & 0 & \cdots & 0 & 0 \\  q_1 & 1 & -x & 0 & \cdots & 0 & 0 \\  q_2 & 0 & 1 & -x & \cdots & 0 & 0 \\  \vdots & \vdots & \ddots & \ddots & \ddots  & \vdots & \vdots \\  q_{n-1} & 0 & 0 & 0 & \cdots & 1 & -x \\  q_n & 0 & 0 & 0 & \cdots & 0 & 1  \end{bmatrix}

If we further choose the vector b = \begin{bmatrix} p_0 & \cdots & p_n \end{bmatrix}^T, then it turns out that the first element of the solution vector is the rational function \frac{p_n x^n + \cdots + p_0}{q_n x^n + \cdots + q_0}, where q_0 = 1 implicitly, since the top-left entry of A is fixed at 1.
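
This is easy to check numerically; here is a quick sketch of my own doing exactly that for a small example, building A and b and comparing the first component of the solution against a direct evaluation of p(x)/q(x).

```python
# A quick numerical check (my own sketch, not the paper's hardware) that the
# first component of the solution of Ay = b is the rational function
# p(x)/q(x), with q_0 = 1 implicit in the structure of A.
import numpy as np

def e_method_matrix(q, x):
    n = len(q)                                  # q = [q_1, ..., q_n]
    A = np.eye(n + 1)
    A[1:, 0] = q                                # first column carries the q_i
    A[np.arange(n), np.arange(1, n + 1)] = -x   # superdiagonal of -x
    return A

p = [0.5, -1.0, 2.0, 0.25]           # p_0 .. p_3
q = [0.3, -0.2, 0.1]                 # q_1 .. q_3  (q_0 = 1)
x = 0.7

y = np.linalg.solve(e_method_matrix(q, x), np.array(p))
rational = np.polyval(p[::-1], x) / np.polyval(([1.0] + q)[::-1], x)
print(y[0], rational)                # the two values agree
```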

So we can use this to evaluate such rational functions. On the face of it, that doesn’t seem very interesting: why would we go to the bother of solving a system of linear equations to evaluate a rational function?

The answer lies in the combination of this idea with another one of Miloš’s key contributions, the idea of online arithmetic – computing results most-significant-digit first. In fact, if the matrix A is sufficiently well conditioned then we may use a stationary iterative method to solve the system of equations in such a way that it produces one new correct digit of the solution for each iteration of the method, leading to very efficient evaluation.

Our paper at ARITH makes two novel contributions. Firstly, we show how to find such a matrix A that is sufficiently well conditioned and for which the solution is close to a given function we’re trying to approximate, improving on the previous technique of Brisebarre et al. Secondly, we show how this method can be efficiently implemented in modern FPGA hardware, when aiming for high throughput.

The main domain of interest will be functions where rational approximation provides a much better fit than polynomials, as the computation required essentially provides rational computation for the price of polynomial computation. A buy-one-get-one-free offer, if you will.

I’m pleased to say that both the rational approximation generator and the hardware IP core generator will soon be open-sourced. Watch this space! Edit: I’m pleased to say this is now available at https://github.com/sfilip/emethod.

Structures in Arithmetic Teaching Tools

Readers of this blog will know that beyond my “day job”, I am interested in early mathematics education. Partly due to my outreach work with primary schools, I became aware of several tools that are used by primary (elementary) school teachers to help children grasp the structures present in arithmetic. The first of these, Cuisenaire Rods, has a long history and has recently come back in vogue in education. They consist of coloured plastic or wooden rods that can be physically manipulated by children. The second, usually known in this country as the Singapore Bar Model, is a form of drawing used to illustrate and guide the solution to “word problems”, including basic algebra. Through many discussions with my colleague, Charlotte Neale, I have come to appreciate the role these tools – and many other physical pieces of equipment, known in the education world as manipulatives – can play in helping children get to grips with arithmetic.

Cuisenaire and Bar Models have intrigued me, and I spent a considerable portion of my Easter holiday trying to nail down exactly what arithmetic formulae correspond to the juxtaposition of these concrete and pictorial representations. After many discussions with Charlotte, I’m pleased to say that we will be presenting our findings at the BSRLM Summer Conference on the 9th June in Swansea. Presenting at an education conference is a first for me, so I’m rather excited, and very much looking forward to finding out how the work is received.

In this post, I’ll give a brief overview of the main features of the approach we’ve taken from my (non educationalist!) perspective.

Firstly, to enable a formal study of these structures, we needed to formally define how such rods and diagrams are composed.

Cuisenaire Rods

These rods come in all multiples up to 10 of a single unit length, and are colour coded. To keep things simple, we’ve focused only on horizontal composition of rods (interpreted as addition) to form terms, as shown in an example below.

[Figure: Cuisenaire rods composed horizontally, representing 2 + 3 + 4]

In early primary school, the main relationships being explored relating to horizontal composition are equality and inequality. For example, the figure below shows that black > red + purple, because of the overhanging top-right edge.

[Figure: Cuisenaire rods showing black > red + purple, i.e. 7 > 2 + 4]

With this in mind, we can interpret any such sentence in Cuisenaire rods as an equivalent sentence in (first order) arithmetic. After having done so, we can easily prove mathematically that all such sentences are true. Expressibility and truth coincide for this Cuisenaire syntax! Note that this is very different to the usual abstract syntax for expressing number facts: although 4 = 2 + 1 is false, we can still write it down. This is one reason – we believe – they are so heavily used in early years education: truths are built through play. We only need to know syntactic rules for composition and we can make many interesting number sentences.

From an abstract algebraic perspective, closure and associativity of composition naturally arise, and so long as children are comfortable with conservation of length under translation, commutativity is also apparent. Additive inverses and identity are not so naturally expressed, resulting in an Abelian semigroup structure, which also carries over to our next tool, the bar model.

Bar Models

Our investigations suggest that bar models – example for 20 = x+2 pictured below – are rarely precisely defined in the literature, so one of our tasks was to come up with a precise definition of bar model syntax.

[Figure: a bar model for 20 = x + 2]

We have made the observation that there seem to be a variety of practices here. The most obvious one, for small numbers drawn on squared paper, is to retain the proportionality of Cuisenaire. These ‘proportional bar models’ (our term) inherit the same expressibility / truth relationship as Cuisenaire structures, of course, but now numerals can exceed 10 – at the cost of decimal numeration being a prerequisite for their use. However, proportionality precludes the presence of ‘unknowns’ – variables – which is where bar models are heavily used in the latter stages of primary schools and in some secondary schools.

At the other extreme, we could remove the semantic content of bar length, leaving only abutment and the alignment of the right-hand edges as denoting meaning – a type of bar model we refer to as a ‘topological bar model’. These are very expressive – they correspond to Presburger arithmetic without induction. It now becomes possible to express false statements (e.g. the trivial one below, stating that 1 = 2).

[Figure: a bar model expressing the false statement 1 = 2]

As a result, we must be mathematically precise about valid rules of inference and axiom schemata for this type of model, for example the rule of inference below. Note that due to the inexpressibility of implication in the bar model, many more rules of inference are required than in a standard first-order treatment of arithmetic.
[Figure: an example rule of inference for bar models]

The topological bar model also opens the door to many different mistakes, arising when children apply geometric insight to a topological structure.

In practice, it seems that teachers in the classroom informally use some kind of mid-way point between these two syntaxes, which we call an ‘order-preserving’ bar model: the aim is for relative sizes of values to be represented, ensuring that larger bars are interpreted as larger numbers. However, this approach is not compositional. Issues arising from this can be seen when trying to model, for example, x + y = 3. The positive integral solutions are either x = 2, y = 1, leading to x > y, or x = 1, y = 2, leading to y > x.

Other Graphical Tools and Manipulatives

As part of our work, we identify certain missing elements from first-order arithmetic in the tools studied to date. It would be great if further work could be done to consider drawings and manipulatives that could help plug these gaps. They include:

  • Multiplication in bar models. While we can understand 3x, for example, as a shorthand for x + x + x, there is no way to express x^2.
  • Disjunction and negation. While placing two bar models side-by-side seems like a natural way of expressing conjunction, there is no natural way of expressing disjunction / negation. Perhaps a variation on Peirce’s notation could be of interest?
  • We can consider variables in a bar model as implicitly existentially quantified. There is no way of expressing universal quantification.
  • As noted above, these tools capture an Abelian semigroup structure. We’re aware of some manipulatives, such as Algebra Tiles, which aim to also capture additive inverses, though we’ve not explored these in any depth.
  • We have only discussed one use of Cuisenaire rods – there are many others, as the recent ATM book by Ollerton, Williams and Gregg makes clear – many of which we feel could also benefit from analysis using our approach.
  • There are also many more manipulatives than Cuisenaire, as Griffiths, Back and Gifford describe in detail in their book, and it would be of great interest to compare and contrast these from a formal perspective.
  • At this stage, we have avoided introducing a monus into our algebra of bar models, but this is a natural next step when considering the algebraic structure of so-called comparative bar models.
  • My colleague Dan Ghica alerted me to the computer game DragonBox Algebra 5+, which we can consider as a sophisticated form of virtual manipulative incorporating rules of inference. It would be very interesting to study similar virtual manipulatives in a classroom setting.

An Exciting Starting Point

Charlotte and I hope that attendees at the BSRLM conference – and readers of this blog – are as excited as we are about our idea of the potential for using the tools of mathematical logic and abstract algebra to understand more about early learning of arithmetic. We hope our work will stimulate some others to work with us to develop and broaden this research further.

Acknowledgement

I would like to acknowledge Dan Ghica for reading this blog post from a semanticist’s perspective before it went up, for reminding me about DragonBox, and for pointing out food for further thought. Any errors remain mine.

Know Your Threads

Next week my PhD student Nadesh Ramanathan will present his paper (joint work with Wickerson) on Concurrency-Aware Scheduling for High-Level Synthesis at FCCM 2018.

This work is the latest instalment of our approach to scheduling multithreaded software in high-level synthesis while taking advantage of the weak memory behaviour allowable in the C/C++11 standard.

Our previous work analysed, and then synthesised, each thread individually. What this paper adds is the ability to perform an inter-thread analysis – while still synthesising threads individually. It is natural, in hardware synthesis, to assume knowledge of the other threads that are being synthesised at compile time. We show in this paper that such knowledge can – and often does – considerably improve high-level synthesis results, by removing redundant constraints during the scheduling process.

Readers wanting to know a little more before diving into the paper itself could also read John Wickerson’s description of our work.