Book Review: Category Theory for the Sciences

I have been reading Spivak’s book ‘Category Theory for the Sciences’ on-and-off for the last 18 months during vacations as some of my ‘holiday reading.’ I’ve wanted to learn something about category theory for a while, mainly because I feel it’s something that looks fun and different enough to my day-to-day activity not to feel like work. I found Spivak’s book on one of my rather-too-regular bookshop trips a couple of years ago, and since it seemed like a more applied introduction, I thought I’d give it a try. This blog post combines my views on the strengths of this book from the perspective of a mathematically-minded engineer, together with a set of notes on the topic which might be useful for me – and maybe other readers – to refer back to in the future.

[Image: Category Theory by the Pool, Summer 2019]

The book has a well-defined structure. The first few chapters cover topics that should be familiar to applied mathematicians and engineers: sets, monoids, graphs, orders, etc., but often presented in an unusual way. This sets us up for the middle section of the book which presents basic category theory, showing how the structures previously discussed can be viewed as special cases within a more general framework. The final chapters of the book discuss some topics like monads that can then be explored using the machinery of this framework.

Set

Chapter 3 covers the category \mathbf{Set} of sets.

One of the definitions here that’s less common in engineering is that of the coproduct. The coproduct of sets X and Y, denoted X \sqcup Y, is defined as the disjoint union of X and Y. I first came across the disjoint union as a concept when doing some earlier holiday reading on topology. Here we also get our first experience of the author’s use of informal ‘slogans’ to capture the ideas present in more formal definitions and theorems: “any time behavior is determined by cases, there is a coproduct involved.” I really like this approach – it’s the kind of marginalia I usually write in books myself, but pre-digested by the author!

The notation \text{Hom}_{\mathbf{Set}}(X,Y) is used for the set of functions from X to Y, which we are encouraged to refer to as homomorphisms, all without really explaining why at this stage – it of course all becomes clear later in the book.

To give a flavour of the kind of exercises provided, we are asked to prove an isomorphism \text{Hom}_{\text{\bf Set}}(X \sqcup Y, A) \xrightarrow{\cong} \text{Hom}_{\text{\bf Set}}(X, A) \times \text{Hom}_{\text{\bf Set}}(Y, A). This reminds me of digital circuit construction: the former corresponds to a time-multiplexed circuit capable of producing outputs from A with inputs from either X or Y, together with an identifier for which (to feed a multiplexer select line); the latter corresponds to two circuits producing both the outputs rather than just the selected one.
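As an aside of my own (not from the book), this isomorphism has a very direct rendering in Haskell, where Either plays the role of the coproduct: a function out of Either x y is exactly a pair of functions, one per case.

```haskell
-- A minimal sketch: Hom(X ⊔ Y, A) ≅ Hom(X, A) × Hom(Y, A), with Either as ⊔.
split :: (Either x y -> a) -> (x -> a, y -> a)
split h = (h . Left, h . Right)

unsplit :: (x -> a, y -> a) -> (Either x y -> a)
unsplit (f, g) = either f g   -- 'either' performs the case analysis of the slogan

-- e.g. map (unsplit (show, show . not)) [Left (3 :: Int), Right True] == ["3","False"]
```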

The fibre product X \times_Z Y, given functions f : X \to Z and g : Y \to Z, is defined as the set X \times_Z Y = \{ (x, z, y) | f(x) = z = g(y)\}. Any set isomorphic to this fibre product is called a pullback of the diagram X \xrightarrow{f} Z \xleftarrow{g} Y.
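For finite sets represented as lists, the fibre product is easy to compute directly; the short Haskell sketch below (my own illustration, with names chosen for this post) simply keeps the triples satisfying the defining condition.

```haskell
-- A minimal sketch of the fibre product of finite sets (as lists):
-- keep the triples (x, z, y) with f x == z == g y.
fibreProduct :: Eq z => [x] -> [y] -> (x -> z) -> (y -> z) -> [(x, z, y)]
fibreProduct xs ys f g = [ (x, f x, y) | x <- xs, y <- ys, f x == g y ]

-- e.g. fibreProduct [0..3] [0..3] (`mod` 2) (`mod` 2)
--      pairs up numbers of equal parity, tagged with that parity.
```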

The fibre sum X \sqcup_W Y, given functions f : W \to X and g : W \to Y, is the quotient of X \sqcup W \sqcup Y by the equivalence relation \sim generated by w \sim f(w) and w \sim g(w) for all w \in W. A pushout of X and Y over W is any set Z which is isomorphic to X \sqcup_W Y.

A good intuitive description in the text is to think of the fibre sum of X and Y as gluing the two sets together, with W as the glue. A nice example from one of the exercises is W = \mathbb{N}, X = \mathbb{Z}, Y a singleton set, f : W \to X defined by w \mapsto -(w+1), and g : W \to Y the unique map to the singleton. Then we’re left with X \sqcup_W Y \cong \{ -, 0, 1, \ldots \}, where - is an element corresponding to the collapsing of all negative elements of X to a single point.

The familiar notions of injective and surjective functions are recast as monomorphisms and epimorphisms, respectively. A function f : X \to Y is a monomorphism if for all sets A and pairs of functions g, g' : A \to X, if fg = fg' then g = g'. A function f : X \to Y is an epimorphism if for all sets B and pairs of functions h, h' : Y \to B, if hf = h'f then h = h'. By this point in the book, we get the hint that we’re learning this new language because it will generalise later beyond functions from sets to sets – the chapter ends with various slightly more structured concepts, like set-indexed functions, where the idea is always to define the object and the type of homomorphisms that might make sense for that object.

Monoids, Groups, Graphs, Orders & Databases

In Chapter 4, the reader is introduced to several other structures. With the exception of the databases part, this was familiar material to me, but with slightly unfamiliar presentation.

We learn the classical definition of a monoid and group, and also discuss the free monoid generated by a set X: the monoid consisting of the set \mbox{List}(X) of lists of elements of X, with the empty list as the identity and list concatenation as the operation. We discuss the idea of a monoid action on a set S, i.e. a function \circlearrowleft: M \times S \to S associated with the monoid M having the properties e \circlearrowleft s = s for any s \in S, where e is the identity, and m \circlearrowleft (n \circlearrowleft s) = (m \star n) \circlearrowleft s, where m and n are any elements of M, s is any element of S, and \star is the monoid operation. The discussion of monoids is nice and detailed, and I enjoyed the exercises, especially those leading up to one of the author’s slogans: “A finite state machine is an action of a free monoid on a finite set.”
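The slogan is easy to make concrete in code. The Haskell sketch below (my own illustration, with an assumed toy transition function) treats the action of a word – an element of the free monoid List(X) – as a fold of the single-symbol transition function over the list, so the empty word acts as the identity and concatenation acts as composition.

```haskell
-- A finite state machine as an action of the free monoid List(X) on a state set S.
type Transition s x = s -> x -> s

act :: Transition s x -> s -> [x] -> s
act delta = foldl delta   -- empty word: identity; concatenation: composed actions

-- Assumed example: a two-state machine tracking the parity of 1s seen so far.
parity :: Transition Bool Int
parity s 1 = not s
parity s _ = s

-- act parity False [1,0,1,1] == True
```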

Graphs are dealt with in a slightly unusual way (for me), as quadruples G := (V, A, \mbox{src}, \mbox{tgt}) consisting of a set of vertices V, a set of arrows A, and functions \mbox{src} and \mbox{tgt} mapping each arrow to its source and target vertex respectively. This corresponds to what I would call a multigraph, as parallel edges can exist, though it is called a graph in the book.
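This definition transcribes almost verbatim into a Haskell record (a sketch of my own, not anything from the book):

```haskell
-- The book's graph (really a multigraph): vertices, arrows, and source/target maps.
data Graph v a = Graph
  { vertices :: [v]
  , arrows   :: [a]
  , src      :: a -> v
  , tgt      :: a -> v
  }
```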

For each of the structures discussed, we finish by looking at their homomorphisms.

As an example, let {\mathcal M} := (M, e, \star) and {\mathcal M'} := (M', e', \star') be monoids (each tuple being the underlying set, identity element, and operation). Then a monoid homomorphism f: {\mathcal M} \to {\mathcal M'} is a function f: M \to M' preserving identity and operation, i.e. (i) f(e) = e' and (ii) f(m_1 \star m_2) = f(m_1) \star' f(m_2) for all m_1, m_2 \in M.
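A familiar concrete instance (my own example, not one from the book) is length, which is a monoid homomorphism from the free monoid (\mbox{List}(X), [\,], ++) to (\mathbb{N}, 0, +):

```haskell
-- length preserves identity and operation:
--   length []         == 0
--   length (xs ++ ys) == length xs + length ys
lengthIsHom :: [Int] -> [Int] -> Bool
lengthIsHom xs ys = length (xs ++ ys) == length xs + length ys
```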

Similarly, for graphs, graph homomorphisms f: G \to G' for G := (V, A, \mbox{src}, \mbox{tgt}) and G' := (V', A', \mbox{src}', \mbox{tgt}') are defined as pairs of functions f_0: V \to V' and f_1: A \to A' preserving the endpoints of arrows, i.e. such that f_0 \circ \mbox{src} = \mbox{src}' \circ f_1 and f_0 \circ \mbox{tgt} = \mbox{tgt}' \circ f_1 (the commuting squares drawn in the book).

Order morphisms are similarly defined, this time as those preserving order.

Thus we are introduced to several different (somewhat) familiar mathematical structures, and in each case we’re learning about how to describe morphisms on these structures that somehow preserve the structure.

Categories, Functors, and Natural Transformations

Things really heat up in Chapter 5, where we actually begin to cover the meat of category theory. This is exciting and kept me hooked by the pool. The unusual presentation of certain aspects in previous chapters (e.g. the graphs) all made sense during this chapter.

A category \mathcal{C} is defined as a collection of data: a set of objects \text{Ob}(\mathcal{C}), a set of morphisms \text{Hom}_{\mathcal{C}}(x,y) for every pair of objects x and y, a specified identity morphism \text{id}_x for every object x, and a composition operation \circ: \text{Hom}_{\mathcal{C}}(y,z) \times \text{Hom}_{\mathcal{C}}(x,y) \to \text{Hom}_{\mathcal{C}}(x,z) for every triple of objects x, y, z. To qualify as a category, these data must satisfy the identity laws and associativity of composition.
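For programmers, the shape of this definition is close to a Haskell type class; the sketch below (my own rendering, deliberately avoiding the standard Control.Category names so it stays self-contained) states the data, with the laws left as comments since the compiler cannot enforce them.

```haskell
-- A category: hom-sets as a two-parameter type, identities, and composition.
class Category cat where
  idC  :: cat a a                        -- identity morphism at each object a
  (.:) :: cat b c -> cat a b -> cat a c  -- composition Hom(b,c) x Hom(a,b) -> Hom(a,c)
  -- Laws (not checked): idC .: f == f, f .: idC == f, (h .: g) .: f == h .: (g .: f)

-- Ordinary functions form a category, essentially Set:
newtype Fn a b = Fn (a -> b)

instance Category Fn where
  idC          = Fn id
  Fn g .: Fn f = Fn (g . f)
```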

Examples of the category of Monoids, of Groups, of Sets, of Graphs, of Pre-Orders, etc., are explored. Isomorphisms between objects in a category are carefully defined.

Functors are introduced as transformations from one category to another, via a function transforming objects and a function transforming morphisms, such that identities and composition are preserved. Functors are then used to rigorously explore the connections between preorders, graphs, and categories, and I finally understood why the author had focused so much on preorders in the previous chapters!

Things started to take a surprising and exciting turn for me when the author shows, via the use of functors, that a monoid is a category with one object (the underlying set of the monoid becomes the morphisms in the category), and that groups can therefore be understood as categories with one object for which each morphism is an isomorphism. This leads to a playful discussion, out of which pop groupoids, amongst other things. Similar gymnastics lead to preorders reconstituted as categories in which hom-sets have at most one element, and graphs reconstituted as functors from a certain two-object category to \mathbf{Set}. Propositional logic is re-examined from a categorical perspective, though I suspect this is just scratching the surface of the interplay between category theory and logic, and I’d love to read more about this. For me, this was a highlight – that by playing with these definitions and trying things out, all these wonderful familiar structures arise.

The final part of this chapter deals with natural transformations. Just as functors transform categories to categories, natural transformations transform functors to functors. Specifically, a natural transformation between functors F : {\mathcal C} \to {\mathcal D} and G : {\mathcal C} \to {\mathcal D} is defined as a morphism (in {\mathcal D}) \alpha_X : F(X) \to G(X) for each object X in {\mathcal C}, such that the diagram below commutes for every morphism f : X \to Y in {\mathcal C}.

[Diagram: the naturality square G(f) \circ \alpha_X = \alpha_Y \circ F(f)]
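In Haskell terms (my own aside, not the book's), a natural transformation between two functors is just a suitably polymorphic function, and the naturality square comes for free; safeHead is the standard example.

```haskell
-- A natural transformation from the list functor to Maybe.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality (the commuting square): fmap f . safeHead == safeHead . fmap f
```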

I think this section on natural transformations is where the huge number of exercises in the book really came into its own; I doubt very much I would’ve grasped some of the subtleties of natural transformations without them.

We learn about two kinds of composition of natural transformations. The first (vertical composition) combines two natural transformations on functors F,G,H : {\mathcal C} \to {\mathcal D}, say \alpha: F \to G and \beta: G \to H, to obtain a natural transformation \beta \circ \alpha: F \to H. The other (horizontal composition) combines a natural transformation \gamma_1 between functors F_1, F_2 : {\mathcal C} \to {\mathcal D} with a natural transformation \gamma_2 between functors G_1, G_2 : {\mathcal D} \to {\mathcal E}, to produce \gamma_2 \Diamond \gamma_1: G_1 \circ F_1 \to G_2 \circ F_2.

A notion of equivalence of categories, {\mathcal C}' \simeq {\mathcal C} – weaker than isomorphism – is then introduced: a functor F : {\mathcal C} \to {\mathcal C}' such that there exists a functor F' : {\mathcal C}' \to {\mathcal C} and natural isomorphisms \text{id}_{\mathcal C} \xrightarrow{\cong} F' \circ F and \text{id}_{{\mathcal C}'} \xrightarrow{\cong} F \circ F'. This seemed odd to me — the lack of insistence on isomorphism for equivalence — but the examples all make sense. More importantly, it is shown that this requirement is sufficient for the action of F on hom-sets to be bijective for every fixed pair of objects in {\mathcal C} (i.e. F is fully faithful), which feels right. I think my understanding could benefit from further interesting examples of usefully equivalent but non-isomorphic categories though – suggestions welcome!

Limits and Colimits

Chapter 6 is primarily given over to the definitions and discussion of limits and colimits, generalising the ideas in \mathbf{Set} to other categories. The chapter begins by looking again at product and coproduct. As an example, for a category {\mathcal C} with objects X, Y \in \text{Ob}({\mathcal C}), a span on X and Y is defined as a triple (Z, p, q) where Z is an object in {\mathcal C} and p : Z \to X and q : Z \to Y are morphisms in {\mathcal C}. A product X \times Y is then defined as a span on X and Y such that for any other span (Z, p, q) on X and Y, there exists a unique morphism t_{p,q} : Z \to X \times Y such that the following diagram commutes. Again, I rather like the author’s slogan for this: “none shall map to X and Y except through me!”

[Diagram: the universal property of the product X \times Y]
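In \mathbf{Set}, this is just the Cartesian product with its two projections, and the unique mediating morphism pairs up the two legs of the span; the tiny Haskell sketch below (my own illustration) spells this out.

```haskell
-- The unique mediating morphism for a span (Z, p, q) into the product:
pair :: (z -> x) -> (z -> y) -> z -> (x, y)
pair p q z = (p z, q z)

-- The commuting triangles: fst . pair p q == p  and  snd . pair p q == q
```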

The chapter then further generalises these ideas to the notions of limit and colimit. This section was a step up in abstraction and also one of the least “applied” parts of the book. It has certainly piqued my interest – what are limits and colimits in various structures I might use / encounter in my work? – but I found the examples here fairly limited and abstract compared to the copious ones in previous chapters. It’s in this chapter that universal properties begin to rear their head in a major way, and again I’m still not totally intuitively comfortable with this idea – I can crank the handle and answer the exercises, but I don’t feel like I’m yet at home with the idea of universal properties.

Categories at Work

In the final section of the book, we first study adjunctions. Given categories {\mathcal B} and {\mathcal A}, an adjunction is defined as a pair of functors L: {\mathcal B} \to {\mathcal A} and R: {\mathcal A} \to {\mathcal B} together with a natural isomorphism \alpha whose component for A \in \text{Ob}({\mathcal A}), B \in \text{Ob}({\mathcal B}) is \alpha_{B,A}: \text{Hom}_{\mathcal A}(L(B), A) \xrightarrow{\cong} \text{Hom}_{\mathcal B}(B, R(A)). Several preceding results and examples can then be re-expressed in this framework, and understood as examples of adjunctions. Adjunctions are also revisited later in the chapter to show a close link with monads.
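The canonical example for me is the free-forgetful adjunction for monoids, where L builds the free monoid List(X) and R forgets the monoid structure; the bijection above then says that a monoid homomorphism out of List(X) is the same data as a plain function out of X. A small Haskell sketch of the two directions (my own, using the standard foldMap):

```haskell
import Data.Monoid (Sum (..))

-- Hom_Mon(List X, M) ≅ Hom_Set(X, M): extend a function to a monoid
-- homomorphism out of the free monoid, or restrict along singletons.
extend :: Monoid m => (x -> m) -> ([x] -> m)
extend = foldMap

restrict :: ([x] -> m) -> (x -> m)
restrict h x = h [x]

-- e.g. getSum (extend Sum [1, 2, 3 :: Int]) == 6
```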

Categories of functors are discussed, leading up to a discussion of the Yoneda lemma. This is really interesting stuff, as it seems to cross huge boundaries of abstraction. Consider a category {\mathcal C}. Spivak first defines a functor Y_c := \text{Hom}_{\mathcal{C}}(c, -) : {\mathcal C} \to \mathbf{Set} sending each object d of {\mathcal C} to the set \text{Hom}_{\mathcal C}(c,d), and similarly for morphisms. This is then used in the following theorem:

Let {\mathcal C} be a category, c \in \text{Ob}(\mathcal{C}) be an object, and I: {\mathcal C} \to \mathbf{Set} be a set-valued functor. There is a natural bijection \text{Hom}_{\mathcal{C}-\mathbf{Set}}(Y_c, I) \xrightarrow{\cong} I(c).

This looks super weird: the right-hand side is just a plain old set and yet the left hand-side seems to be some tower of abstraction. I worked through a couple of examples I dreamt up, namely \mathcal{C} = \mathbf{1} (the category with one object) with I mapping that object to an arbitrary set, and \mathcal{C} = \mathbf{Set} with I = \text{id}_{\mathbf{Set}} and convinced myself this works just fine for those two examples. Unfortunately, the general proof is not covered in the book – readers are referred to Mac Lane – and while some discussion is provided regarding links to databases, I felt like I need a proof to properly grasp what’s going on here under the hood.
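As a further (non-book) sanity check, the statement has a well-known rendering in Haskell for Set-valued functors, where a natural transformation out of Y_c is a rank-2 polymorphic function; the round trip below witnesses the bijection with I(c).

```haskell
{-# LANGUAGE RankNTypes #-}

-- Natural transformations Hom(c, -) => f, packaged as a rank-2 type.
newtype Yoneda f c = Yoneda { runYoneda :: forall d. (c -> d) -> f d }

toYoneda :: Functor f => f c -> Yoneda f c
toYoneda x = Yoneda (\k -> fmap k x)

fromYoneda :: Yoneda f c -> f c
fromYoneda (Yoneda m) = m id   -- evaluate the transformation at the identity on c
```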

Monads are then covered. I’ve briefly seen monads in a very practical context of functional programming, specifically for dealing with I/O in Haskell, but I’ve never looked at them from a theoretical perspective before.

A monad on \mathbf{Set} (monads are only defined in the book on \mathbf{Set}, for reasons not totally apparent to me) is defined as a triple (T, \eta, \mu) consisting of a functor T: \mathbf{Set} \to \mathbf{Set}, a natural transformation \eta: \text{id}_{\mathbf{Set}} \to T, and a natural transformation \mu: T \circ T \to T such that the following diagrams commute:

[Diagrams: the unit and associativity laws for (T, \eta, \mu)]
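To ground the triple (T, \eta, \mu) in something concrete, here is the Maybe monad written in that style (a sketch of my own in Haskell rather than the book's Set-theoretic notation), with \eta as wrapping and \mu as flattening:

```haskell
-- T = Maybe; eta wraps a value; mu flattens one layer.
etaMaybe :: a -> Maybe a
etaMaybe = Just

muMaybe :: Maybe (Maybe a) -> Maybe a
muMaybe (Just x) = x
muMaybe Nothing  = Nothing

-- Unit laws:          muMaybe . etaMaybe == id  and  muMaybe . fmap etaMaybe == id
-- Associativity law:  muMaybe . muMaybe  == muMaybe . fmap muMaybe
```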

I found this discussion enlightening and relatively straightforward. Firstly, exception monads are discussed, like Maybe in Haskell, together with a nice example of wrapping sets and their functions into power sets and their functions, for which I again enjoyed the exercises. Kleisli categories of monads are defined, and it then becomes apparent why this is such a powerful formalism for thinking about concepts like program state and I/O.

The Kleisli category \mathbf{Kls}(\mathcal{T}) of a monad \mathcal{T} = (T, \eta, \mu) on \mathbf{Set} is defined as a category with objects \text{Ob}(\mathbf{Kls}(\mathcal{T})) = \text{Ob}(\mathbf{Set}) and morphisms \text{Hom}_{\mathbf{Kls}(\mathcal{T})}(X,Y) = \text{Hom}_{\mathbf{Set}}(X,T(Y)). The identity \text{id}_X: X \to X in \mathbf{Kls}(\mathcal{T}) is exactly \eta_X in \mathbf{Set}, and composition of morphisms f: X \to Y and g: Y \to Z in \mathbf{Kls}(\mathcal{T}) is given by \mu_Z \circ T(g) \circ f in \mathbf{Set}. You can intuitively see the ‘baggage’ of T being carried through in \mathbf{Set} yet hidden from view in \mathbf{Kls}.
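This composition is exactly Haskell's Kleisli composition; the one-liner below (my own transcription, using join for \mu and fmap for the action of T on morphisms) mirrors the formula \mu_Z \circ T(g) \circ f.

```haskell
import Control.Monad (join)

-- Kleisli composition for any monad: compose f : X -> T Y and g : Y -> T Z.
kleisliCompose :: Monad t => (y -> t z) -> (x -> t y) -> (x -> t z)
kleisliCompose g f = join . fmap g . f   -- mu . T(g) . f; eta (return) is the identity
```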

Finally, the book ends with a (very) brief discussion of operads. I’ve not come across operads before, and I’m not really sure what they ‘buy’ me beyond monoidal categories.

General

The exercises are copious and great, especially in the earlier chapters. They kept me engaged throughout and helped me to really understand what I was reading, rather than just think I had. In the later exercises there are many special cases that turn out to be more familiar things. This is both enlightening (“oh, an X is just a Y!”) and frustrating (“yeah, but I want to see Xs that aren’t Ys, otherwise what’s all this heavyweight general machinery for?”). I guess this comes from trying to cover a lot of ground in one textbook.

I am struck by just how much mental gymnastics I feel I had to go through to see mathematical structures from the “outside in” as well as from the “inside out”, and how category theory helped me do that. This is a pleasing feeling.

The slogans are great and often amusing, e.g. when describing a functor from the category of Monoids to the category of Sets, “you can use a smartphone as a paperweight.”

The book ends rather abruptly after the brief discussion of operads, with no conclusion. I felt it would be nice to have a summary discussion, and perhaps point out the current directions the field of applied category theory is taking.

Overall, this book has been an enlightening read. It has helped me to see the commonality behind several disparate mathematical structures, and has encouraged me to look for such commonality and relationships in my own future work. Recommended!

Machine Learning at FPT 2019

Next week, the IEEE International Conference on Field-Programmable Technology (FPT) will take place in Tianjin in China. I’m proud that my former PhD student Qiang Liu will be General Chair of the conference.

I am a coauthor of two papers to be presented at FPT, one led by my former BEng student Aaron Zhao, now a PhD student at Cambridge supervised by my colleague Rob Mullins, and one led by my former postdoc, Ameer Abdelhadi, now with COHESA / UofT. The paper by Aaron is also in collaboration with two of my former PhD students, Xitong Gao, now with the Chinese Academy of Sciences, and Junyi Liu, now with Microsoft Research.

The first paper, led by Aaron, is entitled ‘Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs’, and can be found on arXiv here. This paper is a serious look at getting an automated CNN flow for FPGAs that makes good use of some of the arithmetic flexibility available on these devices. Powers-of-two (“free” multiplication) and fixed-point (“cheap” multiplication) are both leveraged.

The second paper, led by Ameer, looks at the computation of a set of approximate nearest neighbours. This is useful in a number of machine learning settings, both directly as a non-neural deep learning inference algorithm and indirectly within sophisticated deep learning algorithms like Neural Turing Machines. Ameer has shown that this task can be successfully accelerated in an FPGA design, and explores some interesting ways to parameterise the algorithm to make the most of the hardware, leading to tradeoffs between algorithm accuracy and performance.

If you’re at FPT this year, please go and say hello to Aaron, Ameer and Qiang.

Highlights of Asilomar 2019

This week, I attended my first Asilomar Conference on Signals, Systems, and Computers, a very long-running conference series of the IEEE Signal Processing Society with a very broad range of topics. I decided to attend Asilomar after being invited to give not just one talk, but two: once by my friend and collaborator Miloš Ercegovac from UCLA, and once by my good colleague Zhiru Zhang from Cornell.

No discussion of highlights of Asilomar can go without pointing out the extraordinarily beautiful setting of a conference centre right on Asilomar Beach. I can certainly see why the conference organisers keep coming back year after year – since the 1970s for Miloš and even earlier for my old friend fred harris, who I met there by surprise.


Distinguished Lecture

The conference opened with a distinguished lecture by Helmut Bölcskei from ETH Zurich, who gave a wonderful talk about the fundamental limits of deep learning. The key results he presented were about neural networks built of linear computational units and ReLU functions, and he showed how they can approximate a range of different functions. I was already familiar with asymptotic results for infinite-depth or infinite-width networks, but Bölcskei’s results were different – they showed how the approximation quality can be traded against a metric of neural network complexity that captures the number of bits needed to store the topology and the weights of the network. He was able to show the power of such neural networks across an extremely broad class of functions, and to explain how this comes about.

Compilation for Spatial Computing Architectures

This session was organised by Zhiru Zhang from Cornell and Hongbo Rong from Intel. The first talk, given by Yi-Hsiang Lai from Cornell, described the HeteroCL infrastructure, about which I’ve previously blogged in my description of FPGA 2019. Very closely related to this was Hongbo’s own work at Intel Labs, which makes heavy use of polyhedral methods, and work from the systolic array community on affine and uniform recurrence equations.

I then gave a talk about some of the work my research group has been doing over the past 12+ years in analysis of memory access patterns for High-Level Synthesis, taking in my early foundational work in bringing the polyhedral model to HLS with Qiang Liu (now at Tianjin University), our work on Separation Logic in HLS (now also a book by Felix Winterstein, my former PhD student who leads Xelera Technologies), and our recent work on utilising Microsoft Boogie in this context for multi-threaded HLS by my current PhD student Jianyi Cheng.

Finally, Thierry Moreau from the University of Washington presented his very interesting work on a hardware-software open-source stack for modern deep learning (see the TVM website).

Computer Arithmetic

This session was organised by Miloš Ercegovac from UCLA and Earl Swartzlander from UT Austin. The first talk in this session was from Fredrik Dahlqvist, a postdoc in my group, who spoke about our work together with Rocco Salvia marrying ideas from probabilistic programming with rounding error analysis.

Miloš Ercegovac from UCLA and James Stine from Oklahoma State University looked at how digit iteration techniques for division compare to multiplication-based techniques. Alexander Groszewski and Earl Swartzlander from UT Austin discussed their results from deterministic unary arithmetic inspired by stochastic computing; Keshab Parhi from the audience raised the interesting point of the importance of preservation of temporal structure in specially designed deterministic sequences for purposes of compositionality.

I really enjoyed the unusual talk by Keshab Parhi (U. Minnesota) on Molecular Computing Inspired by Stochastic Logic (see here for more details) via Fractional Coding, building on Soloveichik, Seelig and Winfree. If digits are encoded as relative concentrations of molecules, the problem of signal correlation, which tends to take the shine off stochastic computing work, can be avoided. He proposed computation using molecular reaction rates, and showed how to encode values as concentrations of two different molecules; his techniques have been verified in simulation – I would love to see this in a test-tube.

Theory of Deep Learning

This session was organised by Richard Baraniuk and Santiago Segarra (Rice University.)

There was a very enjoyable talk by Alessandro Achille from UCLA on studying deep neural networks from an information-theoretic perspective. He pointed out that real-valued weights appear to contain infinite information, but that by using the principle that small perturbations in weights should not throw off the classification result completely, we can recover a finite weight encoding. He then moved on to show, using a PAC-Bayes bound, that good generalisation comes from low weight information. He demonstrated that Stochastic Gradient Descent implicitly minimises Fisher information, but that for generalisation performance, it is Shannon information that should be bounded – he then derived a connection between the two under some conditions.

Tom Goldstein (University of Maryland) gave a stunningly illustrated talk on Understanding Generalization in Neural Nets via Visualization, based on his co-authored paper on the topic. He sought to empirically understand how the continuous piecewise linear functions of modern DNNs, when combined with SGD-based optimisation, lead to functions that generalise well. This was done via a clever process of “poisoning” training data to obtain badly generalising minima.

AI/ML Architectures

This session was organised by Keshab Parhi (University of Minnesota.)

Danny Bankman gave a talk about Stanford’s RRAM-based DNNs. He showed that register-file access accounts for the majority of energy in standard CMOS processor-like architectures, and drew the conclusion that architectures should be “memory-like” in their design, using “conductance-mode arithmetic” with very low precision integer activations, and put the necessary ramp generator for their ADC right inside the RRAM array. Results are verified using SPICE. I know little about RRAM technology, but talking with my colleagues Themis Prodromakis and Tony Kenyon has got me intrigued.

Deep Learning Theory

This session was organised by Tom Goldstein (University of Maryland.)

My favourite talk in this session was by Tom himself, in which he presented an analysis of adversarial attacks in DNNs, again beautifully illustrated – based on his co-authored paper. He showed that due to the high dimensionality of the spaces involved, you are extremely likely to hit – at random – a point in the input space that can be adversarially perturbed. He demonstrated – using the audience as guinea pigs – that adversarial perturbation can also trick humans quite easily on the CIFAR-10 data set. Perhaps my favourite twist on the talk was that he gave the talk wearing an “invisibility cloak” which – if worn – tricks YOLO into not identifying the wearer.

Reflections on Asilomar

I’ve sent PhD students to Asilomar before, but this was the first time I attended myself. It’s a very broad conference, in a beautiful setting. It seems to be a great venue to complement the more technically homogeneous conferences like FPGA which I help to organise – they serve different purposes. Asilomar is a great conference to have your work seen by people who wouldn’t usually follow your work, and to pick up ideas from neighbouring fields.

Approximating Circuits

Next week, Ilaria Scarabottolo, currently a visiting research student in my research group at Imperial, will present her paper “Partition and Propagate” at DAC 2019 in Las Vegas. In this post, I will provide a brief preview of her work (joint with Giovanni Ansaloni and Laura Pozzi from Lugano and me.)

I’ve been interested in approximation, and how it can be used to save resources, ever since my PhD 20 years ago, where I coined the term “lossy synthesis” to mean the synthesis of a circuit / program where error can be judiciously introduced in order to effect an improvement in performance or silicon area. Recently, this area of research has become known as “approximate computing”, and a bewildering number of ways of approximating behaviour – at the circuit and software level – have been introduced.

Some of the existing approaches for approximate circuit synthesis are point solutions for particular IP cores (e.g. our approximate multiplier work) or involve moving beyond standard digital design methodologies (e.g. our overclocking work.) However, a few pieces of work develop a systematic method for arbitrary circuits, and Ilaria’s work falls into this category.

Essentially, she studies the class of approximations that can be induced solely by removing chunks of a logic circuit, replacing dangling nets with constant values – a technique my co-authors referred to as Circuit Carving in their DATE 2018 paper.

Our DAC paper presents a methodology for bounding the error that can be induced by performing such an operation. Such error can be bounded by exhaustive simulation or SAT, but not for large circuits with many inputs due to scalability concerns. On the other hand, coarse bounds for the error can be derived very quickly. Ilaria’s work neatly explores the space between these two extremes, allowing analysis execution time to be traded for bound quality in a natural way.

Approximation’s time has definitely come, with acceptance in the current era often driven by machine-learning applications, as I explore in a previous blog post. Ilaria’s paper is an interesting and general approach to the circuit-level problem.

Boolean Circuits are Neural Networks

On Monday, my PhD student Erwei Wang will present our work (joint also with James Davis and Peter Cheung) called LUTNet: Rethinking Inference in FPGA Soft Logic at the IEEE International Symposium on Field-Programmable Custom Computing Machines in San Diego, California.

In this paper, we take a very unusual approach to the design of a deep neural network accelerator in hardware: for us, the nodes in the neural network are Boolean lookup tables.

We were motivated initially by the fact that in very low precision FPGA neural network architectures, lookup tables are often used for arithmetic, but typically only for very specific functions: while a K-LUT is capable of implementing any nonlinear Boolean function with K inputs, it ends up getting used for only a tiny fraction of these 2^{2^K} functions. A good example is binarised neural networks (BNNs) such as FINN, where LUTs end up being used to implement XNOR gates (multiplication over \{-1,+1\}) and popcount functions. Our research question is therefore: rather than restricting ourselves to these functions, can we make better use of the LUTs by embracing the nonlinearity and the K-input support they give us?

We show that this is indeed possible. Our basic approach is to start with a weight-binarised neural network, add inputs to each node to bring it up to K-input support, and then retrain the Boolean function implemented by that node. Retraining Boolean functions is a bit tricky, of course, because neural network training algorithms are not designed for this purpose. We generate a smooth interpolating function over the LUT entries, allowing us to use standard neural network training software (we use TensorFlow).
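To give a flavour of what “interpolating over LUT entries” can mean – purely as an illustrative sketch in Haskell, with the caveat that this is not the exact formulation from the paper or its TensorFlow implementation – one option is a multilinear interpolation that weights each stored entry by how close the relaxed input vector is to that entry's corner of the Boolean hypercube:

```haskell
-- One possible smooth relaxation of a K-input LUT (an assumed sketch, not the
-- paper's formulation): inputs are relaxed from {-1,+1} to [-1,1], and each
-- stored entry is weighted by its corner's proximity to the input vector.
lutInterp :: [([Double], Double)]  -- (corner in {-1,+1}^K, stored LUT entry)
          -> [Double]              -- relaxed inputs in [-1,1]^K
          -> Double
lutInterp entries xs =
  sum [ c * product [ (1 + e * x) / 2 | (e, x) <- zip corner xs ]
      | (corner, c) <- entries ]
-- At a genuinely Boolean input, exactly one product equals 1 and the rest are 0,
-- so the relaxation agrees with the original LUT at every corner.
```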

The end result is that the re-trained neural network is far more prunable than the original, because the extra inputs to the K-LUTs compensate for the removal of other nodes. Thus we end up with a much sparser neural network for the same classification accuracy. The sparsity improves our area by a factor of two or more, yet the more complex inference functions at each node are effectively provided “for free” by the FPGA architecture.

Circuit netlist? Neural network? Same thing!

Royal Society Discussion Meeting

I was kindly invited to speak at a Royal Society Discussion Meeting before Easter, entitled “Numerical Algorithms for High-Performance Computational Science”, organised by Nick Higham, Laura Grigori and Jack Dongarra. This blog post summarises some of the discussion at the meeting.

[Photo: Prof Nick Higham kicking off the proceedings]

I very much enjoyed the format of the meeting: select interesting speakers and allow them 30 minutes to talk about a topic of their choosing related to the theme of the meeting, with 10 full minutes for discussion and in-depth questions after each talk. Posters were also presented by a wide variety of researchers, with each poster presenter given a one-minute lightning-talk slot. Two of my PhD students, Erwei Wang and He Li, took this opportunity. Erwei presented a preview of our LUTNet paper appearing at FCCM very soon (separate blog post to follow), while He presented some of our work on arbitrary precision iterative compute.

Talks by others included:

  • David Keyes (KAUST) on the topic “Hierarchical algorithms on hierarchical architectures”. He discussed some very interesting hierarchical low-rank decompositions and also hierarchies of numerical precision.
  • Kathy Yelick (Berkeley) spoke on “Antisocial parallelism: avoiding, hiding and managing communication”, a very fruitful area of research in recent years. A few years ago, Abid Rafique, one of my former PhD students (joint with Nachiket Kapre) made use of this work, and it was good to catch up with the current state of research.
  • Anna Scaife (Manchester) gave a fascinating insight into the Square Kilometre Array. The sheer volumes of data are mind boggling (zettabytes annually) and pose unique algorithmic challenges.
  • Michela Taufer (UTK) discussed molecular dynamics workflows, and how we may be able to harness machine learning to reduce the human bottlenecks in such workflows in the future.
  • Rick Stevens (Argonne) gave a very engaging talk about the intersection of machine learning with computational science, exemplified by the Candle project, using deep learning in cancer research. He mentioned many of the emerging architectures for deep learning and their optimisation for low-precision compute. Interested readers may also like our recent survey article on the topic.
  • Jack Poulson (Hodge Star) spoke about sampling Determinantal Point Processes, and how links to matrix decomposition algorithms can be used to radically accelerate this process.
  • John Shalf (LBNL) spoke about alternative computational models beyond CMOS, new materials for switches, and the growth of hardware specialisation. He proposed the strategy of: hardware-driven algorithm design, algorithm-driven hardware design, and co-design of hardware and algorithm. Having worked in the FPGA community for decades where this has been our mantra, it is great to see hardware specialisation spreading the message of co-design into the HPC community.
  • Doug Kothe (ORNL) provided a very interesting insight into exascale computational motifs at the DOE.
  • Tony Hey (STFC) set out a compelling argument that academics in applied deep learning should focus on deep learning for scientific data, on the basis that (i) scientific data sets are huge and open and (ii) head-to-head competition with industrial giants on profit-oriented deep-learning applications, without access to their data sets, is a poor choice for academia. I think the same argument could be made for academic computer architecture too. His team are developing benchmarks for scientific machine learning, complementary to MLPerf.
  • Erin Carson (Charles University) presented an enchanting talk on iterative linear algebra in low and multiple precision compute. I’m a fan of her earlier work with Nick, and it was great to hear her current thinking and discussion of least-squares iterative refinement.
  • Steve Furber (Manchester) spoke about arithmetic in the context of the SpiNNaker machine, and a particular approach they have taken to numerical solution of neural ODEs in fixed-point arithmetic, demonstrating that stochastic rounding can radically improve the quality of their results.
  • Tim Palmer (Oxford) argued for low precision compute in weather and climate models, allowing the recouped computational cost to be recycled into better resolution weather models, resulting in higher overall accuracy. This reminded me of the argument I made with my PhD student Antonio Roldão Lopes and collaborator Eric Kerrigan in our paper More FLOPS or More Precision?
  • Guillaume Aupy (INRIA) discussed memory-efficient approaches for automatic differentiation and back-propagation.
  • Satoshi Matsuoka (RIKEN Centre) took us through the work being done on Post-K, a new Japanese supercomputer being designed to provide compute infrastructure for future workloads at the intersection of big data and AI/ML.
  • Mike Heroux (Sandia) spoke about his work developing programming infrastructure for future HPC, in particular for performance portability and for system reliability.

My own talk was entitled “Rethinking Deep Learning: Architectures and Algorithms” – I will save summarising the content for a future blog post. Slides for all these talks will appear on the Manchester Numerical Linear Algebra group website. In addition, each speaker has received an invitation to author an article for a special issue of Philosophical Transactions A – this should be a very interesting read.

[Photo: My talk on “Rethinking Deep Learning: Architectures and Algorithms”]

I was impressed by the great attendance at the meeting and by the quality of the technical interaction; I met several new and interesting people at the intersection of numerical analysis and scientific computing.

Special thanks to the organisers, Nick, Laura, and Jack for putting an excellent programme together. And congratulations to Jack for the news – a few days after the meeting – of his election to Foreign Member of the Royal Society!

[Photo: The Imperial EEE Team at the Royal Society]

DATE 2019: Some Highlights

This week, I attended the Design, Automation & Test in Europe (DATE) conference in Florence, Italy. DATE is a large conference, which I have attended irregularly since I was a PhD student. This year, the general chair was a long-standing colleague from the FPGA community, Jürgen Teich.

Readers can find a summary of some of the talks I found most interesting below.

On Tuesday, my colleague Martin Trefzer chaired a session on Computational and Resource-efficiency in Quantum and Approximate Computing. The work by Sekanina was interesting, using information on data distribution to drive the construction of approximate circuits. Circuits are constructed from the baseline using a technique called Cartesian Genetic Programming. I have recently been collaborating with Ilaria Scarabottolo and others from Laura Pozzi‘s group on a related problem – see our DAC 2019 paper (to appear) for details – so this was of particular interest.

On Wednesday, I chaired a session When Approximation Meets Dependability, together with my colleague Rishad Shafik. Ioannis Tsiokanos from Queen’s University Belfast presented an interesting approach that dynamically truncates precision in order to avoid timing violations. This is interestingly complementary to the approach I developed with my former PhD student Kan Shi where we first simply allowed timing violations [FCCM 2013], and secondly redesigned data representation based on Ercegovac‘s online arithmetic, so that timing violations caused low-magnitude error [DAC 2014].

David Pellerin from Amazon gave an interesting keynote address which very heavily emphasised Amazon’s F1 FPGA offering, which was – of course – music to my ears.

On Thursday morning, I attended the session Architectures for Emerging Machine Learning Techniques. Interestingly, there was a paper there making use of Gustafson’s posits within hardware-accelerated deep learning, a technique they dub Deep Positron.

The highlight talk for me was Ed Lee‘s Thursday keynote, A Fundamental Look at Models and Intelligence. Although I’ve been aware of Lee’s work, especially on Ptolemy, since I did my own PhD, I don’t think I’ve ever had the pleasure of hearing him lecture before. It was insightful and entertaining. A central theme of the talk was that models mean two different things for scientists and engineers: a scientist builds a model to correspond closely to a ‘thing’; an engineer builds a ‘thing’ to correspond closely to a model. He used this dichotomy to illuminate some of the differences we see between neuroscience-inspired artificial intelligence and the kind of AI that is very popular at the moment, such as deep learning. Lee’s general-readership book Plato and the Nerd – which has been on my “to read” list since my colleague and friend Steve Neuendorffer mentioned it to me a few years ago – has just climbed several notches up that list!

On Thursday afternoon, I attended the session on The Art of Synthesizing Logic. My favourite talk in this session was from Heinz Riener, who presented a collaboration between EPFL and UC Berkeley on Boolean rewriting for logic synthesis, in which exact synthesis methods are used to replace circuit cuts, rather than resorting to a pre-computed database of optimal function implementations. During the talk, Riener also pointed the audience to an impressive-looking GitHub repo, featuring what looks like some very useful tools.

Friday is always workshop day at DATE, featuring a number of satellite workshops. I attended the workshop entitled Quo Vadis, Logic Synthesis?, organised by Tiziano Villa and Luca Carloni. This was a one-off workshop in celebration of the 35th anniversary of the publication of the influential Espresso book on two-level logic minimisation.

Villa talked the audience through the history of logic synthesis, starting with the Quine-McCluskey method.

My favourite talk in this workshop was from Jordi Cortadella, who spoke about a method for synthesising Boolean relations. This is the problem of synthesising the cheapest implementation of a function f : {\mathbb B}^n \to {\mathbb B}^m, which one is free to choose from amongst a given relation R, i.e. viewing f as a relation f \subseteq {\mathbb B}^n \times {\mathbb B}^m, one is free to choose f – and its implementation – subject to the requirement that f \subseteq R for some given relation R (not necessarily a function). This is a strict generalisation of the well-known problem of Boolean ‘don’t care’ conditions, a.k.a. incompletely specified functions. Cortadella presented a method leveraging the known approaches to this latter problem, by exploring the semi-lattice of relations between these sets generated by \subseteq in a structured way, using a form of branch-and-bound.

Soeken also presented a very interesting summary of three uses of SAT within logic synthesis, namely Schmitt’s ASP-DAC paper on SAT-based LUT mapping, Eén’s paper on using logic synthesis for efficient SAT (rather than SAT for efficient logic synthesis) and Haaswijk‘s recent PhD on making exact logic synthesis more scalable by providing partial topological information – a topic that interestingly has some echoes in work I’m soon to present at the Royal Society.

The workshop was an enjoyable way to end DATE, and I was disappointed to have to leave half-way through – there may well have been other interesting talks presented in the afternoon.