Where High Performance Reconfigurable Computing Meets Embedded Systems

I have just returned from attending HiPEAC 2015 in Amsterdam. As part of the proceedings, I was asked by my long-time colleague Juergen Becker to participate in a panel debate on this topic.

Panels are always good fun, as they give one a chance to make fairly provocative statements that would be out of place in a peer-reviewed publication, so I took up the opportunity.

In my view, there are really two key areas where reconfigurable computing solutions provide an inherent advantage over most high performance computational alternatives:

  • In embedded systems, notions of correct or efficient behaviour are often defined at the application level. For example, in my work with my colleagues on control system design for aircraft, the most important notion of incorrect behaviour is “would this control system cause the aircraft to fall out of the sky?” An important metric of efficient behaviour might be “how much fuel does the aircraft consume?” These high-level specifications, which incorporate the physical world (or models thereof) in its interaction with the computational process, allow a huge scope for novelty in computer architecture, and FPGAs are the ideal playground for this novelty.
  • In real-time embedded systems, it is often important to know in advance exactly how long a computation will take. Designs implemented using FPGA technology often provide such guarantees – down to the cycle – where others fail. There is, however, a potential tension between some of the high-level design methodologies being proposed and the certainty of the timing of the resulting architecture.

Despite my best attempts to stir up controversy, there were very few dissenters from this view among the other members of the panel, and a general feeling that the world is no longer one of “embedded” versus “general purpose” computing. If we can still draw such divisions, they are now between “embedded / local” and “datacentre / cloud”, though power and energy concerns dominate the design process in both places.

Starting a Math Circle

I’ve been intrigued for a couple of years by the idea of math circles. Over Christmas I finally plucked up the courage to start one at a local primary school with my wife, a maths teacher. The school was happy, and recruited six pupils from Year 2 to Year 6 to attend.

Armed with a number of publications from the American Mathematical Society’s Mathematical Circles Library, we set about finding a suitable topic for Session 1. Our primary goal was a topic that was clearly not everyday school maths, preferably as far away from the National Curriculum as possible, and ideally including practical activities.

In the end, we decided to look at Möbius strips. In the tradition of journal-keeping for Mathematical Circles, I thought I would report the experience here in case anyone else wants to give it a go. In particular, we took the following approach:

  1. Make zero half-turn bands (loops) and colour the outside in one colour. Then colour the inside in a different colour.
  2. Repeat with a one half-turn band. This caused some surprise when it became apparent that one colour coloured the whole band.
  3. Predict what would happen if you cut the zero half-turn band down the centre (prediction was universally that you’d get two zero half-turn bands). Try it.
  4. Now predict for the one half-turn band. Children were less sure about this case, but the most popular prediction was two half-turn bands. More surprise when it turned out to create a single four half-turn band. One child then went off on his own exploring what happened when you cut one of these four half-turn bands (two four half-turn bands).

By this time some of the children were already off under their own steam, trying out their own experiments. This was great, but even with only six children and two adults, I found it hard to pull together the outcomes of these experiments in any systematic way in real time.

Eventually we discovered together that:

  • If the initial number of half-turns is odd, cutting gives you one larger band with more half-turns. (I was hoping we’d be able to quantify this, but it turns out to be very difficult to count the number of half-turns in a strip – even for me, let alone for the younger children!)
  • If the initial number of half-turns is even, cutting gives you two interlinked bands with the same number of half-turns. (Both rules are captured in the short sketch after this list.)
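For the record, here is the pattern in compact form – a tiny Python sketch of the cutting rule. The odd case is the standard result that an n half-turn band yields a single band with 2n + 2 half-turns; do verify it with paper and scissors rather than trusting my code!

```python
def cut_down_the_middle(half_turns):
    """Predict the result of cutting a paper band with the given number
    of half-turns along its centre line. Returns the half-turn counts
    of the resulting band(s). These rules match our Session 1
    experiments; the odd-case formula is the standard 2n + 2 result.
    """
    if half_turns % 2 == 1:
        # Odd (e.g. the Mobius strip): one larger band, more half-turns.
        return [2 * half_turns + 2]
    # Even: two interlinked bands, each with the same number of half-turns.
    return [half_turns, half_turns]

# The experiments from the session:
print(cut_down_the_middle(0))  # [0, 0] -- two ordinary loops
print(cut_down_the_middle(1))  # [4]    -- one four half-turn band
print(cut_down_the_middle(4))  # [4, 4] -- two four half-turn bands
```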

This took up pretty much the whole 50 minutes we had for Session 1, though I did briefly try to show them an explanation of why this might be the case, following the diagrammatic representation in this document from the University of Surrey. I probably didn’t leave enough time to do this properly, and they were in any case keen on cutting and exploring by that time, so with hindsight I probably should have just left them to it.

What delighted me was the child who wanted to take home his Möbius strip to show his dad. So, not a bad start to the Math Circle. Let’s see how we get on!

Overclocking-Friendly Arithmetic Needn’t Cost the Earth

This week, the FPT conference is being held in Shanghai. My PhD student, Kan Shi, will be presenting our work (joint with David Boland), entitled Efficient FPGA Implementation of Digit Parallel Online Arithmetic Operators.

In a previous paper, we’ve shown how the ideas of Ercegovac‘s “online arithmetic” – an arithmetic where computation proceeds from most significant digit to least significant digit, in contrast to the usual way we think about adding and multiplying – can be applied in the brave new world of clocking circuits faster than they “should” be able to run. The basic idea is simple: although sometimes beneficial, overclocking normal arithmetic – when it hurts – hurts bad, because errors tend to occur in the more significant digits. But if the flow of information were reversed, from more to less significant digits, then the error should hurt less too.
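To make the “hurts bad” point concrete, here is a trivial back-of-the-envelope illustration (my own, not an excerpt from either paper): the worst-case numerical damage of a single faulty bit grows exponentially with its significance.

```python
# In conventional LSD-first binary arithmetic, carries ripple from the
# least towards the most significant bit, so under an aggressive clock
# it is precisely the most significant bits that are resolved last and
# are most likely to be wrong. The damage of a fault at position i is
# up to 2**i:
WIDTH = 8
for i in range(WIDTH):
    print(f"fault in bit {i}: worst-case error {2**i}")
# A fault in bit 7 costs up to 128, while one in bit 0 costs at most 1.
# Reversing the flow of information (MSD first) means the digits still
# unresolved when timing fails are the cheap, low-significance ones.
```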

And so it did. But with one problem: to allow most significant digits to be generated first requires a redundant number system – a way of representing numbers where each number has more than one representation, for example, where there are two different ways of writing “10”. This redundancy cost a lot of silicon area.
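To illustrate what redundancy means here (a toy of my own, not the exact representation used in the paper), consider binary signed-digit numbers, where each digit is drawn from {-1, 0, 1} rather than {0, 1}:

```python
def value(digits):
    """Value of an integer binary signed-digit string, most significant
    digit first. Each digit comes from {-1, 0, 1}, so it needs two bits
    of storage instead of one -- the source of the extra silicon area.
    """
    v = 0
    for d in digits:
        v = 2 * v + d
    return v

# Two different ways of writing ten:
print(value([1, 0, 1, 0]))   # 10, as ordinary binary 1010
print(value([1, 1, -1, 0]))  # 10, as 8 + 4 - 2 + 0
```

It is exactly this freedom – being able to commit to a most significant digit early and patch up any small residual error with later negative digits – that lets computation run MSD first.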

This new paper shows that, in modern FPGA architectures, with careful design, the cost can be reduced significantly for adders. For multipliers, most-significant-digit-first arithmetic has the important benefit that if you only want the most significant digits of the result, you don’t need to compute the least significant digits. In multiplication this is often the case: in regular binary arithmetic we often compute the 2n-bit result of an n-bit by n-bit multiplication only to throw away the bottom n bits. We show in this paper that, by judicious design, the area penalties of the redundant arithmetic can be eliminated in this case.
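The point about discarding the bottom half of a product is easy to see with a toy example (plain Python, purely illustrative):

```python
n = 8
a, b = 0b10110101, 0b01101110  # two n-bit operands
full = a * b                   # the 2n-bit product a conventional multiplier forms
kept = full >> n               # ...of which only the top n bits are often kept

print(f"full 16-bit product: {full:016b}")
print(f"top 8 bits we keep:  {kept:08b}")
# An MSD-first multiplier can deliver those top digits without ever
# forming the bottom half, so the work spent on bits that are thrown
# away can be avoided altogether.
```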

This work removes one of the last remaining hurdles stopping online arithmetic from being industrially viable.

How (not) to Assess Children

Last month, the UK’s Department for Education launched a formal consultation on replacing statutory assessment in primary schools throughout England. The consultation runs until 18th December and can be found at https://www.gov.uk/government/consultations/performance-descriptors-key-stages-1-and-2. Everyone can respond, and should. In my view, this proposal has the potential to seriously damage the education of our children, especially those who are doing well at school.

Currently, English schools report a “level” at the middle of primary school and the end of primary school in reading, writing, maths and spelling, punctuation and grammar. At the end of primary school, typical levels reported range from Level 3 to Level 6, with Level 4 being average. The new proposals effectively do away with reporting a range of attainment, simply indicating whether or not a pupil has met a baseline set of criteria. In my view this is a terrible step backwards: no longer will schools have an external motivation to stretch their most able pupils. In schools with weak leadership and governance, this is bound to have an impact.

I have drafted a response to the consultation document at https://www.scribd.com/doc/246073668/Draft-Response-to-DfE-Consultation.

My response has been “doing the rounds”. Most recently, it was emailed by the Essex Primary Heads Association to all headteachers in Essex. It has also been discussed on the TES Primary Forum and has been tweeted about a number of times.

I am not the only one who has taken issue with this consultation: others include http://thelearningmachine.co.uk/ks1-2-statutory-teacher-assessment-consultation/ and http://michaelt1979.wordpress.com/2014/11/13/primary-teachers-a-call-to-arms/.

Please add your say, and feel free to reuse the text and arguments made in this document.

Review: The Learning Powered School

This book, The Learning Powered School, subtitled Pioneering 21st Century Education, by Claxton, Chambers, Powell and Lucas, is the latest in a series of books to come from the work initiated by Guy Claxton, and described in more detail on the BLP website. I first became aware of BLP through an article in an education magazine, and have since found out that one of the teachers at my son’s school has experience with BLP through her own son’s education. This piqued my interest enough to try to find out more.

The key idea of the book is to reorient schools towards being the places where children develop the mental resources to enjoy challenge and cope with uncertainty and complexity. The concepts of BLP are organised around “the 4 Rs”: resilience, resourcefulness, reflectiveness, and reciprocity, which are discussed throughout the book in terms of learning, teaching, leadership, and engaging with parents.

Part I, “Background Conditions”, explains the basis for BLP in schools in terms of both the motivation and the underlying research.

Firstly, the motivation for change is discussed. The authors argue that both national economic success and individual mental health are best served by parents and schools helping children to “discover the ‘joy of the struggle’: the happiness that comes from being rapt in the process, and the quiet pride that comes from making progress on something that matters.” This is, indeed, exactly what I want for my own son. They further argue that schools are no longer the primary source of knowledge for children, who can look things up online if they need to, so schools need to reinvent themselves, not (only) as knowledge providers but as developers of learning habits. I liked the suggestion that “if we do not find things to teach children in school that cannot be learned from a machine, we should not be surprised if they come to treat their schooling as a series of irritating interruptions to their education.”

Secondly, the scientific “stable” from which BLP has emerged is discussed. The authors claim that BLP primarily synthesises themes from Dweck‘s research (showing that if people believe that intelligence is fixed then they are less likely to be resilient in their learning), Gardner (the theory of multiple intelligences), Hattie (emphasis on reflective and evaluative practice for both teachers and pupils), Lave and Wenger (communities of practice, schools as an ‘epistemic apprenticeship’), and Perkins (the learnability of intelligence). I have no direct knowledge of any of these thinkers or their theories, except through the book currently under review. Nevertheless, the idea of school (and university!) as epistemic apprenticeship, and an emphasis on reflective practice, ring true with my everyday experience of teaching and learning. The seemingly paradoxical claim that emphasising learning rather than attainment in the classroom leads to better attainment is backed up with several references, and also agrees with a recent report I have read on the introduction of Level 6 testing in UK primary schools. The authors suggest that this is due to increased pressure on pupils and more “grade focus” leading to shallow learning.

The book then moves on to discuss BLP teaching in practice. A huge number of practical suggestions are made. Some that particularly resonated with me included:

    • pupils keeping a journal of their own learning experiences
    • including focus on learning habits and attitudes in lesson planning as well as traditional focuses on subject matter and assessment
    • a “See-Think-Wonder” routine: showing children something, encouraging them to think about what they’ve seen and record what they wonder about

Those involved in school improvement will be used to checklists of “good teaching”. The book provides an interesting spin on this, providing a summary of how traditional “good teaching” can be “turbocharged” in the BLP style, e.g. “students answer my questions confidently” becomes “I encourage students to ask curious questions of me and of each other”, “I mark regularly with supportive comments and targets” becomes “my marking poses questions about students’ progress as learners”, and “I am secure and confident in my curriculum knowledge” becomes “I show students that I too am learning in lessons”. Thus, in theory, an epistemic partnership is forged.

There is some discussion of curriculum changes to support BLP, which are broadly what I would expect, and a variety of simple scales to measure pupils’ progress against the BLP objectives to complement more traditional academic attainment. The software Blaze BLP is mentioned, which looks well worth investigating further – everyone likes completing quizzes about themselves, and if this could be used to help schools reflect on pupils’ self-perception of learning, that has the potential to be very useful.

In a similar vein, but for school leadership teams, the Learning Quality Framework looks worth investigating as a methodology for schools to follow when asking themselves questions about how to engage in a philosophy such as BLP. It also provides a “Quality Mark” as evidence of process.

Finally, the book summarizes ideas for engaging parents in the BLP programme, modifying homework to fit BLP objectives and improve resilience, etc.

Overall, I like the focus on:

  • an evidence-based approach to learning (though the material in this book is clearly geared towards school leaders rather than researchers, and therefore the evidence-based nature of the material is often asserted rather than demonstrated in the text)
  • the idea of creating a culture of enquiry amongst teachers, getting teachers to run their own mini research projects on their class, reporting back, and thinking about how to evidence results, e.g. “if my Year 6 students create their own ‘stuck posters’, will they become more resilient?”

I would strongly recommend this book to the leadership of good schools who already have the basics right. Whether schools choose to adopt the philosophy or not, whether they “buy in” or ultimately reject the claims made, I have no doubt that they will grow as places of learning by actively engaging with the ideas and thinking how they could be put into practice, or indeed whether – and where – they already are.

Insure yourself against your own inaccuracy

Last week saw the 19th World Congress of the International Federation of Automatic Control in Cape Town, South Africa. Three of my collaborators, Andrea Suardi, Stefano Longo and Eric Kerrigan, were there to present our joint paper Robust explicit MPC design under finite precision arithmetic.

The basic idea of this work is simple but interesting. Since we know we make mistakes, can we make decisions in a way that insures us against our own faulty decision making? In a control system, we typically want to control a “real world thing” – an aircraft, a gas turbine, etc. We and others have proposed very sophisticated ways to do this, but since with each finite precision operation we might drift further away from the correct result, can we develop our algorithm in a way that provides a guaranteed behaviour?

We show that since control engineering provides tools to control systems with uncertain behaviour, it’s possible to incorporate the uncertainty of the control algorithm itself into the model of the system we’re controlling, to produce a kind of self-aware system design.
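A minimal sketch of the flavour of the idea – a scalar toy of my own, nothing like the actual explicit MPC design in the paper: model the controller’s round-off as a bounded disturbance, and check the closed loop against the worst case.

```python
# Toy plant x[k+1] = a*x[k] + b*u[k], stabilised by the law u = -K*x.
# Computing u to f fractional bits injects a round-off error e with
# |e| <= 2**-(f+1), which we fold into the model as a bounded input
# disturbance -- the "self-aware" part of the design.
a, b, K = 1.2, 1.0, 0.9   # open-loop unstable; closed loop |a - b*K| = 0.3 < 1
f = 6                     # fractional bits of the controller output
eps = 2.0 ** -(f + 1)     # worst-case round-off (round to nearest)

def quantise(u):
    """The controller's finite-precision output: u to f fractional bits."""
    scale = 2.0 ** f
    return round(u * scale) / scale

x_exact = x_quant = 1.0
for _ in range(30):
    x_exact = a * x_exact + b * (-K * x_exact)          # ideal arithmetic
    x_quant = a * x_quant + b * quantise(-K * x_quant)  # finite precision

# The ideal loop converges to 0; the finite-precision loop only settles
# inside a ball of radius roughly b*eps / (1 - |a - b*K|). "Insuring
# ourselves" means choosing the design so this worst-case ball is
# acceptable.
print(x_exact, x_quant, b * eps / (1 - abs(a - b * K)))
```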

While the setting of this paper is in so-called explicit model predictive control, there’s no reason why this general philosophy should not extend to other decision-making processes. It provides a rigorous way to think about the impact of decision quality on the overall behaviour of a system: since we can generally make decisions in any number of ways, ranging from “quick and dirty” to “slow and thoughtful”, we could decide how to decide based on ideas like this.


Review: Practical Foundations for Programming Languages

A lot of my work revolves around various problems encountered when one tries to automate the production of hardware from a description of the behaviour the hardware is supposed to exhibit when in use. This problem has a long history, most recently going by the name “High Level Synthesis”. A natural question, but one that is oddly rarely asked in computer-aided design, is “what kind of description?”

Of course not only hardware designers need to specify behaviour. The most common kind of formal description is that of a programming language, so it seems natural to ask what the active community of programming language specialists has to say. I am fortunate enough to be an investigator on a multidisciplinary EPSRC project with my friend and colleague Dan Ghica, specifically aimed at bringing together these two communities, and I thought it was time to undertake sufficient reading to help me bridge some gaps in terminology between our fields.

With this in mind, I recently read Bob Harper‘s Practical Foundations for Programming Languages. For an outsider to the field, this seems to be a fairly encyclopaedic book describing a broad range of theory in a fairly accessible way, although it did become less readily accessible to me as the book progressed. My colleague Andy Pitts is quoted on the back cover as saying that this book “reveals the theory of programming languages as a coherent scientific subject,” so with no further recommendation required, I jumped in!

I like the structure of this book, because as a non-specialist I find this material heavy going: Harper has split a 500-page book into fully 50 chapters, which suits me very well. Each chapter has an introduction and a “notes” section and – for the parts in which I’m less interested – I can read just these bits and still get the gist of the material. Moreover, there is a common structure to these chapters, where each feature is typically first described in terms of its statics and then its dynamics. The 50 chapters are divided into 15 “parts”, to provide further structure to the material.

The early parts of the book are interesting, but not of immediate practical relevance to me as someone who wants to find a use for these ideas rather than study them in their own right. It is nice, however, to see many of the practical concepts I use on the rare occasions I get to write my own code shown in their theoretical depth – function types, product types, sum types, polymorphism, classes and methods, etc. Part V, “Infinite Data Types”, is of greater interest to me, partly because anything infinite seems to be of interest to me (and, more seriously, because one of the most significant issues I deal with in my work is the mapping of algorithms conceptually defined over infinite structures into finite implementations).

Where things really get interesting for me is in Part XI, “Types as Propositions”. (I also love Harper’s distinction between constructive and classical logic as “logic as if people matter” versus “the logic of the mind of God”, and I wonder whether anyone has explored the connections between the constructive / classical logic dichotomy and the philosophical idealist / materialist one?) This left me wanting more, though, and in particular I am determined to get around to reading more about Martin-Löf type theory, which is not covered in this text.

Part XIV, Laziness, is also interesting for someone who has only played with laziness in the context of streams (in both Haskell and Scala, which take quite different philosophical approaches). Harper argues strongly in favour of allowing the programmer to make evaluation choices (lazy / strict, etc.).
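For readers who, like me, have mainly met laziness through streams, a Python generator gives the by-need flavour (a loose analogy of mine, not how Harper formalises it):

```python
from itertools import islice

def naturals():
    """An 'infinite' stream: no element exists until it is demanded."""
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
# Only five elements are ever produced: evaluation is driven entirely
# by demand, the essence of the lazy dynamics Harper contrasts with
# the strict (eager) one.
```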

Part XV, Parallelism, starts with the construction of a cost dynamics, which is fairly straightforward. The second chapter in this part looks at a concept called futures and their use in pipelining; while pipelining is my bread and butter in hardware design, the idea of a future was new to me. Part XVI, Concurrency, is also relevant to hardware design, of course. Chapter 43 makes an unexpected (for me!) link between the type system of a distributed concurrent language and modal logics, another area in which I am personally interested for epistemological reasons, but about which I know little.
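Since the idea of a future was new to me, here is roughly what it amounts to, sketched in Python (the general concept only, not Harper’s formal treatment):

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    return x * x          # stands in for a long-running computation

with ThreadPoolExecutor() as pool:
    fut = pool.submit(expensive, 7)  # start the computation now...
    other = sum(range(10))           # ...overlap it with other work...
    print(fut.result() + other)      # ...and block only when the value
                                     # is actually needed: 49 + 45 = 94
```

The overlap between starting a computation and demanding its result is what gives the pipelining.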

After discussing modularity, the book finishes off with a discussion of notions of equivalence.

I found this to be an enlightening read, and would recommend it to others with an interest in programming languages, an appetite for theoretical concerns, but a lack of exposure to the topics explored in programming language research.

Review: Children’s Minds

My third and final piece of holiday reading this Summer was Margaret Donaldson’s Children’s Minds, the first book I’ve read on child psychology.

This is a book first published in 1978, but various sources suggested it to me as a good introduction to the field.

I wasn’t quite sure what to make of this book. The initial chapters seem to be very much “of the time”, a detailed critique of the theory of Piaget, which meant little to me without a first hand knowledge of Piaget’s views. Donaldson does, however, include her own summary of Piaget’s theories as an appendix.

Donaldson argues that children are actually very capable thinkers at all ages, but that effort must be made to communicate to them in a way they can understand. Several interesting experiments by Piaget, from which he apparently concluded that children are incapable of certain forms of thought, are contrasted against others by later researchers who found that by setting up the experiments using more child-friendly communication, these forms are apparently exhibited.

The latter half of the book becomes quite interesting, as Donaldson explores what schools can do during reception (kindergarten) age and beyond to ensure that the early excitement of learning which most children have is not destroyed by schools themselves. It is fascinating, for example, to read that

There is now a substantial amount of evidence pointing to the conclusion that if an activity is rewarded by some extrinsic prize or token – something quite external to the activity itself – then that activity is less likely to be engaged in later in a free and voluntary manner when the rewards are absent, and it is less likely to be enjoyed.

I would be most interested in what work has been done since the 70s on this point, since, if this is true, it seems to clash markedly with practice in the vast majority of primary schools I know.

The final part of the text is remarkably polemical:

Perhaps it is the convenience of having educational failures which explains why we have tolerated so many of them for so long…

A vigorous self-confident young population of educational successes would not be easy to employ on our present production lines. So we might at last be forced to face up to the problem of making it more attractive to work in our factories – and elsewhere – and, if we had done our job in the schools really well, we should expect to find that economic attractions would not be enough. We might be compelled at last to look seriously for ways of making working lives more satisfying.

Little progress since the 70s, then! Interestingly (for me) Donaldson approvingly quotes A.N. Whitehead’s views on the inertia of education; I know Whitehead as a mathematician, and was completely unaware of his educational work.

As I mentioned, this is the first book on child psychology I’ve read. I found it rather odd; I am not used to reading assertions without significant citations or hard data to back them up. I am not sure whether this is common in the field, whether it is Donaldson’s writing style, or whether it is because this is clearly Donaldson writing for the general public. I tended to agree with much of what was written, but I would have been far more comfortable with greater emphasis on experimental rigour. There is much in here to discuss with your local reception class teacher; I want to know more about older children.


Review: Risk Savvy

My second piece of holiday reading this Summer was Gigerenzer’s Risk Savvy. This is an “entertaining book” for a general readership.

I learnt quite a bit from this book, but still found it frustrating and somewhat repetitive. There are many very interesting anecdotes about risk and poor decision making under risk, as well as lots of examples of how we are manipulated by the press and corporations into acting out of fear.

However, I don’t necessarily agree with the conclusions reached.

As an example, Gigerenzer clearly shows that PSA testing for prostate cancer in the US does more harm than good compared to the UK’s approach. More people are rendered incontinent and impotent through early intervention, without any significant difference in mortality rate. A similar story is told for routine mammography. But the conclusion that Gigerenzer seems to draw from these – and similar – studies is that “it’s better not to know”, whereas my conclusion would be “it’s better not to intervene immediately”. I can’t see why simply knowing more should be worse.

I was also frustrated by the way Gigerenzer deals with the classification of risks into “known risks”, i.e. those for which good statistical information is available, and “unknown risks”. He convincingly shows that – all too often – we deal with unknown risks as if they were known risks, resulting in poor decision making. To me this appears to be a mirror of the two ways I know of dealing with uncertainty in mathematical optimization: stochastic versus robust optimization. This is a valuable dichotomy, but I don’t think that Gigerenzer’s conclusion that, in the presence of “unknown risk”, “simple is good” and “go with your gut feeling” is well justified. I do think that more needs to be done by decision makers to factor in the computational complexity of making decisions – and the overfitting of overly complex models – into decision making methodologies, but if “simple is good” then this should be a derivable result; I would love to see some mathematically rigorous work in this area.

Review: The Adventure of Numbers

One of the books I took on holiday this year was Gilles Godefroy’s The Adventure of Numbers. This is a great book!

The book takes the reader on a tour of numbers: ancient number systems, the Sumerian and Babylonian systems (decimal-coded base 60, from which we probably get our time system of hours, minutes and seconds), the ancient Greeks and the discovery of irrational numbers, Arab mathematics, the development of imaginary numbers, transcendentals, Dedekind’s construction of the reals, p-adic numbers, infinite ordinals, and the limits of proof.

This is a huge range; the book is well written and, while fairly rigorous, requires only basic mathematics.

I love the fact that I got from this book both things that I can talk to primary school children about (indivisibility of space through a geometric construction of the square root of two and its irrationality) and also – unexpectedly – an introduction to the deep and beautiful MRDP theorem which links two sublime interests for me: computation (in a remarkably general sense) and Diophantine equations.

What’s not to love?