Fields Can Make Your Loops Run Faster

On Tuesday, my PhD student Xitong Gao will present our work (joint with John Wickerson) on Automatically Optimizing the Latency, Area, and Accuracy of C Programs for High-Level Synthesis at the ACM International Symposium on Field-Programmable Gate Arrays.

Our starting point for this work is that people often want to express their algorithms in terms of real numbers, but then want an implementation in some finite-precision arithmetic on an FPGA after passing their code through a High-Level Synthesis tool.

One of the key factors limiting the performance of such an implementation is the ability of the HLS tool to pipeline loops in the design in order to maximise throughput. It turns out that the numerical properties of the real numbers can come to the rescue!

It’s a well-known (but often forgotten) fact that computer-represented numbers are not – in fact are very far from – real numbers. In particular, the main algebraic properties that the real numbers exhibit – closure, associativity, commutativity, distributivity, etc. – simply do not hold for a large number of practically used data representations.
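As a minimal illustration – my own example, not one from the paper – floating-point addition is not even associative:

#include <stdio.h>

int main(void) {
    float a = 1e20f, b = -1e20f, c = 1.0f;
    /* Over the reals both expressions equal 1, but in single-precision
       floating point the small term c is absorbed by the large ones. */
    printf("(a+b)+c = %g\n", (a + b) + c);  /* prints 1 */
    printf("a+(b+c) = %g\n", a + (b + c));  /* prints 0 */
    return 0;
}

Any compiler or HLS tool that silently reassociated this code would change its result, which is exactly why such transformations need to be accompanied by an accuracy analysis.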

In our paper we take account of this discrepancy by automatically refactoring code to make it run faster (while tracking area and accuracy too). As a trivial example, for the recurrence x_n = 3 x_{n-1} + 1, a multiply and an add must be performed before the next iteration can begin. But transforming this to the equivalent recurrence x_n = 9 x_{n-2} + 4 doubles the dependence distance, giving much more time to execute each iteration’s arithmetic. Combining this with expression balancing, etc., leads to a wide variety of possible implementations, which our tool can explore automatically while also proving the numerical properties of the transformed code. This latter point makes it quite unlike so-called “unsafe” optimisations commonly used in compilers, such as -funsafe-math-optimizations in gcc.
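In loop form, the idea looks something like the following sketch – my own illustration, not code from the paper, with float standing in for whatever finite-precision type the design actually uses:

float run_original(float x0, int N) {
    /* Original: each iteration depends directly on the previous one,
       so the multiply-add latency limits the initiation interval. */
    float x = x0;
    for (int n = 1; n <= N; n++)
        x = 3.0f * x + 1.0f;
    return x;
}

float run_transformed(float x0, int N) {
    /* x_n = 9 x_{n-2} + 4 splits the work into two independent
       chains (even- and odd-indexed values), which a pipelined
       implementation can overlap.  Assumes N is even here. */
    float x_even = x0;                /* x_0, x_2, x_4, ... */
    float x_odd  = 3.0f * x0 + 1.0f;  /* x_1, x_3, x_5, ... */
    for (int n = 2; n <= N; n += 2) {
        x_even = 9.0f * x_even + 4.0f;
        x_odd  = 9.0f * x_odd  + 4.0f;
    }
    return x_even;  /* x_N for even N */
}

Of course, in finite-precision arithmetic the two versions are not guaranteed to compute identical values – which is precisely the accuracy dimension the tool tracks.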

HiPEAC 2016

I have just returned from HiPEAC, a major “roadshow” of primarily European research in high performance and embedded computing. As always, it was good to catch up with old friends and colleagues from around the world. I briefly describe below some of the highlights for me, as well as my contribution to a panel discussion held on Tuesday.

The keynote talk at PARMA-DITAM came from my old friend Pedro Diniz, on compiler transformations for resilience – a theme that kept coming up throughout the conference, and one that is also a key topic of the large project I have with Southampton, Manchester and Newcastle Universities.

Kerstin Eder’s talk at the ICT-ENERGY workshop covered the work at Bristol on modelling power consumption in the XMOS architecture – an interesting choice, it turns out, because communication in XMOS is transparent to the programmer, which makes power modelling at the instruction level much more appealing.

Another old friend, Juergen Teich, gave a great keynote at IMPACT on extending scheduling to the parametric / symbolic case, in order to allow for the number and shape of a rectangular array of processing units to be determined at run time rather than at compile time. This aligns very nicely with our own drive to parametricity, in our case for run-time pipelining decisions in high-level synthesis.

Cristiano Malossi from IBM gave an interesting keynote at WAPCO on mixed precision conjugate gradient solvers, amongst other things. My former PhD student Antonio Roldao Lopes did an early investigation into how the precision of such a solver impacts its performance (lower precision, more iterations, but each iteration runs faster). It was interesting to see this extended to a mixed precision setting, where expensive parts of the algorithm are executed in low precision while cheap parts are executed in high precision.

I was invited to participate in a panel discussion in front of the audience at the WRC workshop. The other panelists were Aaron Smith, who is writing compilers for the Microsoft FPGA system now in the Bing search engine; Ronan Keryell, who does high-level design R&D at Xilinx; Peter Hofstee, from IBM; and Koen Bertels, from TU Delft. The panel was moderated by another old friend, Juergen Becker. We were asked to discuss whether “the embedded revolution is dead” or whether it’s “just the microprocessor” whose death has come (!), and what the future holds. My perspective was that the embedded revolution has not started yet (how much computational intelligence is embedded in most things around us, and how much better could our lives be if there were more?), that the microprocessor is far from dead (though future microprocessors will look far more ‘FPGA-like’ as the number of cores expands beyond the point at which it becomes unreasonable to ignore the fact we’re computing in space), and that the future belongs to high performance embedded systems based on heterogeneous architectures.

I don’t think there was much disagreement from the other members of the panel. Juergen did his tongue-in-cheek best to start an IBM-and-Microsoft-versus-others fight by suggesting that it was actually datacentres which are dead, though I think there was a general consensus that the balance between local and cloud-based compute needs to be carefully evaluated in future system designs and that neither will kill the other.

Questions from the audience were interesting because people seem mostly worried about how to program these damn things! Which is good for those of us working on exactly this question, of course. The panel’s view was that with today’s compiler technology, the two main options were Domain Specific Languages and OpenCL. I made the point that architecture specialisation, and FPGA design in particular, is naturally aligned with static analysis – you get automatic specialisation by automatically understanding what you’re going to implement; I therefore ventured that future languages for heterogeneous computing, whatever they are, will be designed to make static analysis a simpler task.

Book Review: Out of the Labyrinth: Setting Mathematics Free

This book, by Kaplan and Kaplan, a husband and wife team, discusses the authors’ experience running “The Math Circle”. Given my own experience setting up and running a math circle with my wife, I was very interested in digging into this.

The introductory chapters make the authors’ perspective clear: mathematics is something for kids to enjoy and create independently, together, with guides but not with instructors. The following quote gets across their view on the difference between this approach and their perception of “school maths”:

Now math serves that purpose in many schools: your task is to try to follow rules that make sense, perhaps, to some higher beings; and in the end to accept your failure with humbled pride. As you limp off with your aching mind and bruised soul, you know that nothing in later life will ever be as difficult.

What a perverse fate for one of our kind’s greatest triumphs! Think how absurd it would be were music treated this way (for math and music are both excursions into sensuous structure): suffer through playing your scales, and when you’re an adult you’ll never have to listen to music again.

I find the authors’ perspective on mathematics education, and their anti-competitive emphasis, appealing. Later in the book, when discussing competition, Math Olympiads, etc., they note two standard arguments in favour of competition: that mathematics provides an outlet for adolescent competitive instinct and – more perniciously – that mathematics is not enjoyable, but since competition is enjoyable, competition is a way to gain a hearing for mathematics. Both perspectives are roundly rejected by the authors, and in any case are very far removed from the reality of mathematics research. I find the latter of the two perspectives arises sometimes in primary school education in England, and I find it equally distressing. There is a third argument, though, which is that some children who don’t naturally excel at competitive sports do excel in mathematics, and competitions provide a route for them to be winners. There appears to be a tension here which is not really explored in the book; my inclination would be that mathematics as competition diminishes mathematics, and that should competition be needed for self-esteem, one can always find some competitive strategy game where mathematical thought processes can be used to good effect. However, exogenous reward structures, I am told by my teacher friends, can sometimes be a prelude to endogenous rewards in less mature pupils. This is an area of psychology that interests me, and I’d be very keen to read any papers readers could suggest on the topic.

The first part of the book offers the reader a detailed (and sometimes repetitive) view of the authors’ understanding of what it means to do mathematics and to learn mathematics, peppered with very useful and interesting anecdotes from their math circle. The authors take the reader through the process of doing mathematics: analysing a problem, breaking it down, generalising, insight, and describe the process of mathematics hidden behind theorems on a page. They are insistent that the only requirement to be a mathematician is to be human, and that by developing analytical thinking skills, anyone can tackle mathematical problems, a mathematics for The Growth Mindset if you will. In the math circles run by the authors, children create and use their own definitions and theorems – you can see some examples of this from my math circle here, and from their math circles here.

I can’t say I share the authors’ view of the lack of beauty of common mathematical notation, explored in Chapter 5. As a child, I fell in love with the square root symbol, and later with the integral, as extremely elegant forms of notation – I can even remember practising them so they looked particularly beautiful. This is clearly not a view held by the authors! However, the main point they were making – that notation invented by the children will be owned and understood by the children – is well made. One anecdote made me laugh out loud: a child who invented the symbol “w” to stand for the unknown in an equation because the letter ‘w’ looks like a person shrugging, as if to say “I don’t know!”

In Chapter 6, the authors begin to explore the very different ways that mathematics has been taught in schools: ‘learning stuff’ versus ‘doing stuff’, emphasis on theorem or emphasis on proof, math circles in the Soviet Union, competitive versus collaborative, etc. In England, in my view the Government has been slowly shifting the emphasis of primary school mathematics towards ‘learning stuff,’ which cuts against the approach taken by the authors. The recent announcement by the Government on times tables is a case in point. To quote the authors, “in math, the need to memorize testifies to not understanding.”

Chapter 7 is devoted to trying to understand how mathematicians think, with the idea that everyone should be exposed to this interesting thought process. An understanding of how mathematicians think (generally accepted to be quite different to the way they write) is a very interesting topic. Unfortunately, I found the language overblown here, for example:

Instead of evoking an “unconscious,” with its inaccessible turnings, this explanation calls up a layered consciousness, the old arena of thought made into a stable locale that the newer one surrounds with a relational, dynamic context – which in its turn will contract and be netted into higher-order relations.

I think this is simply arguing for mathematical epistemology as akin to – in programming terms – summarising functions by their pre- and post-conditions. I think. Though I can’t be sure what a “stable locale” or a “dynamic” context would be, what “contraction” means, or how “higher-order relations” might differ from “first-order” ones in this context. Despite the writing not being to my taste, interesting questions are still raised regarding the nature of mathematical thought and how the human mind makes deductive discoveries. This is often contrasted in the text with ‘mechanical’ approaches, without ever exploring the question of either artificial intelligence or automated theorem proving, which would seem to arise naturally in this context. But maybe I am just demonstrating the computing lens through which I tend to see the world.

The authors write best when discussing the functioning of their math circle, and their passion clearly comes across.

The authors provide, in Chapter 8, a fascinating discussion of the ages at which they have seen various forms of abstract mathematical reasoning emerge: generalisation of when one can move through a 5×5 grid, one step at a time, visiting each square only once, at age 5 but not 4; proof by induction at age 9 but not age 8. (The topics, again, are a far cry from England’s primary national curriculum.) I have recently become interested in the question of child development in mathematics, especially with regard to number and the emergence of place value understanding, and I’d be very interested to follow up on whether this differs between the US, where the authors work, and the UK, what kind of spread is observed in both places, and how age-of-readiness for various abstractions correlates with other aspects of a child’s life experience.

Other very valuable information includes their experience on the ideal size of a math circle: 5 minimum, 10 maximum, as they expect children to end up taking on various roles: “doubter, conjecturer, exemplifier, prover, and critic.” If I run a math circle again, I would like to try exploring a single topic in greater depth (the authors use ten one-hour sessions) rather than a topic per session as I did last time, in order to let the children explore the mathematics at their own rate.

The final chapter of the book summarises some ideas for math circle style courses, broken down by age range. Those the authors believe can appeal to any age include Cantorian set theory and knots, while those they put off until 9-13 years old include complex numbers, solution of polynomials by radicals, and convexity – heady but exciting stuff for a nine-year-old!

I found this book to be a frustrating read. And yet it still provided me with inspiration and a desire to restart the math circle I was running last academic year. Whatever my beef with the way the authors present their ideas, their core love – allowing children to explore and create mathematics by themselves, in their own space and time – resonates with me deeply. It turns out that the authors run a Summer School for teachers to learn their approach, practising on kids of all ages. I think this must be some of the best maths CPD going.

Sizing Custom Parallel Caches

On Wednesday, my PhD student Felix Winterstein will present his paper Custom-Sized Caches in Application-Specific Memory Hierarchies at FPT 2015 in New Zealand. This is joint work with MIT and Intel, specifically the team who developed LEAP.

Imagine that your FPGA-based algorithm needs to access data. Lots of data. What do you do? Store that data in SDRAM and spend ages carefully thinking about how to make efficient use of scarce on-chip memory to get the best performance out of your design. What if that whole process could be automated?

Felix has been working for a while on a method to do just that. He’s produced some incredibly exciting work, making use of the latest developments in software verification to devise an automatic flow that analyses heap-manipulating software and determines how it should be efficiently implemented in hardware. At FPGA in February this year, we were able to show how our analysis could automatically design a parallel caching scheme, and know – prove, even! – when costly coherence protocols could be avoided. The remaining question was how to decide automatically – in a custom parallel caching scheme – how much on-chip memory to allocate to each parallel cache. This is the last piece of the puzzle, which we will present on Wednesday.


Book Review: Surely You’re Joking, Mr Feynman

This Summer I had a long flight to Singapore. A few hours into long flights I tend to lose the will to read anything challenging, so I decided to return to a book I first read as a young teen, when it was lent to me by family friend Fred Harris: Surely You’re Joking, Mr Feynman. I remember at the time that this book was an eye-opener. Here was a Nobel-prize-winning physicist with a real lust for life, coupled with an inherent enthusiasm for applying the scientific method to all aspects of life and a healthy lack of respect for rules and authority. As a teenager equally keen to apply the scientific method to anything and everything, and keen to work out my own ideas on authority and rules, yet somewhat lacking the personal confidence of Feynman, I found the book hugely enjoyable.

I still find the book hugely enjoyable. I was chuckling throughout – the practical jokes, the encounters with non-scientific “experts” and authority figures. However, I was also somewhat taken aback by Feynman’s attitude to women, which I found misogynistic in places, something that had totally passed me by as a teenager. Feynman’s attitudes were probably unremarkable for his time (Feynman was born in 1918) so I’m not attributing blame here, but I find it odd that I didn’t notice this in the 1980s. Googling for it now, I find commentary about this point is all over the Internet. It just goes to show how many subtleties pass children and teens by, something parents and educators would do well to remember.

Assessment of Primary School Children in England

Some readers of this blog will know that I am particularly interested in the recent reform of the English National Curriculum and the way that assessment systems work.

This week the Commission on Assessment without Levels produced their long-awaited report, to which the government has published a response. Both can be read on the Government’s website. In addition, the Government has published interim statutory frameworks for Key Stage 1 and Key Stage 2 assessment. I set out below my initial thoughts on what I believe is a profoundly problematic assessment scheme.

Please let me know what you think of this initial view – I would be most interested to hear from you.

1. Aims and Objectives

The chairman of the commission argues in his foreword that assessment has “been too dominated by the requirements of the national assessment framework and testing regime” and must move “to one where the focus needs to be on high-quality, in-depth teaching, supported by in-class formative assessment.” I have no doubt he is right, but I hope to provide an alternative view in this post – that the proposed interim assessment frameworks exacerbate this problem rather than solve it.

2. High Learning Potential

I find it extraordinary that the Commission does not provide insight into how they expect systems of assessment to cater to children who learn at a faster rate than their peers. Considerable space is given – rightly – to those children who learn at a rate slower than their peers, and the DfE response says “we announced the expert group on assessment for pupils working below the level of the national curriculum tests on 13 July 2015. We are keen to ensure that there is also continuity between this group and the Commission.” This is most welcome. Where is the balancing expert group on assessment for pupils working above the level of the national curriculum tests? How will this group of pupils be catered for? The only mention of “the most able” in the commission report says “all pupils, including the most able, do work that deepens their knowledge, understanding and skills, rather than simply undertaking more work of the same difficulty or going on to study different content.” The problem is that the statutory assessment frameworks provide no way to differentiate between schools which are working hard to “deepen the knowledge, understanding and skills” of pupils who are already attaining at the expected standard at Key Stage 2 and schools which are not. This makes this group of pupils very vulnerable, as it dramatically reduces the statutory oversight of their progress.

3. Mastery

The commission has attempted to grasp the nettle and come up with a definition of what they see as “mastery” in their report – a word much used by purveyors of solutions for the new National Curriculum. The fundamental principles outlined are, I think, uncontroversial – ensuring children know their stuff before moving on. “Pupils are required to demonstrate mastery of the learning from each unit before being allowed to move on to the next” is just good practice in any teaching, under a new or old curriculum, when combined with providing children enough opportunity to demonstrate mastery. However, they then muddy the waters with this quote about the new national curriculum: “it is about deep, secure learning for all, with extension of able students (more things on the same topic) rather than acceleration (rapidly moving on to new content)”. This all depends on the definition of “rapidly”. Of course children should not be moved onto new content until they are secure with prior content. Of course it might be possible to identify lots more content on “the same topic” without straying into content from a later key stage (though I have yet to see good examples of this – publish them, please!) But let’s be clear: the national curriculum does not say that acceleration is unacceptable. It says “Pupils who grasp concepts rapidly should be challenged through being offered rich and sophisticated problems before any acceleration through new content” and “schools can introduce key stage content during an earlier key stage, if appropriate.” There is still a difference here between the national curriculum view, which I support (accelerate only if ready), and the commission’s perception of the national curriculum (don’t accelerate). Whether this revolves around a different definition of “accelerate” or a fundamental difference of opinion is less clear, but the issue needs to be addressed.

4. Levels: What’s The Real Issue?

The commission set out in detail what they feel are the problems with assessment by level. In summary, they are:

a. Levels caused schools to prioritise pace over depth of understanding

The Commission reports that, under the old national curriculum, “despite a wider set of original purposes, the pressure generated by the use of levels in the accountability system led to a curriculum driven by Attainment Targets, levels and sub-levels, rather than the programmes of study.”

This is probably quite true, and it seems it will be at least as true under the proposed new interim teacher assessments: these are dominated by a set of tick-box statements which are narrower than those found in the published programmes of study, recreating and entrenching the same problem.

b. Levels used a “best fit” approach, which often left serious gaps in understanding unfilled

If any school used levels alone to pass information about pupil attainment to the next year group teacher, then that school was – in my view – being woefully negligent in its assessment policy. Of course more information on what pupils are secure in, and what they are not, needs to be passed between professionals than purely a best fit level; in my view this is a specious argument against levels – it is actually an argument against poor assessment. And I think we can all get behind that.

Now let us consider what happens when we move from a best fit approach to the “lowest common denominator” approach appearing in the recent statutory frameworks: to demonstrate that pupils have met the standard, teachers will need to have evidence that a pupil demonstrates consistent attainment of all the statements within the standard. Certainly an advantage is that when a pupil moves into the next key stage, a stamp of “met the standard” should mean that new teachers have a meaningful baseline to work from (though still no understanding of where the pupil exceeds this baseline and by how much; simply reporting this information between key stages would still be woefully inadequate).

The disadvantage is likely to come from those children with unusual learning profiles. I was surprised that the commission report actually identifies autistic children: “there were additional challenges in using the best fit model to appropriately assess pupils with uneven profiles of abilities, such as children with autism.” It might certainly be easier for teachers to reach an assessment that an autistic child has “not met the standard” because he or she has a particular blind spot on one part of the curriculum, but it is certainly no more helpful for that child’s next teachers to be told “not met” than it is to be told “Level 6”, and arguably much less so. Again, we can agree – I think – that a profile of what children can achieve should be produced to go alongside summary attainment indicators, whether these are “secondary readiness” or “Level 4b”.

I hope I have outlined above where I think Government thinking is achieving well and where it lags behind a reasonable standard for assessment of our children. This doesn’t stop me coming up with a best fit summary assessment: Requires Improvement.

Math Circle Session 13: Infinity

Given the number of questions we’ve been asked by the children about infinity in recent weeks, I decided that today we’d have a session just on infinity.

I opened the session by asking the children whether they had any questions about infinity they’d like to try to answer in this session. I received the following suggestions:

  • Why does infinity have the symbol \infty?
  • What is infinity plus infinity?
  • Why do we sometimes say infinity is a number and sometimes not?
  • What are the rules for adding and subtracting with Aleph numbers? (From a child who had discussed infinite cardinals at home in the past.)

The session began with one child answering the first question: “it represents going round and round and never stopping, which would also be a circle, but that’s zero.” I hoped the other questions would crop up during the discussion that followed.

We then moved on to prime numbers. All the children had heard of primes, but only the older children could give a clear, unambiguous definition. I had them list out some prime numbers, and asked how many there were. Opinion was divided: one child thought there would be an infinite number (after all, the session was on infinity!), while another child thought that this can’t be true “because most numbers are not prime, and there are infinite counting numbers, so there can’t be infinite primes too.” I then showed them Euclid’s proof that there are infinitely many primes.
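For blog readers who haven’t seen it, the proof is short enough to sketch here (my phrasing, not exactly the form we used in the session). Suppose p_1, p_2, \ldots, p_k were all the primes, and let N = p_1 p_2 \cdots p_k + 1. Dividing N by any p_i leaves remainder 1, so no p_i divides N; but N > 1 must have some prime factor, which is therefore a prime missing from our supposedly complete list – a contradiction.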

Starting first with a hotel with a finite number of rooms, I then explained Hilbert’s Hotel. For blog readers, there is an excellent animation of these ideas on TED. This seemed to get them quite excited, which is what I was hoping to see! We covered one new arrival and an infinitely large coach of new arrivals, each time stopping to see whether they could come up with a construction that would allow the new arrivals to fit in the full hotel. The child who had previously seen infinite cardinals suggested a solution to one new arrival, but nobody came up with a solution to the infinitely large coach of new arrivals. However, it was great to see the thought going into this (most children wanted to move each person from their room to one infinitely further down the corridor).
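For completeness, the standard constructions we were working towards: for one new arrival, the guest in room n moves to room n + 1, freeing room 1; for an infinitely large coach, the guest in room n moves to room 2n, freeing all the odd-numbered rooms for the newcomers.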

At this point, there were two directions I had thought we might go, given enough time: one on an infinite number of infinitely large coaches using a prime construction, given our proof of Euclid’s Theorem, and the other to look at a diagonal argument proof of the uncountability of the continuum, as a route into the final question the children asked. However, given we only had five minutes left, I decided it was better to just let the children’s discussion free-range, since they were still talkative after the hotel story. A variety of discussion points emerged involving infinity, and I just let them talk to each other: is space infinite, how can space not be infinite, are there an infinite number of life-supporting planets out there, what happened “before” the big bang, etc.! It was great to see this discussion, even if we hadn’t managed to answer all the questions posed at the beginning.

Math Circle Session 12: Eulerian Circuits

Yesterday we turned our attention to Eulerian circuits in math circle. This was a completely “homebrew” session.

We started by asking the children to solve the classical puzzle of drawing this “house” without taking pen off paper, and without retracing your steps.

[Figure: the “house” shape to be drawn without lifting pen from paper]

Surprisingly, they hadn’t seen it before, but it didn’t take long for them to solve it, and also to discover that the path is not unique. We then asked whether it is possible to draw the same shape, following the same rules, but ending the drawing back where you started. The children made many attempts at this. Nobody could find a way, but nobody suggested a reason why it wasn’t possible. One child noticed that if you remove the bottommost edge, the problem has a solution.

[Figure: the house with its bottommost edge removed]

As a result, we began to enumerate all the connected graphs with one vertex, with two vertices, with three vertices, and with four vertices. For each graph, children quickly solved the problem where it had a solution. For small enough graphs, where there wasn’t a solution, they could see this through complete enumeration of paths.

One child suggested that there may be a link between the parity of the number of vertices in the graph, and whether or not an Eulerian circuit exists, but I demonstrated this to be false by reference to the two houses – the complete one and the one with the missing floor.

I suggested we look at the act of entering and exiting a vertex, using two edges each time. That, combined with drawing the order on each vertex of each of the graphs we’d enumerated, resulted in the conjecture from one of the kids that Eulerian circuits can only exist when all vertices have even order, and we together constructed an argument why this should be the case.

I then posed the (much harder) converse question: does an Eulerian circuit exist for every connected graph with even order vertices? I am not sure the children clearly understood the difference between this and the previous question; since they had shown that every graph with an Eulerian circuit has even order vertices, I think this seemed “obvious” to them (we must look at logic during one session – any reader with a child-friendly introduction to logic, please let me know!) I asked them to try to produce a graph with even order vertices that does not admit an Eulerian circuit, and they tried for some time before suggesting it is not possible, though none were clear why.

In a step towards trying to answer this question, I wanted to demonstrate that every graph with only even order vertices contains a circuit. I had them first try to construct graphs where each vertex has even order but the graph contains no circuit. Some elaborate failed attempts ensued, but then one child made the following remarkable claim: “it’s possible if the number of vertices is infinite”. I queried this child, and he drew the following “graph”:

[Figure: the child’s “graph” – a path of vertices continuing without end]

His argument was that all these vertices have order two, yet the graph clearly contains no circuit. I thought this a fairly remarkable construction for a 7 or 8 year old! After restricting ourselves to finite graphs (infinity crops up so often, and absorbs so much interest, we should have a session on infinity also!) we all agreed that every graph with even order vertices contains a cycle.

From that point it was a small step to prove by induction that every connected graph with even order vertices admits an Eulerian circuit. I think they understood this, but I don’t think I had time to properly emphasise the inductive nature of the proof, and again proof by induction is something we must revisit in the future.
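For readers, the induction runs roughly as follows (my paraphrase of what we constructed together). Walk from any vertex, never reusing an edge; because every vertex has even order, whenever the walk enters a vertex it can also leave, so it can only get stuck back where it started, yielding a circuit. Remove that circuit’s edges: every vertex still has even order, so by induction each remaining connected piece has its own Eulerian circuit, and these can be spliced into the first circuit at any shared vertex.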

As a final exercise, I had them find an Eulerian circuit in what I had imagined to be a complex case: a pentagram inscribed inside a pentagon, with vertices at all points of intersection. They solved this easily, after which my infinity-inspired child noticed that there is a small pentagon formed by the pentagram, and asked what would happen if we put a pentagram inside this pentagon, and so ad infinitum, forming a fractal graph: would the property still hold?

Lots of things to pick up in future sessions, I think! If I were to change anything when redoing this session, it would be to make the problems more concrete, e.g. using towns as vertices and roads as edges, as my wife had suggested before the session! I think the children “got” it without this, but it might have captured the imagination slightly more.


An Interlude: Is Infinity Odd or Even?

During the session, when infinity came up, I was asked a direct question by one of the younger children: “is infinity odd or even?” At a lull in the main session, I decided we should use this question as a way to talk about the importance of definitions once more. All children had presupposed an arithmetic in which infinity plays the same role as any “other number”, and therefore already had in their minds that, for example, 2 \times \infty = \infty and \infty + 1 = \infty. I didn’t question these suppositions (again – a possible topic for another session!) but instead decided to work with them. I asked for a definition of even, and got:

Definition: A number is even if it is two times a whole number.

I asked for a definition of odd, and got:

Definition: A whole number is odd if it is not even.

To which I added an additional alternative definition:

Definition: A whole number is odd if it is one more than an even number.

Children then quickly worked out for themselves that by their definition (and under their arithmetic) infinity is even. By the first definition of odd, infinity is not odd, but by the second definition, infinity is both odd and even! Thus the two definitions of odd coincide for integers but not for their version of arithmetic with infinities. We left that as a discussion point about the importance of definitions, rather than trying to draw any strong conclusions.
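Spelling the reasoning out in the children’s arithmetic: since 2 \times \infty = \infty, infinity is two times a “whole number”, so it is even by their definition; and since \infty + 1 = \infty, infinity is also one more than an even number, so it is odd by the second definition – while by the first definition (odd means not even) it cannot be odd.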

Run Fast! (When You Can): Data-Dependent Pipelining

This week sees the IEEE International Symposium on Field-Programmable Custom Computing Machines hosted in Vancouver, BC, Canada. I have sent my PhD student, Junyi Liu, who is presenting some work we’ve done together with Sam Bayliss, my former research fellow and now a member of staff at Xilinx.

Junyi and I are interested in memory accesses about which we know something at compile time, but not quite enough to be able to have a complete picture of what they’ll be at run time. This means that existing analyses of code containing these accesses will typically result in suboptimal behaviour at run time.

The case we study in this paper is that of affine loop nests containing memory accesses whose addresses involve indeterminate variables that do not change value as the loop executes. The value of these variables can completely change the dependence structure of the loop, meaning that for some values the loop can be highly pipelined and run extremely fast, whereas for other values it must run very slowly. In the paper we give the trivial example below:


for( int i=LB; i<=UB; i++ )
  A[i] = A[i+m] + 1;

The variable m does not change as the loop executes, yet its value determines the loop’s dependence structure. If m is negative, iteration i consumes a result produced only -m iterations earlier – a loop-carried dependence that limits overlap (for m = -1, each iteration must start execution when the previous one finishes). If m is zero or positive, iteration i reads only values that the loop has not yet overwritten, so we can pipeline the loop and iterations can overlap freely in time. So how to discover this and take advantage of it automatically?

We present an approach based on an extension of the polyhedral model to parametric polyhedra, where the parameters are variables like m whose values are unknown at compile time. My former PhD student Qiang Liu (no relation), colleague Kostas Masselos and I were some of the first people to bring the polyhedral model to FPGA-based computation in our FCCM 2007 paper (Qiang is now an associate professor at Tianjin University), but to the best of my knowledge, parametric polyhedra are relatively understudied in this area.

The basic idea of this new work is simple: use static analysis to identify regions of the parameter space where you can run the loop fast, and synthesise lightweight detectors of these regions for runtime checks on entry to the loop nest. Since my original work with the polyhedral model, high-level synthesis has become a commercial reality, so we build on top of Xilinx VivadoHLS, implementing our tool as a source-to-source transformation engine. This automatically inserts the appropriate VivadoHLS pragmas as well as the extra code for the runtime checks.
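To give a flavour, the output might look something like the sketch below for the trivial example above. This is my own illustrative reconstruction, not the tool’s actual output – the real flow handles general affine loop nests and derives the guard conditions from the parametric polyhedral analysis.

/* Hypothetical generated code (illustrative only). */
void kernel(int A[], int LB, int UB, int m) {
    if (m >= 0) {
        /* In this region of the parameter space there is no
           loop-carried true dependence: pipeline aggressively. */
        for (int i = LB; i <= UB; i++) {
            #pragma HLS PIPELINE II=1
            A[i] = A[i + m] + 1;
        }
    } else {
        /* A true dependence of distance -m exists: fall back
           to a conservative schedule. */
        for (int i = LB; i <= UB; i++) {
            #pragma HLS PIPELINE off
            A[i] = A[i + m] + 1;
        }
    }
}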

The results are very promising: some of our benchmarks show a 10x improvement in initiation interval (the time you must wait before starting another iteration of the loop).

Math Circle Session 11: More Platonic Solids

Tuesday saw the next instalment of our math circle. In the previous session, we had been investigating Platonic solids, and the aim of this session was to build up to a proof that there are exactly five such solids.

Since the children had come up with a lot of their own definitions in the previous session, I produced a sheet summarising their definitions as well as standard mathematical definitions of some of the concepts we had been dealing with. Below you can see the summary of the children’s definitions, which was laid out alongside the standard ones. Names have been changed for privacy reasons.


Definition: Line-Shape

A line-shape is any shape made by a finite number of straight lines.

(Note the Thomas-Adrian Condition: The number of lines used by the shape must be finite, because an infinite number could lead to “curvy shapes”.)

Definition: Eve-Regular

An Eve-Regular line-shape is one whose sides all have the same length.

Definition: Line-Shape-Shape

A line-shape-shape is any 3D shape made by a finite number of line-shapes.

(Note the Christopher Convexity Test: Place a pencil on the object so it rests on at least two points; if you cannot create a space between the pencil and the object between these two points, the object is convex.)


The first part of the session was then taken up comparing these definitions to more standard ones. In particular, I pointed out that the standard name for Eve-Regular polygons is equilateral, and that polygons that are both equilateral and equiangular are referred to as regular. This approach seemed to work very well, especially with the younger children, who were very excited to see their names in print as part of a definition! Fortunately, I had been able to include the names of all children who had been present in the previous session, as all had been able to contribute.

After reminding the children of the {n,m} notation for the Schläfli symbol of a Platonic solid, I led them towards trying to systematically enumerate the solids. They were able to quickly tell me that the minimum values of n and m must be 3. We then worked our way through all possible Schläfli symbols until they found that m is bounded from above for each n, due to the need for the internal angles meeting at each vertex to add up to strictly less than 360 degrees. By physically building each polyhedron as they went, they were able to show that every allowable m does indeed lead to a polyhedron, and we constructed all five – the tetrahedron, the cube, the octahedron, the dodecahedron, and the icosahedron – showing in the process that there are only five of them.
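For readers, the counting argument in its usual adult form (in the session we worked with the angles directly): the interior angle of a regular n-gon is (n-2) \times 180 / n degrees, and m of them meet at each vertex of {n,m}, so we need m \times (n-2) \times 180 / n < 360, which rearranges to (n-2)(m-2) < 4. With n \geq 3 and m \geq 3, the only solutions are {3,3}, {3,4}, {3,5}, {4,3} and {5,3} – the five Platonic solids.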

In case they finished early, my plan was to have them compute the Euler characteristic for each Platonic solid, and then try to build other polyhedra to see whether the value held; however, the proof that there are five solids took up the whole session. After two sessions building polyhedra with Polydron, some of the children may have reached the limits of their interest in the topic for now, so if we do explore Euler characteristics for polyhedra, it will be in a later session.