Review: Practical Foundations for Programming Languages

A lot of my work revolves around various problems encountered when one tries to automate the production of hardware from a description of the behaviour the hardware is supposed to exhibit when in use. This problem has a long history, most recently going by the name “High Level Synthesis”. A natural question, but one that is oddly rarely asked in computer-aided design, is “what kind of description?”

Of course, not only hardware designers need to specify behaviour. The most common kind of formal description is that of a programming language, so it seems natural to ask what the active community of programming language specialists has to say. I am fortunate enough to be an investigator on a multidisciplinary EPSRC project with my friend and colleague Dan Ghica, specifically aimed at bringing together these two communities, and I thought it was time to undertake sufficient reading to help me bridge some gaps in terminology between our fields.

With this in mind, I recently read Bob Harper’s Practical Foundations for Programming Languages. For an outsider to the field, this seems to be a fairly encyclopaedic book describing a broad range of theory in a fairly accessible way, although it did become less readily accessible to me as the book progressed. My colleague Andy Pitts is quoted on the back cover as saying that this book “reveals the theory of programming languages as a coherent scientific subject,” so with no further recommendation required, I jumped in!

I like the structure of this book because, as a non-specialist, I find this material heavy going: Harper has split a 500-page book into fully 50 chapters, which suits me very well. Each chapter has an introduction and a “notes” section, and – for the parts in which I’m less interested – I can read just these bits and still get the gist of the material. Moreover, there is a common structure to these chapters: each feature is typically described first in terms of its statics and then its dynamics. The 50 chapters are divided into 15 “parts”, to provide further structure to the material.
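To give a flavour of what “statics and then dynamics” means in practice – this is my own minimal sketch in the usual inference-rule notation, not an excerpt from the book – the statics of an addition expression is a typing rule, and its dynamics a pair of transition rules:

    % Statics: addition is well typed when both arguments are numbers.
    \frac{\Gamma \vdash e_1 : \mathtt{num} \quad \Gamma \vdash e_2 : \mathtt{num}}
         {\Gamma \vdash \mathtt{plus}(e_1, e_2) : \mathtt{num}}

    % Dynamics: evaluate the left argument (a symmetric rule for the right
    % argument is omitted), then reduce two numerals to their sum.
    \frac{e_1 \longmapsto e_1'}
         {\mathtt{plus}(e_1, e_2) \longmapsto \mathtt{plus}(e_1', e_2)}
    \qquad
    \mathtt{plus}(\overline{n_1}, \overline{n_2}) \longmapsto \overline{n_1 + n_2}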

The early parts of the book are interesting, but not of immediate practical relevance to me as someone who wants to find a use for these ideas rather than study them in their own right. It is nice, however, to see many of the practical concepts I use on the rare occasions I get to write my own code – function types, product types, sum types, polymorphism, classes and methods, etc. – presented in their theoretical depth. Part V, “Infinite Data Types”, is of greater interest to me, partly because anything infinite seems to be of interest to me and, more seriously, because one of the most significant issues I deal with in my work is the mapping of algorithms conceptually defined over infinite structures onto finite implementations.
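To illustrate the kind of gap I mean – this is my own toy sketch in Haskell, not an example from the book – an infinite stream is easy to define, but any finite implementation can only ever observe a bounded prefix of it:

    -- A coinductive stream: every element has a head and an (infinite) tail.
    data Stream a = Cons a (Stream a)

    -- The infinite stream of integers counting up from n.
    nats :: Integer -> Stream Integer
    nats n = Cons n (nats (n + 1))

    -- A finite implementation observes only a bounded prefix.
    takeS :: Int -> Stream a -> [a]
    takeS k _ | k <= 0  = []
    takeS k (Cons x xs) = x : takeS (k - 1) xs

    main :: IO ()
    main = print (takeS 5 (nats 0))  -- prints [0,1,2,3,4]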

Where things really get interesting for me is in Part XI, “Types as Propositions”. (I also love Harper’s distinction between constructive and classical logic as “logic as if people matter” versus “the logic of the mind of God”, and I wonder whether anyone has explored the connections between the constructive / classical dichotomy in logic and the idealist / materialist one in philosophy.) This left me wanting more, though, and in particular I am determined to get around to reading more about Martin-Löf type theory, which is not covered in this text.
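By way of a very elementary illustration of the propositions-as-types idea – my own, in Haskell rather than the book’s formal languages – pairs play the role of conjunction, Either of disjunction, and functions of implication, so a total program is a proof of its type:

    -- "A and B implies A": projection proves the proposition (a, b) -> a.
    proj1 :: (a, b) -> a
    proj1 (x, _) = x

    -- "A implies (A or B)": injection into a sum type.
    inl :: a -> Either a b
    inl = Left

    -- "(A -> C) and (B -> C) together imply ((A or B) -> C)": case analysis.
    byCases :: (a -> c) -> (b -> c) -> Either a b -> c
    byCases f _ (Left x)  = f x
    byCases _ g (Right y) = g y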

Part XIV, Laziness, is also interesting for someone like me who has only played with laziness in the context of streams (in both Haskell and Scala, which take quite different philosophical approaches). Harper argues strongly in favour of allowing the programmer to make evaluation choices (lazy / strict, etc.).
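A small sketch of what such a choice looks like in practice – mine, and Haskell-flavoured, whereas Harper works in his own formal languages – Haskell is lazy by default, and the programmer opts in to strictness, here with seq:

    -- Lazy by default: 'firstOf' never evaluates its second argument,
    -- so passing a diverging expression is harmless.
    firstOf :: a -> b -> a
    firstOf x _ = x

    diverges :: Integer
    diverges = error "never finishes"

    -- Opting in to strictness: seq forces the second argument first,
    -- so the same call would now raise the error.
    strictFirstOf :: a -> b -> a
    strictFirstOf x y = y `seq` x

    main :: IO ()
    main = print (firstOf 42 diverges)  -- prints 42; the strict version would fail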

Part XV, Parallelism, starts with the construction of a cost dynamics, which is fairly straightforward. The second chapter in this part looks at a concept called futures and their use in pipelining; while pipelining is my bread and butter in hardware design, the idea of a future was new to me. Part XVI, Concurrency, is also relevant to hardware design, of course. Chapter 43 makes an unexpected (for me!) link between the type system of a distributed concurrent language and modal logics, another area in which I am personally interested for epistemological reasons, but about which I know little.
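For other newcomers to the idea: a future is a handle on a computation that has been started but whose result is demanded only later, which is exactly what lets successive pipeline stages overlap. Here is a minimal sketch in Haskell (mine, using the common async library rather than the book’s formal account; stage1 and stage2 are made-up stand-ins):

    import Control.Concurrent.Async (async, wait)

    -- Stand-ins for two pipeline stages.
    stage1 :: Int -> IO Int
    stage1 x = return (x * x)

    stage2 :: Int -> IO Int
    stage2 y = return (y + 1)

    main :: IO ()
    main = do
      fut <- async (stage1 7)  -- start stage1; 'fut' is a future for its result
      -- other work (e.g. the next item entering the pipeline) can overlap here
      y <- wait fut            -- synchronise: demand the future's value
      z <- stage2 y
      print z                  -- prints 50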

After discussing modularity, the book finishes off with a discussion of notions of equivalence.

I found this to be an enlightening read, and would recommend it to others with an interest in programming languages and an appetite for theoretical concerns, but a lack of exposure to the topics explored in programming language research.

Review: Children’s Minds

My third and final piece of holiday reading this summer was Margaret Donaldson’s Children’s Minds, the first book I’ve read on child psychology.

This is a book first published in 1978, but various sources suggested it to me as a good introduction to the field.

I wasn’t quite sure what to make of this book. The initial chapters seem very much “of the time”: a detailed critique of Piaget’s theory, which meant little to me without first-hand knowledge of Piaget’s views. Donaldson does, however, include her own summary of Piaget’s theories as an appendix.

Donaldson argues that children are actually very capable thinkers at all ages, but that effort must be made to communicate with them in a way they can understand. Several interesting experiments by Piaget, from which he apparently concluded that children are incapable of certain forms of thought, are contrasted with experiments by later researchers, who found that when the tasks were set up using more child-friendly communication, those forms of thought do appear.

The latter half of the book becomes quite interesting, as Donaldson explores what schools can do during reception (kindergarten) age and beyond to ensure that the early excitement of learning which most children have is not destroyed by schools themselves. It is fascinating, for example, to read that

There is now a substantial amount of evidence pointing to the conclusion that if an activity is rewarded by some extrinsic prize or token – something quite external to the activity itself – then that activity is less likely to be engaged in later in a free and voluntary manner when the rewards are absent, and it is less likely to be enjoyed.

I would be most interested in what work has been done since the 70s on this point, because if this is true then it seems to clash markedly with practice in the vast majority of primary schools I know.

The final part of the text is remarkably polemical:

Perhaps it is the convenience of having educational failures which explains why we have tolerated so many of them for so long…

A vigorous self-confident young population of educational successes would not be easy to employ on our present production lines. So we might at last be forced to face up to the problem of making it more attractive to work in our factories – and elsewhere – and, if we had done our job in the schools really well, we should expect to find that economic attractions would not be enough. We might be compelled at last to look seriously for ways of making working lives more satisfying.

Little progress since the 70s, then! Interestingly (for me) Donaldson approvingly quotes A.N. Whitehead’s views on the inertia of education; I know Whitehead as a mathematician, and was completely unaware of his educational work.

As I mentioned, this is the first book on child psychology I’ve read. I found it rather odd; I am not used to reading assertions without significant citations and hard data to back them up. I am not sure whether this is common in the field, whether it is Donaldson’s writing style, or whether it is because this is clearly Donaldson writing for the general public. I tended to agree with much of what was written, but I would have been far more comfortable with greater emphasis on experimental rigour. There is much in here to discuss with your local reception class teacher; I want to know more about older children.


Review: Risk Savvy

My second piece of holiday reading this summer was Gigerenzer’s Risk Savvy. This is an “entertaining book” written for a general readership.

I learnt quite a bit from this book, but still found it frustrating and somewhat repetitive. There are many very interesting anecdotes about risk and poor decision making under risk, as well as lots of examples of how we are manipulated by the press and corporations into acting out of fear.

However, I don’t necessarily agree with the conclusions reached.

As an example, Gigerenzer clearly shows that PSA testing for prostate cancer in the US does more harm than good compared to the UK’s approach. More people are rendered incontinent and impotent through early intervention, without any significant difference in mortality rate. A similar story is told for routine mammography. But the conclusion that Gigerenzer seems to draw from these – and similar – studies is that “it’s better not to know”, whereas my conclusion would be “it’s better not to intervene immediately”. I can’t see why simply knowing more should be worse.

I was also frustrated by the way Gigerenzer deals with the classification of risks into “known risks”, i.e. those for which good statistical information is available, and “unknown risks”. He convincingly shows that – all too often – we deal with unknown risks as if they were known risks, resulting in poor decision making. To me this appears to mirror the two ways I know of dealing with uncertainty in mathematical optimization: stochastic versus robust optimization (sketched below). This is a valuable dichotomy, but I don’t think Gigerenzer’s conclusion – that in the presence of “unknown risk” “simple is good” and we should “go with your gut feeling” – is well justified. I do think that decision makers need to do more to factor the computational complexity of making decisions – and the overfitting of overly complex models – into their decision-making methodologies, but if “simple is good” then this should be a derivable result; I would love to see some mathematically rigorous work in this area.
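To make the mirror explicit (in my terms, not Gigerenzer’s): writing ξ for the uncertain quantity and f(x, ξ) for the cost of decision x, a known risk comes with a distribution P and we can optimize in expectation, whereas an unknown risk gives us at best a set Ξ of possibilities and we optimize against the worst case:

    \min_{x \in X} \; \mathbb{E}_{\xi \sim P}\,[\, f(x,\xi) \,]
    \qquad \text{versus} \qquad
    \min_{x \in X} \; \max_{\xi \in \Xi} \, f(x,\xi)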

Review: The Adventure of Numbers

One of the books I took on holiday this year was Gilles Godefroy’s The Adventure of Numbers. This is a great book!

The book takes the reader on a tour of numbers: ancient number systems, the Sumerian and Babylonian systems (a decimal-coded base 60, from which we probably get our division of time into hours, minutes and seconds), the ancient Greeks and the discovery of irrational numbers, Arabic mathematics, the development of imaginary numbers, transcendentals, Dedekind’s construction of the reals, p-adic numbers, infinite ordinals, and the limits of proof.

The range is huge, the writing is good, and, while fairly rigorous, the book requires only basic mathematics.

I love the fact that I got from this book both things that I can talk to primary school children about (the indivisibility of space, via a geometric construction of the square root of two and a proof of its irrationality) and also – unexpectedly – an introduction to the deep and beautiful MRDP theorem, which links two sublime interests of mine: computation (in a remarkably general sense) and Diophantine equations.
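For the record, the irrationality argument I have in mind is the classic one (my paraphrase, not necessarily Godefroy’s presentation):

    % Suppose \sqrt{2} = p/q in lowest terms. Then p^2 = 2q^2, so p^2 is even
    % and hence p is even, say p = 2k. Then 4k^2 = 2q^2, so q^2 = 2k^2 and q
    % is even too, contradicting "lowest terms". Hence \sqrt{2} is irrational.
    \sqrt{2} = \tfrac{p}{q} \;\Rightarrow\; p^2 = 2q^2 \;\Rightarrow\; 2 \mid p \;\Rightarrow\; 2 \mid q \;\Rightarrow\; \text{contradiction}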

What’s not to love?