One of the books I read over the Christmas holiday was Leslie Valiant’s Probably Approximately Correct: if not quite a book for a truly general readership, then certainly one suitable for anyone with a scientific background, and equally suitable for adults and teens.
Valiant needs no introduction to those with a background in computer science – for those outside the field, you can read about his deep contributions on his Wikipedia page. Inspired by the work of Turing, in this 2013 book he offers some wide-ranging thoughts on the Probably Approximately Correct (PAC) framework for understanding learning, kicked off by his 1984 paper, and presents the concept as fundamental to learning as it occurs in nature. He treats both “learning of the species, through evolution” and the more commonly considered “learning of the individual” within the same algorithmic framework – which he calls “ecorithms” – identifying the weaker learning behaviours possible in the former setting.
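To give a concrete flavour of the technical core of PAC learning (a toy sketch of my own, not an example drawn from the book): the classical elimination algorithm learns a boolean conjunction from randomly drawn labelled examples. In the Python sketch below, the (hypothetical) target is a conjunction of three literals over 20 variables; the instance space contains 2^20 items, yet a couple of thousand examples suffice for the learner to recover it with low error.

```python
import random

def target_concept(x):
    """Hypothetical (unknown-to-the-learner) target conjunction:
    x1 AND (NOT x3) AND x5, over n boolean variables."""
    return x[0] == 1 and x[2] == 0 and x[4] == 1

def sample(n, m, concept):
    """Draw m examples uniformly at random and label them with the concept."""
    return [
        (x, concept(x))
        for x in (tuple(random.randint(0, 1) for _ in range(n)) for _ in range(m))
    ]

def learn_conjunction(n, examples):
    """Classical elimination algorithm: start with all 2n literals and delete
    any literal falsified by a positive example (negative examples are ignored)."""
    hypothesis = {(i, v) for i in range(n) for v in (0, 1)}  # literal = (index, required value)
    for x, label in examples:
        if label:
            hypothesis = {(i, v) for (i, v) in hypothesis if x[i] == v}
    return hypothesis

def predict(hypothesis, x):
    """The hypothesis classifies x as positive iff every remaining literal is satisfied."""
    return all(x[i] == v for (i, v) in hypothesis)

if __name__ == "__main__":
    n, m = 20, 2000               # 2**20 possible inputs, but only 2000 training samples
    train = sample(n, m, target_concept)
    h = learn_conjunction(n, train)
    test = sample(n, 5000, target_concept)
    test_error = sum(predict(h, x) != y for x, y in test) / len(test)
    print(f"learned {len(h)} literals, test error = {test_error:.4f}")
```

The learner starts with all 2n literals and deletes any literal contradicted by a positive example; since the target’s own literals are never contradicted, the hypothesis can only err by being too strict, and that error shrinks as more positive examples are seen.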
This is a great book, full of ideas and interesting philosophical discussion points. The key exciting elements for me relate to the use of computational complexity arguments to shed light on the different classes of function that can be learnt in a resource-constrained setting, as per the original PAC technical work, and to the broader philosophical conclusions Valiant draws from this. Some particular ideas that stood out to me as worthy of further thought (my own selection; Valiant does not prioritise these ideas):
- The modularity of biological systems as a side-effect of the limited learnability (through evolution) of large-scale non-modular systems
- The idea that the “artificial” in “artificial intelligence” should not be taken to refer to “the material basis of the realisation” (i.e. an intelligence realised in a computer may be either natural or artificial), but rather to the way knowledge is obtained from the environment. The key distinction for Valiant is whether we are trying to reproduce learning in the same way it happens in nature (this is not “artificial” for Valiant) or whether we are trying to do something different, e.g. shortcutting evolutionary processes (imagine a classical chess-playing AI).
- The idea that unsupervised learning is not a natural phenomenon, because a training signal is present even in the natural state, e.g. via evolutionary fitness in the case of evolution-as-learning.
- Some interesting thoughts on the role of ecorithms in the development of human culture and knowledge, and how humans have tipped the balance from evolutionary learning to stronger forms of algorithmic learning through their ability to pass on culture and knowledge to future generations.
In addition to Valiant’s ideas, here are a few thoughts that struck me while reading the book, which I wonder whether others have considered in more depth:
- During our work together on resource-constrained model-predictive control, my friend and colleague Eric Kerrigan would often say to me that the best resource-constrained model of an open-loop system may be very different from the best resource-constrained model of that same system in a closed-loop setting, for control purposes. It strikes me that the notion of fitness Valiant uses to pose evolvability as a learning problem might be further enriched by thinking along these lines, especially as the species being evolved changes the environment within which it lives.
- Valiant argues that questions such as “can a computer have free will?” and “is a termite conscious?” are meaningless “because the concepts the words describe have meanings only for the distribution of examples from which they have been PAC learned”. This seems a step too far for me, not least because many of the most creative leaps I’ve seen in thought have come from applying ideas in one area to a completely distinct one. It would be useful, I think, to make the assertion more precise and to explore it a bit more deeply.
- Throughout the book, Valiant makes heavy use of the classical “polynomial versus non-polynomial” approach to characterising complexity (e.g. arguing that useful learning must use only polynomially many resources to learn to classify an exponentially large space of items; the standard bound sketched below makes this concrete). This is, as one would expect, well justified in terms of the robustness of these classes, but it leaves me wondering what we lose by taking an asymptotic approach to drawing conclusions in the non-asymptotic settings of animals and their behaviours.
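For reference, here is the textbook sample-complexity bound for a consistent learner over a finite hypothesis class H (a standard PAC result, not a formula quoted from the book): with probability at least 1 − δ, the learned hypothesis has error at most ε provided the number of examples m satisfies

```latex
m \;\ge\; \frac{1}{\varepsilon}\left( \ln|H| + \ln\frac{1}{\delta} \right)
```

For conjunctions over n boolean variables, |H| ≤ 3^n, so ln|H| ≤ n ln 3 and m grows only linearly in n, even though the instance space contains 2^n possible inputs. It is, of course, exactly this kind of asymptotic statement whose relevance to finite, non-asymptotic biological settings I am questioning above.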
I would recommend this book to all. When I visit sixth forms to talk about my work and about engineering at Imperial, I’m often asked by teachers and students if I have recommendations for accessible but deep reading. I will be recommending Valiant’s book in the future.