Nonlinearity is Your Friend

My former PhD student Erwei Wang and I recently teamed up with some collaborators at UCL (Dovydas Joksas, Nikolaos Barmpatsalos, Wing Ng, Tony Kenyon and Adnan Mehonic), and our paper has just been published in Advanced Science (open access).

Our goal was to start answering the question of how specific circuit and device features can be accounted for in the training of neural networks built from analogue memristive components. This is a step outside my comfort zone of digital computation, but it fits naturally with the broader picture I’ve been pursuing under the auspices of the Center for Spatial Computational Learning: bringing circuit-level features into the neural network design process.

One of the really interesting aspects of deep neural networks is that the basic functional building blocks can be quite diverse and still result in excellent classification accuracy, both in theory and in practice. Typically these building blocks include linear operations and a type of nonlinear function known as an activation function, the latter being essential to the expressive power of ‘depth’ in deep neural networks. This linear / nonlinear split is something Erwei and I, together with our coauthors James Davis and Peter Cheung, challenged for FPGA-based design, where we showed that the nonlinear expressive power of Boolean lookup tables provides considerable advantages. Could we apply a similar kind of reasoning to analogue computation with memristors?
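To make the linear / nonlinear split concrete, here is a tiny NumPy sketch (purely illustrative, not taken from the paper): stacking two linear layers collapses to a single linear map, and it is the nonlinearity inserted between them that gives depth its expressive power.

```python
# A minimal, illustrative sketch in plain NumPy: two linear layers with no
# activation in between are equivalent to one linear layer, whereas adding a
# nonlinearity (here ReLU) breaks that equivalence.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))
x = rng.standard_normal(4)

# Two stacked linear layers collapse to a single linear map with weights W1 @ W2.
linear_stack = (x @ W1) @ W2
collapsed = x @ (W1 @ W2)
assert np.allclose(linear_stack, collapsed)

# With a nonlinearity between the layers, the collapse no longer holds.
relu = lambda z: np.maximum(z, 0.0)
nonlinear_stack = relu(x @ W1) @ W2
print(np.allclose(nonlinear_stack, collapsed))  # False in general
```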

Using memristors for the linear part of neural network computation has been proposed for some time. The computation comes almost for free: Ohm’s law performs scalar multiplication and Kirchhoff’s Current Law performs addition, yielding potentially energy-efficient analogue dot products in a physical structure known as a ‘crossbar array’. To get really high energy efficiency, though, devices should have high resistance. But high resistance brings nonlinearity in practice. So do we back away from high-resistance devices so that they better match the mathematical abstractions used in our training algorithms? We argue not. Instead, we argue that we should make our mathematical abstractions more like our devices! After all, we need nonlinearity in deep neural networks. Why not embrace the nonlinearity we have, rather than compromise energy efficiency to minimise it in linear components, only to reintroduce it later in activation functions?
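To make the crossbar picture concrete, here is a minimal NumPy sketch. The ideal case is exactly a dot product (Ohm’s law per device, Kirchhoff’s Current Law per column); the second variant adds a toy cubic term with a made-up coefficient `alpha` purely to illustrate how a device-level nonlinearity distorts the result. This toy model is an assumption for illustration only, not the device model used in the paper; please refer to the paper and the open-source code for the real thing.

```python
# Illustrative sketch of crossbar computation: ideal (linear) devices versus a
# toy nonlinear I-V model. The cubic term and `alpha` are hypothetical.
import numpy as np

def crossbar_ideal(V, G):
    """Ideal crossbar: per-device Ohm's law (I = V * G), column currents summed by KCL.

    V: input voltages on the rows, shape (num_rows,)
    G: device conductances, shape (num_rows, num_cols)
    Returns column output currents, shape (num_cols,) -- an analogue dot product.
    """
    return V @ G

def crossbar_toy_nonlinear(V, G, alpha=0.1):
    """Same structure, but each device has a (toy) nonlinear I-V characteristic."""
    per_device_currents = G * (V[:, None] + alpha * V[:, None] ** 3)
    return per_device_currents.sum(axis=0)  # KCL still sums the column currents

rng = np.random.default_rng(0)
V = rng.uniform(-0.5, 0.5, size=4)
G = rng.uniform(1e-6, 1e-5, size=(4, 3))  # high-resistance (low-conductance) devices

print(crossbar_ideal(V, G))
print(crossbar_toy_nonlinear(V, G))  # deviates from the ideal dot product
```

The point of the sketch is simply that the deviation from the ideal dot product is systematic and device-dependent, which is exactly the kind of behaviour a training flow can be made aware of rather than engineered away.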

Figure: MNIST classification error trading off against power consumption.

I think our first experiments in this area are a great success. We have not only been able to capture a variety of behaviours traditionally considered ‘non-ideal’ and harness them for computation, but also to show very significant energy-efficiency savings as a result. You can see an example of this in the figure above (refer to the paper for more detail). In high power-consumption regimes, our alternative training flow (green and blue) makes little difference compared to the standard approach (orange), but when you try to reduce power consumption, a very significant gap opens up between the two, precisely because our approach is aware of the impact this has on the devices and the training process learns to adapt the network accordingly.

We’ve only scratched the surface of what’s possible – I’m looking forward to lots more to come! I’m also very pleased that Dovydas has open-sourced our training code and provided a script to reproduce the results in the paper: please do experiment with it.
