Do Your Best: A Social Question?

I’ve always struggled with the concept of “doing your best”, especially with regard to avoiding harm. This morning from my sick bed I’ve been playing around with how I could think about formalising this question (I always find formalisation helps understanding). I have not got very far with the formalisation itself, but here are some brief notes that I think could be picked up and developed later for formalisation. Perhaps others have already done so, and if so I would be grateful for pointers to summary literature in this space.


The context in which this arises, I think, is what it means to be responsible for something, or even to blame. We might try to answer this by appeal to social norms: what the “reasonable person” would have done. But in truth I’m not a big fan of social norms: they may be politically or socially biased, I’m never confident that they are not arbitrary, and they are often opaque to those who think differently.

So rather than starting from norms, in common with many neurodivergents, I want to think from first principles. How should we define our own responsibilities when we act with incomplete information? And what does it mean to be “trying one’s best” in that situation? And how do we not get completely overwhelmed in the process?


Responsibility and Causality

One natural starting point is causal responsibility. If I take action A and outcome B occurs, we ask: would B have been different if I had acted otherwise? Causal models could potentially make this precise through counterfactuals. This captures the basic sense of control: how pivotal was my action?

But responsibility isn’t just about causality. It is also about what I knew (or should have known) when I acted.


Mens Rea in the Information Age

The legal tradition of mens rea, the “guilty mind”, is helpful here. It recognises degrees of responsibility, such as:

  • Intention: I aimed at the outcome.
  • Knowledge: I knew the outcome was very likely.
  • Recklessness: I recognised a real risk but went ahead regardless.
  • Negligence: I failed to take reasonable steps that would have revealed the risk.

It’s the final one of these, negligence, that causes me the most difficulty on an emotional level. A generation ago, a “reasonable step” might be to ask a professional. But in the age of abundant online information, the challenge is defining what “reasonable steps” now are. No one can read everything, and I personally find it very hard to draw the line.

If we knew how the information we gain grows with the time we spend collecting it, we would be in a much better position: we could decide, given the limited time we have, how long to spend exploring any given problem.


From Omniscient Optimisation to Procedural Reasonableness

However, we must accept that there are at least two levels of epistemic uncertainty here. We don’t know everything there is to know, but nor do we even know how the amount of useful information we collect will vary based on the amount of time we put in. Maybe just one more Google search or just one more interaction with ChatGPT will provide the answer to our problem.

In response, I think we must shift the benchmark. Trying one’s best does not mean picking the action that hindsight reveals as correct. It means following a reasonable procedure given bounded time and attention.

So what would a reasonable procedure look like? I would suggest that we start with the most salient, socially-accepted, and low-cost information sources. We then keep investigating until further investigation is unlikely to change the decision by enough to justify its cost.


In principle, we may want to continue searching until the expected value of more information is less than its cost. But of course, in practice we cannot compute this expectation.

A workable heuristic, then, appears to be to allocate an initial time budget for exploration; if by the end the information picture has stabilised (no new surprises, consistent signals), then stop and decide.

I suspect there is a good Bayesian interpretation of this heuristic.
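
To make the heuristic concrete, here is a minimal Python sketch of what such a budgeted, stability-based stopping rule might look like, with a Bayesian flavour. Everything in it is an assumption chosen for illustration: the Beta posterior over a "risk rate", the budget, and the stability threshold are not a claim about the right model.

```python
import random

def explore_then_decide(evidence_source, budget=10, stability_window=3, epsilon=0.05):
    """Budgeted exploration with a stability-based stopping rule (illustrative only).

    evidence_source: callable returning True/False observations, e.g. "this source
    flagged a risk". We track a Beta(a, b) posterior over the underlying risk rate
    and stop early once recent observations stop moving the posterior mean much.
    """
    a, b = 1.0, 1.0                 # uniform Beta prior over the risk rate
    recent_shifts = []

    for step in range(budget):
        observation = evidence_source()
        a, b = (a + 1, b) if observation else (a, b + 1)

        new_mean = a / (a + b)
        old_mean = (a - 1) / (a + b - 1) if observation else a / (a + b - 1)
        recent_shifts.append(abs(new_mean - old_mean))
        recent_shifts = recent_shifts[-stability_window:]

        # Stop early if the picture has stabilised: no recent observation shifted
        # our estimate by more than epsilon.
        if len(recent_shifts) == stability_window and max(recent_shifts) < epsilon:
            break

    risk_estimate = a / (a + b)
    return risk_estimate, step + 1   # decide based on this, having spent step+1 units


# Illustrative usage with a simulated noisy source that flags a risk 20% of the time.
estimate, spent = explore_then_decide(lambda: random.random() < 0.2)
print(f"estimated risk {estimate:.2f} after {spent} queries")
```

A higher-stakes decision would correspond to a larger budget and a tighter epsilon, in line with the idea that the significance threshold should vary with potential impact.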


The Value of Social Norms

What then of social norms? What counts as an obvious source, an expert, or standard practice is socially determined. Even if I am suspicious of social norms, I have to admit that they carry indirect value: they embody social learning from others’ past mistakes. Especially in contexts where catastrophic harms have occurred, such as medicine and engineering, norms, heuristics and rules of thumb represent distilled experience.

So while norms need (should?) not be obeyed blindly, they deserve to be treated as informative priors: they tell us about where risks may lie and which avenues to prioritise for exploration.


Trying One’s Best: A Practical Recipe

Pulling these threads together, perhaps “trying one’s best” under uncertainty means:

  1. Start with a first-principles orientation: aim for causal clarity and avoid blind conformity.
  2. Consult obvious sources of information and relevant social norms as informative signals.
  3. Allocate an initial finite time for self-investigation.
  4. Stop when information appears stable. If significant new evidence arises during investigation, continue. The significance threshold should vary depending on the potential impact.
  5. Document your reasoning if you depart from norms.

Responsibility is not about hindsight-optimal outcomes. It is about following a bounded, transparent, and risk-sensitive procedure. Social norms play a role not as absolute dictates, but as evidence of collective learning obtained in the context of a particular social environment. Above all, “trying one’s best” means replacing the impossible ideal of omniscience with procedural reasonableness.

While this approach still seems very vague, it has at least helped me to put decision making in some perspective.


Acknowledgements

The idea for this post came as a result of discussions with CM over the last two years. The structure of the post and the argument were fleshed out over a series of conversations with ChatGPT 5 on 26th October 2025. The text was largely written by me.

Productivity and Rate of Profit

This blog post sets up and analyses a simple mathematical model of Marx’s controversial “Law of the Tendency of the Rate of Profit to Fall” (Capital III §13–15). We will work under a simple normalisation of value that directly corresponds to the assumption of a fixed labour pool combined with the labour theory of value (LTV). I will make no remarks on the validity of LTV in this post, but all results are predicated on this assumption. We will also work in the simplest possible world of two firms, each generating baskets containing both means of production and consumer goods, and in which an improvement in labour productivity applies equally to the production of both types of goods. I will provide the mathematical derivations in sufficient detail that a sceptical reader can reproduce them quite easily with pen and paper.

The key interesting results I am able to derive here are:

  1. If constant capital grows without bound, then the rate of profit falls, independently of any assumption on the rate of surplus value (a.k.a. the rate of exploitation).
  2. There is a critical value of productivity \rho_c (which we derive), above which innovation across firms raises the rate of profit of both firms, and below which the rate of profit is reduced.
  3. There are investments that could be made by a single firm to raise its own productivity high enough to improve its own rate of profit when the competing firm does not change its production process. (We locate this threshold, \rho_*). However, none of these investments would reduce the rate of profit when rolled out across both firms (i.e. \rho_c < \rho_*). There is therefore no “prisoner’s dilemma” of rates of profit causing capital investment to ratchet up.
  4. However, a firm motivated by increasing its mass of profit (equivalently, its share of overall surplus value) is motivated to invest in productivity above a third threshold \rho_m, which we also prove exists, and show that \rho_m < \rho_c. Thus there is a range of investments that improve the profit mass of a single firm but also reduce the rate of profit when rolled out to both firms.

As a result, under the assumptions with which I am working, we can interpret Marx’s Law of the Tendency of the Rate of Profit to Fall to be an accurate summary of economic evolution in purely value terms, under the assumption that firms often seek out moderately productivity-enhancing innovations, so long as radically productivity-enhancing innovations are rare.


1. The Law of the Tendency of the Rate of Profit to Fall

Marx famously argued that the very innovations that make labour more productive eventually pull the average rate of profit downward. My understanding as a non-economist is that, even in Marxian economic circles, this law is controversial. Over my summer holiday, I have been reading a little about this (the literature is vast) and there are many complex arguments on both sides, often very wordy or apparently relying on assumptions that do not line up with a simple LTV analysis.

The aim of this post is thus to show, in the simplest possible algebra, how perfectly reasonable investment decisions by competitive firms reproduce Marx’s claim, providing explicit analysis of how the scale of productivity gains impacts the rate of profit and the individual motivating factors for the firms involved.


2. Set-up and notation

I will be working purely in terms of value, as I understand it from Capital Volume 1, i.e. socially necessary labour time (SNLT) – I will pass a very brief comment on price at the end of the post. We will use the following variables throughout the analysis, each with units of “socially necessary labour time per unit production period”.

Symbol | Definition | Economic reading in Capital
C \geq 0 | overall constant capital | machinery + materials
V \geq 0 | overall variable capital | value of wages
S | overall surplus value | unpaid labour
c_i | constant capital advanced by Firm i | machinery + materials
v_i | variable capital advanced by Firm i | value of wages

In addition, we will consider the following dimensionless scalar variables:

Symbol | Definition | Economic reading in Capital
u > 0 | the baseline production rate of commodities by Firm 2, as a multiple of that produced by Firm 1 | –
\alpha > 1 | capital deepening factor: the ratio of the post-investment capital advanced to the pre-investment capital advanced, evaluated in terms of SNLT at the time of investment | dearer machine (III §13)
\rho > 1 | labour-productivity factor: the ratio of the number of production units produced per unit production time post-investment to the same quantity prior to the investment | fewer hours per unit (III §13)

We will work throughout under the constraint that V + S = 1. This can be interpreted as Marx’s “limits of the working day” (Capital I §7) and flows directly from the LTV. Below I will refer to this as “the working day constraint”.

Before embarking on an analysis of productivity raising, it is worth understanding the rate of profit at an aggregate level. Marx defined the rate of profit as:

r = \frac{S}{C+V}

During my reading on the rate of profit, I have seen a number of authors describe the tendency of the rate to fall in the following terms: divide numerator and denominator by V, to obtain r = \frac{\frac{S}{V}}{1 + \frac{C}{V}}. At that point an argument is sometimes made that, keeping \frac{S}{V} (the rate of surplus labour, or the rate of exploitation) constant (or bounded from above), r \to 0 as \frac{C}{V} \to \infty. This immediately raises two questions: why should \frac{S}{V} remain constant (or bounded), and why should \frac{C}{V} \to \infty? In this post, I will attempt to show that the first of these assumptions is irrelevant, while providing a reasonable basis to understand the second. As a prelude, we will now prove the following elementary theorem:

Theorem 1 (Rate of Profit Declines with Increasing Capital Value). Under the working-time constraint, C \to \infty \Rightarrow r \to 0.

Proof. r = \frac{S}{C+V} \leq \frac{1}{C} \to 0. □

It is worth noting that C \to \infty implies that \frac{C}{V} \to \infty, but the latter is not sufficient for r \to 0 – a simple counterexample would be to keep C constant while V \to 0, leading to an asymptotic rate of profit of \frac{1}{C}.

Theorem 1 forms our motivation to consider why we may see C increasing without bound. To understand this, we will be considering three scenarios:

Scenario A: the baseline. Each firm is producing in the same way, but they may have a different scale of production. We will normalise production of Firm 1 to 1 unit of production, with Firm 2 producing u units of production per unit production time.

Scenario B: after a general rise in productivity. Each firm is again producing in the same way, with the same ratio of production units as Scenario A. Each firm has, however, made the same productivity-enhancing capital investment, so that Firm 1 is now producing \rho units and Firm 2 is now producing \rho u units of production per unit time.

Scenario C: only a single firm, Firm 1, implements the productivity-enhancing capital investment. Firm 1 is now producing \rho units and Firm 2 is still producing u units of production per unit time. This scenario allows us to explore differences in rate and mass of profit for Firm 1 compared to Scenario A and Scenario B, proving several of the results mentioned at the outset of this post.


3. Scenario A – baseline

We have: V = (1+u)v_1 and C = (1+u)c_1.

The total value of the units of production produced is C + V + S = C + 1 = 1 + (1+u)c_1 (using the working day constraint). Each unit of production therefore has a value of \frac{1}{1+u} + c_1. Firm 1 is producing 1 of these units.

The surplus value captured by (profit mass of) Firm 1 is therefore s_1 = \frac{1}{1+u} - v_1.

This gives an overall rate of profit for Firm 1 of: r_1 = \frac{s_1}{c_1 + v_1} = \frac{1 - (1+u)v_1}{(1+u)(c_1+v_1)}.

This rate of profit is the same for Firm 2, and indeed for the overall combination of the two. We now have a baseline rate and mass of profit, against which we will make several comparisons in the next two scenarios.


4. Scenario B – both firms upgrade

In this scenario, both firms upgrade their means of production, leading to a productivity boost by a factor of \rho, after an increase in the value of capital advanced by a factor of \alpha. We will assume that the firms keep the number of working hours constant, and ‘recycle’ the productivity enhancement to produce more goods rather than to reduce the labour time employed.

We must carefully consider the impact of this enhancement on the value of variable and fixed capital. The value of variable capital has decreased by a factor \rho, because it is now possible to meet the same consumption needs of the workforce with this reduced SNLT. Of course, we could make other assumptions, e.g. the ability of the labour force to ‘track’ productivity gains in terms of take-home pay, but the assumption of fixed real wages appears to be the most conservative reasonable assumption when it comes to demonstrating the fall of the rate of profit. Equally, the value of the fixed capital advanced by Firm i has gone from the original c_i to \frac{\alpha c_i}{\rho}: a greater investment has been made at old SNLT levels (\alpha > 1), but this investment has itself depreciated in value due to productivity gains. Marx refers to this as ‘cheapening’ in Capital III §14.

We will use singly-primed variables to denote the quantities in Scenario B, in order to distinguish them from Scenario A. We have:

C' = \frac{\alpha (1+u)}{\rho}c_1

The total value of the units of production produced is C' + V' + S' = 1 + C' = 1 + \frac{\alpha (1+u)}{\rho}c_1. Remembering that our firms together now produce \rho(1+u) units of production, each unit therefore has value \frac{1}{\rho(1+u)} + \frac{\alpha}{\rho^2}c_1.

Firm 1 is responsible for \rho of these units, and therefore for output of value \frac{1}{1+u} + \frac{\alpha}{\rho}c_1. We may therefore calculate the portion of surplus labour it captures as:

s'_1 =  \frac{1}{1+u} -  \frac{1}{\rho}v_1, and its rate of profit as:

r'_1 = \frac{\frac{1}{1+u} - \frac{v_1}{\rho}}{\frac{\alpha}{\rho}c_1 + \frac{1}{\rho}v_1}.

Simplifying, we have

r'_1 = \frac{\rho - (1+u)v_1}{(1+u)(\alpha c_1 + v_1)}.

This rate is the same for Firm 2 (and overall), and Firm 2’s profit mass is simply scaled in proportion to its relative size u.

It is clear that s'_1 > s_1, i.e. profit mass is always increased by universally adopting an increased productivity (remember we are keeping real wages constant). However, it is not immediately clear what happens to profit rate.

Theorem 2 (Critical productivity rate for universal adoption). For any given set of variables from Scenario A, there exists a critical productivity rate \rho_c = 1 + (\alpha - 1)(1 - V)\frac{c_1}{c_1+v_1} < \alpha. We have r'_1 > r_1 iff \rho > \rho_c.

Proof. Firstly, we will establish that \rho_c < \alpha. This follows from the fact that 1-V and \frac{c_1}{c_1+v_1} are both strictly less than 1.

Now r'_1 > r_1 iff \dfrac{\rho-(1+u)v_1}{(1+u)(\alpha c_1+v_1)} \;>\; \dfrac{1-(1+u)v_1}{(1+u)(c_1+v_1)}. Clearing (positive) denominators and noting V = (1+u)v_1 gives the equivalent inequality \rho(c_1 + v_1) - (\alpha c_1 + v_1) - (1 - \alpha)Vc_1 > 0. Isolating \rho gives \rho > \frac{\alpha c_1 + v_1 - (\alpha - 1)Vc_1}{c_1 + v_1} = \rho_c. □

The overall rate of profit in Scenario B can therefore rise or fall, depending on the level of productivity gain the investment brings. The greater the cost of the new means of production, the smaller the share of value spent on wages, and the more fixed-capital-intensive the production process, the higher the productivity gain required to produce a rise in the profit rate, arguably raising the critical threshold as capitalism develops. However, the fact that \rho_c < \alpha indicates that it will always raise the rate of profit to invest in a productivity gain that exceeds the markup in capital required to implement it. That it is still beneficial for the overall rate of profit to adopt productivity gains less significant than the markup in capital expenditure, i.e. when \rho_c < \rho < \alpha, appears to be consistent with Marx’s Capital III §14, where he appears to be assuming this scenario.
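
For readers who would rather check Theorem 2 numerically than with pen and paper, the following Python sketch evaluates r_1, r'_1 and \rho_c directly from the formulas above; the parameter values are purely illustrative, chosen only to satisfy the working day constraint.

```python
# Illustrative numerical check of Theorem 2; the parameter values are arbitrary.
u, c1, v1, alpha = 1.0, 0.3, 0.2, 1.5          # chosen so that V = (1+u)*v1 < 1

V = (1 + u) * v1
r1 = (1 - V) / ((1 + u) * (c1 + v1))            # Scenario A rate of profit

def r1_prime(rho):                              # Scenario B rate of profit
    return (rho - V) / ((1 + u) * (alpha * c1 + v1))

rho_c = 1 + (alpha - 1) * (1 - V) * c1 / (c1 + v1)
print(f"rho_c = {rho_c:.4f} < alpha = {alpha}")
print(f"just below rho_c: r'_1 - r_1 = {r1_prime(rho_c - 1e-3) - r1:+.2e}")   # negative
print(f"just above rho_c: r'_1 - r_1 = {r1_prime(rho_c + 1e-3) - r1:+.2e}")   # positive
```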


5. Scenario C – Firm 1 upgrades alone

We are now interested in exploring a third scenario: the case where Firm 1 implements a way to increase its own productivity through fixed capital investment, but this approach is kept private to Firm 1. This scenario is useful for understanding what may motivate Firm 1 to improve productivity, and to what degree those motivations differ from the ones that would lead both firms to improve productivity by the same approach.

As per Scenario B, it is important to understand the changes to the SNLT required in production. In Capital I §1, it is clear that Marx saw this as an average. A suitable cheapening factor for labour in these circumstances is \frac{u+1}{\rho+u} because with the same labour (V+S=1) as in Scenario A, we have gone from the overall social production of u+1 units of production (u from Firm 2 and 1 from Firm 1), to u+\rho units of production.

In this scenario we will use doubly primed variable names to distinguish the variables from the former two scenarios. We will begin, as before, with fixed capital, noting that both firms’ capital and labour have cheapened through the change in SNLT brought about by the investment of Firm 1; however, Firm 1 also bears the cost of that investment:

c_1'' = \alpha c_1 \frac{u+1}{\rho+u} and c_2'' = u c_1 \frac{u+1}{\rho+u}.

C'' = c_1'' + c_2'' = c_1(\alpha+u)\frac{u+1}{\rho+u}.

The total value of the units of production produced by both firms during unit production time is now V'' + S'' + C'' = 1 + C'' = 1 + c_1(\alpha+u)\frac{u+1}{\rho+u}.

This means that since \rho+u units of production are being produced, each unit of production now has value \frac{1}{\rho + u} + c_1\frac{(u+1)(\alpha+u)}{(\rho+u)^2}. As we did for Scenario B, we can now compute the value of the output of production of Firm 1 by scaling this quantity by \rho, obtaining output value \frac{\rho}{\rho + u} + c_1 \frac{\rho(u+1)(\alpha+u)}{(\rho+u)^2}. We may then calculate the surplus value captured by Firm 1 by subtracting the corresponding fixed and variable capital quantities, remembering that both have been cheapened by the reduction in SNLT:

s_1'' = \frac{\rho}{\rho + u} + c_1 \frac{\rho(u+1)(\alpha+u)}{(\rho+u)^2} - c_1 \frac{\alpha(u+1)}{\rho+u} - \frac{(u+1)v_1}{\rho+u}

= \frac{\rho - (u+1)v_1}{\rho+u} + c_1 \frac{u(u+1)(\rho - \alpha)}{(\rho+u)^2}.

The first term in this profit mass expression simply reveals the revalued living labour contributed. The second term is intriguing. Why is there a term that depends on fixed capital value when considering the profit mass under the labour theory of value? Under the assumptions we have made, this happens because although fixed-capital-value-in = fixed-capital-value-out at the economy level, this is no longer true at the firm level; this corresponds to a repartition of SNLT in the market between Firm 1 and Firm 2, arising due to the fundamental assumption that each basket of production has the same value, independently of how it has been made. It is interesting to note that this second term, corresponding to revalued capital, changes sign depending on the relative magnitude of \rho and \alpha. For \rho > \alpha, the productivity gain is high enough that Firm 1 is able to (also) boost its profit mass by capturing more SNLT embodied in the fixed capital, otherwise the expenditure will be a drag on profit mass, as is to be expected.
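
The simplification from the four-term expression to the two-term form of s_1'' above is easy to get wrong by hand; the following sketch (assuming sympy is available) confirms symbolically that the two forms agree.

```python
# Symbolic check that the two expressions given for s_1'' agree (requires sympy).
import sympy as sp

rho, u, c1, v1, alpha = sp.symbols('rho u c1 v1 alpha', positive=True)

s1pp_raw = (rho/(rho + u)
            + c1*rho*(u + 1)*(alpha + u)/(rho + u)**2
            - c1*alpha*(u + 1)/(rho + u)
            - (u + 1)*v1/(rho + u))

s1pp_simplified = ((rho - (u + 1)*v1)/(rho + u)
                   + c1*u*(u + 1)*(rho - alpha)/(rho + u)**2)

assert sp.simplify(s1pp_raw - s1pp_simplified) == 0
print("s_1'' simplification verified")
```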

Now that we have an explicit expression for s_1'', we may identify the circumstances under which profit mass rises when moving from Scenario A to Scenario C.

Theorem 3 (Critical Productivity Rate for Solo-Profit Mass Increase). For any given set of variables from Scenario A, there exists a critical productivity rate \rho_m < \alpha for which we have s''_1 > s_1 iff \rho > \rho_m.

Proof. We will begin by considering the difference between surplus value captured in Scenario C and that captured in the baseline Scenario A. We will consider this as a function \Theta of \rho.

\Theta(\rho) := s_1'' - s_1 = \frac{\rho - (u+1)v_1}{\rho + u} + c_1 \frac{u(u+1)(\rho-\alpha)}{(\rho+u)^2} - \left( \frac{1}{u+1} - v_1 \right).

Differentiating with respect to \rho,

\frac{d\Theta}{d\rho} = \frac{u+(u+1)v_1}{(\rho + u)^2} + c_1 u (u+1) \frac{2\alpha + u -\rho}{(\rho+u)^3}.

Both denominators are positive. The first numerator is clearly positive. The second numerator is also positive for \rho \in (1,\alpha). Hence \Theta is strictly increasing over the interval (1,\alpha).

Now note that as \rho \to 1^+, \Theta(\rho) \to c_1 u \frac{1-\alpha}{u+1} < 0. However, at \rho = \alpha, \Theta(\alpha) simplifies to \Theta(\alpha) = \frac{(\alpha - 1)\left[ u + (1+u)v_1 \right]}{(\alpha + u)(u + 1)} > 0. Hence by the intermediate value theorem and strict monotonicity there is a unique value \rho_m with 1 < \rho_m < \alpha such that \rho > \rho_m iff s_1'' > s_1. □
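
Theorem 3 establishes that \rho_m exists but says nothing about its value; since \Theta is strictly increasing on (1, \alpha), it is straightforward to locate \rho_m numerically by bisection. A small Python sketch, again with arbitrary illustrative parameters:

```python
# Locate rho_m by bisection on Theta(rho) over (1, alpha); parameters are illustrative.
u, c1, v1, alpha = 1.0, 0.3, 0.2, 1.5

def theta(rho):
    """Theta(rho) = s_1'' - s_1 from the proof of Theorem 3."""
    return ((rho - (u + 1) * v1) / (rho + u)
            + c1 * u * (u + 1) * (rho - alpha) / (rho + u) ** 2
            - (1 / (u + 1) - v1))

lo, hi = 1.0, alpha          # Theta < 0 at lo and > 0 at hi, by Theorem 3
for _ in range(60):          # bisect: Theta is strictly increasing on (1, alpha)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if theta(mid) < 0 else (lo, mid)

rho_m = 0.5 * (lo + hi)
print(f"rho_m ≈ {rho_m:.4f} (< alpha = {alpha})")
```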

At this point, it is worth understanding where this critical profit-mass inducing productivity increase lies, compared to the value \rho_c we computed earlier as the minimum required productivity increase in Scenario B required to improve the rate of profit. It turns out that this depends on the various parameters of the problem. However, a limiting argument shows that for large Firm 1, any productivity enhancement will improve profit mass, whereas for small Firm 1 under sufficiently fixed-capital-intensive production, only productivity enhancements outstripping capital deepening would be sufficient:

Theorem 4 (Limiting Firm Size and Solo-Profit Mass). As u \to 0^+, \Theta(\rho) \to v_1 \left( 1 - \frac{1}{\rho} \right). As u \to \infty, \Theta(\rho) \to \frac{1}{u}\left( \rho - 1\right) + c_1(\rho - \alpha).

Proof. Both statements follow by direct substitution into the expression for \Theta given in the proof of Theorem 3 and taking limits. □

It is worth dwelling briefly on why there is this dichotomous behaviour between large and small firms. A large Firm 1 (u \to 0) has an output that dominates the value of the product; it takes the lion’s share of SNLT, and the only impact on the firm of raising its productivity is to reduce the value of labour power as measured in SNLT, which will always be good for the firm. However, a small Firm 1 (u \to \infty) has no impact on the value of the product or of labour power. It is certainly able to recover more overall value from the products in the market (the term \frac{1}{u}\left( \rho - 1\right) > 0), but it must carefully consider its outlay on fixed capital. In particular if \rho < \alpha, only small c_1 – a low capital intensity – would lead to a rise in profit mass.

So at least for large firms (or low capital intensity) there is a range of productivity enhancements \rho_m < \rho < \rho_c that will induce a rise in profit mass for solo adopters (Scenario C) but would lead to a fall in the overall rate of profit when universally adopted (Scenario B).

It is now appropriate to consider the individual rate of profit of Firm 1 in Scenario C, having so far concentrated only on the mass of profit.

The rate of profit of Firm 1 in Scenario C is:

r_1'' = \frac{\frac{\rho - (1+u)v_1}{\rho+u} + c_1\frac{u(u+1)(\rho-\alpha)}{(\rho+u)^2}}{\frac{u+1}{\rho+u}(\alpha c_1 + v_1)} = \frac{\rho - (1+u)v_1+\frac{u(u+1)c_1(\rho-\alpha)}{\rho+u}}{(u+1)(\alpha c_1 + v_1)}.

As with the mass of profit, we are able to demonstrate the existence of a critical value of productivity, above which Firm 1 increases its rate of profit as a solo adopter:

Theorem 5 (Critical Productivity Rate for Solo-Profit Rate Increase). For any given set of variables from Scenario A, there exists a critical productivity rate \rho_* < \alpha for which we have r''_1 > r_1 iff \rho > \rho_*.

Proof. As per the proof of the profit mass case, we will firstly compute the difference between the two scenarios, which we will denote \Phi(\rho) := r''_1 - r_1. Forming a common denominator D := (u+1)(\alpha c_1 + v_1)(c_1 + v_1) that does not depend on \rho between the expressions for r''_1 and r_1 allows us to write \Phi(\rho) = N(\rho)/D where N(\rho) := (c_1+v_1)\left[ \rho - (1+u)v_1 + \frac{u(u+1)c_1(\rho-\alpha)}{\rho+u} \right] - (\alpha c_1 + v_1)\left[ 1 - (1+u)v_1 \right]. Note that N(\rho) is differentiable for \rho > 0, with:

\frac{dN}{d\rho} = (c_1+v_1)\left[ 1 + u(u+1)c_1\frac{u+\alpha}{(\rho+u)^2} \right] > 0

Hence \Phi(\rho) is strictly increasing with \rho.

Now let’s examine the sign of N(\rho) as \rho \to 1. Firstly, by direct substitution, N(\rho) \to (c_1+v_1) \left[ S + c_1 u (1-\alpha) \right] - (\alpha c_1 + v_1)S, where we have also recognised that S, total surplus value in Scenario A, is S = 1 - (1+u)v_1. This expression simplifies to (1-\alpha)c_1 \left[S+ (c_1+v_1)u \right]. Of the three multiplicative terms, the first is negative (\alpha > 1) while the other two are positive. Hence N(\rho) becomes negative as \rho \to 1, and hence so does \Phi(\rho).

We should also examine the sign of N(\rho) at \rho = \alpha. This time, N(\alpha) = (c_1+v_1)\left[ \alpha - (1 - S) \right] - (\alpha c_1 + v_1) S, which simplifies to N(\alpha) = (\alpha - 1)v_1\left[ 1+ c_1(1 + u) \right]. Since both multiplicative terms are positive, N(\alpha) and hence \Phi(\alpha) is positive.

So again we can make use of the intermediate value theorem, combined with our proof that \Phi(\rho) is strictly increasing, to conclude that there exists a unique \rho_* such that \Phi(\rho) > 0 iff \rho > \rho_*, completing the proof. We have already shown that \rho_* < \alpha via the second half of the sign-change argument. □

It remains, now, to try to place \rho_* in magnitude compared to the other critical value we discovered, \rho_c:

Theorem 6 (Productivity required for solo rate improvement dominates that required for universal rate improvement). \rho_c < \rho_*.

Proof. We can place our earlier expressions for r_1' and r_1'' over a common denominator D' = (u+1)(\alpha c_1 + v_1) > 0 that does not depend on \rho. Here r_1' = \frac{N'(\rho)}{D'} and r_1'' = \frac{N''(\rho)}{D'} where N'(\rho) := \rho - (1+u)v_1 and N''(\rho) := \rho - (1+u)v_1 + \frac{u(u+1)c_1(\rho-\alpha)}{\rho+u}.

Note that N''(\rho) = N'(\rho) + \frac{u(u+1)c_1(\rho-\alpha)}{\rho+u}. Evaluating at \rho = \rho_c, we have N''(\rho_c) = N'(\rho_c) + \frac{u(u+1)c_1(\rho_c-\alpha)}{\rho_c+u}. From Theorem 2, we know that \rho_c < \alpha and so the second term is negative and so N''(\rho_c) < N'(\rho_c).

Since D' is independent of \rho and positive, we can conclude that r_1''(\rho_c) < r_1'(\rho_c). However, we also know that r_1'(\rho_c) = r_1 by definition of \rho_c. Hence r_1''(\rho_c) < r_1. We may therefore conclude that \rho_c < \rho_*. □

This theorem demonstrates that there is no “prisoner’s dilemma” when it comes to rate of profit in this model: if a firm is motivated to invest in order to raise its own rate of profit (\rho > \rho_*) then that investment will also raise overall rate of profit if implemented universally.


6. Summary of Thresholds

We have demonstrated a number of critical thresholds of productivity increase. Collecting results, we have

\boxed{1 < \rho_c < \rho_* < \alpha}

In the case of large firms (or production that is sufficiently low in fixed-capital input), we may also write:

\boxed{1 < \rho_m < \rho_c  < \rho_* < \alpha}

A numerical example of these thresholds is illustrated below, for the large firm case.
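
For readers who want to reproduce such an example themselves, the following Python sketch computes all three thresholds for one illustrative parameter choice representing a relatively large Firm 1 (small u). The numbers are not calibrated to anything; they merely exhibit the ordering.

```python
# Compute the three thresholds for one illustrative parameter set (large Firm 1: small u).
u, c1, v1, alpha = 0.2, 0.3, 0.6, 1.5    # chosen so that V = (1+u)*v1 < 1

V  = (1 + u) * v1
r1 = (1 - V) / ((1 + u) * (c1 + v1))                                   # Scenario A

def theta(rho):        # s_1'' - s_1 (Theorem 3)
    return ((rho - V) / (rho + u)
            + c1 * u * (u + 1) * (rho - alpha) / (rho + u) ** 2
            - (1 / (u + 1) - v1))

def phi(rho):          # r_1'' - r_1 (Theorem 5)
    r1pp = ((rho - V + u * (u + 1) * c1 * (rho - alpha) / (rho + u))
            / ((u + 1) * (alpha * c1 + v1)))
    return r1pp - r1

def bisect(f, lo=1.0, hi=alpha, steps=60):
    # f is negative at lo and positive at hi (Theorems 3 and 5), and increasing.
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

rho_c = 1 + (alpha - 1) * (1 - V) * c1 / (c1 + v1)
rho_m = bisect(theta)
rho_s = bisect(phi)
print(f"1 < rho_m={rho_m:.3f} < rho_c={rho_c:.3f} < rho_*={rho_s:.3f} < alpha={alpha}")
```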

Several conclusions flow from this order:

  1. There are no circumstances under which a firm, to improve its rate of profit, makes an investment that will not also improve the rate of profit if universally adopted (\rho_* > \rho_c).
  2. There are cases, especially for large firms, where investments that may be attractive from the perspective of improving profit mass nevertheless reduce profit rate (\rho_m < \rho_c).
  3. There are productivity-enhancing investments (those with \rho < \rho_m) that may be unattractive from a purely profit-driven perspective, whether judged by mass or by rate of profit.

7. Conclusion: The Falling Rate of Profit

In this model, we can conclude that continuous investment in moderately productivity-enhancing means of production (\rho < \alpha) will lead to a falling rate of profit. Overall capital value C will grow, which (Theorem 1) will lead to a falling rate of profit independent of assumptions on the rate of exploitation (rate of surplus value) S/V.

The results in this post suggest that, if value after investment is what the investing firm considers, a key driver for this capital investment may not be the search for rate-of-profit enhancement by individual firms, but rather simply a drive to increase their profit mass, capturing a larger proportion of overall surplus value. This situation is particularly relevant under constraints on the total working time available (e.g. a constrained population), and is a common one for large firms.

Some further remarks:

  1. It is important to note that unlike a typical Marxian analysis, none of these results rely on the dislocation of price and value. Of course, it is quite plausible that the incentive to invest is based not on value but rather on current prices, but we have been able to show that such dislocation is not necessary to explain a falling rate of profit.
  2. We have also not considered temporal aspects. It is possible that investment could be motivated by values before investment, leading to ‘apparent’ rates and masses of profit, rather than values after investment. To keep this post at a manageable length, I have not commented on this possibility, but again the aim has been to show that the falling rate can be explained without recourse to this.
  3. We have not discussed the economic advantage that Firm 1 in Scenario C may have, e.g. being able to use its additional surplus value for power over the market. This may be an additional motivator for Firm 1 to invest despite a lower rate of profit, further accelerating the decline in the rate of profit.
  4. Finally, as noted earlier in the post, we have preserved real wages (the workers’ consumption basket) after productivity enhancements, rather than allowing real wages to track productivity or benefit even slightly from productivity enhancement. This is a purposely conservative assumption, made as it seems the most hostile to retaining the falling rate of profit, and yet we still obtain a falling rate. If we allow wages to scale with productivity, this will of course further reduce the rate of profit.

I would very much welcome comments from others on this analysis, in particular over whether you agree that my analysis is consistent with Marx’s LTV framework.


Acknowledgements

The initial motivation for this analysis came from discussions with TG, YL and LW. I improved my own productivity by developing the mathematics and plot in this blog post using a long sequence of interactive sessions with the o3 model from OpenAI over the last week or so.

Update 28/6/25: The proof of Theorem 3 contained an error spotted by YL. This has now been fixed.

Update 12-13/7/25: Added further clarification of redistribution of value in Scenario C following discussion with YL and LW. Corrected an earlier overly simple precondition for \rho_m < \rho_c spotted by YL.

FCCM 2025

I’ve recently returned from the IEEE International Symposium on Field-Programmable Custom Computing Machines (known as FCCM). I used to attend FCCM regularly in the early 2000s, and while I have continued to publish there, I have not attended myself for some years. I tried a couple of years ago, but ended up isolated with COVID in Los Angeles. In contrast, I am pleased to report that the conference is in good health!

The conference kicked off on the evening of the 4th May, with a panel discussion on the topic of “The Future of FCCMs Beyond Moore’s Law”, of which I was invited to be part, alongside industrial colleagues Chris Lavin and Madhura Purnaprajna from AMD, Martin Langhammer from Altera, and Mark Shand from Waymo. Many companies have tried and failed to produce lasting post-Moore alternatives to the FPGA and the microprocessor over the decades I’ve been in the field, and some of these ideas and architectures (less commonly, associated compiler flows / design tools) have been very good. But, as Keynes said, “markets can remain irrational longer than you can remain solvent”. So instead of focusing on commercial realities, I tried to steer the panel discussion towards the genuinely fantastic opportunities our academic field has for a future in which power, performance and area improvements become a matter of intellectual advances in architecture and compiler technology rather than riding the wave of technology miniaturisation (itself, of course, the product of great advances by others).

The following day, the conference proper kicked off. Some highlights for me from other authors included the following papers aligned with my general interests:

  • AutoNTT: Automatic Architecture Design and Exploration for Number Theoretic Transform Acceleration on FPGAs from Simon Fraser University, presented by Zhenman Fang.
  • RealProbe: An Automated and Lightweight Performance Profiler for In-FPGA Execution of High-Level Synthesis Designs from Georgia Tech, presented by Jiho Kim from Callie Hao’s group.
  • High Throughput Matrix Transposition on HBM-Enabled FPGAs from the University of Southern California (Viktor Prasanna’s group).
  • ITERA-LLM: Boosting Sub-8-Bit Large Language Model Inference Through Iterative Tensor Decomposition from my colleague Christos Bouganis’ group at Imperial College, presented by Keran Zheng.
  • Guaranteed Yet Hard to Find: Uncovering FPGA Routing Convergence Paradox from Mirjana Stojilovic’s group at EPFL – and winner of this year’s best paper prize!

In addition, my own group had two full papers at FCCM this year:

  • Banked Memories for Soft SIMT Processors, joint work between Martin Langhammer (Altera) and me, where Martin has been able to augment his ultra-high-frequency soft-processor with various useful memory structures. This is probably the last paper of Martin’s PhD – he’s done great work in both developing a super-efficient soft-processor and in forcing the FPGA community to recognise that some published clock frequency results are really quite poor and that people should spend a lot longer thinking about the physical aspects of their designs if they want to get high performance.
  • NeuraLUT-Assemble: Hardware-aware Assembling of Sub-Neural Networks for Efficient LUT Inference, joint work between my PhD student Marta Andronic and me. I think this is a landmark paper in terms of the results that Marta has been able to achieve. Compared to her earlier NeuraLUT work which I’ve blogged on previously, she has added a way to break down large LUTs into trees of smaller LUTs, and a hardware-aware way to learn sparsity patterns that work best, localising nonlinear interactions in these neural networks to within lookup tables. The impact of these changes on the area and delay of her designs is truly impressive.

Overall, it was well worth attending. Next year, Callie will be hosting FCCM in Atlanta.

FPGA & HPCA 2025

I recently returned from two back-to-back conferences, FPGA 2025 in Monterey, California and HPCA 2025 in Las Vegas, Nevada. In this blog post, I will summarise some of the things I found most interesting at these conferences.

Before I even got to the first conference, I was delighted to have the chance to meet in San Francisco with Cole Harry, who runs the new Imperial Global USA. They have an exciting plan of work to develop links between Imperial and academics, industrialists, alumni and VCs in the Bay Area. I would strongly recommend reaching out to Cole if you are based in the Bay Area and would like to get involved.

FPGA, the ACM International Symposium on FPGAs, is always a joy to attend. This year we had a great balance of industry and academia attending, as is often the case. The conference recently moved to introduce keynote talks. I’m on the fence about the value of keynotes at FPGA, but this year they were both exceptionally good. The first was from Steve Reinhardt (Senior Fellow, AMD) and the second was from my long-term colleague John Wawrzynek (UC Berkeley). It was very gratifying that both keynote speakers singled out our work on LUT-based machine learning, started by my PhD student Erwei Wang (now with AMD) with his 2019 paper LUTNet, as an example of where the field should be heading in the future. In Steve’s case, this was part of his overall summary of architectures for AI. In John’s case, this was part of his call to follow Carver Mead’s advice to “listen to what the silicon is telling you!” John’s keynote was a wonderful trip down memory lane for me – he highlighted many times in the last 30 years or so where the FPGA community has been well ahead of the broader community in working through and adopting various technologies and ideas. It was great to be reminded of the papers I had seen presented – and got excited about – when I was a PhD student myself (1998-2001). John also gave a personal shout out to my PhD student Marta Andronic for the great work she is doing.

Session 1 of FPGA was on AI for FPGAs. The first paper, FlightVGM: Efficient Video Generation Model Inference with Online Sparsification and Hybrid Precision on FPGAs, was a collaboration between various Chinese universities. This paper won the best paper prize at the conference. I liked their unusual packing of DSP blocks for non-standard word-lengths. The second paper was from Alireza Khataei, the PhD student of my colleague and friend Kia Bazargan. They presented an intriguing approach to using decision trees as FPGA inference engines. The results were good, and have left me wondering how brittle they may be to linear transformations of the input space, given that DNN-based work will be invariant to these transformations (modulo quantisation error) whereas the axis-alignment of these decision tree boundaries will not. The third paper was a collaboration with us at Imperial (and others) led by Olivia Weng from Ryan Kastner’s group at UCSD. Olivia presented an empirical exploration of the ensembling of weak classifiers, including our LUT-based classifiers. The final presentation of this session was from our group, a short paper by Olly Cassidy, our undergraduate student, which I describe in an earlier blog post.

Session 2 of FPGA was on CAD. It began with my former PhD student David Boland presenting a collaboration he has undertaken with my other former PhD student Kan Shi and others on efficient (simulation-based) verification of High-level Synthesis (HLS) designs, using FPGA-based acceleration.

Session 3 of FPGA was on HLS. This session included some interesting work presented by Jinming Zhuang, from Brown (with the excellent Peipei Zhou) and Cornell (including my friend Zhiru Zhang) on MLIR for compilation targeting AMD’s AI engines, and also great work from Suhail Basalama (who used to be affiliated with my EPSRC SpatialML centre) and his advisor and my long-term colleague Jason Cong from UCLA. It was really nice to see the shared-buffer to FIFO conversion in this work.

Session 5 of FPGA was on architecture. Readers of this blog may remember the work on dynamic HLS started by Lana Josipović (now at ETH) when she was Paolo Ienne’s PhD student at EPFL. The authors of the first paper, presented by Louis Coulon from EPFL, asked how one might wish to redesign FPGA architecture to better suit this model of HLS. I also liked the second talk, a collaboration between several universities, presenting a paper on incorporating n-out-of-k element sparsity in tensor processing tiles.

Session 6 of FPGA was also on High-level Synthesis. Notable contributions included a presentation from Stéphane Pouget (UCLA) on work with Louis-Noël Pouchet (Colorado State) and Jason Cong that proposed a MINLP to combine pragma insertion with loop transformations; back in 2009, I began to look at nonlinear programming for HLS transformations with my former student Qiang Liu – the paper at FPGA this year explored a really interesting practical design space for modern HLS. My former PhD student David Boland again presented in this session, presenting a collaboration between two of my other former PhD students who could not make the conference, Jianyi Cheng and Kan Shi (the latter mentioned above), and others, this time on verification of dynamically-scheduled high-level synthesis. The third talk in this session was presented by Robert Szafarczyk and coauthors from the University of Glasgow, looking at dynamic loop fusion in high-level synthesis based on an interesting monotonicity program analysis; dynamic transformations hold out a lot of promise – I began to look at this with my student Junyi Liu in 2015, and the paper at FPGA this year provides an interesting and different new direction.

HPCA in Las Vegas was a new conference to me. I was prompted to attend due to a collaboration I had with Mohamed Abdelfattah’s group at Cornell Tech, under the auspices of my EPSRC Centre-to-Centre grant. This collaboration led to my PhD student Marta Andronic spending some months embedded in Mohamed’s group at Cornell Tech, and out of this grew a paper presented at HPCA this year by Yuzong Chen, Mohamed’s PhD student. Yuzong presented both a memory-efficient encoding and a corresponding accelerator architecture for LLM acceleration.

HPCA was co-located with PPoPP and CGO, and they shared keynote sessions. Charles Leiserson – a household name in computing – gave the first keynote, associated with PPoPP. The presentational style was uniquely engaging. The message was simple: the end of Moore’s Law demands more focus on performance engineering. It’s a message that’s not new, but was wonderfully delivered.

The second keynote, associated with CGO, was given by John Regehr. This was also excellent, spanning work (of his group and others) on compiler correctness and fuzzing, formalisation of compilers, alive2, souper, and bit-width independent rewrites. The latter topic is one that John’s group and mine have communicated over in the past, as it arose in the context of my PhD student Sam Coward’s work, where we would ideally have liked bit-width independent rewrites, but settled for proving rewrites correct for reasonable bit-width ranges. John’s talk emphasised the social and economic foundations of impact in the compiler world.

The final keynote, associated with HPCA, was given by Cliff Young from Google. This talk was truly outstanding in both content and delivery. He started with a good summary of ML algorithms from an architect’s perspective. He spoke about TPUs and systems built out of TPUs at Google. Perhaps more significant, from my perspective, than the (great) technical content was the non-technical content of his talk. Cliff spoke about how the academic community is key to the long-term health of the field, and how even at Google it is literally only a handful of people who have the ability to think as long-term as academics, as people are just too busy building things. He emphasised the need for major algorithmic developments: “modern ML is not efficient, it is only effective”, was his slogan, alongside the tongue-in-cheek “I very much doubt that the transformer is all we need”. He reflected on the fallow periods in his career, and emphasised that they were absolutely necessary to enable the productive periods – a lesson that could be well learnt by research assessment processes across the world: “the only thing you can actually optimise in your career is, ‘are you enjoying yourself and learning?’” – a great manifesto. He spoke about his own imposter syndrome and about the social anxiety prevalent amongst some of the most impressive international researchers – he also had a mitigation: working together across discipline boundaries allows people to be ‘the respected expert’ in their area without the internalised expectation that you must know everything, providing an element of psychological safety. He spoke about his preference for simplicity over building complex things (something I very much share). And amusingly shared “the best piece of professional advice I was ever given”, which turned out to be “Wow, if you’d only shut the fuck up a bit, you’d be 1000x better!” This lecture was a joy to listen to.

In addition to meeting new and old friends over the past couple of weeks, it was wonderful to meet students of former students. This year, I got to meet Muhammad Ali Farooq, who has just started off on a PhD programme with Aman Arora, but before that was the student of my former PhD student Abid Rafique at NUST.

The least joyful part of my trip was Las Vegas – a city that seems to have been designed to induce sensory overload. But no matter: the conferences definitely made up for it. And the highlight of my trip was most definitely the weekend between the two conferences where I got to spend time with the lovely family of my former PhD student Sam Bayliss in the Bay Area.

Machine Learning with Large Lookup Tables

Readers of this blog will know that I have been interested in how to bridge the worlds of Boolean logic and machine learning, ever since I published a position paper in 2019 arguing that this was the key to hardware-efficient ML.

Since then, I have been working on these ideas with several of my PhD students and collaborators; most recently, my PhD student Marta Andronic’s work forms the leading edge of the rapidly growing area of LUT-based neural networks (see previous blog posts). Central to both Marta’s PolyLUT and NeuraLUT work (and also LogicNets from AMD/Xilinx) is the idea that one should train Boolean truth tables (which we call L-LUTs for logical LUTs) which then, for an FPGA implementation, get mapped into the underlying soft logic (which we call P-LUTs, for physical LUTs).

Last Summer, Marta and I had the pleasure of supervising a bright undergraduate student at Imperial, Olly Cassidy, who worked on adapting some ideas for compressing large lookup tables coming out of the lab of my friend and colleague Kia Bazargan, together with his student Alireza Khataei at the University of Minnesota, to our setting of efficient LUT-based machine learning. Olly’s paper describing his summer project has been accepted by FPGA 2025 – the first time I’ve had the pleasure to send a second-year undergraduate student to a major international conference to present their work! In this blog post, I provide a simple introduction to Olly’s work, and explain my view of one of the most interesting aspects, ahead of the conference.

A key question in the various LUT-based machine learning frameworks we have introduced is how to parameterise the space of the functions implemented in the LUTs. Our first work in this area, LUTNet, with my former PhD student Erwei Wang (now with AMD), took a fully general approach: if you want to learn a K-input Boolean function, then learn all 2^K lines in that function’s truth table. Since then, Marta and I have been exploring ways of parameterising that space to decouple the complexity of the function-classes implemented from the number of inputs. This gave rise to PolyLUT (parameterised as polynomials) and NeuraLUT (parameterised as small neural networks). Once we have learnt a function f, all these methods enumerate the inputs of the function for the discrete space of quantised activations to produce the L-LUT. Olly’s work introduces ‘don’t cares’ into the picture: if a particular combination of inputs to the function is never, or rarely, seen in the training data, then the optimisation is allowed to treat the function as a don’t care at that point.
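
As a toy illustration of the don’t-care idea, the sketch below shows only the counting step described above, not Olly’s actual integration with CompressedLUT; the function and training data are made up for the example.

```python
# Sketch of marking rarely-seen L-LUT input patterns as don't cares.
# This is only the counting idea described in the text, not the CompressedLUT integration.
from collections import Counter
from itertools import product

def build_lut_with_dont_cares(f, K, training_inputs, min_count=1):
    """Enumerate a K-input Boolean function f into a truth table, marking as
    don't care ('-') any input pattern seen fewer than min_count times in training."""
    seen = Counter(tuple(x) for x in training_inputs)
    table = {}
    for bits in product((0, 1), repeat=K):
        table[bits] = f(bits) if seen[bits] >= min_count else '-'
    return table

# Illustrative usage: a 3-input majority function, with training data that never
# exercises some input patterns.
f = lambda bits: int(sum(bits) >= 2)
train = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 0), (0, 1, 1)]
lut = build_lut_with_dont_cares(f, 3, train)
print(lut)   # patterns absent from `train` appear as '-', free for logic optimisation
```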

Olly picked up CompressedLUT from Khataei and Bazargan, and investigated the injection of don’t care conditions into their decomposition process. The results are quite impressive: up to a 39% drop in the P-LUTs (area) required to implement the L-LUTs, with near zero loss in classification accuracy of the resulting neural network.

To my mind, one of the most interesting aspects of Olly’s summer work is the observation that aggressively targeting FPGA area reduction through don’t care conditions, without explicitly modelling the impact on accuracy, nevertheless has a negligible or even a positive impact on test accuracy. This can be interpreted as a demonstration that (i) the generalisation capability of the LUT-based network is built into the topology of the NeuraLUT network and (ii) that, in line with Occam’s razor, simple representations – in this case, simple circuits – generalise better.

Our group is very proud of Olly!

School Accountability Reform

Following hot on the heels of the Ofsted consultation, the Department for Education has launched a new consultation on English school accountability reform. This is a system that very much does need reform! In this post, I will briefly summarise my view on the Government’s proposals.

Firstly, I think it’s unfortunate that the phrase “school accountability” has stuck. It smacks too much of “we will give you the slack to let you fail, but woe betide you if you do!”. I would much prefer something like “school improvement framework”.

Having said that, I think the “Purposes and Principles” outlined by the government in the consultation are sound. But what about the detailed measures proposed?

Profiles

The proposal for School Profiles, incorporating but going beyond the new Ofsted report card (my very brief comments on Ofsted proposals here) is perfectly reasonable, but nothing particularly new (check out GIAS!). More fundamental, in my view, is the need to revisit what counts as “school performance data” (hint: Attainment 8 ain’t it!) But sadly there is nothing in the consultation about this. One would at least hope that certain data hidden behind an ASP login might become public in the short term.

Intervention

It is disappointing that by default, a maintained school placed in special measures will become an academy but there is no scope for an academy placed in special measures to become a maintained school.

On the other hand, RISE Teams are a good idea, especially sign-posting of best practice, regional events for school staff, etc. They are not a new idea – remember when local authorities could actually afford support teams, anyone? – but they are a good idea nevertheless. I support the mandatory nature of some interventions, which for academies will presumably come via Section 43 of the Children’s Wellbeing and Schools Bill. I am a little worried, though, that currently the RISE teams seem to be described as brokers of support “with a high-quality organisation” rather than as actually having in-house expertise – there is a danger that RISE Teams become ways to mandate schools to buy in services from favoured MATs. The devil will be in the implementation.

It is disappointing that there has been no focus so far by this government on the structural problems present in the sector. The former government’s failed Schools Bill 2022, while having many significant problems, did at least aim to replace the patchwork of academy funding contracts, signed at different times with different models, with a uniform footing. And the peculiar nature of the Single Academy Trust remains an untackled issue to this day.

Overall

Overall, I would say the proposals are OK. More of a tinkering around the edges than anything profound, although the RISE proposals have some promise and could – with the right resourcing, local democratic control, and remit – genuinely help the sector with self-sustaining school improvement.

Ontology and Oppression

This Autumn I read Katharine Jenkins’ book Ontology and Oppression. The ideas and approaches taken by Jenkins resonated with me, and I find myself consciously or subconsciously applying them in many contexts beyond those she studies. I therefore thought it was worth a quick blog post to summarise the key ideas, in case others find them helpful and to recommend you also read Jenkins’ work if you do.

Jenkins studies the ontology of “social kinds” from a pluralist perspective – that there can be many different definitions of social kinds of the same name, e.g. ‘woman’, ‘Black’ – and that several of them can be useful and/or the right tool to understand the world in the right circumstances. After a general theoretical introduction, she focuses on gender and race to find examples of such kinds, but the idea is clearly applicable much more broadly.

Jenkins begins by describing her “Constraints and Enablements” framework, arguing that what it means to be a member of a social kind is at least partly determined by being subject to certain social constraints and enablements, which Jenkins classifies in certain ways. These can be imposed on you by (some subset of) society or can even be self-imposed through self-identification as a member of a given social kind. Jenkins defines two types of wrong that can come about as a result of being considered a member of a given social kind, ‘ontic injustice’, where the constraints and enablements constitute a ‘wrong’, and a proper subclass, ‘ontic oppression’, where the constraints and enablements additionally “steer individuals in this kind towards exploitation, marginalisation, powerlessness, cultural domination, violence and/or communicative curtailment”. She argues that a pluralist framework can be valuable as a philosophical tool for liberation, and studies how intersectionality arises naturally in her approach.

The race and gender kinds Jenkins studies, she classifies as 'hegemonic kinds', 'interpersonal kinds' and 'identity kinds'. I find this classification compelling for anyone wanting to really understand power structures and help people, rather than simply shouting about identity politics from the sidelines – a form of intervention that sadly characterises much of the 'debate' in 'culture wars' at the moment. It also provides a useful toolbox to understand how a social kind (e.g. 'Black', 'woman') can be hegemonically oppressive while the corresponding interpersonal and identity kinds can sometimes serve an emancipatory function.

Ultimately, Jenkins’ description allows us to break away from some of the more ridiculous lines of argument we’ve seen in recent years, trying to ‘define away’ issues. At the end of the book, Jenkins takes aim at the ‘ontology-first approach’: the idea that one should first settle ‘the’ meaning of a social kind, e.g. ‘what is a woman?’ and from that apply appropriate (in this case gendered) social practices. This approach, so widespread in society, Jenkins shows does not fit with her framework. She challenges us to ask: what do we actually want to change about society? And from that, to understand what kinds make sense to talk about, and in what context, and how.

NeuraLUT: Networks inside LUTs

In early September, my PhD student Marta Andronic will be off to Turin to present our latest work “NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions” at the Field-Programmable Logic and Applications conference. Ahead of the detailed presentation at the conference, this blog post provides a short accessible summary of her exciting work.

In 2019 I first proposed making better use of FPGA lookup tables by exposing them as trainable hardware, together with my then PhD student Erwei Wang and coauthors, in our LUTNet work. In common with AMD’s LogicNets and our PolyLUT, our new work NeuraLUT hides certain aspects of a neural network within a synthesisable Boolean lookup table (which we call an L-LUT), to achieve very efficient and very low latency inference. LogicNets hid a dot product and activation function inside each lookup table. The clever thing in LogicNets was that, as a result, the weights can be real-valued: no weight quantisation needs to be performed, because the only thing that matters is the finite truth table of the lookup table. Once this truth table has been enumerated, the real-valued weights are irrelevant; the only quantisation is at the inputs and outputs of the L-LUT. The tradeoff here is that LogicNets networks needed to be extremely sparse.

NeuraLUT takes this a step further by hiding whole neural networks inside Boolean lookup tables! These internal neural networks can be fully dense – or even irregularly connected – and real-valued in both weights and activations, for the same reason. The only thing that matters is that the inputs and outputs of these “sub-networks” are quantised and that connections between sub-networks are sparse, because these are the only parts exposed to the hardware design itself. One can interpret the resulting network as a standard deep neural network, with a specific hardware-friendly sparsity pattern, as illustrated in the figure below.
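To give a flavour of the idea, here is a toy PyTorch-style sketch of my own (not Marta’s implementation; the bit-widths, layer sizes and names are arbitrary illustrative choices). Because the quantised inputs and outputs make the sub-network’s behaviour a finite function, it can in principle be enumerated into an L-LUT:

# A toy sketch only: a dense, real-valued "sub-network" hidden behind
# low-precision inputs and outputs. All sizes and bit-widths are hypothetical.
import itertools
import torch
import torch.nn as nn

def quantise(x, bits=2):
    # Uniform fake-quantisation of values in [0, 1] to 2**bits levels.
    levels = 2 ** bits - 1
    return torch.round(x.clamp(0.0, 1.0) * levels) / levels

class SubNetwork(nn.Module):
    """Dense, real-valued MLP; only its boundary is quantised."""
    def __init__(self, fan_in=4, hidden=16, bits=2):
        super().__init__()
        self.bits = bits
        self.mlp = nn.Sequential(
            nn.Linear(fan_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        x = quantise(x, self.bits)              # quantised inputs (exposed to hardware)
        y = torch.sigmoid(self.mlp(x))          # dense, real-valued interior
        return quantise(y, self.bits)           # quantised output (exposed to hardware)

# The quantised input space is finite -- here (2**2)**4 = 256 points -- so the
# sub-network's entire behaviour can be tabulated, LUT-style.
net = SubNetwork()
levels = [i / 3 for i in range(4)]              # the four 2-bit input levels
table = {pt: net(torch.tensor([pt])).item() for pt in itertools.product(levels, repeat=4)}
print(len(table), "truth-table entries")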

The increased expressive power of NeuraLUT leads to considerable reductions in latency. We’re targeting very low-latency applications here, like those you may find in particle physics. 12-nanosecond MNIST classification, anyone? 3 nanoseconds to tag jet substructures in a particle accelerator? Come and listen to Marta’s talk to find out how!

Open Source MX Library

Readers of this blog may be aware that several key industrial players recently released the MX standard for low-precision computation, mainly targeting its use in machine learning. I reviewed the standard in an earlier blog post.

I’m pleased to report that my PhD student Ebby Samson has released an open source RTL hardware implementation of the key operators from the standard. In joint work with Naveen Mellempudi from AMD, Wayne Luk from Imperial and myself, he describes the library in our forthcoming paper at the International Conference on Field-Programmable Logic and Applications. If you will be in Turin in early September, please come and hear Ebby talking about his work.

The library supports all the concrete formats in the standard and more besides. Ebby has also released an extension to the AMD Brevitas quantisation-aware training PyTorch library that lets you train your models with eventual MX implementation in mind.

Please do read our paper, integrate our hardware designs into your work, and use our Brevitas library to do your neural network training! Links to all in the paper.

Notes on Computational Learning Theory

This blog post collects some of my notes on classical computational learning theory, based on my reading of Kearns and Vazirani. The results are (almost) all from their book; the sloganising (and, no doubt, the mistakes) are mine.

The Probably Approximately Correct (PAC) Framework

Definition (Instance Space). An instance space is a set, typically denoted X. It is the set of objects we are trying to learn about.

Definition (Concept). A concept c over X is a subset of the instance space X.

Although not covered in Kearns and Vazirani, in general it is possible to generalise beyond Boolean membership to some degree of uncertainty or fuzziness – I hope to cover this in a future blog post.

Definition (Concept Class). A concept class {\mathcal C} is a set of concepts, i.e. {\mathcal C} \subset \mathcal{P}(X), where \mathcal P denotes power set. We will follow Kearns and Vazirani and also use c to denote the corresponding indicator function c : X \to \{0,1\}.

In PAC learning, we assume {\mathcal C} is known, but the target concept c \in {\mathcal C} is not. However, it doesn’t seem a big jump to also allow for an unknown concept class, in an appropriate approximation setting – I would welcome comments on established frameworks for this.

Definition (Target Distribution). A target distribution {\mathcal D} is a probability distribution over X.

In PAC learning, we assume {\mathcal D} is unknown.

Definition (Oracle). An oracle is a function EX(c,{\mathcal D}) taking a concept and a distribution, and returning a labelled example (x, c(x)) where x is drawn randomly and independently from {\mathcal D}.

Definition (Error). The error of a hypothesis concept h \in {\mathcal C} with reference to a target concept c \in {\mathcal C} and target distribution {\mathcal D}, is \text{error}(h) = Pr_{x \in {\mathcal D}}\left\{ c(x) \neq h(x) \right\}, where Pr denotes probability.

Definition (Representation Scheme). A representation scheme for a concept class {\mathcal C} is a function {\mathcal R} : \Sigma^* \to {\mathcal C} where \Sigma is a finite alphabet of symbols (or – following the Real RAM model – a finite alphabet augmented with real numbers).

Definition (Representation Class). A representation class is a concept class together with a fixed representation scheme for that class.

Definition (Size). We associate a size \text{size}(\sigma) with each string from a representation alphabet \sigma \in \Sigma^*. We similarly associate a size with each concept c via the size of its minimal representation \text{size}(c) = \min_{R(\sigma) = c} \text{size}(\sigma).

Definition (PAC Learnable). Let {\mathcal C} and {\mathcal H} be representation classes over X, where {\mathcal C} \subseteq {\mathcal H}. We say that concept class {\mathcal C} is PAC learnable using hypothesis class {\mathcal H} if there exists an algorithm that, given access to an oracle, when learning any target concept c \in {\mathcal C} over any distribution {\mathcal D} on X, and for any given 0 < \epsilon < 1/2 and 0 < \delta < 1/2, with probability at least 1-\delta, outputs a hypothesis h \in {\mathcal H} with \text{error}(h) \leq \epsilon.

Definition (Efficiently PAC Learnable). Let {\mathcal C}_n and {\mathcal H}_n be representation classes over X_n, where {\mathcal C}_n \subseteq {\mathcal H}_n for all n. Let X_n = \{0,1\}^n or X_n = {\mathbb R}^n. Let X = \cup_{n \geq 1} X_n, {\mathcal C} = \cup_{n \geq 1} {\mathcal C_n}, and {\mathcal H} = \cup_{n \geq 1} {\mathcal H_n}. We say that concept class {\mathcal C} is efficiently PAC learnable using hypothesis class {\mathcal H} if there exists an algorithm that, given access to a constant time oracle, when learning any target concept c \in {\mathcal C}_n over any distribution {\mathcal D} on X, and for any given 0 < \epsilon < 1/2 and 0 < \delta < 1/2:

  • Runs in time polynomial in n, \text{size}(c), 1/\epsilon, and 1/\delta, and
  • With probability at least 1-\delta, outputs a hypothesis h \in {\mathcal H} with \text{error}(h) \leq \epsilon.

There is much of interest to unpick in these definitions. Firstly, notice that we have defined a family of classes parameterised by dimension n, allowing us to talk in terms of asymptotic behaviour as dimensionality increases. Secondly, note the key parameters of PAC learnability: \delta (the ‘probably’ bit) and \epsilon (the ‘approximate’ bit). The first of these captures the idea that we may get really unlucky with our calls to the oracle, and get misleading training data. The second captures the idea that we are not aiming for certainty in our final classification accuracy; some pre-defined tolerance is allowable. Thirdly, note the requirements of efficiency: polynomial scaling in dimension, in size of the concept (complex concepts can be harder to learn), in error rate (the sloppier, the easier), and in probability of algorithm failure to find a suitable hypothesis (you need to pay for more certainty). Finally, and most intricately, notice the separation of concept class from hypothesis class. We require the hypothesis class to be at least as general, so the concept we’re trying to learn is actually one of the returnable hypotheses, but it can be strictly more general. This is to avoid cases where the restricted hypothesis class is harder to learn; Kearns and Vazirani, following Pitt and Valiant, give the example that learning the concept class 3-term DNF using the hypothesis class 3-term DNF is intractable, yet learning the same concept class with the more general hypothesis class 3-CNF is efficiently PAC learnable.
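To make the definitions concrete, here is a minimal sketch of the classic elimination algorithm for learning (here, monotone) Boolean conjunctions, one of the standard examples of an efficiently PAC learnable class in Kearns and Vazirani. The target concept and sample size below are arbitrary illustrative choices of mine:

# A minimal sketch of the elimination algorithm for learning monotone
# conjunctions in the PAC setting. The oracle EX and the target are toy stand-ins.
import random

n = 10
target = {1, 4, 7}                      # c(x) = x1 AND x4 AND x7 (hypothetical)

def EX():
    """Draw x uniformly from {0,1}^n and label it with the target conjunction."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    label = all(x[i] for i in target)
    return x, label

# Start with the conjunction of all variables and delete any variable that is
# zero in some positive example; the hypothesis is always consistent with the
# data seen so far and, with enough samples, is epsilon-accurate w.h.p.
hypothesis = set(range(n))
for _ in range(2000):                   # sample size chosen loosely for illustration
    x, label = EX()
    if label:
        hypothesis -= {i for i in range(n) if x[i] == 0}

print("learned:", sorted(hypothesis))   # typically recovers {1, 4, 7}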

Occam’s Razor

Definition (Occam Algorithm). Let \alpha \geq 0 and 0 \leq \beta < 1 be real constants. An algorithm is an (\alpha,\beta)-Occam algorithm for {\mathcal C} using {\mathcal H} if, on an input sample S of cardinality m labelled by membership in c \in {\mathcal C}_n, the algorithm outputs a hypothesis h \in {\mathcal H} such that:

  • h is consistent with S, i.e. there is no misclassification on S
  • \text{size}(h) \leq \left(n \cdot \text{size}(c)\right)^\alpha m^\beta

Thus Occam algorithms produce succinct hypotheses consistent with data. Note that the size of the hypothesis is allowed to grow only mildly – if at all – with the size of the dataset (via \beta). Note, however, that there is nothing in this definition that suggests predictive power on unseen samples.

Definition (Efficient Occam Algorithm). An (\alpha,\beta)-Occam algorithm is efficient iff its running time is polynomial in n, m, and \text{size}(c).

Theorem (Occam’s Razor). Let A be an efficient (\alpha,\beta)-Occam algorithm for {\mathcal C} using {\mathcal H}. Let {\mathcal D} be the target distribution over X, let c \in {\mathcal C}_n be the target concept, and let 0 < \epsilon, \delta \leq 1. Then there is a constant a > 0 such that if A is given as input a random sample S of m examples drawn from oracle EX(c,{\mathcal D}), where m satisfies m \geq a \left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \left(\frac{\left( n \cdot \text{size}(c) \right)^\alpha}{\epsilon}\right)^\frac{1}{1-\beta}\right), then A runs in time polynomial in n, \text{size}(c), 1/\epsilon and 1/\delta and, with probability at least 1 - \delta, the output h of A satisfies \text{error}(h) \leq \epsilon.

This is a technically dense presentation, but it’s a philosophically beautiful result. Let’s unpick it a bit, so its essence is not obscured by notation. In summary: simple rules that are consistent with prior observations have predictive power! The ‘simple’ part here comes from (\alpha,\beta), and the predictive power comes from the bound on \text{error}(h). Of course, one needs sufficient observations (the complex lower bound on m) for this to hold. Notice that as \beta approaches 1 – so that, by the definition of an Occam algorithm, we get close to being able to memorise our entire training set – we need an arbitrarily large training set (memorisation doesn’t generalise).
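As a hedged numerical illustration of the bound (the constant a and the problem parameters below are arbitrary choices of mine), the following snippet shows how the required sample size blows up as \beta approaches 1:

# Evaluate the Occam's Razor sample-size bound for a few values of beta.
# The constant a = 1 and the parameters below are purely illustrative.
import math

def occam_sample_bound(eps, delta, n, size_c, alpha, beta, a=1.0):
    return a * ((1 / eps) * math.log(1 / delta)
                + ((n * size_c) ** alpha / eps) ** (1 / (1 - beta)))

for beta in (0.1, 0.5, 0.9, 0.95):
    m = occam_sample_bound(eps=0.1, delta=0.05, n=20, size_c=10, alpha=1.0, beta=beta)
    print(f"beta = {beta:.2f}:  m >= {m:.3g}")   # grows astronomically as beta -> 1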

Vapnik-Chervonenkis (VC) Dimension

Definition (Behaviours). The set of behaviours on S = \{x_1, \ldots, x_m\} that are realised by {\mathcal C}, is defined by \Pi_{\mathcal C}(S) = \left\{ \left(c(x_1), \ldots, c(x_m)\right) | c \in {\mathcal C} \right\}.

Each of the points in S is either included in a given concept or not. Each tuple \left(c(x_1), \ldots, c(x_m)\right) then forms a kind of fingerprint of S according to a particular concept. The set of behaviours is the set of all such fingerprints across the whole concept class.

Definition (Shattered). A set S is shattered by {\mathcal C} iff \Pi_{\mathcal C}(S) = \{0,1\}^{|S|}.

Note that \{0,1\}^{|S|} is the maximum cardinality that’s possible, i.e. the set of behaviours is the set of all possible behaviours. So we can think of a set as being shattered by a concept class iff every possible pattern of inclusion/exclusion of its points is realised by at least one concept in the class.

Definition (Vapnik-Chervonenkis Dimension). The VC dimension of {\mathcal C}, denoted VCD({\mathcal C}), is the cardinality of the largest set shattered by {\mathcal C}. If arbitrarily large finite sets can be shattered by {\mathcal C}, then VCD({\mathcal C}) = \infty.

VC dimension in this sense captures the ability of {\mathcal C} to discern between samples.
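To make shattering concrete, here is a small brute-force sketch (my own toy example, not from the book): threshold concepts on a finite line, whose VC dimension comes out as 1.

# Brute-force shattering check and VC dimension for threshold concepts
# c_t(x) = [x >= t] over a small finite instance space.
from itertools import combinations

X = list(range(10))                          # instance space {0, ..., 9}
concepts = [frozenset(x for x in X if x >= t) for t in range(11)]

def behaviours(S):
    return {tuple(int(x in c) for x in S) for c in concepts}

def shattered(S):
    return len(behaviours(S)) == 2 ** len(S)

def vc_dimension():
    d = 0
    for k in range(1, len(X) + 1):
        if any(shattered(S) for S in combinations(X, k)):
            d = k
        else:
            break                            # subsets of shattered sets are shattered
    return d

print(vc_dimension())                        # thresholds shatter singletons only: prints 1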

Theorem (PAC-learning in Low VC Dimension). Let {\mathcal C} be any concept class. Let {\mathcal H} be any representation class of VC dimension d. Let A be any algorithm taking a set of m labelled examples of a concept c \in {\mathcal C} and producing a concept in {\mathcal H} that is consistent with the examples. Then there exists a constant c_0 such that A is a PAC learning algorithm for {\mathcal C} using {\mathcal H} when it is given examples from EX(c,{\mathcal D}), and when m \geq c_0 \left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{d}{\epsilon} \log \frac{1}{\epsilon} \right).

Let’s take a look at the similarity between this theorem and Occam’s razor, presented in the previous section of this blog post. Both bounds have a similar feel, but the VCD-based bound does not depend on \text{size}(c); indeed it’s possible for hypotheses to have unbounded size and yet for the VCD to remain finite.

As the theorem below shows, the linear dependence on VCD achieved in the above theorem is actually the best one can do.

Theorem (PAC-learning Minimum Samples). Any algorithm for PAC-learning a concept class of VC dimension d must use \Omega(d/\epsilon) examples in the worst case.

Definition (Layered DAG). A layered DAG is a DAG in which each vertex is associated with a layer \ell \in {\mathbb N} and in which the edges are always from some layer \ell to the next layer \ell+1. Vertices at layer 0 have indegree 0 and are referred to as input nodes. Vertices at other layers are referred to as internal nodes. There is a single output node of outdegree 0.

Definition (G-composition). For a layered DAG G and a concept class {\mathcal C}, the G-composition of {\mathcal C} is the class of all concepts that can be obtained by: (i) associating a concept c_i \in {\mathcal C} with each vertex N_i in G, (ii) applying the concept at each node to the outputs of its predecessor nodes.

Notice that this way we can think of the internal nodes as forming a Boolean circuit with a single output; the G-composition is the concept class we obtain by restricting concepts to only those computable with the structure G. This is a very natural way of composing concepts – so what kind of VCD arises through this composition? This theorem provides an answer:

Theorem (VCD Compositional Bound). Let G be a layered DAG with n input nodes and s \geq 2 internal nodes, each of indegree r. Let {\mathcal C} be a concept class over {\mathbb R}^r of VC dimension d, and let {\mathcal C}_G be the G-composition of {\mathcal C}. Then VCD({\mathcal C}_G) \leq 2ds \log(es).
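As a hedged worked instance of this bound (my own illustration, not from the book): if each internal node computes a linear threshold function over {\mathbb R}^r – a class of VC dimension r+1 – then the G-composition satisfies VCD({\mathcal C}_G) \leq 2(r+1)s\log(es), so the VC dimension of the composed class grows only linearly in the number of nodes s, up to the logarithmic factor.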

Weak PAC Learnability

Definition (Weak PAC Learning). Let {\mathcal C} be a concept class and let A be an algorithm that is given access to EX(c,{\mathcal D}) for target concept c \in {\mathcal C}_n and distribution {\mathcal D}. A is a weak PAC learning algorithm for {\mathcal C} using {\mathcal H} if there exist polynomials p(\cdot,\cdot) and q(\cdot,\cdot) such that A outputs a hypothesis h \in {\mathcal H} that with probability at least 1/q(n,\text{size}(c)) satisfies \text{error}(h) \leq 1/2 - 1/p(n,\text{size}(c)).

Kearns and Vazirani justifiably describe weak PAC learning as “the weakest demand we could place on an algorithm in the PAC setting without trivialising the problem”: if these were exponential rather than polynomial functions in n, the problem would be trivial – take a fixed-size random sample of the concept, memorise it, and guess randomly (with 50% success probability) outside the memorised sample. The remarkable result is that efficient weak PAC learnability and efficient PAC learnability coincide, for an appropriate PAC hypothesis class based on ternary majority trees.

Definition (Ternary Majority Tree). A ternary majority tree with leaves from {\mathcal H} is a tree where each non-leaf node computes a majority (voting) function of its three children, and each leaf is labelled with a hypothesis from {\mathcal H}.

Theorem (Weak PAC learnability is PAC learnability). Let {\mathcal C} be any concept class and {\mathcal H} any hypothesis class. Then if {\mathcal C} is efficiently weakly PAC learnable using {\mathcal H}, it follows that {\mathcal C} is efficiently PAC learnable using a hypothesis class of ternary majority trees with leaves from {\mathcal H}.

Kearns and Vazirani provide an algorithm to learn this way. The details are described in their book, but the basic principle is based on “boosting”, as developed in the lemma to follow.

Definition (Filtered Distributions). Given a distribution {\mathcal D} and a hypothesis h_1 we define {\mathcal D_2} to be the distribution obtained by flipping a fair coin and, on a heads, drawing from EX(c,{\mathcal D}) until h_1 agrees with the label; on a tails, drawing from EX(c,{\mathcal D}) until h_1 disagrees with the label. Invoking a weak learning algorithm on data from this new distribution yields a new hypothesis h_2. Similarly, we define {\mathcal D_3} to be the distribution obtained by drawing examples from EX(c,{\mathcal D}) until we find an example on which h_1 and h_2 disagree.

What’s going on in these constructions is quite clever: h_2 has been constructed so that it must contain new information about c, compared to h_1; h_1 has, by construction, no advantage over a coin flip on {\mathcal D}_2. Similarly, h_3 contains new information about c not already contained in h_1 and h_2, namely on the points where they disagree. Thus, one would expect that hypotheses that work in these three cases could be combined to give us a better overall hypothesis. This is indeed the case, as the following lemma shows.

Lemma (Boosting). Let g(\beta) = 3 \beta^2 - 2 \beta^3. Let the distributions {\mathcal D}, {\mathcal D}_2, {\mathcal D}_3 be defined above, and let h_1, h_2 and h_3 satisfy \text{error}_{\mathcal D}(h_1) \leq \beta, \text{error}_{{\mathcal D}_2}(h_2) \leq \beta, \text{error}_{{\mathcal D}_3}(h_3) \leq \beta. Then if h = \text{majority}(h_1, h_2, h_3), it follows that \text{error}_{\mathcal D}(h) \leq g(\beta).

The function g satisfies g(\beta) < \beta for all \beta \in (0,1/2), so each application of the lemma strictly reduces the error bound. Hence by combining three hypotheses with only marginally better accuracy than flipping a coin, the boosting lemma tells us that we can obtain a strictly stronger hypothesis. The algorithm for (strong) PAC learnability therefore involves recursively calling this boosting procedure, leading to the majority-tree-based hypothesis class. Of course, one needs to show that the depth of the recursion is not too large and that we can sample from the filtered distributions with not too many calls to the overall oracle EX(c,{\mathcal D}), so that the polynomial complexity bound in the PAC definition is maintained. Kearns and Vazirani include these two results in the book.
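A quick numerical sketch of my own (the arithmetic only, not Kearns and Vazirani’s algorithm) shows both that g(\beta) < \beta and that iterating the three-way majority construction drives the error bound towards zero:

# Evaluate g(beta) = 3*beta^2 - 2*beta^3 and iterate it, illustrating how the
# recursive majority-of-three construction shrinks the error bound.
def g(beta):
    return 3 * beta ** 2 - 2 * beta ** 3

for beta in (0.45, 0.40, 0.30, 0.10):
    print(f"beta = {beta:.2f}  ->  g(beta) = {g(beta):.4f}")   # always below beta

def boosted_error(beta, rounds):
    # Error bound after 'rounds' levels of the recursive three-way majority tree.
    for _ in range(rounds):
        beta = g(beta)
    return beta

print(f"five rounds starting from 0.45: {boosted_error(0.45, 5):.4f}")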

Learning from Noisy Data

Up until this point, we have only dealt with correctly classified training data. The introduction of a noisy oracle allows us to move beyond this limitation.

Definition (Noisy Oracle). A noisy oracle \hat{EX}^\eta( c, {\mathcal D}) extends the earlier idea of an oracle with an additional noise parameter 0 \leq \eta < 1/2. This oracle behaves identically to EX except that it returns the wrong classification with probability \eta.

Definition (PAC Learnable from Noisy Data). Let {\mathcal C} be a concept class and let {\mathcal H} be a representation class over X. Then {\mathcal C} is PAC learnable from noisy data using {\mathcal H} if there exists an algorithm such that: for any concept c \in {\mathcal C}, any distribution {\mathcal D} on X, any 0 \leq \eta < 1/2, and any 0 < \epsilon < 1, 0 < \delta < 1 and \eta_0 with \eta \leq \eta_0 < 1/2, given access to a noisy oracle \hat{EX}^\eta( c, {\mathcal D}) and inputs \epsilon, \delta, \eta_0, with probability at least 1 - \delta the algorithm outputs a hypothesis concept h \in {\mathcal H} with \text{error}(h) \leq \epsilon. If the runtime of the algorithm is polynomial in n, 1/\epsilon, 1/\delta and 1/(1 - 2\eta_0) then {\mathcal C} is efficiently learnable from noisy data using {\mathcal H}.

Let’s unpick this definition a bit. The main difference from the PAC definition is simply the addition of noise via the oracle and an additional parameter \eta_0 which bounds the error of the oracle; thus the algorithm is allowed to know in advance an upper bound on the noisiness of the data, and an efficient algorithm is allowed to take more time on more noisy data.
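To make the noisy oracle concrete, here is a minimal Python sketch of my own; the instance distribution, target concept and noise rate are arbitrary toy choices.

# A toy noisy oracle: identical to EX except the label is flipped with
# probability eta. Target concept and noise rate are hypothetical choices.
import random

n, eta = 10, 0.2
target = {0, 3}                      # toy monotone conjunction x0 AND x3

def EX_noisy():
    x = tuple(random.randint(0, 1) for _ in range(n))
    label = all(x[i] for i in target)
    if random.random() < eta:
        label = not label            # classification noise
    return x, label

print(EX_noisy())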

Kearns and Vazirani address PAC learnability from noisy data in an indirect way, via the use of a slightly different framework, introduced below.

Definition (Statistical Oracle). A statistical oracle STAT(c, {\mathcal D}) takes queries of the form (\chi, \tau) where \chi : X \times \{0,1\} \to \{0,1\} and 0 < \tau \leq 1, and returns a value \hat{P}_\chi satisfying P_\chi - \tau \leq \hat{P}_\chi \leq P_\chi + \tau where P_\chi = Pr_{x \in {\mathcal D}}[ \chi(x, c(x)) = 1 ].

Definition (Learnable from Statistical Queries). Let {\mathcal C} be a concept class and let {\mathcal H} be a representation class over X. Then {\mathcal C} is efficiently learnable from statistical queries using {\mathcal H} if there exists a learning algorithm A and polynomials p(\cdot, \cdot, \cdot), q(\cdot, \cdot, \cdot) and r(\cdot,\cdot,\cdot) such that: for any c \in {\mathcal C}, any distribution {\mathcal D} over X and any 0 < \epsilon < 1/2, if given access to STAT(c,{\mathcal D}), the following hold. (i) For every query (\chi,\tau) made by A, the predicate \chi can be evaluated in time q(1/\epsilon, n, \text{size}(c)), and \tau \leq r(1/\epsilon, n, \text{size}(c)), (ii) A has execution time bounded by p(1/\epsilon, n, \text{size}(c)), (iii) A outputs a hypothesis h \in {\mathcal H} that satisfies \text{error}(h) \leq \epsilon.

So a statistical oracle can be asked about a whole predicate \chi, for any given tolerance \tau. The oracle must return an estimate of the probability that this predicate holds (where the probability is over the distribution over X). It is, perhaps, not entirely obvious how to relate this back to the noisy oracle introduced above. However, it is worth noting that one can construct a statistical oracle that works with high probability by taking enough samples from a standard oracle and returning the relative frequency with which \chi evaluates to 1 on that sample. Kearns and Vazirani provide an intricate construction to efficiently simulate a statistical oracle, with high probability, from a noisy oracle. In essence, this then allows an algorithm that can learn from statistical queries to be used to learn from noisy data, resulting in the following theorem.
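Here is a sketch of the simpler direction just mentioned: answering a statistical query, with high probability, by sampling a noise-free example oracle and returning the relative frequency of \chi. The sample size uses a standard Hoeffding-style bound of my own choosing; the book’s construction for noisy oracles is more intricate and is not reproduced here.

# Simulate STAT(c, D) by sampling EX(c, D) and returning a relative frequency.
import math
import random

n = 10
target = {0, 3}                                    # toy monotone conjunction

def EX():
    x = tuple(random.randint(0, 1) for _ in range(n))
    return x, int(all(x[i] for i in target))

def STAT(chi, tau, delta=0.01):
    # Hoeffding: m >= ln(2/delta) / (2*tau^2) samples give an estimate within
    # tau of P_chi with probability at least 1 - delta.
    m = math.ceil(math.log(2 / delta) / (2 * tau ** 2))
    return sum(chi(*EX()) for _ in range(m)) / m

# Example query: "is x_0 = 1 and the label positive?" (true probability 1/4 here)
chi = lambda x, label: int(x[0] == 1 and label == 1)
print(STAT(chi, tau=0.05))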

Theorem (Learnable from Statistical Queries means Learnable from Noisy Data). Let {\mathcal C} be a concept class and let {\mathcal H} be a representation class over X. Then if {\mathcal C} is efficiently learnable from statistical queries using {\mathcal H}, {\mathcal C} is also efficiently PAC learnable using {\mathcal H} in the presence of classification noise.

Hardness Results

I mentioned earlier in this post that Pitt and Valiant showed that sometimes we want hypothesis classes more general than the concept class: learning the concept class 3-term DNF using the hypothesis class 3-term DNF is intractable, yet learning the same concept class with the more general hypothesis class 3-CNF is efficiently PAC learnable. So in their chapter Inherent Unpredictability, Kearns and Vazirani turn their attention to the case where a concept class is hard to learn independently of the choice of hypothesis class. This leads to some quite profound results for those of us interested in Boolean circuits.

We will need some kind of hardness assumption to develop hardness results for learning. In particular, note that if P = NP, then by Occam’s Razor (above) polynomially evaluable hypothesis classes are also polynomially-learnable ones. So we will need to do two things: focus our attention on polynomially evaluable hypothesis classes (or we can’t hope to learn them polynomially), and make a suitable hardness assumption. The latter requires a very brief detour into some results commonly associated with cryptography.

Let {\mathbb Z}_N^* = \{ i \; | \; 0 < i < N \; \wedge \text{gcd}(i, N) = 1 \}. We define the cubing function f_N : {\mathbb Z}_N^* \to {\mathbb Z}_N^* by f_N(x) = x^3 \text{ mod } N. Let \varphi denote Euler’s totient function. Then if \varphi(N) is not a multiple of three, it turns out that f_N is bijective, so we can talk of a unique discrete cube root.
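A quick sanity check of the bijectivity claim for a toy modulus (my own illustration; cryptographic moduli are of course vastly larger):

# Check that f_N(x) = x^3 mod N is a bijection on Z_N^* for N = 5 * 11 = 55,
# where phi(N) = 40 is not a multiple of 3. Toy parameters only.
from math import gcd

p, q = 5, 11
N = p * q
assert ((p - 1) * (q - 1)) % 3 != 0             # phi(N) = 40, not divisible by 3

Z_N_star = [x for x in range(1, N) if gcd(x, N) == 1]
images = {pow(x, 3, N) for x in Z_N_star}
assert images == set(Z_N_star)                  # cubing permutes Z_N^*
print(f"f_N is a bijection on the {len(Z_N_star)} elements of Z_{N}^*")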

Definition (Discrete Cube Root Problem). Let p and q be two n-bit primes with \varphi(N) not a multiple of 3, where N = pq. Given N and f_N(x) as input, output x.

Definition (Discrete Cube Root Assumption). For every polynomial P, there is no algorithm running in time P(n) that solves the discrete cube root problem with probability at least 1/P(n), where the probability is taken over the random choice of the n-bit primes p and q (with N = pq), the random choice of x, and any internal randomisation of the algorithm.

This Discrete Cube Root Assumption is widely known and studied, and forms the basis of the learning complexity results presented by Kearns and Vazirani.

Theorem (Concepts Computed by Small, Shallow Boolean Circuits are Hard to Learn). Under the Discrete Cube Root Assumption, the representation class of polynomial-size, log-depth Boolean circuits is not efficiently PAC learnable (using any polynomially evaluable hypothesis class).

The result also holds if one removes the log-depth requirement, but this result shows that even by restricting ourselves to only log-depth circuits, hardness remains.

In case any of my blog readers knows: please contact me directly if you’re aware of any resource presenting positive results on the learnability of non-trivial, compositionally closed, restricted classes of Boolean circuits.

The construction used to provide the result above for Boolean circuits can be generalised to neural networks:

Theorem (Concepts Computed by Neural Networks are Hard to Learn). Under the Discrete Cube Root Assumption, there is a polynomial p and an infinite family of directed acyclic graphs (neural network architectures) G = \{G_{n^2}\}_{n \geq 1} such that each G_{n^2} has n^2 Boolean inputs and at most p(n) nodes, the depth of G_{n^2} is a constant independent of n, but the representation class {\mathcal C}_G = \cup_{n \geq 1} {\mathcal C}_{G_{n^2}} is not efficiently PAC learnable (using any polynomially evaluable hypothesis class), even if the weights are restricted to be binary.

Through an appropriate natural definition of reduction in PAC learning, Kearns and Vazirani show that the PAC-learnability of all these classes reduces to the PAC-learnability of functions computed by deterministic finite automata. So, in particular:

Theorem (Concepts Computed by Deterministic Finite Automata are Hard to Learn). Under the Discrete Cube Root Assumption, the representation class of Deterministic Finite Automata is not efficiently PAC learnable (using any polynomially evaluable hypothesis class).

It is this result that motivates the final chapter of the book.

Experimentation in Learning

As discussed above, the PAC model utilises an oracle that returns labelled samples (x, c(x)). An interesting question is whether more learning power arises if we allow algorithms to select x themselves, with the oracle returning c(x) – i.e. not just to be shown randomly selected examples, but to take charge and test their understanding of the concept.

Definition (Membership Query). A membership query oracle takes any instance x and returns its classification c(x).

Definition (Equivalence Query). An equivalence query oracle takes a hypothesis concept h \in {\mathcal C} and determines whether there is an instance x on which c(x) \neq h(x), returning this counterexample if so.
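As a toy illustration of what membership queries buy us (my own warm-up example, not the book’s DFA algorithm): a monotone conjunction over n variables can be identified exactly with n + 1 membership queries.

# Exact learning of a monotone conjunction via membership queries: query the
# all-ones point, then flip each bit in turn. Target is a hypothetical choice.
n = 8
target = {2, 5}                                 # hypothetical target conjunction

def membership_query(x):
    return all(x[i] for i in target)

ones = [1] * n
assert membership_query(ones)                   # all-ones is always positive

learned = set()
for i in range(n):
    probe = ones.copy()
    probe[i] = 0
    if not membership_query(probe):             # flipping a relevant bit flips the label
        learned.add(i)

print(learned == target)                        # True: exact identification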

Definition (Learnable From Membership and Equivalence Queries). The representation class {\mathcal C} is efficiently exactly learnable from membership and equivalence queries if there is a polynomial p(\cdot,\cdot) and an algorithm with access to membership and equivalence oracles such that for any target concept c \in {\mathcal C}_n, the algorithm outputs the concept c in time p(\text{size}(c),n).

There are a couple of things to note about this definition. It appears to be a much stronger requirement than PAC learning, as the concept must be exactly learnt. On the other hand, the existence of these more sophisticated oracles, especially the equivalence query oracle, appears to narrow the scope. Kearns and Vazirani encourage the reader to prove that the true strengthening over PAC-learnability is in the membership queries:

Theorem (Exact Learnability from Membership and Equivalence means PAC-learnable with only Membership). For any representation class {\mathcal C}, if {\mathcal C} is efficiently exactly learnable from membership and equivalence queries, then {\mathcal C} is also efficiently learnable in the PAC model with membership queries.

They then provide an explicit algorithm, based on these two new oracles, to efficiently exactly learn deterministic finite automata.

Theorem (Experiments Make Deterministic Finite Automata Efficiently Learnable). The representation class of Deterministic Finite Automata is efficiently exactly learnable from membership and equivalence queries.

Note the contrast with the hardness result of the previous section: through the addition of experimentation, we have gone from infeasible learnability to efficient learnability. Another very philosophically pleasing result.