Block Number Formats are (Still!) Direction Preservers

In my previous post, I argued that block number formats can be understood geometrically as direction preservers. That argument relied on an idealization: once a block direction had been chosen, its scale could be set optimally as an arbitrary real number.

Real hardware formats do not usually work that way. In many practical schemes, block scales are quantized very coarsely, sometimes all the way down to powers of two. In particular, in the MX specification, all the concrete compliant formats use E8M0 scaling.

So does the directional picture I painted in my last post survive such brutal quantization of the scales? Here I will argue, in the first of what I hope will be a short sequence of follow-up blog posts, that it does.

From ideal block scales to quantized block scales

Recall the setup from the earlier post. A vector is partitioned into blocks, v = (v_1,\dots,v_B), and each block is approximated as \hat v_b = \beta_b m_b, where m_b is a low-precision mantissa vector and \beta_b is a scalar block scale.

In the earlier post, I assumed that \beta_b was an arbitrary real value, chosen optimally in the least-squares sense. That gave the ideal blockwise representation \hat v.

Now let us keep the same mantissa vectors m_b, but suppose that the scale factors themselves must be quantized. Write the implemented scale as \tilde \beta_b, so that the represented block becomes \tilde v_b = \tilde \beta_b m_b.

It is convenient to define the multiplicative scale error x_b = \frac{\tilde \beta_b}{\beta_b}. Then \tilde v_b = x_b \hat v_b.

Quantizing the block scale does not change the chosen direction of a block at all; it only changes its length. So the only directional distortion comes from the relative rescaling of different blocks.

An exact cosine formula

Let \alpha_b = \frac{\|\hat v_b\|^2}{\sum_j \|\hat v_j\|^2}, so that \alpha_b is the fraction of the ideal projected vector’s energy contained in block b.

Then it can be shown that \cos(\hat v,\tilde v) = \frac{\sum_b \alpha_b x_b}{\sqrt{\sum_b \alpha_b x_b^2}} (the proof is included at the end of this post).
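This formula is easy to check numerically. Here is a small sketch in Python (the block count, block size, and range of scale errors are all invented for illustration), comparing the formula against the cosine computed directly from the two vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative example: 4 blocks of 8 elements each.
blocks = [rng.standard_normal(8) for _ in range(4)]   # the ideal blocks \hat v_b
x = rng.uniform(0.8, 1.2, size=4)                     # per-block scale errors x_b

v_hat = np.concatenate(blocks)
v_tilde = np.concatenate([xb * vb for xb, vb in zip(x, blocks)])

# Direct cosine between the ideal and scale-quantized vectors.
cos_direct = v_hat @ v_tilde / (np.linalg.norm(v_hat) * np.linalg.norm(v_tilde))

# The formula: energy fractions alpha_b, then sum(alpha*x) / sqrt(sum(alpha*x^2)).
energy = np.array([vb @ vb for vb in blocks])
alpha = energy / energy.sum()
cos_formula = (alpha @ x) / np.sqrt(alpha @ x**2)

assert np.isclose(cos_direct, cos_formula)
```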

So the effect of scale quantization on direction depends only on how uneven the factors x_b are across blocks. If all blocks were rescaled by the same factor, direction would be unchanged.

Exponent-only power-of-two scaling

Now consider the coarsest plausible case: each block scale is rounded to the nearest power of two, i.e. \log_2 \beta_b is rounded to the nearest integer. Since the rounded exponent differs from the true one by at most 1/2, each multiplicative error satisfies x_b \in [2^{-1/2},\,2^{1/2}].
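For concreteness, here is one way to implement this rounding and confirm the interval empirically (a sketch; the helper name is made up):

```python
import numpy as np

def round_scale_pow2(beta):
    """Round a positive block scale to the nearest power of two in log space."""
    return 2.0 ** np.round(np.log2(beta))

rng = np.random.default_rng(1)
betas = 2.0 ** rng.uniform(-20, 20, size=100_000)   # random positive scales
x = round_scale_pow2(betas) / betas                 # multiplicative errors x_b

# Every ratio lies in [2^{-1/2}, 2^{1/2}].
assert x.min() >= 2**-0.5 and x.max() <= 2**0.5
```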

So, from our exact cosine formula, we are interested in how small \frac{\sum_b \alpha_b x_b}{\sqrt{\sum_b \alpha_b x_b^2}} can be, when all the x_b lie in the interval [2^{-1/2},\,2^{1/2}].

A simple inequality shows that the answer depends only on the two extreme values of the interval. If all the block rescaling factors lie in [\ell,u], then

\frac{\sum_b \alpha_b x_b}{\sqrt{\sum_b \alpha_b x_b^2}}\ge \frac{2\sqrt{\ell u}}{\ell+u} (proof at the end of this blog post).

In the power-of-two case we have \ell=2^{-1/2}, u=2^{1/2}, so \ell u=1, and therefore

\cos(\hat v,\tilde v)\ge \frac{2}{2^{-1/2}+2^{1/2}} = \frac{2\sqrt2}{3}\approx 0.943.

Equivalently, \angle(\hat v,\tilde v) < 20^\circ.

So even if every block scale is rounded to the nearest power of two, the resulting vector remains within about 20^\circ of the ideally scaled one.

That is the main result of this post.
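The bound is easy to probe empirically. In a quick Monte Carlo sketch in Python (trial count and block counts chosen arbitrarily), the cosine never dips below 2\sqrt2/3:

```python
import numpy as np

rng = np.random.default_rng(2)
bound = 2 * np.sqrt(2) / 3                       # the proved lower bound, ~0.943

worst = 1.0
for _ in range(20_000):
    B = int(rng.integers(2, 33))                 # number of blocks, arbitrary
    alpha = rng.dirichlet(np.ones(B))            # random energy fractions
    x = rng.uniform(2**-0.5, 2**0.5, size=B)     # scale errors in the allowed interval
    worst = min(worst, (alpha @ x) / np.sqrt(alpha @ x**2))

# The smallest cosine seen across all trials respects the bound.
assert worst >= bound
```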

One striking feature of the bound is that it does not depend on the dimension of the vector. The reason is that the worst case is already attained by a two-group energy split: some blocks rounded up, others rounded down. Once those two groups exist, adding more blocks or more dimensions does not make the bound worse, as is apparent from the proof below.
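Concretely, the worst case can be written down with just two blocks. With \ell=2^{-1/2} and u=2^{1/2}, putting two thirds of the energy in a block rounded down and one third in a block rounded up places the weighted mean of the x_b exactly at the harmonic mean of \ell and u, and the cosine hits the bound exactly (a sketch):

```python
import numpy as np

l, u = 2**-0.5, 2**0.5
# Two blocks: 2/3 of the energy rounds down (x = l), 1/3 rounds up (x = u).
# These weights put the weighted mean of x at the harmonic mean 2*l*u/(l+u).
alpha = np.array([2/3, 1/3])
x = np.array([l, u])

assert np.isclose(alpha @ x, 2 * l * u / (l + u))

cos = (alpha @ x) / np.sqrt(alpha @ x**2)
assert np.isclose(cos, 2 * np.sqrt(2) / 3)   # bound attained exactly
```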

20 degrees is less than it sounds

Everyday intuition says that this angle is not huge, but not that small either. In two or three dimensions, that's fair. But angles behave very differently in high-dimensional spaces: there, most random vectors are almost orthogonal to one another, with angles close to 90^\circ, so a guarantee that an approximation remains within 20^\circ of the original vector is much stronger than it would sound in low dimension.
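To make that concrete, here is a quick illustration (dimension and sample count chosen arbitrarily): the cosines of random Gaussian pairs in high dimension concentrate tightly around zero, i.e. the angles cluster near 90^\circ:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10_000                          # high dimension, arbitrary choice

pairs = rng.standard_normal((100, 2, d))
cosines = np.array([a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
                    for a, b in pairs])
angles = np.degrees(np.arccos(cosines))

# All 100 random pairs are within a few degrees of orthogonal.
assert np.abs(angles - 90).max() < 5
```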

Beyond power-of-two

We’ve analysed power-of-two scaling here for two reasons: because it’s in a sense the crudest possible floating-point rounding, and because it’s commonly used in real hardware designs.

That does not mean it’s optimal. But it does raise two further questions. Firstly, we’ve assumed here that the exponent range is sufficiently wide – what if it’s not? Secondly – and relatedly – how much better can this angular bound get by spending some of the scale bits on greater precision?

My view is that the answer becomes clearer once a tensor-wide high-precision scale is introduced, something NVIDIA has recently done. In that setting, the block scales get relieved of their additional duty to capture global magnitude. This will be the subject of the next post on the topic!

Proofs

Readers not interested in the algebra can safely skip this section.

Cosine formula

Recall that \tilde v_b = x_b \hat v_b for each block b.

Then, because the blocks occupy disjoint coordinates, \langle \hat v,\tilde v\rangle = \sum_b \langle \hat v_b,\tilde v_b\rangle = \sum_b x_b \|\hat v_b\|^2.

Also, \|\hat v\|^2 = \sum_b \|\hat v_b\|^2, and \|\tilde v\|^2 = \sum_b x_b^2 \|\hat v_b\|^2.

Therefore \cos(\hat v,\tilde v) = \frac{\langle \hat v,\tilde v\rangle}{\|\hat v\|\,\|\tilde v\|} = \frac{\sum_b x_b \|\hat v_b\|^2}{\sqrt{\sum_b \|\hat v_b\|^2}\sqrt{\sum_b x_b^2 \|\hat v_b\|^2}}.

Now, as per the main blog post, define \alpha_b = \frac{\|\hat v_b\|^2}{\sum_j \|\hat v_j\|^2}.

Writing S=\sum_j \|\hat v_j\|^2, so that \|\hat v_b\|^2=\alpha_b S, the numerator becomes S\sum_b \alpha_b x_b and the denominator becomes S\sqrt{\sum_b \alpha_b x_b^2}, giving

\cos(\hat v,\tilde v)=\frac{\sum_b \alpha_b x_b}{\sqrt{\sum_b \alpha_b x_b^2}}.

\square

20 degree bound

Assume that all the multiplicative error factors lie in an interval x_b \in [\ell,u] with u > \ell > 0.

Let \mu := \sum_b \alpha_b x_b,\qquad q := \sum_b \alpha_b x_b^2.

Then the cosine is just \mu/\sqrt q. Since each x_b\in[\ell,u], we have

(x_b-\ell)(x_b-u)\le 0.

Expanding this gives

x_b^2 \le (\ell+u)x_b - \ell u.

Multiplying by \alpha_b and summing over b gives

q \le (\ell+u)\mu - \ell u.

Therefore \frac{\mu^2}{q}\ge \frac{\mu^2}{(\ell+u)\mu-\ell u}.

Now the weighted mean \mu also lies in the interval [\ell,u], so it remains to minimize

\frac{\mu^2}{(\ell+u)\mu-\ell u} over \mu\in[\ell,u].

Differentiating shows that the minimum occurs at \mu=\frac{2\ell u}{\ell+u}, the harmonic mean of \ell and u.
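As a sanity check on the calculus, a brute-force grid search (over an arbitrary illustrative interval) finds the same minimizer and the same minimum value:

```python
import numpy as np

l, u = 0.3, 1.7                     # any interval with 0 < l < u (values invented)
mu = np.linspace(l, u, 1_000_001)
f = mu**2 / ((l + u) * mu - l * u)

# Minimizer is the harmonic mean of l and u; minimum value is 4*l*u/(l+u)^2.
harmonic = 2 * l * u / (l + u)
assert np.isclose(mu[np.argmin(f)], harmonic, atol=1e-5)
assert np.isclose(f.min(), 4 * l * u / (l + u)**2, rtol=1e-6)
```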

Substituting this value gives

\frac{\mu^2}{q}\ge \frac{4\ell u}{(\ell +u)^2},

and therefore

\frac{\mu}{\sqrt q}\ge \frac{2\sqrt{\ell u}}{\ell +u}.

So we have proved that

\frac{\sum_b \alpha_b x_b}{\sqrt{\sum_b \alpha_b x_b^2}}\ge \frac{2\sqrt{\ell u}}{\ell+u}.

Finally, in the power-of-two case we have

\ell=2^{-1/2} and u=2^{1/2}, so \ell u=1, and hence

\cos(\hat v,\tilde v)\ge \frac{2}{2^{-1/2}+2^{1/2}} = \frac{2\sqrt2}{3}.

Numerically,

\frac{2\sqrt2}{3}\approx 0.943,

so

\angle(\hat v,\tilde v)\le \arccos\left(\frac{2\sqrt2}{3}\right) < 20^\circ.

\square