A long time ago, we discussed the problem of using fractions of bits to encode values jointly without wasting many bits. But we considered only special cases back then; what if we need to shift precisely by, say, half a bit?

But what does shifting by half a bit even mean?

Let’s come back to the basics of shifting, shifting by integer amounts. We know (or should know) that

x >> 3 is the same as ⌊x/2^3⌋,

and

x << 2 is the same as x·2^2,

or, in general,

x >> a is ⌊x/2^a⌋,

and

x << b is x·2^b.

These equivalences suggest a direct generalization to shifts a and b that aren't integers. If x >> a is equivalent to ⌊x/2^a⌋, we can certainly have a be a rational number, or something else, and the operation would still be correctly defined, albeit maybe not as intuitively as before. Let's examine an example, say a = 1/2, that is, shifting right by half a bit.

With a = 1/2, we have that

x >> a is ⌊x/2^{1/2}⌋ = ⌊x/√2⌋,

or x >> (1/2.0) is x*0.70710678118.... Likewise, x<<(1/2.0) becomes x·√2, or x*1.4142135623....

* * *

But multiplying by an arbitrary constant, even if it does give you a shift by the desired (fractional) amount, is cumbersome. If we do it with floats, we are liable to the limits of that representation: it will work well for smallish values of x, is subject to rounding errors, and the speed of the operation will strongly depend on whether or not your processor properly supports floats.

One possibility is to use rational approximations, and, surprisingly enough, even for “complicated” constants like 1/√2 and √2, there are “small” fractions that give good approximations. Indeed:

1/√2 ≈ 2/3 ≈ 0.6667,

1/√2 ≈ 5/7 ≈ 0.7143,

1/√2 ≈ 12/17 ≈ 0.7059,

1/√2 ≈ 29/41 ≈ 0.7073,

and the same for √2:

√2 ≈ 3/2 = 1.5,

√2 ≈ 7/5 = 1.4,

√2 ≈ 17/12 ≈ 1.4167,

√2 ≈ 41/29 ≈ 1.4138,

√2 ≈ 99/70 ≈ 1.4143.

* * *

These rational approximations, while not that precise, will allow us to write simple, machine-efficient code, using only integer arithmetic:

