## Fractional Bits (Part III)

A while ago, we discussed the problem of jointly encoding values using fractions of bits so as not to waste too many of them. But we only considered special cases then; what if we need to shift precisely by, say, half a bit?

But what does shifting by half a bit even mean?

Let’s come back to the basics of shifting, shifting by integer amounts. We know (or should know) that

x >> 3 is the same as $x \div 2^3=x \div{8}=2^{-3}x$

and

x << 2 is the same as $x \times 2^2=x\times{4}=2^2 x$

or, in general,

x >> a is $2^{-a} x$,

and

x << b is $2^b x$.

These equivalences suggest a direct generalization to shift amounts $a$ and $b$ that aren’t integers. If x >> a is equivalent to $2^{-a} x$, we can certainly let $a$ be a rational number, or something else entirely, and the operation remains well defined, albeit maybe not as intuitively as before. Let’s examine an example, say $a=\frac{1}{2}$, that is, shifting right by half a bit.

With $a=\frac{1}{2}$, we have that

x >> a is $\displaystyle x 2^{-\frac{1}{2}} = \frac{x}{\sqrt{2}}$,

or x >> (1/2.0) is x*0.70710678118.... Likewise, x<<(1/2.0) becomes $x 2^{\frac{1}{2}}=x\sqrt{2}$, or x*1.4142135623....

* * *

But multiplying by an arbitrary constant, while it does give a shift by the desired (fractional) amount, is cumbersome. If we do it with floats, we run into the limits of that representation: it works well for smallish values of x, is subject to rounding errors, and the speed of the operation depends strongly on whether or not your processor has proper floating-point support.

One possibility is to use rational approximations, and, surprisingly enough, even for “complicated” constants like $\sqrt{2}$ and $\frac{1}{\sqrt{2}}$, there are “small” fractions that give good approximations. Indeed:

$\displaystyle \frac{1}{\sqrt{2}}=0.70710678\ldots$,

$\displaystyle \frac{29}{41}=0.707317\ldots$,

$\displaystyle \frac{70}{99}=0.707070707\ldots$,

$\displaystyle \frac{169}{239}=0.707112970\ldots$,

$\displaystyle \frac{985}{1393}=0.70710696338\ldots$

and likewise for

$\displaystyle \sqrt{2}=1.414213562373\ldots$,

$\displaystyle \frac{41}{29}=1.413793103\ldots$,

$\displaystyle \frac{99}{70}=1.414285714\ldots$,

$\displaystyle \frac{577}{408}=1.414215686274\ldots$,

$\displaystyle \frac{1393}{985}=1.414213197969\ldots$

* * *

These rational approximations, while not extremely precise, allow us to write simple, machine-efficient code using only integer arithmetic:

```c
unsigned shift_right_half_a_bit(unsigned x)
{
    // x / sqrt(2), approximated as x * 985/1393
    return (x*985)/1393;
}

unsigned shift_left_half_a_bit(unsigned x)
{
    // x * sqrt(2), approximated as x * 1393/985
    return (x*1393)/985;
}
```



* * *

Some encoders push the idea of shifting by fractional amounts to its limit. For example,
