## Deriving the 1 bit = 6 dB rule of thumb

This week, a more mathematical topic. Some time ago, friends and I were discussing the fidelity of various signals, and how many bits were needed to digitize a signal optimally, given known characteristics such as its spectrum and signal-to-noise ratio.

Indeed, at some point, adding bits only adds precision to represent the noise in the signal. There’s a rule of thumb that says that for every bit you add, you can represent a signal with $\approx 6\,dB$ more signal-to-noise ratio. Let me show you how to derive such a result.

The SNR, or signal-to-noise ratio, is a measure that compares the power of the signal to the power of the noise. Both amplitudes are squared (by the very definition of power). We therefore have:

$SNR=\frac{P_{signal}}{P_{noise}}=\left(\frac{A_{signal}}{A_{noise}}\right)^2$

where $A$ stands for amplitude and $P$ for power. A frequent measure to map the SNR to a (somewhat linear) perceptual measure is the decibel, and the SNR is often expressed in decibels:

$dB(SNR) = 10 \log_{10} SNR$

I usually use $\ln$ for the natural (base $e$) logarithm, $\lg$ for the base-2 logarithm most often used in computer science, and $\log$ for the base-10 logarithm; but to keep things unambiguous, I’ll write $\log_{10}$ to make clear it is the base-10 logarithm. So, back to the SNR: when we measure the amplitude of the signal, we really mean its average amplitude, and likewise for the noise. We can rewrite the SNR equation as:

$SNR=\left(\frac{E[signal]}{E[noise]}\right)^2$

OK, let us derive that 1 bit $\approx 6\,dB$ result. Let us first suppose that we are interested in the PSNR (peak signal-to-noise ratio) of an $n$-bit signal where only the last bit is corrupted. Considering the PSNR lets us throw away the expectations (the $E$ in the formulæ) and write:

$dB(PSNR)=10 \log_{10} \left(\frac{2^n-1}{1}\right)^2$

because the maximum value of an $n$-bit integer is not $2^n$ but $2^n-1$, and because we suppose, as a simplification, that the last bit is always wrong, which contributes a (squared) error of at most 1. So, simplifying the previous formula, we have:

$dB(PSNR) \geqslant 10 \log_{10}(2^n-1)^2$

$= 10 \log_{10}(2^n(1-2^{-n}))^2$

$= 10 \log_{10}(2^n)^2 + 10 \log_{10}(1-2^{-n})^2$

$= 20\,n \log_{10} 2 + 20 \log_{10}(1-2^{-n})$

$\approx 6.02059\ldots\,n$

because the second term, $20 \log_{10}(1-2^{-n})$, goes to zero very rapidly as $n$ grows. Already, $n=10$ gives $\approx{}-0.0085$. Therefore, it is true that adding a bit (increasing $n$) adds (at least) about $6.02\,dB$ to the peak signal-to-noise ratio. Using this relation, it is now easy to find how many bits you need to encode a signal with sufficient precision given the amount of noise embedded in it.
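This is easy to check numerically. Here is a small Python sketch (the helper name `db_psnr` is mine) that computes the PSNR for a few bit depths and the gain brought by one extra bit:

```python
import math

def db_psnr(n):
    """Peak SNR, in dB, of an n-bit signal whose last bit is always wrong."""
    return 20 * math.log10(2**n - 1)

for n in (8, 12, 16):
    gain = db_psnr(n + 1) - db_psnr(n)
    print(f"{n:2d} bits: {db_psnr(n):6.2f} dB, one more bit adds {gain:.4f} dB")
```

Already at $n=8$ the per-bit gain is within a few hundredths of a dB of $20 \log_{10} 2 \approx 6.0206$, and it converges to it as $n$ grows.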

(Conversely, you now know that if a sound card touts a $90\,dB$ SNR, it really means that it offers $90/6.02 \approx 15$ bits of resolution, even if it claims to have 24-bit DACs.)
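Inverting the rule gives the effective bit count implied by a quoted SNR figure; a one-line Python sketch (the function name `effective_bits` is mine):

```python
import math

def effective_bits(snr_db):
    """Effective resolution implied by an SNR figure, at ~6.02 dB per bit."""
    return snr_db / (20 * math.log10(2))

print(effective_bits(90))  # about 14.95 bits
```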

However, that derivation holds only for the PSNR, which doesn’t account for all of the signal’s characteristics. A much better approximate measure is the RMS, for root mean square—not the eccentric hippie. The RMS is a more realistic measure for undulatory phenomena, as it approximates the average power of the signal. The RMS of a function $f$ on the interval $[a,b]$ is given by:

$RMS(f) = \sqrt{ \frac{1}{b-a} \int_{a}^b f(t)^2 \, dt}$

which takes different forms depending on the function $f$. Taking the sine as a “typical function” (which remains entirely debatable), over a complete period (that is, on the interval $[0,2\pi]$) we have:

$RMS(\sin)= \sqrt{ \frac{1}{2\pi}\int_{0}^{2\pi}\sin(t)^2 \, dt}$

$= \sqrt{\frac{1}{2\pi}\pi}$

$= \frac{1}{\sqrt{2}}$

so the RMS of a sine is $\frac{1}{\sqrt{2}}$. Let us plug this value, as the amplitude factor $A$, into the previous derivation:

$dB(RMS) \geqslant 10 \log_{10}(A(2^n-1))^2$

$= 20 \log_{10}(2^n-1) + 20 \log_{10}A$

$= 20\,n \log_{10} 2 + 20 \log_{10}(1-2^{-n}) + 20 \log_{10}A$

which leads to the expected result. Since $A=\frac{1}{\sqrt{2}}$, the last term is $\approx -3.010 dB$, so the final result is still that each additional bit yields an increase of (at least) $\approx 6.02 dB$.
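Both pieces of the RMS version can be verified numerically. This Python sketch approximates the integral defining $RMS(\sin)$ with a plain Riemann sum and evaluates the resulting $20 \log_{10} A$ offset:

```python
import math

# Riemann sum for (1/2π) ∫ sin(t)² dt over one full period [0, 2π]
N = 100_000
mean_square = sum(math.sin(2 * math.pi * k / N) ** 2 for k in range(N)) / N
rms = math.sqrt(mean_square)

print(rms)                   # ≈ 0.7071, i.e. 1/√2
print(20 * math.log10(rms))  # ≈ -3.0103 dB
```

The mean square comes out to exactly $\frac{1}{2}$ (up to floating-point rounding), confirming both the $\frac{1}{\sqrt{2}}$ value and the $\approx -3.01\,dB$ term.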

This result bugged me for a long time until I figured out how to derive it by myself. As you can see, there’s nothing to it: we start from hypotheses (such as the signal being sinusoidal) and apply the decibel formula quite mechanically.

### 8 Responses to Deriving the 1 bit = 6 dB rule of thumb

1. justin says:

PSNR is actually 6.02N + 1.76… aren’t you missing something?

2. justin says:

Found what was missing (notation: sqrt = square root, ^ = to the power):
rms signal = (2^(n-1))*q/sqrt2
rms noise = q/sqrt12

PSQNR = 20log(rms signal/rms noise)
re-arranging with a little algebra
PSQNR = 20log(sqrt6*(2^(n-1))) //q’s cancel, sqrt2/sqrt12 = sqrt6
PSQNR=20log(sqrt6) + 20log(2^(n-1))
PSQNR = 7.7815 + (n-1)20log(2)
PSQNR = 7.7815 + (n-1)*6.02
PSQNR = 1.76 + 6.02N

• justin says:

One last comment should explain the signal/noise values…
signal is the full-scale amplitude, which for full scale converters is (2^(n-1))*q, divided by sqrt2 for the rms value

noise is quantisation noise, i.e. half the least significant bit; if we model the error as a sawtooth across the quantisation step we divide by sqrt3: (LSB/2)/sqrt3 = q/sqrt12

3. Wedge009 says:

Thanks for the article. Just wanted to point out what seems to be an ‘auto-correct’ error. 15 bits of restitution? I think you were meaning to write 15 bits of resolution.

4. […] ten years ago I wrote an entry about the “1 bit = 6 dB” rule of thumb. This rule states that for each […]
