This week, let’s discuss dithering, a technique based on error diffusion used in print and computer graphics to soften the effects of harsh color quantization. There are many types of dithering (sometimes called halftoning, although the techniques are not all of the same kind), some optimized for monochrome or color screen printing, some merely convenient to use in computer graphics.

When we quantize an image to a very small number of colors, all kinds of artifacts appear: false contours where two quite different colors meet that ought to fade smoothly into one another, noticeable color shifts, etc. Dithering tries to alleviate this problem by mixing colors using random-looking dot patterns, trading color resolution for spatial noise. Even though the image is composed of dots of very different colors, we still perceive the color mix, and that effect, while not adding any objective quality to the image, does improve its *perceived* quality.

The main idea, to preserve some sort of color precision, is to make sure that a region has, *on average*, the right intensity of each color component. That is, if we take a given region of the dithered image, the average of the red component over its pixels should be as close as possible to the average over the same region of the original image. This is achieved by “pushing” the quantization error made on one pixel to its neighbors, so that when they are quantized in turn, they are more likely to contribute correctly to the local average.
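As a toy illustration of this averaging idea (the function and values below are mine, not the article’s), here is a one-dimensional error diffusion that quantizes a row of gray values to pure black or white, pushing each pixel’s entire error onto the next pixel:

```python
def dither_row(row):
    """Quantize a row of gray values (0..255) to 0 or 255, carrying the
    whole quantization error of each pixel forward to the next one."""
    out, err = [], 0
    for v in row:
        v += err                      # corrected value for this pixel
        q = 255 if v >= 128 else 0    # nearest reproducible value
        err = v - q                   # error pushed to the next pixel
        out.append(q)
    return out
```

On a constant row of 100s, roughly 100/255 of the output pixels come out white, and the running sum of the output stays within one pixel’s quantization error of the original’s sum, which is exactly the local-average property described above.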

If $v$ is the original pixel’s value (let us neglect for now that pixels may have more than one color component) and $\hat{v}$ its chosen representation (say, the closest reproducible color, or maybe a color from a palette), then the error is

$e = v - \hat{v}$.

This error is spread in the neighborhood of the pixel, some part of the error here, some part there. The diffusion of the error is controlled by a diffusion matrix that indicates what part of the error is spread where:

At the end of each arrow we would find a coefficient that determines the part of the error that is sent to that pixel. Code performance considerations dictate that the coefficients be integers divided by a “machine-friendly” number, that is, a power of two. Say the divisor is 16 and the coefficient is 5; then the pixel receives 5/16th of the error.
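With a power-of-two divisor, taking a coefficient’s share of the error reduces to a multiply and a shift, with no division at all. A small sketch (the function name is mine):

```python
def share(err, coeff, shift=4):
    """Part of err sent to one neighbor: coeff / 2**shift of the error.
    With shift=4 the divisor is 16, as in the example above."""
    return (err * coeff) >> shift

# With divisor 16 and coefficient 5, a neighbor receives 5/16th of the error:
part = share(23, 5)
```

Note that for negative errors the arithmetic shift rounds toward minus infinity; a careful implementation may prefer symmetric rounding, at a small cost.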

If the image has more than one component (i.e., is not black-and-white or grayscale), then the error from each color component diffuses to the neighbors using the same diffusion matrix: the red error diffuses over the neighbors’ red components, the green over their green components, and so on for each component, regardless of the colorspace you’re using.
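Putting the pieces together, here is a minimal per-channel error-diffusion sketch using the Floyd–Steinberg coefficients (7, 3, 5, 1 over 16); the image representation and helper names are mine, not the article’s:

```python
def nearest(pixel, palette):
    """Index of the palette color closest to pixel (squared distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(pixel, palette[i])))

def floyd_steinberg(image, palette):
    """Dither image (h x w list of [r, g, b] lists) to palette, in place.
    Each color component diffuses with the same matrix."""
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = palette[nearest(old, palette)]
            image[y][x] = list(new)
            err = [o - n for o, n in zip(old, new)]
            # right, below-left, below, below-right, with divisor 16
            for dx, dy, c in ((1, 0, 7), (-1, 1, 3), (0, 1, 5), (1, 1, 1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    image[ny][nx] = [v + (e * c) // 16
                                     for v, e in zip(image[ny][nx], err)]
    return image
```

Scanning left to right, top to bottom, the error only ever flows to pixels not yet quantized, so a single pass suffices.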

Let us have a look at some of the matrices.

- Floyd-Steinberg, with divisor 16 (in the diagrams below, `*` marks the pixel being quantized and each number is the coefficient applied to its error):

        .  *  7
        3  5  1

- Burkes, with divisor 32:

        .  .  *  8  4
        2  4  8  4  2

- Stucki, with divisor 42:

        .  .  *  8  4
        2  4  8  4  2
        1  2  4  2  1

- Jarvis, Judice, and Ninke, with divisor 48:

        .  .  *  7  5
        3  5  7  5  3
        1  3  5  3  1

- Atkinson, with divisor 8:

        .  *  1  1
        1  1  1  .
        .  1  .  .

(This matrix is due to Bill Atkinson, one of the early Apple programmers; we owe him MacPaint and the invention of the menu bar. I have found very few other references to this dithering technique.) Atkinson’s main innovation is to have chosen a divisor larger than the sum of the coefficients, thus somewhat muffling the error and keeping it from propagating too far.
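One way to see the effect (using the commonly cited six-coefficient layout for Atkinson’s matrix):

```python
# Atkinson's matrix: six coefficients of 1 with a divisor of 8, so only
# 6/8 = 3/4 of the error survives each diffusion step; the remaining
# quarter is simply dropped, damping the error geometrically.
atkinson = [1, 1, 1, 1, 1, 1]
propagated = sum(atkinson) / 8
```

Compare this to Floyd-Steinberg or Burkes, whose coefficients sum exactly to the divisor, so the full error is conserved and can travel arbitrarily far.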

- And, while we’re at it, let me propose one, with divisor 14:

*

* *

Let us have a look at the results. First, let’s consider an image:

(This image is drawn from a(n open) Kodak dataset.) Let’s quantize it to 16 colors (with $k$-means, say). This is the result of mapping each pixel to the nearest color in the palette:
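The no-dithering mapping is simply an independent nearest-color lookup per pixel, with no error carried anywhere; a sketch (the palette and pixels below are illustrative, not the Kodak image’s):

```python
def quantize(image, palette):
    """Map every pixel to its nearest palette color (no error diffusion)."""
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [[nearest(p) for p in row] for row in image]

pal = [(0, 0, 0), (255, 255, 255)]
img = [[(10, 10, 10), (200, 200, 200)]]
flat = quantize(img, pal)
```

Since nearby pixels with similar values all snap to the same palette entry, this is precisely what produces the flat regions and false contours.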

The result of quantizing the colors without dithering is exactly this: false contours, large regions that are now exactly flat… not very interesting! Now, with dithering:

*

* *

Let’s do it again with another image from the same data set:

In 16 colors:

With dithering:

*

* *

Dithering seems pretty futile in our age of 24-bit displays (and advanced image compression techniques), but it still has its uses in less capable systems such as, maybe, tablets, phones, and portable game devices, not all of which have very good screens. One can incorporate dithering at the driver level to compensate for the low color resolution of the device, unbeknownst to the applications. If we choose the diffusion matrix wisely, we can probably have a very efficient implementation, or better yet, have dithering supported directly at the hardware level.

> Dithering seems pretty futile in our age of 24-bit displays

Dithering can also find a new use in the HDR-to-24-bit transformation after tone mapping.

“Age of 24-bit displays?” Are you sure? A lot of LCD monitors are 18-bit, even 15-bit. I’ve noticed my computer monitor will quickly switch between different dither patterns for certain colors (a.k.a. temporal dithering). By contrast, my laptop screen uses some sort of ordered dithering without temporal dithering. In both cases, it’s hard to notice unless you’ve got areas of the same color or smooth gradients and are up close to the screen.

I have noticed 18-bit screens (and haven’t seen 15-bit in a long time) but never ordered dithering. On what device did you observe this?