The other day—well, a year ago or so—I was invited by SMPTE Montréal to visit CBC’s digital TV studios in Montréal. We were shown around, even the somewhat small control rooms. Amongst all the displays, dials, monitors, and misc. blinkenlights, I noticed a small LCD display showing a hexagonal projection of the current show’s color gamut, probably for quality assessment purposes. I thought it was pretty cool, actually.

Let’s see how we can realize this projection with as little CPU time as possible.

Let us take some image (sorry, don’t know the source):

It suffices to sample the image, as it is quite unnecessary to scan the whole picture to display a cloud of colors as shown above. But that’s a detail: you can adjust the sampling density of the original image depending on the size of your final display. The original source image, the camel, is small enough that it was entirely scanned to produce the gamut.

Now, how do we map a point into the hexagon? Well, for starters, let’s stay in the RGB domain. Not that I have anything against fancier color spaces, but for what I’m trying to show here they’re an unnecessary complication. The desired projection is as if we were looking at the RGB cube from the white corner down to the black corner.

We are therefore interested in finding two functions, $x(r,g,b)$ and $y(r,g,b)$, that map an RGB point onto the plane.

Let us first look at the desired result and the corresponding original space:

We will try to find the point on the $R$–$G$ plane where a pixel with color $(r,g,b)$ would land, ignoring $b$ for the moment.

If the projection is centered at $(c_x, c_y)$, we therefore find that:

$$x = c_x - (r-g)$$

$$y = c_y - \tfrac{1}{2}(r+g)$$

The $\tfrac{1}{2}$ is added just to squish the hexagon a bit so that the projection looks somewhat isometric. We haven’t included the $b$ axis yet. But since it corresponds to elevation, it must be mapped somehow to $y$.

Indeed, we get:

$$y = c_y + b - \tfrac{1}{2}(r+g)$$

The two equations can be implemented in a C-like language as:

int x = cx - (r - g);
int y = cy + b - (r + g) / 2; // or maybe (r+g)>>1 if the compiler sucks

Scanning each of the camel image pixels and projecting them using the above transform, you get the projected gamut shown at the top of this post.

Now, how does it look when we scan a whole video sequence? Well, let’s use some of the time-honored standard video sequences such as “football” and “bus”. The football sequence is a short clip from some football game; it has been around for so long that I think all the players have died of old age. The bus sequence shows a pan following a bus in an urban setting. Neither is very exciting, but they’re standard and easily found.

A typical frame from the “football” sequence:

Resulting in the live gamut (a big fat animated gif—click on it if your browser doesn’t animate it already):

Similarly with “bus”:

with live gamut:

What if the video you’re looking at is somehow borked? Let’s say the green channel is off. For the “football” sequence, the live gamut is now rather weird:

…and so the defect is spotted right away—as long as someone looks at the display.

*

* *

Small mappings from, say, 3D to 2D or from 2D to 2D, bijective or not, are useful more often than we generally think. For example, in compact tree storage I presented one such bijective function, mapping a region of the plane (the tree) to a line. In other cases, like triangular matrices, for example, it may be convenient to devise a compact mapping from one arbitrary-shaped space to a linear space for storage purposes.

In this case, we mapped, using a surjection, a 3D space onto a plane. The mapping could have been bijective, but here it is not: all gray pixels, those with $r=g=b$, lie on the black–white axis, which projects to the single point $(c_x, c_y)$ at the center of the hexagon.

Another example of such useful functions are folding and pairing functions. A folding function is a bijective map between the integers, $\mathbb{Z}$, and the naturals, $\mathbb{N}$. Pairing functions are bijective functions that map a tuple to a single natural number, something like $f:\mathbb{N}\times\mathbb{N}\to\mathbb{N}$. The really interesting ones aren’t all that trivial.
