In this blog, we have discussed data compression more than once, but never really its philosophy. Data compression is usually seen as an “enabling technology”, a technology that helps make other things possible, but what makes it so interesting is that it ultimately gives us insights into the very nature of data, of information itself.
Data compression is not, however, simply about “dropping redundant bits” (although that is a handy way of putting it when dismissing the matter with non-technical people), because bits very rarely just sit there, conspicuously redundant1. No, data compression is about transforming the data into a representation in which there is, after analysis, exploitable redundancy.
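To make the point concrete, here is a minimal sketch of my own (not from the talk): delta coding as a redundancy-exposing transform. A slowly varying sequence may contain no literal repeats, yet its successive differences cluster around a few small values, which a later coding stage can exploit.

```python
def delta_encode(samples):
    """Replace each sample by its difference from the previous one."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

# No value repeats more than twice in the raw signal...
signal = [100, 101, 103, 104, 104, 105, 107, 108]
# ...but the deltas use only four distinct symbols: [100, 1, 2, 1, 0, 1, 2, 1]
deltas = delta_encode(signal)
assert delta_decode(deltas) == signal
```

The transform itself saves nothing; it merely reshapes the data so that the redundancy becomes visible to the stages that follow.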
To illustrate data compression as a whole, I presented, necessarily cursorily, the different stages of a compression engine, whose typical embodiment is a codec (a COder/DECoder pair). The stages are not always distinct (often they are intertwined), but they can be summarized as:
- Transformation into a representation with exploitable redundancy,
- (Optionally) Quantization,
- Entropy Coding.
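The stages above can be sketched end to end in a toy pipeline (a hypothetical illustration of mine, not code from the slides): a delta transform, uniform scalar quantization, and, standing in for a real entropy coder, the Shannon entropy of the resulting symbol stream as the ideal code length.

```python
import math
from collections import Counter

def transform(samples):
    # Stage 1: decorrelate by taking successive differences.
    return [s - p for s, p in zip(samples, [0] + samples[:-1])]

def quantize(values, step):
    # Stage 2 (optional, lossy): map each value to a bin index.
    return [round(v / step) for v in values]

def entropy_bits(symbols):
    # Stage 3 proxy: ideal code length (bits/symbol) for this stream.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

signal = [100, 101, 103, 104, 104, 105, 107, 108, 110, 111]
coded = quantize(transform(signal), step=1)
print(f"{entropy_bits(signal):.2f} bits/symbol raw, "
      f"{entropy_bits(coded):.2f} bits/symbol after the transform")
```

With `step=1` the quantizer is lossless here; a larger step would trade distortion for an even cheaper symbol stream, which is exactly where the stages stop being independent of one another.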
I tried to stay away from data- and codec-specific details as much as possible, to give the broadest sense of the techniques and algorithms used in data compression, but many concepts are best explained using specific examples. For one thing, I explain how distortion measures are necessarily data-specific. I also discuss the compaction of energy into a (small) subspace for signal processing, among many other topics.
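Energy compaction is easy to demonstrate with a small example of my own (again, not taken from the slides): a type-II DCT of a smooth, correlated signal concentrates most of the signal's energy into its first few coefficients, so the remaining ones can be quantized coarsely or dropped outright.

```python
import math

def dct2(x):
    """Naive orthonormal type-II DCT (fine for a demo, O(n^2))."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# A smooth ramp: highly correlated samples, no literal redundancy.
signal = [float(i) for i in range(16)]
coeffs = dct2(signal)

total = sum(c * c for c in coeffs)   # orthonormality preserves energy
head = sum(c * c for c in coeffs[:4])
print(f"first 4 of 16 coefficients hold {100 * head / total:.1f}% of the energy")
```

Because the transform is orthonormal, total energy is preserved (Parseval); compaction only redistributes it into a small subspace, which is what makes aggressive quantization of the tail coefficients cheap in distortion.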
You can get the PDF of the slides.
1 But sometimes they really do. I am sure you can think of data that’s really repetitive, likely XML-coded ;)