So for an experiment I ended up needing conversions between 8-bit and 16-bit samples. To upscale an 8-bit sample to 16 bits, it is not enough to simply shift it left by 8 bits (or multiply it by 256, same difference), because the largest value you get isn’t 65535 but merely 65280. Fortunately, stretching correctly from 8 to 16 bits isn’t too difficult; in fact, it’s quite straightforward.

So merely shifting the values isn’t sufficient. We must make sure that 0 maps to 0 but 255 maps to 65535. Luckily, we notice that 65535 divides evenly by 255, and that 65535/255=257. This gives the conversion pair:

```c
uint16_t _8_to_16(uint8_t x) { return x * 257; } // x is promoted to int before the multiplication
uint8_t _16_to_8(uint16_t x) { return x / 257; }
```

That’s nice, but that’s not always exactly the code we need. A peculiarity of the WAV format is that 8-bit samples range from 0 to 255, but 16-bit samples range from -32768 to 32767. Therefore, we need to worry about that bias. And, while we’re at it, let’s not trust the compiler to promote the operands to a (32-bit) int:

```c
int16_t _8_to_16(uint8_t x) { return ((int32_t)x * 257) - (1 << 15); }
uint8_t _16_to_8(int16_t x) { return ((int32_t)x + (1 << 15)) / 257; }
```

Spiffy. Now, what about, say, 16 to 24 bits and back? 65535 doesn’t stretch as well onto 16777215 as 255 did onto 65535: 16777215/65535=256.0038…, so it won’t be as easy. However… 16777215/65535 is also 65793/257:

```c
int32_t _16_to_24(int16_t x)
{
    int64_t t = x;
    return ((t + (1u << 15)) * 65793) / 257 - (1u << 23);
}

int16_t _24_to_16(int32_t x)
{
    int64_t t = x;
    return ((t + (1u << 23)) * 257 + 256) / 65793 - (1u << 15); // with a bit of rounding!
}
```

The rounding makes the thing perfectly reversible: `_24_to_16(_16_to_24(x))` always gives back `x`, which is nice.

* * *

You might object—rightfully so—that integer operations such as multiplication and division aren’t free, especially on weak processors, and even more so with weird constants like 65793. OK, let’s do this with additions and shifts then.

```c
int16_t _8_to_16(uint8_t x) { return ((x << 8) + x) - (1u << 15); }
uint8_t _16_to_8(int16_t x) { return (x + (1u << 15)) >> 8; }

int32_t _16_to_24(int16_t x)
{
    int32_t t = x;
    t += (1u << 15);
    return ((t << 8) + (t >> 8)) - (1u << 23);
}

int16_t _24_to_16(int32_t x) { return x >> 8; }
```

To stretch 0 to 255 onto 0 to 65535, we may notice, if we think in hex, that since 255 is 0xff and 65535 is 0xffff, 254, being 0xfe, should stretch to 0xfefe (which indeed it does: 254*65535/255=65278=… 0xfefe!). To get back the original 8-bit value, we need only the 8 most significant bits, so a shift right by 8 bits does the trick. In the code above, we also take into account the fact that for WAVE files, 8-bit samples range from 0 to 255 while 16-bit samples range from -32768 to 32767.

The conversion from 16 to 24 bits (and back) uses the same trick, copying the most significant bits onto the missing lower bits. Again, quite easily reversible. The code for 16 to 24 bits must compensate for the sign before adding the least significant bits (that’s why merely setting the least significant bits with an or wouldn’t work). The code for 24 to 16 bits is even simpler, because the bias is also correctly shifted! Wunderbar.