## Stretch Codes

About ten years ago, I was working on a project that needed to log lots of information, and the database format chosen at the time was Microsoft Access. The decision seemed reasonable since the data would be exported to other applications, and the people who would process it lived in a Microsoft-centric world, using Excel and Access (and VBA) on a daily basis. However, we soon ran into a major problem: Access does not allow a single file to be larger than 2GB.

After sifting through essentially random error messages that had nothing to do with the real problem, we isolated the bug: we had hit the maximum file size. “Damn, that’s absurd!” I remember thinking. “This is the year 2000! If we don’t have flying cars, can we at least have databases larger than 2GB!?” It’s not as if 2GB were a file size set far off in the future, as exabyte hard drives are today. There were 20-something GB drives available back then, so the limitation made no sense whatsoever to me, and still doesn’t. After the initial shock, I got to thinking about why there was such a limitation, and what bizarre design decision led to it.

The first thing that comes to mind is that all internal pointers are 32 bits. But then you’d expect the maximum file size to be 4GB, so perhaps they used only 31 bits. Maybe they used the sign bit to indicate free space in the file, thus effectively limiting the file size to 2GB? Then it dawned on me that addressing a database down to the byte is wasteful. While I didn’t know the internal format, I surmised that each record contained, in addition to its data, a number of pointers to other records in the database (next, previous, etc.), so even if a record’s data consists of a single byte, the record still occupies a fair amount of space because of the extra, hidden data necessary for database operation. So, basically, a record can be tens of bytes wide, and they chose 32-bit pointers to limit the memory expenditure. 32 bits seemed like a good choice: larger than 16, yet not as big and cumbersome as 64-bit pointers (at the time, 64 bits was not a native data size; it had to be emulated).

My first idea was to address the database at the line level, say 16 bytes per line. With 32-bit pointers (not using the dead-bit trick), you’d be able to address $2^{36}$ bytes (64GB). So, that’s it, problem solved: the effective address in the file is given by `effective_addr = pointer * 16`, and you allocate storage in 16-byte lines, for $2^{32}$ different storage locations.

Then again, maybe a 16-byte line is a bit large. What if I have very small records? How much memory do I end up wasting to internal fragmentation? Maybe a lot. What can we do to still store pointers in 32 bits but address a lot more memory, without paying the full cost of internal fragmentation?

Maybe what we need is a code à géométrie variable, a variable-geometry code: one that does not address the storage space uniformly, one that stretches the addressing space. What if we divided the address space into four regions of varying granularity, and reinterpreted the 32-bit pointer using an unusual convention? Say we used the following addressing scheme:

| MSBs of Pointer | Granularity (bytes) | Addressing range |
|---|---|---|
| 00 | 1 | $0\ldots{}2^{30}-1$ |
| 01 | 4 | $2^{30}\ldots{}2^{30}+4\cdot{}2^{30}-1$ |
| 10 | 8 | $2^{30}+2^{32}\ldots{}2^{30}+2^{32}+8\cdot{}2^{30}-1$ |
| 11 | 32 | $2^{30}+2^{32}+2^{33}\ldots{}2^{30}+2^{32}+2^{33}+32\cdot{}2^{30}-1$ |

This scheme allows the addressing of $2^{30}+2^{32}+2^{33}+2^{35}=48318382080$ bytes (45GB), with the last 32GB allocated in chunks of 32 bytes. Varying the scheme would allow one to change the number and granularity of the regions to better suit one’s needs, while still not spending more than 32 bits per pointer.

Translating a logical 32-bit pointer into a 64-bit linear address is not a very expensive process. For the above example, it pretty much boils down to something like

```c
uint64_t stretch_to_linear(uint32_t ptr)
{
    /* Region base offsets and shifts (log2 of the granularity),
       derived directly from the table above. */
    static const uint64_t base[]  = { 0,
                                      1ull << 30,
                                      (1ull << 30) + (1ull << 32),
                                      (1ull << 30) + (1ull << 32) + (1ull << 33) };
    static const unsigned shift[] = { 0, 2, 3, 5 };
    unsigned range = ptr >> 30;
    return base[range] + ((uint64_t)(ptr & 0x3fffffff) << shift[range]);
}
```

Which I’m sure you can optimize quite a bit.

The storage allocation strategy must be fully aware of the addressing scheme to be efficient. It would send smaller data to lower addresses whenever possible, resorting to a wasteful upper address only when the lower addresses are exhausted, and it would prefer upper addresses for larger records. At some point, many contiguous locations owned by the same record could be moved to a higher address, freeing space for smaller records without necessarily wasting more space.

I think the general scheme can be used to address much smaller amounts of data as well. While it may be profitable to use stretch codes to address well over 4GB while still using 32-bit pointers, we could also scale the code down to an 8-bit pointer and use it to address data within an 8KB memory block.

### 4 Responses to Stretch Codes

1. […] more aggressive technique would use stretch codes, a technique I discussed quite a while ago. The idea is to use, say, 32 bits to point to regions […]

2. […] solution is to use stretch codes, as I proposed a while ago, trading off precision of addressing for shorter pointers. However, […]

3. Armando Herrera says:

“we could scale the code down to a 8 bit pointer and use it to address data within a 8KB memory block.” Could you explain this a little bit more?

• What I had in mind, I guess (it’s been 10 years), is that we could use 0–63 for 1-byte addresses (64 bytes), 64–127 for 8-byte addresses (for 512 bytes), 128–191 for 32-byte addresses (2K), and 192–255 for 128-byte addresses (for the remaining memory).

I’m sure we can devise a much better assignment. Ranges do not have to be powers of two: you could have 10 codes, or 17, or 23, for a given range.