Smaller enums

October 17, 2017

As you may have noticed, the efficient representation of information and data structures is kind of a hobby of mine. I often look for ways to reduce the memory footprint, and, more often than not, it’s the small details that are the most annoying, like, for example, enums that take up whatever storage the compiler feels like giving them.

Indeed, the standard does not require that the compiler use the smallest possible type, merely one that can hold all the values (§7.2.6 in ISO/IEC 14882:2011(E)), so you end up with something “convenient”, that is, int. Unless, of course, you do specify the storage.
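
Since C++11, we can fix the underlying type ourselves. A minimal sketch (the 4-byte size of the unconstrained enum is typical, not guaranteed):

#include <cstdint>

enum color_default { red, green, blue };                      // compiler picks the type
enum color_packed : std::uint8_t { red_p, green_p, blue_p };  // storage specified explicitly

static_assert(sizeof(color_packed) == 1, "one byte suffices for three values");
// sizeof(color_default) is typically 4 (an int), but the standard only
// promises a type wide enough to hold all the enumerator values.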



Rational Approximations (Part II)

October 10, 2017

Last week, we noticed a fun connection between lattices and fractions, which helped us get rational approximations to real numbers. Since only lattice points close to the (real-sloped) line are of interest, only those points, each representing a fraction, are candidates for rational approximation, and the closer they are to the line, the better.

But what if we find a point really close to the line? What information can we use to refine our guess?
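
One classic refinement, which may or may not be the route the full post takes, uses mediants: if a/b < x < c/d, the mediant (a+c)/(b+d) always lies strictly between the two bounds, so comparing it to x tightens one side at each step. A sketch:

#include <cstdio>

// Stern-Brocot-style refinement (a sketch, not necessarily the post's
// method): tighten rational bounds on x by repeatedly taking mediants.
void refine(double x, int steps) {
    long a = 0, b = 1;  // lower bound a/b
    long c = 1, d = 0;  // upper bound c/d, standing in for infinity
    for (int i = 0; i < steps; ++i) {
        long p = a + c, q = b + d;        // the mediant p/q
        if (x * q < p) { c = p; d = q; }  // mediant overshoots: new upper bound
        else           { a = p; b = q; }  // mediant undershoots: new lower bound
    }
    std::printf("%ld/%ld <= x <= %ld/%ld\n", a, b, c, d);
}

With x = 3.141592653589793, the successive bounds include 3/1, 22/7, 333/106, and, eventually, 355/113.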



Rational Approximations

October 3, 2017

Finding rational approximations to real numbers may help us simplify calculations in everyday life, because using

\displaystyle \pi\approx\frac{355}{113}

makes back-of-the-envelope estimations much easier. It may also have applications in programming, when your CPU is kind of weak and does not deal well with floating point numbers. Floating point numbers emulated in software are very slow, so if we can dispense with them and use integer arithmetic, all the better.

However, finding good rational approximations to an arbitrary constant is not quite as trivial as it may seem. Indeed, we may think that using something like

\displaystyle a=\frac{\lfloor 10^{6}\,c\rfloor}{10^{6}}

will be quite sufficient, as it gives us six digits of precision, but why use 3141592/1000000 when 355/113 gives you better precision? Clearly, we must find a better way of getting approximations that are simultaneously precise and … well, let’s say cute. Let’s see what we can do.
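
The classical tool for this job is the continued fraction, whose convergents are the best rational approximations in a precise sense. Whether the full post takes that route is not said here, but a quick sketch gives the flavor:

#include <cmath>
#include <cstdio>

// Continued-fraction convergents h/k of x (a sketch; the post may proceed
// differently): h_n = a_n*h_{n-1} + h_{n-2}, and likewise for k_n.
void convergents(double x, int n) {
    long h1 = 1, h2 = 0;  // h_{n-1} and h_{n-2}
    long k1 = 0, k2 = 1;  // k_{n-1} and k_{n-2}
    for (int i = 0; i < n; ++i) {
        long a = static_cast<long>(std::floor(x));
        long h = a * h1 + h2, k = a * k1 + k2;
        std::printf("%ld/%ld\n", h, k);
        h2 = h1; h1 = h;
        k2 = k1; k1 = k;
        x = 1.0 / (x - a);  // breaks down if x becomes (nearly) integral
    }
}

Calling convergents(3.141592653589793, 4) prints 3/1, 22/7, 333/106, and 355/113: precise and, well, cute.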



Whatever sums your floats (Part II)

September 26, 2017

A while ago, Martin Capoušek drew my attention to Kahan’s summation algorithm and asked how it compares to the other methods I presented then.

Well, let’s find out.
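
For reference, the textbook formulation of Kahan’s algorithm keeps a compensation term holding the low-order bits lost by the previous addition:

#include <vector>

// Kahan (compensated) summation: c carries the rounding error of the
// previous addition so it can be re-injected into the next one.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0, c = 0;
    for (double x : xs) {
        double y = x - c;    // apply the pending correction
        double t = sum + y;  // low-order bits of y may be lost here...
        c = (t - sum) - y;   // ...but this recovers (most of) them
        sum = t;
    }
    return sum;
}

Beware that aggressive optimization flags such as -ffast-math may simplify (t - sum) - y to zero and silently defeat the compensation.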



Medians (Part IV)

September 19, 2017

I’ve visited the median problem a couple of times already. The median is the value in the middle of a sorted list of values. If the list is already sorted, finding it isn’t much of a problem. If it isn’t, then we must find some efficient way to do it.
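
One efficient way, which the post may or may not explore, is selection rather than full sorting; the standard library’s std::nth_element does just that, in linear expected time:

#include <algorithm>
#include <vector>

// Median by selection (a sketch): nth_element partially reorders v so
// that the element at mid is the one a full sort would put there.
double median(std::vector<double> v) {  // by value, since it gets reordered
    auto mid = v.begin() + v.size() / 2;
    std::nth_element(v.begin(), mid, v.end());
    return *mid;  // the upper of the two middle values when the size is even
}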



Undo that mess

September 12, 2017

During the last marking season (at the end of the semester), I had, of course, a lot of assignments to grade. For some reason, every semester, I get a good number of students who write code like they just don’t care. I get code that looks like this:

int fonction              (int random_spacing)^M{           ^M
  int            niaiseuses;

  for (int i=0;i<random_spacing;         i++){
                    {
       {
        std::cout
         << bleh
         << std::endl;
    }}

  }
}

There’s a bit of everything. Random spacing. Stray carriage returns left over from conversions from one OS to another. Braces at the end of lines. Of course, they lose points, but that doesn’t make the code any easier to read. In a previous installment, I proposed something to rebuild the whitespace only. Now, let’s see how we can repair as many defects as possible with an Emacs function.
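
The function itself is in the full post, but a first, naive cut could simply chain Emacs’ built-in cleanup commands; a sketch (cleanup-mess is a made-up name):

;; A minimal sketch, not the post's actual function: strip the ^M's and
;; trailing whitespace, expand tabs, then let the mode re-indent everything.
(defun cleanup-mess ()
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (while (search-forward "\r" nil t)  ; remove stray carriage returns
      (replace-match "")))
  (delete-trailing-whitespace)
  (untabify (point-min) (point-max))
  (indent-region (point-min) (point-max)))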



Halton Sequences (Generating Random Sequences VII)

September 7, 2017

Quite a while ago, while discussing Monte Carlo integration with my students, the topic of choosing sample locations came up, and we discussed low-discrepancy sequences (a.k.a. quasi-random sequences). In a low-discrepancy sequence, the values generated look kind of uniform-random, but avoid clumping. A closer examination reveals that they are suspiciously well spaced. That’s what we want in Monte Carlo integration.

But how do we generate such sequences? Well, there are many ways to do so, some more amusing than others, some more structured than others. One of the early examples, the Halton sequence (c. 1964), is particularly well behaved: it generates 0, 0.5, then 0.25 and 0.75, then 0.125, 0.375, 0.625, and 0.875, etc. It does so with a rather simple binary trick.
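
In base 2, where the Halton sequence reduces to the van der Corput sequence, that trick is presumably bit reversal: write the index in binary, mirror its bits across the radix point, and read the result as a fraction in [0,1). A sketch:

#include <cstdint>

// Base-2 radical inverse (a sketch): reverse the 32 bits of n, then
// interpret them as a binary fraction in [0,1).
double radical_inverse(std::uint32_t n) {
    n = (n >> 16) | (n << 16);
    n = ((n & 0x00ff00ffu) << 8) | ((n & 0xff00ff00u) >> 8);
    n = ((n & 0x0f0f0f0fu) << 4) | ((n & 0xf0f0f0f0u) >> 4);
    n = ((n & 0x33333333u) << 2) | ((n & 0xccccccccu) >> 2);
    n = ((n & 0x55555555u) << 1) | ((n & 0xaaaaaaaau) >> 1);
    return n / 4294967296.0;  // divide by 2^32
}

Indices 0, 1, 2, 3, 4, … map to 0, 0.5, 0.25, 0.75, 0.125, …, which is exactly the sequence above.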
