On rectangles

19/02/2013

While reading Cryptography: The Science of Secret Writing [1], I came across the author's remark (p. 36), offered without further substantiation, that the number of possible distinct rectangles composed of n squares (a regular grid) is quite limited.

Unsubstantiated (or, more exactly, undemonstrated) claims usually bug me when the demonstration is not immediate. In this case, I worked out the solution fairly rapidly, but I would have liked something less vague than “increased in size does not necessarily provide greater possibilities for combination.” Let’s have a look at that problem.
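
Such a rectangle is an a \times b arrangement of unit squares with ab = n, so counting the distinct rectangles amounts to counting the divisor pairs of n, and divisors are scarce. A minimal sketch of the count (the function name is mine, not from the post):

    # Count distinct a-by-b rectangles (a <= b) made of exactly n unit squares:
    # one rectangle per divisor a of n with a <= sqrt(n), since then b = n/a >= a.
    def count_rectangles(n):
        return sum(1 for a in range(1, int(n**0.5) + 1) if n % a == 0)

    for n in (12, 36, 37):
        print("%d -> %d" % (n, count_rectangles(n)))  # 12 -> 3, 36 -> 5, 37 -> 1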


Suggested Reading: Cryptography: The Science of Secret Writing

16/02/2013

Laurence Dwight Smith — Cryptography: The Science of Secret Writing — Dover, 1943, 164 pp. ISBN 0-486-20247-X


Float16

12/02/2013

The possible strategies for data compression fall into two main categories: lossless and lossy. Lossless compression means that you get back exactly what went in, while lossy compression destroys some information in exchange for better compression: you do not retrieve the original data, only a reasonable reconstruction (for various definitions of “reasonable”).

Destroying information is usually performed using transforms and quantization. Transforms map the original data onto a space where the unimportant variations are easily identified, and where quantization can be applied without affecting the original signal too much. For quantization, the first approach is simply to reduce precision, somehow “rounding” the values onto a smaller set of admissible values. For decimal numbers, this operation is rounding (or truncating) to the nth digit (with n smaller than the original precision). A much better approach is to minimize an explicit error function, choosing the smaller set of values in a way that minimizes the expected error (or the maximum error, depending on how you formulate the problem).
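
As a quick illustration of quantization by reduced precision (my sketch, assuming NumPy; not code from the post): casting doubles to half-precision floats rounds each value to the nearest member of a much smaller set of admissible values, and we can measure the error this introduces.

    import numpy as np

    x = np.linspace(0.0, 1.0, 11)            # original double-precision values
    q = x.astype(np.float16)                 # quantize: round to nearest float16
    err = np.abs(x - q.astype(np.float64))   # per-value reconstruction error
    print("max error: %g" % err.max())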


Crates of Bottles

05/02/2013

The other day, I was looking at a crate of 12 water bottles and at the wasted space between them. Why not use a honeycomb arrangement? Or even other bottle shapes?

Well, let us have a mathematical look at bottle packing. A round bottle of radius r occupies a surface (at the bottom of the box) of S=\pi r^2 within a square cell of area 4r^2. The occupancy is therefore

\displaystyle \frac{\pi r^2}{4r^2} = \frac{\pi}{4} \approx 78.5\%.
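
For comparison, and presumably where the honeycomb idea is headed, hexagonal packing of circles achieves the well-known density

\displaystyle \frac{\pi}{2\sqrt{3}} \approx 90.7\%,

so a honeycomb layout wastes noticeably less space than the square grid.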


An (Incomplete) Experiment with Sensors

29/01/2013

When you have lm-sensors installed on Linux, you can invoke sensors to list all detected sensors and their states. It is generally of mild interest, except when you suspect that something’s wrong with your box or, more precisely, when you want to make sure nothing goes wrong.

Amongst the sensors, there are temperature sensors that give you information about the chipset and CPU. You can also find out about fan speeds. So I wondered if I could use the sensors to see whether there’s a significant difference between when the computer is idle and when I am using it. I thought I could, maybe, also see temperature differences between day and night.
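
Here is a minimal polling sketch (mine, not from the post; it assumes lm-sensors is installed and uses sensors -u, whose machine-readable output reports readings on lines such as temp1_input: 45.000):

    import re
    import subprocess
    import time

    def read_temps():
        # Parse every "tempN_input:" reading from the machine-readable output.
        out = subprocess.check_output(["sensors", "-u"]).decode("utf-8")
        return [float(v) for v in re.findall(r"temp\d+_input:\s*([\d.]+)", out)]

    while True:
        print("%s %s" % (time.strftime("%Y-%m-%d %H:%M:%S"), read_temps()))
        time.sleep(60)  # one sample a minute; run long enough to compare day and night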


Piri-Piri Jelly

22/01/2013

And now, for something entirely different: Piri-Piri Jelly.


Damn you, Napier!

15/01/2013

Briggs’ logarithm (often mistakenly understood to be Napier’s) is such a useful function that most of us don’t really think about it, until we have to. Everyone’s familiar with its properties:

\displaystyle\log_b a = \frac{\log a}{\log b}

\log b^x = x \log b

\log a b = \log a + \log b

\displaystyle\log \frac{a}{b} = \log a - \log b

but suddenly,

\log (a+b) = ?

What can we do with this last one?
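
There is no simple closed form, but one standard identity (the post presumably develops something along these lines) factors out one of the terms, say a:

\displaystyle \log(a+b) = \log a + \log\left(1 + \frac{b}{a}\right),

which reduces the problem to the logarithm of a quantity close to 1 whenever b is small compared to a.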


Python Memory Management (Part II)

08/01/2013

Last week, we had a look at how much memory basic Python objects use. This week, we will discuss how Python manages its memory internally, and how things can go wrong if you’re not careful.

To speed up memory allocation (and reuse), Python uses a number of free lists for small objects. Each list contains blocks of a similar size: there is a list for objects 1 to 8 bytes in size, one for objects of 9 to 16 bytes, etc. When a small object needs to be created, either we reuse a free block in the appropriate list, or we allocate a new one.
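
A small, CPython-specific illustration (my example; the behavior is implementation-dependent, not guaranteed): freeing a small object returns its block to the free list for its size class, so a subsequent allocation of the same size often lands in the very same block.

    x = [1, 2, 3]
    addr = id(x)          # in CPython, id() is the object's memory address
    del x                 # the block goes back on a small-object free list
    y = [4, 5, 6]         # a new object in the same size class
    print(id(y) == addr)  # frequently True under CPython, but not guaranteed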


Python Memory Management (Part I)

01/01/2013

[This is a piece I initially wrote while at the LISA at U de M, for the newbie coders in the lab.]

One of the major challenges in writing (somewhat) large-scale Python programs is to keep memory usage at a minimum. However, managing memory in Python is easy—if you just don’t care. Python allocates memory transparently, manages objects using a reference count system, and frees memory when an object’s reference count falls to zero. In theory, it’s swell. In practice, you need to know a few things about Python memory management to get a memory-efficient program running. One of the things you should know, or at least get a good feel about, is the sizes of basic Python objects. Another thing is how Python manages its memory internally.

So let us begin with the sizes of basic objects. In Python, there aren’t a lot of primitive data types: there are ints, longs (an unlimited-precision version of int), floats (which are doubles), tuples, strings, lists, dictionaries, and classes.
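
You can probe those sizes yourself with sys.getsizeof (a quick sketch; the exact numbers vary with the Python version and with 32- versus 64-bit builds, and getsizeof counts only the object itself, not anything it references):

    import sys

    for obj in (1, 10**100, 3.14, (), "", [], {}):
        print("%s: %d bytes" % (type(obj).__name__, sys.getsizeof(obj)))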


I’m back.

01/01/2013

So fall has been somewhat busy for me. I moved and got a new job (as a professor at UQàR), but pretty much everything is fine now. I’m renting a nice house with a view; it’s smallish but comfy.

Well, anyway. I write to say that I will resume my blogging, posting every Tuesday morning, as I used to. Starting today.