Faster than Bresenham’s Algorithm?

28/07/2009

There are always a number of graphics primitives you will be using in all kinds of projects. If you’re lucky, you’re working on a “real” platform that comes with graphics libraries offering abstractions over the primitives of the OS and, ultimately, of its hardware. However, it’s always good to know what’s involved in a particular graphics primitive and how to recode it yourself should you need to do so, either because you have no library to help you, or because pulling in a whole library only to draw, say, a line within an image buffer would contaminate your project badly.


Lines are something we draw a lot. Perfectly horizontal or vertical lines have very simple algorithms. Arbitrary lines are a bit more complicated to get just right. This week, let us take a look at a few algorithms to draw lines. First, we’ll discuss a naïve algorithm using floating point. We’ll also have a look at Bresenham’s algorithm, which uses only integer arithmetic. Finally, we’ll show that we can do better than Bresenham if we use fixed-point arithmetic.
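To give a feel for the naïve approach, here is a minimal floating-point sketch (a simple DDA, with a hypothetical set_pixel standing in for whatever the image buffer actually provides); the full post compares this against Bresenham and the fixed-point variant:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Stand-in for the real pixel-plotting primitive; in real use, this
// would write into the image buffer.
static void set_pixel(int x, int y) { std::printf("(%d,%d)\n", x, y); }

// Naive line drawing: step along the longer axis, tracking the exact
// position on the other axis with floating point, and round to the
// nearest pixel at each step.
static void draw_line_naive(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int steps = std::max(std::abs(dx), std::abs(dy));

    if (steps == 0) { set_pixel(x0, y0); return; }

    float x_inc = float(dx) / steps, y_inc = float(dy) / steps;
    float x = float(x0), y = float(y0);

    for (int i = 0; i <= steps; ++i)
    {
        set_pixel(int(std::lround(x)), int(std::lround(y)));
        x += x_inc;
        y += y_inc;
    }
}

int main() { draw_line_naive(0, 0, 10, 4); }
```

The per-pixel floating-point additions and roundings are exactly the cost that Bresenham’s all-integer formulation, and later the fixed-point variant, are meant to remove.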

Read the rest of this entry »


Checksums (part I)

21/07/2009

I once worked in a company specializing in embedded electronics for industrial applications. In one particular project, the device communicated with the computer through an RS-422 cable and seemed to return weird data once in a while, causing unwanted behavior in the control computer, whose programming did not provide for this unexpected data. So I took it upon myself to test the communication channel, as the on-board software seemed to be operating properly and did not contain serious bugs. I added a checksum to the data packet, and it turned out that some packets did indeed come in corrupted, despite the supposedly superior electrical characteristics of the RS-422 link.

After a few days’ work, I had implemented a communication protocol that could detect and repair certain errors while reverting to a retransmission request if the data was too damaged. I then started gathering statistics on error rates, the number of retransmissions, etc., and the spurious behavior on the controller’s side went away. My (metaphorically) pointy-haired boss opposed the modification because “we didn’t have any of these damn transmission errors until you put your fancy code in there”. Of course, this was an epic facepalm moment. I tried to explain that the errors had always been there, except that now they were caught and repaired. Needless to say, it ended badly.


Notwithstanding this absurd episode, I kept using checksums to validate data whenever no other layer of the protocol took care of transmission errors. So, this week, let us discuss checksums and other error detection algorithms.
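To fix ideas before the full discussion, the simplest checksum of all just adds up the bytes of a packet; a minimal sketch (not the scheme used in the project above) could be:

```cpp
#include <cstdint>
#include <cstddef>

// Simplest possible checksum: the sum of the bytes, modulo 2^16.
// It detects many single-byte corruptions, but not, for instance,
// two bytes swapped in transit, which is why stronger schemes exist.
uint16_t additive_checksum(const uint8_t *data, std::size_t length)
{
    uint16_t sum = 0;
    for (std::size_t i = 0; i < length; ++i)
        sum += data[i];   // wraps around naturally, modulo 2^16
    return sum;
}
```

The sender appends the checksum to the packet; the receiver recomputes it over the received bytes and requests a retransmission on mismatch. The “other error detection algorithms” mentioned above trade a little more computation for far better detection.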

Read the rest of this entry »


Powers of Ten (so to speak)

29/06/2009

I am not sure if you are old enough to remember the 1977 IBM movie Powers of Ten (trippy version, without narration) [also at the IMDB and Wikipedia], but that’s a movie that sure put things in perspective. Thinking in terms of powers of ten helps me sort things out when I am considering a design problem. Thinking of the scale of a problem in terms of physical scale is a good way to assess its true importance for a project. Sometimes the problem is the one to solve; sometimes it is not. A problem being fun, enticing, or challenging does not mean it has to be solved optimally right away because, in the correct context, considering its true scale, it may not be as important as first thought.


Maybe comparing a problem’s scale to powers of ten in the physical realm helps you understand where to put your efforts. So here are the different scales and what I think they should contain:

Read the rest of this entry »


Sorting Lists (part II)

23/06/2009

Last week I showed you the radix sort on simply linked lists. This week, I will present a version of QuickSort modified to sort simply linked lists.
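In broad strokes, and with naming of my own (the full entry gives the actual implementation), the idea is to pick a pivot node, splice the remaining nodes into a “smaller” and a “not smaller” sublist, sort both recursively, and chain the pieces back together:

```cpp
struct node
{
    int   key;
    node *next;
};

// Appends list b at the end of list a and returns the resulting head.
static node *concat(node *a, node *b)
{
    if (!a) return b;
    node *t = a;
    while (t->next) t = t->next;
    t->next = b;
    return a;
}

// QuickSort on a simply linked list: the first node serves as pivot,
// the remaining nodes are spliced into two sublists (no swapping of
// array cells needed), both are sorted recursively, and the pieces
// are reassembled.
node *quicksort_list(node *head)
{
    if (!head || !head->next) return head;

    node *pivot = head;
    node *rest  = head->next;
    pivot->next = nullptr;

    node *smaller = nullptr, *larger = nullptr;
    while (rest)
    {
        node *next = rest->next;
        if (rest->key < pivot->key) { rest->next = smaller; smaller = rest; }
        else                        { rest->next = larger;  larger  = rest; }
        rest = next;
    }

    smaller = quicksort_list(smaller);
    larger  = quicksort_list(larger);

    pivot->next = larger;
    return concat(smaller, pivot);
}
```

Taking the first node as pivot keeps the sketch short but degenerates to quadratic time on already-sorted input; the version in the full entry may well choose its pivot differently.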

Read the rest of this entry »


Sorting Linked Lists (part I)

16/06/2009

The sorting algorithms we were taught in class were typically simplified versions that assume the data remains in memory, in contiguous locations, also known as an array. What we often have, however, are linked lists. Generalizing algorithms like QuickSort to lists is in fact not very hard if we use a modification of the basic algorithm.

For lists with numerical keys (or values), there is an even simpler algorithm, the radix sort, which scans the list in only one direction, so that simply linked lists are a perfectly cromulent data structure for it.
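For example, a least-significant-digit radix sort over unsigned 32-bit keys, one byte per pass, only ever walks the list forward; a minimal sketch, with a hypothetical node type, might look like this:

```cpp
struct node
{
    unsigned key;
    node    *next;
};

// LSD radix sort on a simply linked list, one byte (256 buckets) per
// pass: each pass scans the list once, front to back, distributing
// nodes into buckets by the current digit, then re-chains the buckets
// in order. The distribution is stable, which makes the sort correct.
node *radix_sort_list(node *head)
{
    const int RADIX = 256;

    for (int shift = 0; shift < 32; shift += 8)
    {
        node *bucket_head[RADIX] = {nullptr};
        node *bucket_tail[RADIX] = {nullptr};

        // distribute, preserving relative order within each bucket
        for (node *p = head; p; )
        {
            node *next = p->next;
            unsigned d = (p->key >> shift) & (RADIX - 1);
            p->next = nullptr;
            if (bucket_tail[d]) bucket_tail[d]->next = p;
            else                bucket_head[d] = p;
            bucket_tail[d] = p;
            p = next;
        }

        // collect the buckets back into a single list
        head = nullptr;
        node *tail = nullptr;
        for (int d = 0; d < RADIX; ++d)
        {
            if (!bucket_head[d]) continue;
            if (tail) tail->next = bucket_head[d];
            else      head       = bucket_head[d];
            tail = bucket_tail[d];
        }
    }
    return head;
}
```

Four passes of 256 buckets handle 32-bit keys, and since every pass is a single forward scan, a simply linked list is all the structure we need.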

Read the rest of this entry »


Recycling Random Bits

09/06/2009

It is not uncommon for large-scale simulations to need a large volume of high-quality (pseudo)random numbers. The relatively short period of the C standard library’s rand function may make it impractical for your purpose, and you might need to resort to stronger generators such as the Linux-provided /dev/random and /dev/urandom pseudo-devices. But calling those is relatively expensive, as /dev/random may stall until its “entropy pool” is replenished.

/dev/urandom is the non-blocking version of /dev/random, but it is also demonstrably less random. Moreover, one has to access either pseudo-device through the file system, which in certain cases introduces a noticeable impact on performance. Using other sources of randomness, such as a hardware random number generator, may also incur significant delays.

To avoid being stalled, I propose a simple method that will allow you to recycle bits from an expensive pseudo-random source without polling it too often.
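The full entry spells out the actual method; as a rough illustration of the general idea only (my own sketch, with names of my own invention), one can keep a small reservoir of bits fetched from the expensive source and serve requests a few bits at a time, so that the source is polled only when the reservoir runs dry:

```cpp
#include <cstdint>
#include <cstdio>

// Rough illustration: a reservoir of bits obtained from an expensive
// source, handed out a few at a time so the source is touched only
// when the reservoir is empty.
class bit_reservoir
{
public:
    // Returns the next n bits (1 <= n <= 32) as the low bits of the result.
    uint32_t get_bits(int n)
    {
        uint32_t out = 0;
        for (int i = 0; i < n; ++i)
        {
            if (count == 0) refill();
            out = (out << 1) | uint32_t(buffer & 1u);
            buffer >>= 1;
            --count;
        }
        return out;
    }

private:
    void refill()
    {
        // The expensive call: here, /dev/urandom through stdio; it could
        // just as well be /dev/random or a hardware generator.
        FILE *f = std::fopen("/dev/urandom", "rb");
        if (f)
        {
            if (std::fread(&buffer, sizeof buffer, 1, f) != 1)
                buffer = 0; // degraded, but keeps the sketch simple
            std::fclose(f);
        }
        count = 64;
    }

    uint64_t buffer = 0;
    int      count  = 0;
};

int main()
{
    bit_reservoir r;
    std::printf("%u\n", r.get_bits(5)); // consumes 5 bits, keeps 59 for later
}
```

A real implementation would keep the file open and read larger blocks at once; the point of the sketch is only that the expensive source is consulted once per 64 bits served rather than once per request.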

Read the rest of this entry »



Not Losing the Perspective

19/05/2009

My first true computer, a TI 99/4A (which I also talk about in a previous entry), had 16 or 48 KB of RAM, depending on whether or not you had one of the memory expansion cartridges, and that limited quantity of memory severely curbed the complexity of the programs one could write on the machine. However, the limited complexity of the programs, the relative crudeness of the development environment (a BASIC shell), and the slow execution speeds weren’t very obvious to me back then. They were somewhat mitigated by the novelty of the computer itself as a machine, and by the perpetual, intense excitement of discovery. The arduous design of programs to save memory, to fit more graphics or more code, or even to get our programs to work at all, was less about constraints than about challenge.

The same kind of constraints—or challenges—followed me over the years as I moved on to different computers. Despite their being more powerful, both faster and sporting more memory, the challenge remained, because while the computers got better, so did I at programming. I kept asking more out of the machine, writing increasingly complex programs needing either more memory or more speed, often both. That meant better algorithms, better data structures, and better implementations.

Read the rest of this entry »


The Fizzbuzz problem

12/05/2009

You are given the following assignment:

You are to write a program that must fulfill these simple requirements:

For the numbers from 1 to 100,

  • If the number is a multiple of 3, print fizz instead of the number.
  • If the number is a multiple of 5, print buzz instead of the number.
  • If the number is a multiple of 15, print fizzbuzz instead of the number.
  • Otherwise, print the number itself.
  • Each output should be followed by a new line.

Can you code that and get it right in less than one minute? In less than two? How many retries will be necessary? Open your favorite editor for your favorite programming language. Ready? Set? Go!
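For reference, one straightforward C++ rendition of the requirements as listed (note that the multiple-of-15 test has to come before the tests for 3 and 5, otherwise it can never be reached):

```cpp
#include <iostream>

int main()
{
    for (int i = 1; i <= 100; ++i)
    {
        // The multiple-of-15 case must be tested first, otherwise the
        // multiple-of-3 or multiple-of-5 branch would shadow it.
        if      (i % 15 == 0) std::cout << "fizzbuzz";
        else if (i %  3 == 0) std::cout << "fizz";
        else if (i %  5 == 0) std::cout << "buzz";
        else                  std::cout << i;
        std::cout << '\n';
    }
    return 0;
}
```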

Read the rest of this entry »


Suggested Reading: A Field Guide To Genetic Programming

09/05/2009

Riccardo Poli, William B. Langdon, Nicholas F. McPhee — A Field Guide to Genetic Programming — Lulu, 2008, 240 pp. ISBN 978-1-4092-0073-4


This is not an ordinary textbook, as it does not follow the expected pattern; it is rather an extensive survey of the field of genetic programming. Each chapter introduces a major concept or issue in genetic programming and covers the subject in a rather authoritative way, supported by copious documentation—the last 57 pages of the book are occupied by the bibliography.

Read the rest of this entry »