The CFM-00

Parallel computing is the next paradigm shift. Everybody knows this, but not everyone is taking the proper action to face it. One thing to do is to read up on the subject and force oneself to code using threads and various degrees of parallelism; that’s pretty easy now that a quad-core machine doesn’t cost all that much. But the next step, distributed computing, necessitates, well, more than one machine; and if you have different levels of memory and communication channels, all the better.
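To make that first step concrete, here is a minimal data-parallel sketch of my own (the names and the chunking scheme are mine, not anything from a particular framework): split a big sum into chunks and hand the chunks to a pool of workers.

```python
# Minimal data-parallel sketch: split a sum across a worker pool.
# ThreadPoolExecutor keeps the example simple and portable; note that for
# CPU-bound pure-Python work, CPython's GIL means you'd swap in
# ProcessPoolExecutor (same interface) to actually use all four cores.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)   # last chunk absorbs the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The same split-combine shape carries over to the distributed case; only the cost of shipping the chunks to the workers changes.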

So out of a bunch of old x86 PCs, I’ve decided to build my own portable mini-cluster with 8 nodes. Nothing all that impressive, but still plenty of fun to build.

Getting the hardware. I got the computers (iPaqs) from my friend Nicolas for next to nothing. He got them from eBay, nicely delivered to his home in a crate or something. For the next generation, if there is one, I will consider getting a batch of identical computers from a shop like Insertech; they often have deals like Pentium 4s for 50$ each. In either case, I don’t think you’ll have to resort to dumpster diving to find inexpensive yet working computers.

I also got a CRT screen (I had no LCD to spare), a keyboard, and ten 6′ Ethernet cables from Insertech and a local shop to complete the hardware. I already had a 16-port 100 Mbit/s Ethernet switch, so I reused it as the cluster switch (these computers do not handle gigabit Ethernet anyway).

Disassembling the iPaqs. These machines contain an inordinate amount of junk. This is what an assembled iPaq looks like:

The iPaq before disassembly.

and that’s what it looks like once taken apart

The iPaq, mostly disassembled.

You don’t need much for that step, except a set of Torx screwdriver bits; just make sure the set includes the “security” ones.

You cannot not be shocked by the amount of steel and plastic such a computer can hold. I think more than half of the weight of the computer is junk. Good thing computers usually last rather long, because that’d be quite wasteful to just throw all that junk to the curb. In fact, I made sure all parts were sent to the right recycling facilities.

It takes about 20 minutes to disassemble one of these computers completely, as shown in the above picture. So, about two and a half hours’ work and you’ve disassembled the eight computers.

Testing the hardware. Before using a computer in the cluster, I made sure it was fully functional: USB ports, network, IDE controller, and especially memory. Memory is especially important to test, because bad memory chips result not in outright crashes, but in a series of unpredictable symptoms. If your machine acts like it’s possessed by a spawn of Cthulhu despite being virus-free, take an hour to run a memory test. Fortunately, that’s easy using memtest86+, conveniently bundled as a boot image with Ubuntu (either on the hard drive or on the live CD; you can even make a boot floppy with it).

Testing iPaqs

Testing memory takes about 30 to 45 minutes on those machines, so I had plenty of time to do other stuff in the meantime, like sketching the final assembly. It turns out, however, that the memory of the machines was mostly OK, except for one 64MB stick with a stuck bit. As I had ten machines, I took a spare stick from one of the two extra machines, so everything’s good.

Testing iPaqs... takes forever.
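A stuck bit like that one is exactly what a pattern test such as memtest86+’s catches: write a known pattern everywhere, read it back, and flag any cell that disagrees. A toy Python model of the idea (the real tester hammers physical RAM; the fault address and stuck bit below are made up for illustration):

```python
# Toy model of a pattern-based memory test finding a stuck bit.
# "Memory" is a plain list; cell 4242 has bit 3 stuck at 1 (hypothetical).
STUCK_CELL, STUCK_MASK = 4242, 1 << 3

def write(mem, addr, value):
    # the faulty cell always reads back with its stuck bit set
    mem[addr] = value | STUCK_MASK if addr == STUCK_CELL else value

def find_faults(size=8192):
    mem = [0] * size
    faults = []
    for pattern in (0x00, 0xFF, 0x55, 0xAA):   # classic alternating patterns
        for addr in range(size):
            write(mem, addr, pattern)
        for addr in range(size):
            if mem[addr] != pattern:
                faults.append((addr, mem[addr] ^ pattern))  # address, bad bits
    return sorted(set(faults))

print(find_faults())  # -> [(4242, 8)] : bit 3 stuck at 1 in cell 4242
```

Note that the 0xFF and 0xAA passes sail right through the faulty cell, which is why a single pattern is never enough and why these tests take so long on real hardware.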

Assembling the cluster. I used 36″ #10 threaded rods, with 24 threads per inch. These can be found in any home hardware/improvement center for something like 2$ each; they’re really inexpensive. I also got boxes of 100 matching hex-nuts, also quite inexpensive (4$ a box or so). The four rods (I ended up using only three) plus two boxes of 100 hex-nuts cost less than 25$, tax included. Since the rods weren’t the right size to fit directly into the motherboards’ screw holes, I used a titanium drill bit to widen the holes a tiny bit, to make sure the threaded rods would glide in nicely.

For the first board, I left about 1.5 inches of threaded rod under the board, so I would be able to insert the rods into the side panels that would eventually hold everything together. The rest is simply a question of patience: you slide a board in on top of the hex-nuts, you screw three more hex-nuts down to fasten the board into place, then three more to support the next stacked board, about 4″ higher. You also have to add two extra nuts to hold the power-switch board.

Screwing all those nuts can take a long time, so I eventually used a drill with a rubber band wrapped around a hex-nut, acting as a transmission belt, to go a bit faster. Since this is an O(n²) type of job, it gets faster and faster as you reach the top of the stack. For the last board, that is, computer0, I left a bit of extra space to mount the USB and audio plug board, as well as a small hard drive that will later serve as the DHCP/BOOTP image server.
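The O(n²) remark is easy to make concrete: every nut is threaded on at the top of a rod and turned all the way down to its board, so nuts for the lowest boards travel the farthest. A rough count using the figures above (24 TPI, boards about 4″ apart, 8 boards; three supporting nuts per board is a simplification of the real nut count, so treat this as a lower bound):

```python
# Back-of-the-envelope count of hex-nut turns for one rod.
# Constants from the post where available: 24 threads per inch,
# ~4" board spacing, 8 boards; 3 nuts per board is a simplification.
TPI, SPACING, BOARDS, NUTS_PER_BOARD = 24, 4.0, 8, 3

def turns_for_board(k):
    # the nut for board k (0 = topmost) travels k * SPACING inches
    # down from the top of the rod, at TPI turns per inch
    return NUTS_PER_BOARD * k * SPACING * TPI

total_turns = sum(turns_for_board(k) for k in range(BOARDS))
print(int(total_turns))  # -> 8064 turns for a single rod
```

Doubling the number of boards roughly quadruples the turning, which is why the rubber-band-on-a-drill trick pays off.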

Cutting MDF is easy, but incredibly messy. I did that a few hours before starting to assemble the boards onto the rods, and let the dust in my shop clear out a bit. MDF is versatile, rather strong, and about as hard to work as cardboard, which made drilling holes, cutting handles (the big round holes), and sanding corners quite easy. For all holes, I used a drill press. It ensures, for one thing, that the holes are indeed perpendicular to the wood (or MDF) sheet’s plane.

The rest is assembling the whole thing: first the bottom with one side panel (using screws and steel right-angle brackets, not visible in the pictures as they’re under the cabinet), then sliding in the rods with the motherboards, fastening the PSUs, then the other side panel. And you’re done. Or so I thought.

All finished!

All Finished! (reverse side, showing cabling)

*
* *

Turns out that even if the rods-and-motherboards assembly is rather light (a few pounds), the #10 rods themselves aren’t strong enough to hold all this without bending, something I didn’t foresee. I added an extra MDF board (it shows only a little in the picture) to lift the boards and keep the back straight.

*
* *

I’m thinking of several names for this first experiment. Surely CFM-00 would be appropriate (I’ll let you work out what the letters stand for). Maybe MDF-00 would work as well.

*
* *

All in all, it’s a 15- to 20-hour job: about 3 hours to disassemble the computers, a few minutes to cut the MDF, a few hours to stack and fasten the motherboards together, and about 10 hours to test the individual computers. It’s basically a weekend project, but for many reasons I completed it over a period of six months, from acquiring the computers from Nicolas to fastening the last hex-nut.

*
* *

The next step is to figure out what software to use. BOOTP is a must in this case, and I am considering using Mosix2. So, that’s where I’m at with this project. Feel free to comment or suggest parallel and distributed processing software and frameworks (but I’d like to stay in the Linux/open-source family of solutions).
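For the BOOTP part, one common setup is a single DHCP+TFTP server (dnsmasq, for instance) on the head node handing a PXE boot image to the seven diskless nodes. A configuration sketch; the interface name, addresses, and paths below are hypothetical:

```
# dnsmasq.conf sketch for netbooting the nodes (all values hypothetical).
# dnsmasq provides DHCP (a superset of BOOTP) and a built-in TFTP server.
interface=eth1                                 # NIC facing the cluster switch
dhcp-range=192.168.10.100,192.168.10.107,12h   # one lease per node
dhcp-boot=pxelinux.0                           # PXE boot loader to hand out
enable-tftp
tftp-root=/srv/tftp                            # holds pxelinux.0, kernel, initrd
```

Each node’s BIOS is then set to boot from the network, and the kernel and root filesystem can live on the small hard drive mounted next to computer0.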

30 Responses to The CFM-00

  1. […] condition but the march of technology had left them behind as casualties. He’s given them new life by assembling a cluster. The first order of business was testing the hardware to make sure it’s working. [Steven] […]

  2. netinfinity says:

    You can use any Linux distribution to set up a Beowulf cluster. Here are two links on Ubuntu clustering:

    https://help.ubuntu.com/community/Clustering

    https://wiki.ubuntu.com/EasyUbuntuClustering/UbuntuKerrighedClusterGuide

    Download the ISO at (you can probably upgrade it):

    http://www.ehu.es/AC/ABC

    Cheers,

  3. Doktor Jeep says:

    Out-freaking-standing!

    Thanks for now opening the door for me to get into distributed computing.

  4. Josh says:

    I’m building a cluster myself right now and am planning on building a distributed modular power supply. Have you addressed supplying power to your boards?
    In my reading I recall seeing something along the lines of Mosix being somewhat defunct. Rocks appears to be the most supported distro as of right now, I believe.

    • Steven Pigeon says:

      Mosix’s quite dead, yes, but apparently Mosix2 lives.

      No, I haven’t addressed the problem of pooling the PSUs, but I might in a future version. As of now, the PSUs are rated at something like 90W, but I doubt that, with everything stripped, a “blade” draws more than 40-50W.

  5. Simon says:

    Somebody who puts old computers to a new purpose should be one of my friends. That guy should be a buddy if he’s doing it with Linux and in his woodworking workshop.

    Let me know if I can be of any use to you, or simply if grabbing a beer interests you.

    • Steven Pigeon says:

      Come have a chat with fellow (french-speaking) programmers on channel #programmeur on irc.freenode.net

  6. Joe says:

    How much power do you think this cluster would consume… VS one Quad Core i7?

  7. Kris says:

    Great work! I have a bunch of old thin clients hanging around, and this gets my juices flowing.

    As more of a hardware hacker than a software person, I highly recommend you save the earth and your wallet by replacing the individual power supplies with ONE or TWO 80%+ efficient units of the day. I’m betting that each machine is only using maybe 30% of its power supply under load, and a properly sized 600-watt unit will have more than enough left over for 4-5 of those machines. Less heat and power and more computing!

    Good luck!
    – Kris

    • Steven Pigeon says:

      The PSUs came from the original iPaqs, and I’m not sure how to distribute the power across many boards and keep the individual computers happy. I would guess a master power button is feasible?

  8. I says:

    A master power switch is perfectly possible: the green wire from the PSU is the power-switch sense; pull it to ground to turn the supply on. These can all be connected in parallel and collectively pulled to ground.

    As for using fewer power supplies, if you make sure that all the rails of the PSUs can support the load, then they can be paralleled as well.

    I currently have my main computer and a small server board in one case, powered by a single 850W power supply.

  9. ShaneG says:

    Maybe you could look at high-reliability computing? Using a net-bootable Linux image with an Erlang runtime? Playing with services being migrated from machine to machine as they fail or the workload changes? Using a cluster of older machines like this would be a reasonable simulation of distributed (or cloud-based) services.

    Erlang – http://www.erlang.org/

    • Steven Pigeon says:

      I am not familiar with Erlang, though I know the language by name.

      You point out an important problem that is not addressed very often: developers tend to assume that the hardware is infallible (*chuckles*) and do not plan very well for failure. Redundancy and data integrity are both part of my interests, and the cluster may be used to do some research work on this.

      (Having separate power switches will help simulate nodes coming and going, so maybe I need more than a fused PSU for a bunch of nodes?)

  10. AmonRa says:

    GreaT w0rK…. 5/5

  11. KeithB says:

    I’ve got a spare iPaq or two around here I’d love to scrap and donate the components. Let me know if you’d like ’em ;^)

    • Steven Pigeon says:

      I got ten iPaqs from Nicolas, 8 of which are in the cluster; I still have one fully assembled, and tons of parts from the others. So, thank you for the offer, but no, I don’t need them.

  12. Steven Pigeon says:

    From Reddit:

    FlyingBishop

    You cannot not be shocked by the amount of steel and plastic such a computer can hold. I think more than half of the weight of the computer is junk. Good thing computers usually last rather long, because that’d be quite wasteful to just throw all that junk to the curb.

    Yes… let’s remove all the shielding, it’s just junk, right?

    I think most of the junk is there so that the computer itself feels rigid and is therefore perceived as a “quality product,” whereas a plastic-only casing would be perceived as flimsy because it can’t be as rigid and would (slightly) warp when handled.

    If it were for interference only, the shielding would be a much lighter net-like Faraday cage.

  13. Nick says:

    Just picked 2 of these up off of eBay for 10 bucks each; will have to try this…

    When I got to the security screw, I just whacked the peg in the middle until it bent enough for my regular T15 screwdriver to get a hold of it. Is that bad? :D

    • Steven Pigeon says:

      OMG!!!!1! That’s horrible!!

      (j/k)

      Of course, you can also take a titanium-tipped drill bit and just drill it away. Or get a more complete Torx kit; surprisingly, they’re not that expensive. I’ve always found it pointless to have “security” screws when you can get the matching screwdriver for 1.50$ more. And for me, I couldn’t care less about how destructive (to the casing) the method was; it was all going to recycling anyway.

  14. […] due to a few shocking posts such as Is Python Slow? and high-visibility posts such as the CFM-00 (which was featured on Hack-a-Day). The first post nears 20000 hits, the second 10000 (although […]

  15. Steven Pigeon says:

  16. […] a previous post, I presented the CFM-00, a “cluster” of 8 Pentium III at 500MHz, assembled into one MDF casing. The assembly […]

  17. […] a couple of occasions, I presented some hardware hacks, not always very elaborate, and today I add another […]

  18. […] mini-cluster with 8 nodes. Nothing all that impressive, but still plenty of fun to build….. Read More [ Source ] […]

  19. […] This is great project and very, very, good use for old hardware u may have…. Great work AuthorZ n0te: […]
