Need More Computing Power

I’m working with several products from Adobe right now. That means several things: first, getting used to various ‘quirks’ in the user interface that no other company does quite the same way. I occasionally say, “There’s the Mac way, the Windows way, and the Adobe way.” The Adobe way doesn’t necessarily mean the same thing in different Adobe products, alas.

Second, running multiple products from Adobe, along with their infamous memory leaks, means that my little Mac Mini is severely challenged. Adobe makes big products, and seems much more worried about features than performance. I have an income (made in part from using Adobe products), and can justify upgrading hardware at some point, but then what happens to the old machine? There’s still plenty of computing left in the little guy. It’s actually pretty fast.

Then it occurred to me that the perfect answer would be a second mini just like the first, that I could connect in such a way that they could share the workload. Suddenly my upgrade gets a lot cheaper and I’m not getting rid of a perfectly good computer.

I know that there is a supercomputer built from a bazillion Macs all hooked together and sharing the load, so why can’t I get some of that action? What would it take to hook two Macs together so they become a single computer? It seems just too damn obviously a good thing not to exist.

I’m filing this under Get-Poor-Quick Schemes, since it’s probably one of those ideas that looks good on paper but is in fact a major PITA. Still, what a great OS feature that would be.

6 thoughts on “Need More Computing Power”

  1. If you are hooking computers together, you are not buying the latest and greatest and putting money into the pockets of companies. You are also not throwing away your computer, and therefore denying poor Chinese people the opportunity to get heavy-metal poisoning as they pick apart your leavings. Selfish bastard.

  2. I did a little bit of reading about this (very little, actually) and it seems that the obstacles to getting several CPUs to work together on a problem come in two flavors: 1) the programs we buy aren’t built with any sort of task-divvying in mind, and 2) the overhead of passing the information around on a network far outweighs the benefits of splitting up the tasks.

    There’s a simple task-dividing system that no one seemed to consider, though. One could designate entire applications to run on one CPU or the other. One computer would be in charge of the windowing environment (what most people outside of the Linux world think of as the OS), while, for instance, my browser and text editor ran on one machine and the bloated pig that is OpenOffice ran on another. That would be a very simple way to share the load between machines (something like the sketch at the end of this comment).

    Now that I think of it, having a small machine run the UI while big machines in the background do the work is exactly what Sun and Oracle were pushing a decade ago. Guess there are some issues there as well.
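
    Here’s roughly what I mean, as a toy launcher in Python. The hostnames and the app-to-host table are invented, and it assumes passwordless ssh and X11 forwarding are already set up, so treat it as a sketch rather than a recipe:

    import subprocess
    import sys

    # Hypothetical mapping of applications to the machines that should run them.
    APP_HOSTS = {
        "soffice": "mini-two.local",   # OpenOffice goes to the spare box
        "firefox": "localhost",        # lightweight stuff stays local
    }

    def launch(app):
        host = APP_HOSTS.get(app, "localhost")
        if host == "localhost":
            subprocess.Popen([app])
        else:
            # -X forwards the remote app's windows back to this display.
            subprocess.Popen(["ssh", "-X", host, app])

    if __name__ == "__main__":
        launch(sys.argv[1] if len(sys.argv) > 1 else "firefox")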

  3. I don’t know how much this relates to what you want to do, but I remember hearing a background static of parallel processing and Beowulf clusters and such things as being the next greatest thing, circa 1990. In my limited knowledge, they did take off but didn’t fly to quite the altitude expected.
    I have used a “distributed computing Linux cluster” here at Duke. It uses a special queuing engine that sits on top of the OS and takes care of parcelling out your program to many individual CPUs. The CPUs are all relatively expensive blade servers, bought by individual research groups and used as their ante to the game. The programs you run have to be specially written and compiled for multiprocessor environments, and then the queuing engine does its thing. I liken it to a dump truck that can carry a buttload but is no race car. My research mostly wanted to run one model for a year as quickly as possible rather than run several perturbations of the model all at once, so it was better when we just ran on our own, in-house, race car server. I hear the genome sequencers were getting lots of use out of the cluster because they needed to do millions of routines with just a tiny difference between them.
    When debugging it was always fun to write out values from inside a loop like so:
    do i = 1,100
    print, i, variable
    enddo
    On the cluster you’d get weird answers of:
    1, xxxx
    17, xxxxx
    3, xxxxx
    where the loop iterations came out of order because different processors were running different pieces at the same time (not the loop itself, but a version of the subroutine it resided in). There’s a little one-machine re-creation of this at the end of the comment.
    One of my IT folks still has a Mac mini in the closet that she just can’t make herself toss. It is totally obsolete, but it runs great, and it was purchased years ago for a steal at a Best Buy that was reducing inventory. Her cell phone has more memory than the thing.
    So slide a cigar into the corner of your mouth, put on your best workin’ man accent, and say, “Whatcha need here is a queuin’ engine.”
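    P.S. If you want to see that out-of-order business without a cluster, Python’s standard multiprocessing module will reproduce it on one machine; the squaring below is just a stand-in for whatever the real subroutine does:

    from multiprocessing import Pool

    def work(i):
        # Stand-in for the real subroutine: report which iteration ran.
        print(i, i * i)
        return i * i

    if __name__ == "__main__":
        with Pool(4) as pool:
            # Four worker processes pull iterations off the list, so the printed
            # lines interleave across workers instead of arriving as 1, 2, 3, ...
            pool.map(work, range(1, 101))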

    • The queuin’ engine most folks are talking about is Xgrid (I think that was the name), but you’re right about the sort of tasks that benefit from clusters. Unfortunately most of what I do doesn’t fall into that category, and according to people who appear to know more than I do about it, using a cluster of computers would actually do more harm than good.

      I have an old Mac Cube in storage that I kept thinking I would use as a music server (it’s damn near silent and draws very little power), but now I think even modern music software might overtask it. Software hunger seems to be outpacing Moore’s law.

      Now, as far as the Cube is concerned, my best hope is waiting for it to become a collectible. (It does look mighty cool.) Yep, me and a million other people are waiting for that day.

  4. Sweeney uses his Cube for a music server, and it works just peachy (and with a screen-sharing program, you get UI from any room in the house with a bunch of minis and cheap monitors). With screen sharing or a KVM, you could also BE the queueing engine: run keystroke-input apps and CPU-intensive apps on separate machines, and pull back the results. I’ve done that with Photoshop and accounting software in the past, when a CPU upgrade would have meant spending >$1K just to keep my software running.

    • Who knew that iChat had screen sharing built in, including copy/paste between machines? It’s still not the one-desktop, two-computer setup I think would be ideal, but getting a helper computer on the job may be simpler than I thought. I will be getting my newer PowerBook fixed to see if I can make load sharing work in principle. Thanks for the prompt, bugE.

      I think the ideal for app sharing is having two CPUs that share a storage system and a windowing application. For simplicity, one CPU would probably have to be in charge of reading and writing data, or, better yet, there could be a little pony CPU that manages storage requests from the main processors. Maybe another small CPU handles the windowing system and user interface. In between, an arbitrary number of CPUs do the actual work, probably each with its own RAM.

      All the machines would be connected together on a really fast internal network that does not have to play nice with the rest of the world. As far as the world is concerned, this is a single machine.

      I could then decide, “When I run GIMP, fire it up on the machine with the most processing power and RAM. Run Apache on the CPU with good file system access.” If they all have the same file access, then there’s no need for me even to have this level of interaction with the system.

      Once that is set up, the system would be indistinguishable from a single computer for the person using it. (Until it’s time to upgrade, when all that’s necessary is adding another CPU.)

      It seems like most of the pieces for this must already exist. Network storage devices must have solved the problems of multiple computers accessing them. As I (reluctantly) learn more unix, I find all sorts of tools that allow machines to control each other and pass data around transparently. Really the only thing missing is a single unified windowing system to display the output of the applications. The way Mac applications ‘own’ their windows (I don’t know how other operating systems compare) makes that component a bit tricky, I think, but it would not be insurmountable. It’s really just a matter of the application not knowing that the “OS” it’s talking to is actually on another physical machine.
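
      In fact, the dispatching half nearly falls out of those tools already. Here is a crude Python sketch, with invented hostnames, assuming passwordless ssh, shared storage, and X11 forwarding for the app’s windows; it asks each box how busy it is and starts GIMP on the quietest one:

      import subprocess

      HOSTS = ["mini-one.local", "mini-two.local"]  # hypothetical machine names

      def load_of(host):
          # 'uptime' ends with the load averages, e.g. "... load averages: 0.52 0.61 0.70"
          out = subprocess.run(["ssh", host, "uptime"],
                               capture_output=True, text=True, check=True).stdout
          return float(out.replace(",", " ").split()[-3])

      def run_on_quietest(app):
          # Pick the least-loaded machine and launch the app there, windows forwarded back.
          host = min(HOSTS, key=load_of)
          subprocess.Popen(["ssh", "-X", host, app])

      run_on_quietest("gimp")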

      Somebody get on that!
