Re: Three Years of Computer Time
- To: <acad!xanadu!acad!throop!kelvin>, <acad!xanadu!tribble>
- Subject: Re: Three Years of Computer Time
- From: Roger Gregory <acad!xanadu!acad!xanadu!roger>
- Date: Mon, 28 May 90 10:52:06 PDT
- Cc: <acad!xanadu!megalon!tech>, <acad!xanadu!megalon!xanadu!us>
The Spawn implementation of MarkM and Drexler's escalator algorithm
might let people use inactive computers as compute servers. I'm not
sure what class of programs Spawn can handle, but it does supply a
foundation for adaptive, distributed computation.
dean
Spawn is less of a help for this class of problems than one might think.
Actually there are two classes of problems that this thread is (or could be) about:
a) long term background tasks
b) adaptive refinement tasks
Spawn would be very useful for the adaptive refinement stuff, but for
the background tasks, a much simpler communications model is sufficient.
The major characteristics of a background task are heavy compute and little
other load on the system, both disk and memory. Of course the limits on disk
and memory are relative, and could depend on system usage statistics. The
perfect background program is CPU bound and does no I/O except checkpoints
and a small amount of results. Math puzzles and encryption tests are common
examples.
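A minimal sketch of the checkpoint-only I/O pattern described above (the file
name, checkpoint interval, and the prime search itself are all hypothetical,
chosen only to illustrate the shape): a CPU-bound search that periodically
saves just enough state to resume, so a reclaimed idle machine loses little
work.

```python
# Hypothetical sketch: a CPU-bound background task whose only I/O
# is a small checkpoint, so it can be killed and resumed at will.
import json
import os

CHECKPOINT = "search.ckpt"   # assumed file name

def load_state():
    # Resume from the last checkpoint, or start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"n": 2, "found": []}

def save_state(state):
    # Write to a temp file, then rename, so a crash mid-write
    # never corrupts the checkpoint (os.replace is atomic on POSIX).
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def search(limit, checkpoint_every=1000):
    # The "math puzzle" stand-in: enumerate primes below limit.
    state = load_state()
    while state["n"] < limit:
        if is_prime(state["n"]):
            state["found"].append(state["n"])
        state["n"] += 1
        if state["n"] % checkpoint_every == 0:
            save_state(state)    # the only I/O besides results
    save_state(state)
    return state["found"]
```

The point of the structure is that all state lives in one small dict, so the
checkpoint stays tiny no matter how long the task runs.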
Another I have found is the "optimal code fragment" problem:
testing, in order of size, all possible code fragments of a given machine for
equivalence with a given code fragment. The only paper I have read on
this <Superoptimizer -- A Look at the Smallest Program 1987 ACM 0-8979-238-1/87/1000-0122>
had some neat examples of possibly useful code fragments, albeit with some subtle limitations.
Unfortunately, to be of any use, a library of such tricks would have to be part
of a smart compiler environment of some kind, not a small task.
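The search described above can be sketched in a few lines. This is a toy, not
the paper's machine-level search: the instruction set, probe inputs, and length
bound are all invented for illustration, and testing on probes only suggests
equivalence rather than proving it (one of the subtle limitations the paper
discusses).

```python
# Toy superoptimizer sketch (hypothetical instruction set): enumerate
# straight-line programs in order of length and return the first one
# that matches the target function on every probe input.
from itertools import product

OPS = {
    "neg": lambda x: -x,
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
}

def run(program, x):
    # Execute a sequence of ops on a single register value.
    for op in program:
        x = OPS[op](x)
    return x

def superoptimize(target, probes, max_len=4):
    # Shortest programs first, so the first match is a smallest one.
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in probes):
                return program
    return None

# Find a shortest fragment computing f(x) = -x - 1, i.e. bitwise NOT
# on a two's-complement machine.
print(superoptimize(lambda x: -x - 1, probes=range(-8, 9)))
# → ('neg', 'dec')
```

The combinatorial blowup in `product(OPS, repeat=length)` is exactly what
makes this a natural background task: embarrassingly parallel, CPU bound, and
trivially partitionable by length or prefix.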
The other obvious candidates for background tasks, graphics and molecular
modeling, suffer from requiring much more memory and I/O. This is not
insurmountable, especially if the application pays attention to VM load, or if
the Unix priority schemes paid attention to it (I don't think they do). Ray
tracing may be too fine a level to think about; frames in a movie are more
what I had in mind.
It seems strange to me that Autodesk doesn't have more neat stuff built to
show off this beautiful graphics technology, but perhaps it's too early in
this technology to make a good SIGGRAPH computer short. Apollo did a very bad
flashy short using lots of machines several years ago, so it can be done.
* I have a copy of the Superoptimizer paper; I believe it was in the SIGARCH
proceedings of 1987.