NewsSpeak
- To: <us>
- Subject: NewsSpeak
- From: Michael McClary <acad!xanadu!michael>
- Date: Tue, 29 Aug 89 19:34:44 PDT
(I sent this to joel and marcs earlier, but wider distribution seems
appropriate.)
The way I've been explaining performance issues: (Main point is
the text between the dashes. But once I got on a roll...)
---------
"Any filing system slows down some as you add more information.
The big difference between them is how MUCH they slow down, the
shape of the slowdown curve.
"Some approaches start out blazingly fast, but bog down before
they have usable amounts of stuff stored. Others only slow down
a very little, and slow down less and less as each new thing is
added. <sketch a log curve with your hand, but don't SAY log...>
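(An aside for anyone who wants numbers behind the hand-waving:
here's a toy sketch in modern Python, mine, not Xanadu code,
comparing a linear scan against a binary search over a sorted
array. The scan's cost grows in step with the data; the search's
grows only with its logarithm, the curve sketched above.)

    import random

    def linear_search(items, key):
        # a scan pays one step per item it passes: O(n)
        steps = 0
        for item in items:
            steps += 1
            if item == key:
                break
        return steps

    def binary_search(items, key):
        # items must be sorted; each step halves the range: O(log n)
        steps, lo, hi = 0, 0, len(items)
        while lo < hi:
            steps += 1
            mid = (lo + hi) // 2
            if items[mid] < key:
                lo = mid + 1
            elif items[mid] > key:
                hi = mid
            else:
                break
        return steps

    for n in (1_000, 1_000_000):
        data = list(range(n))
        key = random.randrange(n)
        print(n, linear_search(data, key), binary_search(data, key))
    # 1000x more data costs the scan about 1000x more steps, but the
    # search only ~10 more -- that's the shape that matters.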
"The important thing about Xanadu's performance is that we've
kept that slowdown small enough to handle enormous data bases.
The target we met is this:
- If everything ever published was stored in a Xanadu system,
- and all the new stuff was added as it was published,
- and the rate of publication keeps rising on its current curve,
- and computers don't improve faster than THEIR current curve,
- and the systems are kept upgraded,
- and you don't improve the Xanadu data engine at all,
- Then the total system would keep getting FASTER.
"The Xanadu data structures slow down SO little that the speedup of
the computers keeps ahead of the flood of publications.
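(And when someone asks how the arithmetic can work out that way,
here's a back-of-envelope sketch; the doubling rates are my
illustrative guesses, not our measured targets. A log-cost lookup
gains just one extra step each time the corpus doubles, while
machines that double in speed halve the cost of every step:)

    import math

    n, speed = 2**30, 1.0      # starting corpus size, machine speed
    for year in range(0, 21, 2):
        per_lookup = math.log2(n) / speed
        print(year, round(per_lookup, 2))
        n *= 2                 # the flood: corpus doubles each period
        speed *= 2             # the hardware curve: machines double too
    # prints 30.0, 15.5, 8.0, 4.12, ... -- the per-lookup cost FALLS
    # every period, so the total system keeps getting faster.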
-----
"This attention to the shape of the slowdown curve is especially
important for hypertext servers, because all the connections
BETWEEN things must be stored, too. THEY can't be allowed to
slow things down, either.
"It won't surprise us if some competitors come out with systems
that do some of the things ours does, and do them faster when the
data sets are small. The real test for speed comes when the data
set is big enough to handle an engineering project, or a set of
journals, or a growing company.
"And we believe there are more important tests. Will it lose
your company's records in a crash some day? Can you outgrow it?
Will it handle all the kinds of data you need to interconnect?
Will it straightjacket you into preconcieved formats? Will
future improvements be compatible with data entered on the early
systems? Will users be able to manage this data, without becoming
lost in the interconnections?
"Those are the issues where we've thrown our resources. We
believe we have the correct solutions to them. We know we can
get an additional overall speedup, too, but we're not going to
hold up the release of our first product until we've squeezed
every bit of speed out of it, just to look good at demos.
We know it's fast enough, and expect it to win on real data.
"We want our customers to be able to use the power of hypermedia
as soon as it's reliable, and good enough to pay back their effort
learning to use it. We want them secure that there are no nasty
surprises after they've committed.
(etc...)
michael
------------------------------
By the way,
> "Any filing system slows down some as you add more information.
isn't strictly true. There's "perfect hash", which doesn't slow down
at all. Trouble is, you have to choose the hash function with total
knowledge of what you're storing. That means enormous amounts of
computation when you add one unexpected thing to the data base and
must choose a new hash function and reorganize the entire data base
to match.
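A toy sketch of what I mean, in Python (my own illustration, not
anything from our code): with total knowledge of the keys, hunt
for a seed that hashes them with no collisions. Lookups then take
exactly one probe forever. But add one key the seed wasn't chosen
for, and you must hunt for a new seed and rehash everything:

    def find_perfect_seed(keys, table_size):
        # brute-force hunt for a seed mapping every key to its own slot
        for seed in range(1_000_000):
            slots = {hash((seed, k)) % table_size for k in keys}
            if len(slots) == len(keys):   # no collisions: perfect
                return seed
        raise RuntimeError("no collision-free seed; grow the table")

    keys = ["joel", "marcs", "michael"]
    seed = find_perfect_seed(keys, len(keys))  # needs ALL keys up front

    table = [None] * len(keys)
    for k in keys:
        table[hash((seed, k)) % len(keys)] = k

    # one unexpected addition, and the old seed almost surely
    # collides -- so find a new one and rebuild the ENTIRE table:
    keys.append("xanadu")
    seed = find_perfect_seed(keys, len(keys))
    table = [None] * len(keys)
    for k in keys:
        table[hash((seed, k)) % len(keys)] = k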
michael