NewsSpeak
- To: <acad!xanadu!michael>
- Subject: NewsSpeak
- From: <acad!HQ!Jacque_Scace>
- Date: Wed, 30 Aug 89 15:25 PDT
- Cc: <xanadu!xanadu>
- In-reply-to: <acad>
Finally, some Xanadu E-Mail that I understand! I agree with Phil; this
kind of writing is a joy to read.
Jacque
---------------------- Replied Message Body -----------------------
Date: 8-30-89 11:26am
From: {acad!xanadu!michael}:TechNet:acad
To: jacques:hq:Acad
Subj: NewsSpeak
-----------------------
Original header follows:
-----------------------
Date: Wed, 30 Aug 89 17:20:09 PDT
From: Michael McClary <michael>
To: <nick@grand-central>, <us>
Message-Id: <8908310020.AA08336@xxxxxxxxxx>
Subject: Re: NewsSpeak and "We expect it to win on real data"
> The notion of "We expect it to win on real data" was a very tricky one
> at my last company. [description of system that did well on "real"
> but failed bullshit benchmarks]
Agreed. Remember that "NewsSpeak" is a prototype answer to media questions
about performance. The intent of that phrasing is not just to tell them
what we designed our performance for, but to make readers suspicious of
systems that really fly on random benchmarks. This is called "converting
liabilities to assets". B-)
Selling this idea may also delay competitors trying to bring simple-minded
systems to market.
Unfortunately, redesigning the data structures to meet random benchmarks
isn't (currently) a viable option. Fortunately, there don't appear to be
any other hypertext backend servers out there to benchmark against, or
bullshit benchmarks to meet. Yet.
(Am I wrong? Please let me know if so.)
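To make that concrete, here's a toy sketch in Python (everything in it is
invented for illustration; it is not our actual backend design). A store
that exploits the locality of real traffic through a small cache wins on a
trace-like workload and gets almost nothing from a uniform-random one:

import random

class CachedDocStore:
    # Toy backend: a slow store fronted by a small LRU cache. It stands
    # in for any design that exploits locality in real traffic.
    def __init__(self, ndocs, cache_size):
        self.docs = {i: "doc %d" % i for i in range(ndocs)}
        self.cache_size = cache_size
        self.cache = []          # most recently used at the end
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.remove(key)      # refresh recency
        else:
            self.misses += 1
            if len(self.cache) >= self.cache_size:
                self.cache.pop(0)       # evict least recently used
        self.cache.append(key)
        return self.docs[key]

NDOCS, REQUESTS = 10_000, 50_000

# "Real" workload: a few hot documents dominate (Zipf-like weights).
store = CachedDocStore(NDOCS, cache_size=100)
weights = [1.0 / rank for rank in range(1, NDOCS + 1)]
for key in random.choices(range(NDOCS), weights=weights, k=REQUESTS):
    store.read(key)
real_hit_rate = store.hits / (store.hits + store.misses)

# "Random benchmark": every document equally likely, so the cache (and
# any other locality trick) is nearly useless.
store = CachedDocStore(NDOCS, cache_size=100)
for key in (random.randrange(NDOCS) for _ in range(REQUESTS)):
    store.read(key)
random_hit_rate = store.hits / (store.hits + store.misses)

print("hit rate on trace-like load:   %.0f%%" % (100 * real_hit_rate))
print("hit rate on uniform benchmark: %.0f%%" % (100 * random_hit_rate))

On a run like this the trace-like load hits the cache well over half the
time, while the uniform load hits it around 1% of the time: "winning on
real data" while looking mediocre on the random benchmark.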
> 1. If you think your product provides the "real solution" to the "real
> problem", you need to publish and promote a set of benchmarks so that
> customers will use them to compare you with your competitors. Otherwise,
> someone who wants to get a quickie article published will invent a silly
> benchmark and run it on a few systems, and before you know it, there is
> now a de facto standard benchmark.
Indeed, some benchmark-promulgation strategies were discussed at the recent
CSDS meeting.
> 2. As disgusting as it may be, it can be worth the engineering effort to make
> sure the product performs well on well-known stupid benchmarks. In other
> words, it may be cheaper to make stupid things run well than to have your
> sales and marketing people spend time convincing people that stupid things
> are stupid. (You also have more credibility convincing someone that a
> benchmark is stupid when you don't lose big on that benchmark.)
One of the best ways to head off stupid benchmarks is to promulgate a good
one. This leaves later promulgators of stupid ones with the problem of
justifying their failure to test what the good one tests. (This doesn't
stop them from promulgating one that beats the hell out of some one thing
they do well and you don't, especially if yours didn't test it.)
If "good" happens to be something they have to re-invent or surpass ALL
your best data-structure magic to meet, so much the better. Of COURSE
your benchmark tests what you do well. It's what you thought was
important, so it's what you optimized the application for. Right?
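For what it's worth, here's a sketch of what the workload half of such a
"good" benchmark could look like (the mix and parameters below are made up
for illustration, not a real spec): a reproducible trace where readers
mostly follow links from where they are, sometimes back up, and only
occasionally jump at random. A backend gets scored on this trace AND on a
uniform-random trace, with both numbers reported side by side, so anyone
who only publishes the random number has some explaining to do.

import random

def benchmark_workload(ndocs=10_000, nops=50_000, seed=1989,
                       follow_link=0.85, back=0.10):
    # Reproducible trace generator (illustrative only). Readers mostly
    # follow links out of the current document, sometimes back up, and
    # occasionally jump somewhere random (a fresh session).
    rng = random.Random(seed)       # fixed seed: everyone runs the same trace
    links = {d: [rng.randrange(ndocs) for _ in range(4)]
             for d in range(ndocs)}  # four outgoing links per document
    here, trail = 0, []
    for _ in range(nops):
        yield here
        r = rng.random()
        if r < follow_link:                      # follow an outgoing link
            trail.append(here)
            here = rng.choice(links[here])
        elif r < follow_link + back and trail:   # back up one step
            here = trail.pop()
        else:                                    # random jump, new session
            here, trail = rng.randrange(ndocs), []

# A published benchmark would fix these parameters, run any backend over
# this trace and over a uniform-random one, and report both results.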
michael