
Re: Why have a multipart document address?



There is a lot to reply to here, and I see the naming problem as a very
big one.  In the p2p systems I've used it is the main unsolved problem,
though most people don't see it that way.  Green had names on the
non-human-readable end of the spectrum, and I mostly see that as the way
to go.  In p2p systems distributing illicit content, the problem of
identifying the content and providing searchable characterizations is
thwarted by legal action, which gives us a good model for suppressed
political speech.  Any good solution for one would seem to have uses for
the other, so that gives us a test bed if we find a candidate.

On Sat, 2005-02-12 at 03:15, Jeff Rush wrote:
> On Fri, 2005-02-11 at 17:35 -0800, roger gregory wrote:

> > which have to know about it.  We envisioned something like an enfilade
> > that has a reflection of what is in various nodes but not all the
> > information.  The point here is to have a distributed tree rather than a
> > point-to-point crosspoint switch, so that we get n log n rather than
> > n-squared traffic.
> 
> This is a lot like several of the current peer-to-peer systems.  I've
> been studying them for ideas on how to set up a distributed, anonymous,
> dynamically-connected set of nodes.  The original Xanadu would have all
> nodes be part of the Xanadu Operating Company, but the days of closed
> data islands like The Source, the original Compuserve and even Prodigy
> are long gone.
> 
> But I think tumbler maps have a lot of overlooked applicability to peer-
> to-peer systems today.  And I use the best ideas I can find. ;-)
> 
Remembering that all problems in computer science can be solved with
another level of indirection.
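To put rough numbers on the tree-versus-crosspoint point quoted above, here is a toy sketch; the fanout and node counts are made up for illustration, and this counts links and hops only, not any real protocol:

```python
def mesh_links(n):
    # Full point-to-point crosspoint: every node pairs with every other,
    # so the link count grows as n*(n-1)/2, i.e. n-squared.
    return n * (n - 1) // 2

def tree_hops(n, fanout=2):
    # Distributed tree: a lookup walks at most the tree height, which
    # grows like log(n); total traffic for n lookups is about n*log(n).
    hops = 0
    while fanout ** hops < n:
        hops += 1
    return max(1, hops)

for n in (16, 256, 4096):
    print(n, mesh_links(n), n * tree_hops(n))
```

At 4096 nodes the crosspoint needs over eight million links while the tree does n lookups in about fifty thousand hops, which is the whole argument in two numbers.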

I looked into what we would really have to do to build a system
analogous to DNS that located things at the document level.  The scheme
I thought up looked absurdly complicated, with a LOT of overhead.  It
required distributed tables of where to look for documents with partial
addresses, some of which required parsing down alternate paths until
resolution.  It looked absurdly hard, so I looked at how DNS actually
works.  There's a lot in the actual implementation of DNS, especially
for IPv6, which made my design seem only a bit worse.  That is to say,
too hard to actually build before it's really needed, but not impossible
in practice, any more than DNS is now.  We may have room to kind of
slide it in beside DNS if we really need to; I'm sure there are tricks,
but I haven't looked into them yet.


Well then, here's one silly idea informed by the stuff markm posted.
Note this isn't quite what I'd use in Green, but: take something that
looks exactly like a URL, but interpret it differently.  The part,
zzfoo.com, that looks like a domain name is a domain name, but it
doesn't get handed to DNS; instead it gets handed to some kind of
distributed Udanax_ReLocation_Engine_Reifier that finds an appropriate
location for the contents named by that URL-like thing, possibly taking
the whole thing into account rather than just the DNS name.  In the
degenerate case this is the current web.
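A minimal sketch of that reinterpretation, with a plain dict standing in for the distributed relocation engine; all the hostnames here are invented, and the point is only the dispatch shape:

```python
from urllib.parse import urlparse

# Hypothetical stand-in for the distributed
# Udanax_ReLocation_Engine_Reifier described above.
RELOCATION_TABLE = {
    "zzfoo.com": ["node7.example.net", "node12.example.net"],
}

def locate(urllike):
    """Parse something shaped like a URL, but hand the host part to the
    relocation table instead of DNS; fall back to the ordinary web."""
    parts = urlparse(urllike)
    hosts = RELOCATION_TABLE.get(parts.hostname)
    if hosts is None:
        return [parts.hostname]  # degenerate case: behave like today's web
    # The whole URL (path included) could steer replica choice here,
    # which is the "taking the whole thing into account" part.
    return hosts

print(locate("http://zzfoo.com/docs/green/spec"))
```

The nice property is backward compatibility: anything not in the table resolves exactly as the web does now.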

In any case the problem is how to identify the stuff, and note that
there is more stuff than we really want to name.  In p2p systems finding
the stuff is a real problem: it's either "guess my keyword" or
impossible.  Not a very good situation.
> 
> > How close are you to implementing this? and what base is it going on? 
> > It's good to have thought this out a few steps in advance of what you
> > are implementing, so that you know what direction you're headed.  The
> > single-layer kind of system you describe seems to reflect the
> > interconnectivity we currently have in the internet, and may be more
> > appropriate than the quasi-hierarchical model we had.  Then again, since
> > nodes vary in bandwidth a lot more than storage size, maybe that should
> > inform how this plays out.
> 
> I have a blob storage system that uses SHA-256 hashes for immutable,
> non-coordinated unique identifiers, and the HTTP and Twisted Perspective
> Broker protocol for access.  It runs as a standalone server.
> 
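Those SHA-256 identifiers are exactly the non-human-readable naming I meant above, so it's worth spelling out the property.  A toy sketch (the class and API here are mine for illustration, not Jeff's code):

```python
import hashlib

class BlobStore:
    """Toy content-addressed store: the SHA-256 of the bytes *is* the
    identifier, so any node can mint IDs with no coordination at all."""
    def __init__(self):
        self._blobs = {}

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data  # idempotent: same bytes, same key, anywhere
        return key

    def get(self, key):
        data = self._blobs[key]
        # Self-verifying: a fetched blob can always be checked against its name.
        assert hashlib.sha256(data).hexdigest() == key
        return data

store = BlobStore()
k = store.put(b"hello xanadu")
print(k[:16], store.get(k) == b"hello xanadu")
```

Immutability falls out for free: change the bytes and you have, by definition, a different name.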
> On top of that, I have a Python object persistence system.  The Twisted
> Perspective Broker API gives me much of the capability of the E language
> re remote calls.  This of course runs as a client to the storage server
> but is itself a server to a browser.
> 
> On top of that I have fully-functional tumblers and type-1 enfilades,
> for hanging document blobs along the tumbler line and allocating unique
> tumbler IDs for nodes, users, etc.
> 
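Since tumblers keep coming up in this thread, here is a loose illustrative sketch of the addressing idea only, ordering and nesting along the tumbler line; this is nothing like the real enfilade arithmetic, and the digit sequences are made up:

```python
class Tumbler:
    """Toy tumbler: a dotted sequence of non-negative integers that
    both orders addresses and lets a short address span longer ones."""
    def __init__(self, *digits):
        self.digits = tuple(digits)

    def __lt__(self, other):
        # Lexicographic order puts everything on one "tumbler line".
        return self.digits < other.digits

    def contains(self, other):
        # A tumbler spans every address that extends it, so nodes,
        # users, and documents can nest under one another.
        return other.digits[:len(self.digits)] == self.digits

node = Tumbler(1, 1, 0, 2)         # e.g. a node address
doc  = Tumbler(1, 1, 0, 2, 0, 3)   # a document under that node
print(node.contains(doc), node < doc)
```

The prefix-containment trick is what lets one identifier scheme cover nodes, users, and documents without separate namespaces.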
> I'm still working on a satisfactory type-2 enfilade for handling links.
> And I've got several prototypes for how to manipulate links in the
> client, striving to find an easy, pythonic representation for
> programming.
> 
> I've got nothing on user access control, in that I'm struggling with how
> to handle distributed, pseudonymous identities, and what kind of ACLs to
> use.  How to grant and revoke access, how to grant access to a subgroup
> of people, those are still dark areas.  If all documents were to be
> public, I wouldn't have this problem but I don't think that is practical
> today.
> 
Just start with some subset and expand it; if versioning is very cheap,
then you can make a version per subgroup, and so on.
> I've got nothing on a viable ecurrency and paying people for their
> information.  I feel something is needed but without a centralized,
> trusted Xanadu Operating Company, we need a strong distributed
> microcurrency system.  But far better people than I have worked on that
> and failed.
> 
I'd leave that till later, just having a way to keep track is a big win.
> I've got a Firefox browser extension, complete with Xanadu icons (bad of
> me re copyright, I know but they look cool).  I haven't hung any
> JavaScript code underneath it yet, but the Twisted Perspective Broker
> provides something called Live Link, that lets JavaScript within a
> browser seamlessly call to and from Python in the server.  I intend to
> use that to write the front-end, as Firefox is about as universal a
> front-end as we're going to get.
> 
> And I'm just starting to sketch out a backend <-> backend architecture,
> hence my questions.  I want to add other servers as soon as possible.
> When I created the Fidonet BBS system called Echomail, a newsgroup-like
> email replication system, years ago, getting others to try it was
> critical to rapid adoption and led to explosive growth.
> 
> I continue to advance toward a public system, occasionally dropping back
> to experiment with the Green C code to extract another nugget of
> knowledge.  I'm probably retracing many of the steps of the Xanadu team,
> including some of the wrong turns.  I welcome discussion with others and
> enlightenment from the Xanadu team but I'll keep plodding along
> regardless.  Nothing I do is secret in any way and my code is under the
> GPL.
> 
> I also formed the xanadu.meetup.com to try to drum up local interest
> here in Dallas, but no one shows up for meetings.  And I've presented on
> the architecture of Xanadu 2-3 times, but to very small crowds.  Few
> remember Xanadu anymore and fewer still see why anyone would want it...
> 
> -Jeff
So I apologise for the rambling and redundancy, but I thought it best to
get this out and retain some momentum.
-- 
Roger Gregory

roger@xxxxxxxxxxxxxxxxxxxxx
 
http://www.halfwaytoanywhere.com