possible garbage collector problem...
- To: <tribble>
- Subject: possible garbage collector problem...
- From: Mark S. Miller <mark>
- Date: Mon, 2 Oct 89 00:03:03 PDT
- Cc: <michael>, <eric>, <us>
- In-reply-to: <Eric>'s message of Sun, 1 Oct 89 22:56:40 PDT <8910020556.AA09769@xxxxxxxxxxxxxxxxxx>

   Date: Sun, 1 Oct 89 22:56:40 PDT
   From: tribble (Eric Dean Tribble)

   I certainly don't want to guarantee that different processes live in
   separate address spaces. At the very least, that would preclude
   interaction with Mach. It would also force us into heavy-weight
   processes (like fork).

   dean

I certainly didn't mean separate address spaces in the Unix sense at
all. I meant merely that each thread has an associated set of
objects, and that a pointer from an object associated with thread A to
an object associated with thread B be identified as a special
inter-thread pointer. In the underlying implementation, usually all
these threads and their associated objects would be residing in one
address space. However, this address space would be logically
partitioned into a garbage collectible heap for each thread. Note
that this would seem to fit the backend structure perfectly. There
would be one thread associated with each front-end session, and its
GCable heap would correspond to the old "session" heap. In addition,
there would be a thread for managing the in-core cached form of
persistent data. This would be the "Urdi write thread", i.e., the backend
thread that owns the Urdi write view. (Of course, any front-end
thread that desired a stale read-only view could become an Urdi read
view for that operation without creating another thread.)
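
To make the partitioning concrete, here is a rough sketch in modern
C++, with made-up names (ThreadHeap, currentHeap, Heaper): one GCable
heap per thread inside a single shared address space, and every object
tagged with the heap, and hence the thread, it was created in.

    #include <cstddef>
    #include <thread>

    class ThreadHeap {
      public:
        std::thread::id owner() const { return myOwner; }
        // A real heap would allocate out of its own GCable region; this
        // sketch just forwards to the global allocator.
        void* allocate(std::size_t bytes) { return ::operator new(bytes); }
      private:
        std::thread::id myOwner = std::this_thread::get_id();
    };

    // One heap per thread. In the backend there would be one of these per
    // front-end session, plus one for the Urdi write thread.
    thread_local ThreadHeap currentHeap;

    // Hypothetical common base for GCable objects: each object is tagged
    // with the heap (and so the thread) it was created in, which is what
    // lets an inter-thread pointer tell a local send from a remote one.
    class Heaper {
      public:
        ThreadHeap* heap() const { return myHeap; }
      private:
        ThreadHeap* myHeap = &currentHeap;
    };
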
This implies that separate front-end session threads don't synchronize
with each other at all regarding non-persistent data, and that they
synchronize on access to persistent data not via monitor locks, but
rather with KeyKOS style inter-domain messages. (Of course, at a higher
level the message may be a request for mutually exclusive access, i.e.,
a monitor lock. That may be an indication that the boundary is drawn at
the wrong abstraction, though.)

For those who don't know KeyKOS, the idea is equivalent to saying that
each thread has a message semaphore for receiving incoming messages.
The thread only processes one incoming message at a time. (Those more
familiar with Mach may substitute "Port" for "message semaphore".)
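
As a rough sketch in modern C++, with made-up names (MessageObject,
MessageQueue, threadLoop), the "message semaphore" amounts to a
blocking queue of message objects, with each thread serving exactly one
at a time (the doIt step is what the Key++ machinery below queues up):

    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>

    // One queued request; doIt() performs it in the receiving thread.
    class MessageObject {
      public:
        virtual ~MessageObject() {}
        virtual void doIt() = 0;
    };

    // The "message semaphore": a blocking queue of incoming messages.
    class MessageQueue {
      public:
        void put(std::unique_ptr<MessageObject> msg) {
            {
                std::lock_guard<std::mutex> lock(myMutex);
                myMessages.push(std::move(msg));
            }
            myAvailable.notify_one();
        }
        std::unique_ptr<MessageObject> take() {
            std::unique_lock<std::mutex> lock(myMutex);
            myAvailable.wait(lock, [this] { return !myMessages.empty(); });
            std::unique_ptr<MessageObject> msg = std::move(myMessages.front());
            myMessages.pop();
            return msg;
        }
      private:
        std::mutex myMutex;
        std::condition_variable myAvailable;
        std::queue<std::unique_ptr<MessageObject>> myMessages;
    };

    // Each thread is simply this loop: take one message, do it, repeat.
    void threadLoop(MessageQueue& incoming) {
        for (;;) {
            incoming.take()->doIt();
        }
    }
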
The Key++ idea is that a sender sends a message to an object in another
thread the same way it sends a message to a local object, but does so
by using operator->() on a special inter-thread pointer. This pointer
checks whether the pointed-to object is in the same thread as the
sender. If so, we just send in the normal C++ way. If not, then
operator->() instead returns a pointer to a Stubble-generated proxy
that packages the message up in a message object, which it queues on
the message semaphore of the thread of the target object. Each thread
is simply a loop that reads a message object off of its incoming
semaphore and sends it a doIt message.
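
A rough sketch of that operator->() trick, reusing the MessageObject
and MessageQueue classes from the sketch above. The Fooable interface,
its single foo() message, and the names FooMessage, FooableProxy, and
InterThreadFooablePtr are all made up for illustration, with
FooableProxy standing in for what Stubble would generate:

    #include <memory>
    #include <thread>

    // The interface shared by the real object and its proxy.
    class Fooable {
      public:
        virtual ~Fooable() {}
        virtual void foo(int arg) = 0;
    };

    // The message object Stubble would generate for the foo() request.
    class FooMessage : public MessageObject {
      public:
        FooMessage(Fooable* target, int arg) : myTarget(target), myArg(arg) {}
        void doIt() override { myTarget->foo(myArg); }  // runs in the target's thread
      private:
        Fooable* myTarget;
        int myArg;
    };

    // The proxy: same interface, but each member function just packages
    // the request and queues it on the target thread's message queue.
    class FooableProxy : public Fooable {
      public:
        FooableProxy(Fooable* target, MessageQueue* targetQueue)
            : myTarget(target), myQueue(targetQueue) {}
        void foo(int arg) override {
            myQueue->put(std::make_unique<FooMessage>(myTarget, arg));
        }
      private:
        Fooable* myTarget;
        MessageQueue* myQueue;
    };

    // The special inter-thread pointer. operator->() decides, per send,
    // whether to dispatch locally or through the proxy.
    class InterThreadFooablePtr {
      public:
        InterThreadFooablePtr(Fooable* target, std::thread::id owner,
                              MessageQueue* ownerQueue)
            : myTarget(target), myOwner(owner), myProxy(target, ownerQueue) {}
        Fooable* operator->() {
            if (std::this_thread::get_id() == myOwner) {
                return myTarget;   // same thread: a normal C++ send
            }
            return &myProxy;       // other thread: queue a message object
        }
      private:
        Fooable* myTarget;
        std::thread::id myOwner;
        FooableProxy myProxy;
    };

So, given an InterThreadFooablePtr itPtr, itPtr->foo(42) is either a
direct call or silently becomes a FooMessage on the target thread's
queue, performed later by that thread's loop.
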
Stubble would also generate a class of message object for each message
in a PROXY section, instead of its current practice of generating a
message handler. When these messages actually do cross address spaces
(as in traditional Stubble use), we simply pass the message objects
themselves by copy. The Stubble support to do this is semantically a
bit simpler than current Stubble, probably about the same
implementation complexity, and gives us the same tasking model both
within and between address spaces.
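
A rough sketch of what passing a message object by copy could look
like, with made-up names (RemoteRef, packFoo, unpackFoo) and reusing
FooMessage and Fooable from above: the sending side flattens which
object, which message, and what argument; the receiving side rebuilds
an equivalent message object, turning the id back into a real pointer
through something like the CommHandler's entry table.

    #include <cstdint>
    #include <cstring>
    #include <memory>
    #include <vector>

    // Stand-in for an entry-table index identifying the remote target.
    struct RemoteRef {
        std::uint64_t objectId;
    };

    // Sender side: flatten which object, which message, and its argument.
    std::vector<char> packFoo(RemoteRef target, int arg) {
        std::vector<char> bytes(sizeof target.objectId + sizeof arg);
        std::memcpy(bytes.data(), &target.objectId, sizeof target.objectId);
        std::memcpy(bytes.data() + sizeof target.objectId, &arg, sizeof arg);
        return bytes;
    }

    // Receiver side: rebuild the message object. lookUp would consult the
    // CommHandler's entry table to turn the id back into a Fooable*.
    std::unique_ptr<MessageObject> unpackFoo(const std::vector<char>& bytes,
                                             Fooable* (*lookUp)(std::uint64_t)) {
        std::uint64_t id;
        int arg;
        std::memcpy(&id, bytes.data(), sizeof id);
        std::memcpy(&arg, bytes.data() + sizeof id, sizeof arg);
        return std::make_unique<FooMessage>(lookUp(id), arg);
    }
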
Inter-thread garbage collection in the absence of inter-thread cycles
should simply result from the combination of garbage collection within
a thread, weak pointers (from the CommHandler's entry table to
proxified & proxy objects), and finalization (so the CommHandler's
table can find out when such an object has gone away, and inform the
other side). Inter-thread cycles are a hard problem that I don't
expect us to need to face for a very long time (hopefully not until
version 3.0).
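
A rough sketch of the acyclic case, with a made-up EntryTable class
standing in for the CommHandler's entry table (and reusing the Fooable
interface from above): the table holds the objects it has handed out
only weakly, so it never keeps them alive, and a finalization hook,
modeled here as an explicit call, is how it learns that an object is
gone and tells the other side to drop its proxy.

    #include <cstdint>
    #include <functional>
    #include <map>
    #include <memory>

    class EntryTable {
      public:
        // Export an object across the thread boundary: remember it only
        // weakly, under a fresh id, so the table never keeps it alive.
        std::uint64_t enter(const std::shared_ptr<Fooable>& obj) {
            std::uint64_t id = myNextId++;
            myEntries[id] = obj;
            return id;
        }
        // Finalization path: once the exported object has been collected,
        // drop the entry and tell the other side to discard its proxy.
        void finalize(std::uint64_t id,
                      const std::function<void(std::uint64_t)>& informOtherSide) {
            myEntries.erase(id);
            informOtherSide(id);
        }
        // Incoming-message path: an id whose object has already been
        // collected simply resolves to null.
        std::shared_ptr<Fooable> fetch(std::uint64_t id) const {
            auto it = myEntries.find(id);
            if (it == myEntries.end()) {
                return nullptr;
            }
            return it->second.lock();
        }
      private:
        std::uint64_t myNextId = 1;
        std::map<std::uint64_t, std::weak_ptr<Fooable>> myEntries;
    };
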
Note that, even if we do monitor-lock direct access to shared
persistent data in the backend, that data isn't GCable anyway. All
GCable data is generated in servicing a frontend, and unshared between
frontends. Frontend threads "communicate" with each other ONLY via
shared access to persistent backend data structures. Were this not
the case, it would be hard to see how Xanadu could be distributed
transparently (since two communicating / collaborating front-ends
should be insensitive to whether they are talking to one or multiple
backends). What this says to me is that our entire GC mechanism
doesn't have to worry about scheduling.

Was this all clear? I sure wish I could draw pictures in email
messages (soon, soon, ...).