Herein are my thoughts on part of the outcome:


First, in terms of overall "philosophy", Van mentioned his idea of
"goodness": each web page travels over a given internet link at most once.
(Needless to say, this is horribly violated in the current system; thus,
the meeting.)


Second, in terms of near-term things to be done (though no one was given the 
honor of being the 'stuckee' on any of these):

1.  Embed version numbers in URLs.

2.  A request for a specific version-numbered URL would get back exactly that 
version (if available).  Thus, the *name* is the *cache coherency protocol*.

3.  An attribute, such as "lifetime", is associated with a URL (web page).
If the lifetime has expired and a request is made for that *VERSION* of the
URL, the page is still returned (that is, EVEN THOUGH THE LIFETIME HAS
EXPIRED; the lifetime matters only for unversioned requests, per item 4).

4.  Requests can be made for "non-versioned" URLs.  If (and only if?) the 
cache receiving the request has the same URL (with some arbitrary version) 
with a non-expired lifetime, the cache returns that version to the requestor 
(it would return the "most recent" version if it had several).  Otherwise, the 
cache goes and looks "up the tree" (closer to the source of the URL).
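
To make rules 1-4 a bit more concrete, here is a minimal sketch, in
Python, of how a single cache might apply them.  Everything in it -- the
names, the idea of a ";version=3" URL syntax, the data structures -- is
invented for illustration; none of it was actually specified at the
meeting.

    import time

    class Cache:
        def __init__(self):
            # maps (url, version) -> (page, expiry); all names invented
            self.store = {}

        def put(self, url, version, page, lifetime):
            # rule 1: the version number is part of the name, so e.g.
            # "http://www.nytimes.com/front;version=3" names (url, 3)
            self.store[(url, version)] = (page, time.time() + lifetime)

        def lookup(self, url, version=None):
            if version is not None:
                # rules 2 and 3: a versioned request gets back exactly
                # that version if present, EVEN IF its lifetime expired
                hit = self.store.get((url, version))
                return hit[0] if hit else None
            # rule 4: an unversioned request is answered locally only by
            # a version whose lifetime has not yet expired
            now = time.time()
            live = [(v, page) for (u, v), (page, expiry)
                    in self.store.items() if u == url and expiry > now]
            if live:
                return max(live)[1]   # "most recent" = highest version
            return None               # absent or expired: go up the tree

The None returns are exactly the cases in which the cache would turn
around and forward the request "up the tree" toward the source.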


A longer-term goal is to deploy more aggressive, dynamic caching servers.
This was clearly a goal, but it is still a bit of a research area (one in
which Van Jacobson, Lixia Zhang, and Sally Floyd are planning to do a fair
amount of work).

The hand waving is to use IP multicast to deliver requests *up* the tree and 
web pages *down* the tree (in each case, a tree rooted at the originator of
the URL).  Based on the rules in the previous part, nodes in the tree (caches) 
would cache pages and respond to requests from below, etc.  The "Scalable 
Reliable Multicast" work in last year's SIGCOMM conference (authors: Floyd, 
Jacobson, McCanne, Liu, and Zhang) provides a model of (bits and pieces of) 
how this might work.
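
As a rough sketch of that hand waving (again, all invented; the SRM work
covers only bits and pieces of this), a node in the tree might behave
roughly as follows, with requests recursing *up* toward the originator and
pages being cached on their way back *down*.  In the real proposal the
hops would be IP multicast, not method calls:

    class TreeNode:
        # one cache in the tree rooted at the URL's originator
        def __init__(self, parent=None):
            self.parent = parent        # next node up toward the source
            self.children = []          # nodes below us
            self.cache = Cache()        # the Cache sketched earlier
            if parent is not None:
                parent.children.append(self)

        def request(self, url, version=None, lifetime=3600.0):
            page = self.cache.lookup(url, version)
            if page is not None:
                return page             # answered here; the request never
                                        # crosses the links above us
            if self.parent is None:
                return None             # the originator doesn't have it
            page = self.parent.request(url, version, lifetime)
            if page is not None and version is not None:
                # cache on the way down, so each page crosses each link
                # at most once -- Van's "goodness" property
                self.cache.put(url, version, page, lifetime)
            return page

(In a real design the reply, not the request, would carry the version and
lifetime; passing a lifetime parameter here is just a simplification.)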

In this approach, it would be possible to allow a "kill packet" to travel down 
the tree (from original source towards all caches) which makes a "best effort" 
to kill a previously published (versioned) URL.  (Thus, when the NY Times 
realizes it has messed up on page 1, it can go around to all the lawns and 
replace those papers that haven't already been brought in by the faithful 
dog.)  It was clear that this keeps a page from being returned to an
*unversioned* request; presumably it would *also* keep the page from being
returned to a *versioned* request (for that page's version).
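
In the same toy model, a best-effort kill is one more method on the
TreeNode above, flooding down the tree from the original source:

        def kill(self, url, version):
            # best effort only: caches we can't reach (papers the dog has
            # already carried in) simply keep their stale copy
            self.cache.store.pop((url, version), None)
            for child in self.children:
                child.kill(url, version)

Since the versioned and unversioned lookup paths both read from the same
store, dropping the entry keeps the page from being returned in either
case, which matches the presumption above.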


Greg Minshall