bluesky - Re: the three-services model

  • From: Wei Dai <weidai AT eskimo.com>
  • To: Global-Scale Distributed Storage Systems <bluesky AT franklin.oit.unc.edu>
  • Subject: Re: the three-services model
  • Date: Mon, 19 Feb 2001 07:08:47 -0800


On Mon, Feb 19, 2001 at 12:28:56AM -0800, hal AT finney.org wrote:
> I think it is because MojoNation splits the files into 8 pieces using a
> Shamir style sharing algorithm, and stores all 8 of the content hashes.
> Then some number < 8 of them is needed to reconstruct the data.

It seems like a more efficient approach would be to identify each share by
a hash of the hashes of the shares, and a share number. This way you only
need one hash to locate all of the shares.
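
To make that concrete, here is a rough sketch (in Python; the hash choice
and function names are just mine for illustration, not anything MojoNation
actually does):

    import hashlib

    def share_ids(shares):
        # Hash each share, then hash the concatenation of those hashes
        # to get a single root identifier for the whole file.
        share_hashes = [hashlib.sha1(s).digest() for s in shares]
        root = hashlib.sha1(b"".join(share_hashes)).hexdigest()
        # Each share is then located by (root, share number).
        return [(root, i) for i in range(len(shares))]

A retriever holding only the root hash can ask for (root, 0) through
(root, 7) and stop as soon as it has enough shares to reconstruct the
file.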

> OceanStore keeps careful track of where data is located as it flows
> through the network. This allows its data lookup algorithm to be highly
> reliable and fault tolerant. My concern is that it may turn out to be
> costly to keep location information up to date, and since they don't
> have any code on their web page, it's hard to judge how the efficiency
> tradeoffs will turn out.

OceanStore actually fits the three-services model pretty well. For names
it uses SDSI-style locally linked name spaces. For data location it uses a
combination of a quick scheme for popular data, and a slower but more
reliable scheme for all data. It's as if Freenet nodes, in addition to
doing everything else, also reported what they are storing to a
distributed location database. I think the particular location database
design used by OceanStore is vulnerable to active attacks (aimed at
censoring a document by making it unfindable), but the database itself
doesn't seem very costly to keep up to date. I'll try to elaborate on the
attack later.
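
Roughly, a lookup against those two tiers might go like this (just a
sketch; the function and parameter names are mine, not OceanStore's
interfaces):

    def locate(doc_id, fast_index, reliable_index):
        # Try the quick, probabilistic scheme first; it only knows
        # about popular data, so it may come up empty.
        hosts = fast_index.lookup(doc_id)
        if hosts:
            return hosts
        # Fall back to the slower but reliable scheme, which is
        # supposed to cover everything that has been published.
        return reliable_index.lookup(doc_id)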

> I'm not sure I follow how this separation achieves these goals. How would
> caching work in this system? Is it a feature of level 2 or level 3?
> I thought in your model I would do a lookup in level 2 with the CHK
> to get a host, and from what you say here I gather that this would be
> the host chosen by the author to hold the data inserted. Then I go to
> level 3 and contact that host and request the data. There don't seem to
> be opportunities for data caching in level 2 since no data flows there,
> and not in level 3 since you talk directly to the machine where the data
> was originally stored.

What I envision is that storage servers will get usage data from location
servers to see which documents are popular, and retrieve and store them
deliberately. The user agent can also cache copies of data the user
requests and participate in the storage/transport service. (BTW, I prefer
to use the names of the services instead of the numbers in order to avoid
confusion later when the numbers might be used for something else.)
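
As a rough sketch of how a storage server might act on that usage data
(all of the interfaces here are hypothetical):

    def refresh_cache(location_server, storage, top_n=100):
        # Ask the location service which documents are being
        # requested most often.
        popular = location_server.most_requested(top_n)
        for doc_id in popular:
            if not storage.has(doc_id):
                # Fetch a copy from some current holder, store it,
                # then advertise the new copy back to the location
                # service so future lookups can find it here.
                data = storage.fetch_from_peer(doc_id)
                storage.put(doc_id, data)
                location_server.report(doc_id, storage.address)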



