
bluesky - Re: market-based data location (was Scalability)

  • From: Wei Dai <weidai AT eskimo.com>
  • To: Global-Scale Distributed Storage Systems <bluesky AT franklin.oit.unc.edu>
  • Subject: Re: market-based data location (was Scalability)
  • Date: Tue, 27 Feb 2001 02:57:37 -0800


On Tue, Feb 27, 2001 at 12:12:55AM -0800, hal AT finney.org wrote:
> To clarify, the data location service described here is the one which,
> given the GUID (content hash) of a piece of data, returns the address
> of a computer somewhere in the world which has that data?

Yes.

> The question is, how would users find the right data location servers
> to query? How much have we reduced the problem in going from locating
> the node with the data, to locating the node which indexes the data?
>
> I can see that it is a step forward, because hopefully there would be
> fewer index nodes than data nodes. But even so, there could be far too
> many index nodes for end users to have lists of all of them. Do we need
> a higher level meta-index to describe the index nodes?

In my scheme, location servers track each other as well as data objects.
So if you start with one location server (which you obtain out of band)
you can eventually find the location server that covers the object
you're looking for.
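
Concretely, here is a minimal sketch of that iterative lookup. The
query(server, guid) interface is hypothetical, not part of any existing
implementation: the client asks a server about a GUID and gets back
either the address of a node holding the data, or referrals to other
location servers the queried server knows about.

    def locate(guid, bootstrap_server, query):
        # Walk the web of location servers until one covers the GUID.
        to_visit = [bootstrap_server]   # one server obtained out of band
        seen = set()
        while to_visit:
            server = to_visit.pop(0)
            if server in seen:
                continue
            seen.add(server)
            kind, result = query(server, guid)
            if kind == "data":
                return result           # address of a node holding the object
            to_visit.extend(result)     # referrals to other location servers
        return None                     # no covering server was reachable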

> (And it might not really move us forward that much. MojoNation uses
> a similar concept for its "trackers", but they expect almost everyone
> to be running trackers, I think. MN could have as many index nodes as
> file servers.)

Is there a paper that describes MojoNation in more detail than the
technical overview? As far as I can tell from the technical overview[1],
in MojoNation there is a central meta-tracker, which knows the address of
every server in the network and what services they provide. And then there
are content trackers and publication trackers, which are distributed. My
understanding is that content trackers provide keyword or metadata
searching, and publication trackers are used by block servers to find and
move data amongst themselves. So in order to find a data block from its
content-hash ID, you first query the meta-tracker to obtain a block server
whose block ID range covers your target. Then you query the block server,
and it either gives you the block directly or finds it through publication
trackers. Here the central meta-tracker is an obvious vulnerability and
bottleneck.
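
The lookup path, as I understand it, would look roughly like this (the
method names below are hypothetical, not MojoNation's actual API):

    def fetch_block(meta_tracker, block_id):
        # 1. The central meta-tracker knows every server, so ask it for a
        #    block server whose block ID range covers block_id.
        block_server = meta_tracker.lookup_block_server(block_id)
        # 2. That block server either returns the block directly or
        #    locates it through the publication trackers.
        return block_server.request_block(block_id)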

> The idea of nodes specializing on the basis of content or popularity
> seems to make things even more complicated. Now it is no longer a purely
> mechanical process to find the right index node, one which depends solely
> on the GUID. We would need extra hints about the type of data being
> fetched to know where to begin searching among the many flavors and
> families of index nodes which exist. It's hard to see such mechanisms
> contributing to what one would hope would be a relatively mechanical
> process of fetching data.

Hints are helpful but not necessary. A typical strategy for the user agent
would be to query the local-area location server first, then a server
that covers a larger network area, then a global server that specializes
in the target GUID range, and finally an overseas server that specializes
in censored data.
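
As a sketch, that escalation is just an ordered list of servers to try,
assuming each server exposes a hypothetical lookup(guid) call that
returns a node address or None:

    def find(guid, servers):
        # servers is ordered cheapest-first, e.g. [local_server,
        # regional_server, guid_range_specialist, overseas_server].
        for server in servers:
            address = server.lookup(guid)
            if address is not None:
                return address          # a node holding the object
        return None                     # give up after the last fallback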

> I need to study the OceanStore system more closely; I'll try to post
> something on it tomorrow. But I was struck by the statement in the
> introduction to one of their papers that their goal is to support a
> network with 10^10 users (10 billion users, i.e. the total population
> of Earth), each storing 10000 files, for a total of 100 trillion files.
> That's a lot of files! Scaling to that level will be a real challenge.

Only 10000 files per user? On my hard drive I have directories that
contain more than 10000 files each. To be completely general-purpose, I
think you need to aim for at least 10^6 files per user.
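
For a sense of scale (my arithmetic, not a figure from the OceanStore
paper): 10^10 users x 10^4 files/user gives the 10^14 (100 trillion)
files quoted above, while 10^10 users x 10^6 files/user gives 10^16
files, a hundred times more.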

[1] http://www.mojonation.net/docs/technical_overview.shtml



