
bluesky - Re: market-based data location (was Scalability)

bluesky AT lists.ibiblio.org

Subject: Global-Scale Distributed Storage Systems

  • From: hal AT finney.org
  • To: bluesky AT franklin.oit.unc.edu
  • Subject: Re: market-based data location (was Scalability)
  • Date: Tue, 27 Feb 2001 00:12:55 -0800


I don't have time to respond now to the many interesting points Wei
raises in the message quoted below, so I'll just ask some questions:

> The idea I'm playing with (and have been discussing with Ted Anderson
> before creating this list) is for each data location server to self
> select, based on market conditions, some criteria for what data to track,
> for example any data located in a radius of x from itself that has a
> (content-hash based) GUID inside a radius of y from its own GUID. It
> would then accept paid or unpaid advertisement from data storage servers,
> and answer paid or unpaid queries for data location. It can also query
> other data location servers to fill its own database. Besides tracking
> data, it would also track other location servers whose ranges overlap with
> its own, so that if it can't answer a query directly it will return a list
> of other location servers (along with their tracking and pricing policies)
> who might be able to answer it, or at least get you closer to such a server,
> and attest to their reliability and comprehensiveness.
>
> With this system you would see location servers that are specialized for
> different market niches. Some would be neighborhood trackers, with a small
> network radius and a large GUID radius. Some would be global trackers, with
> a large network radius and a small GUID radius. Some would be highly
> priced and specialize in censored data. Some would acquire a reputation
> for reliability and make money from long-term paid advertisements. Some
> would have a low query price and make money from queries for popular data.
> Some would only track other location servers and essentially provide a
> reputation service. It would be really cool to watch this market
> self-organize.

To clarify, the data location service described here is the one which,
given the GUID (content hash) of a piece of data, returns the address
of a computer somewhere in the world which has that data?
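
If I have that right, here is a toy sketch of how I picture one of these
servers behaving. Every name, metric and data structure below is my own
guess at one possible concretization, not something Wei specified:

# Toy sketch of a self-selecting location server, as I understand the
# idea.  Purely illustrative; not part of the actual proposal.

class LocationServer:
    def __init__(self, own_guid, network_radius, guid_radius):
        self.own_guid = own_guid              # content-hash style ID, as an integer
        self.network_radius = network_radius  # the "radius of x" in network terms
        self.guid_radius = guid_radius        # the "radius of y" in GUID space
        self.index = {}   # guid -> set of storage-server addresses
        self.peers = []   # overlapping servers: dicts with guid, guid_radius, address

    def guid_distance(self, guid):
        # One arbitrary choice of metric; XOR distance would do as well.
        return abs(guid - self.own_guid)

    def tracks(self, guid, network_distance):
        # Self-selected policy: only data near us in the network *and*
        # with a GUID near our own.
        return (network_distance <= self.network_radius and
                self.guid_distance(guid) <= self.guid_radius)

    def advertise(self, guid, address, network_distance):
        # A (paid or unpaid) advertisement from a storage server.
        if self.tracks(guid, network_distance):
            self.index.setdefault(guid, set()).add(address)
            return True
        return False

    def query(self, guid):
        # Answer directly if we can; otherwise refer the client to
        # overlapping servers whose advertised ranges might cover it.
        if guid in self.index:
            return ("found", self.index[guid])
        referrals = [p for p in self.peers
                     if abs(guid - p["guid"]) <= p["guid_radius"]]
        return ("referral", referrals)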

The question is, how would users find the right data location servers
to query? How much have we reduced the problem in going from locating
the node with the data, to locating the node which indexes the data?

I can see that it is a step forward, because hopefully there would be
fewer index nodes than data nodes. But even so, there could be far too
many index nodes for end users to have lists of all of them. Do we need
a higher level meta-index to describe the index nodes?

(And it might not really move us forward that much. MojoNation uses
a similar concept for its "trackers", but they expect almost everyone
to be running trackers, I think. MN could have as many index nodes as
file servers.)

The idea of nodes specializing on the basis of content or popularity
seems to make things even more complicated. Finding the right index node is
no longer a purely mechanical process that depends solely on the GUID. We
would need extra hints about the type of data being
fetched to know where to begin searching among the many flavors and
families of index nodes which exist. It's hard to see such mechanisms
contributing to what one would hope would be a relatively mechanical
process of fetching data.
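
To make that concern concrete, here is the sort of extra baggage I imagine
a client would have to carry around. Again, this is purely my own strawman;
none of these fields or policies come from Wei's description:

# Strawman client-side server selection.  With a purely GUID-driven
# scheme the first function would be enough; once servers specialize
# by content and popularity, something like the second seems needed.

def pick_server_mechanical(guid, servers):
    # Purely mechanical: choose by GUID distance alone.
    return min(servers, key=lambda s: abs(guid - s["guid"]))

def pick_server_with_hints(guid, servers, hints):
    # The client now needs to know something about the data itself
    # (likely censored?  popular?) and about each server's niche and
    # prices before it can even start searching.
    candidates = [s for s in servers
                  if abs(guid - s["guid"]) <= s["guid_radius"]]
    if hints.get("censored"):
        candidates = [s for s in candidates if s["handles_censored"]]
    if hints.get("popular"):
        candidates.sort(key=lambda s: s["query_price"])
    return candidates[0] if candidates else None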

I need to study the OceanStore system more closely; I'll try to post
something on it tomorrow. But I was struck by the statement in the
introduction to one of their papers that their goal is to support a
network with 10^10 users (10 billion users, i.e. the total population
of Earth), each storing 10000 files, for a total of 100 trillion files.
That's a lot of files! Scaling to that level will be a real challenge.
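
Just to put the arithmetic in one place (back-of-the-envelope only, and the
per-entry sizes are my guesses, not anything from the OceanStore papers):

# Back-of-the-envelope scale check for the OceanStore target.
users = 10**10              # the paper's stated goal
files_per_user = 10**4
total_files = users * files_per_user
print(total_files)          # 10**14, i.e. 100 trillion files

# If each index entry were just a 20-byte content hash plus a 16-byte
# locator (my guesses), a single flat index would already need roughly:
bytes_per_entry = 20 + 16
print(total_files * bytes_per_entry / 1e15, "petabytes")   # ~3.6 PB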

Hal



