  • From: Adam Back <adam AT cypherspace.org>
  • To: Global-Scale Distributed Storage Systems <bluesky AT franklin.oit.unc.edu>
  • Subject: Re: Scalability
  • Date: Tue, 27 Feb 2001 01:00:27 -0400


I'd guess there would be two systems you'd use:

- a system to find the URLs you want
- a system to locate a copy of and download the URL content
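
To make the split concrete, here is a toy sketch in Python;
the in-memory index, replica map and per-node stores are
made-up stand-ins for illustration, not any particular
system:

    # system 1: query -> candidate URLs (a search engine / index)
    index = {"bluesky archive": ["urn:doc:1", "urn:doc:2"]}
    # system 2: URL -> nodes believed to hold a copy
    replicas = {"urn:doc:1": ["nodeA", "nodeC"], "urn:doc:2": ["nodeB"]}
    # what each node actually holds and will serve
    stores = {"nodeA": {"urn:doc:1": b"first document"},
              "nodeB": {"urn:doc:2": b"second document"},
              "nodeC": {"urn:doc:1": b"first document"}}

    def retrieve(query):
        for url in index.get(query, []):        # find the URLs you want
            for node in replicas.get(url, []):  # locate a copy...
                data = stores[node].get(url)    # ...and download it
                if data is not None:
                    yield url, data
                    break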

Steve Schear and I were discussing the first one. Another
approach is just to let a search engine do it for you: either
by exposing the URLs to the real web through gateways which
make the content appear (perhaps to search engines only) as
static pages, or by encouraging people to set up search
engines on the distributed URL space. People find things
through other people's links to them, which is ultimately the
same way the spiders find them.
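
A minimal sketch of the gateway variant, using Python's
standard http.server; the paths and the in-memory STORE
standing in for the distributed URL space are invented for
illustration:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # stand-in for content fetched out of the distributed URL space
    STORE = {"/urn/doc1": b"<html><body>a mirrored document</body></html>"}

    class GatewayHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = STORE.get(self.path)
            if body is None:
                self.send_error(404)
                return
            # serve it as ordinary static content, so spiders index it
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), GatewayHandler).serve_forever()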

Locating a copy of a target URL to download is a different
problem. Graydon's log map seems like a reasonable approach.

I guess it depends partly on how much data one expects to be
stored: what fraction of available space the content
represents. (I'm thinking it could be a very small fraction
of available space (meaning available and currently online),
as "disk is cheap".) A low fraction helps, as you can
"overcache" or mirror. Say you had a mirroring algorithm with
a random cache replacement policy, and some distributed way
for new data to propagate through the system. (You might want
to weight caches by popularity, which is what my earlier
discussion of hashcash was about.)
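
Roughly what I mean, as a Python sketch; the hit counter
standing in for hashcash-weighted popularity and the fixed
capacity are assumptions, not a worked-out design:

    import random

    class MirrorCache:
        """Fixed-size cache with random, popularity-weighted replacement."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.data = {}   # url -> content
            self.hits = {}   # url -> request count (popularity stand-in)

        def get(self, url):
            if url in self.data:
                self.hits[url] += 1
            return self.data.get(url)

        def put(self, url, content):
            if url not in self.data and len(self.data) >= self.capacity:
                # evict a random entry, biased toward less popular ones
                urls = list(self.data)
                weights = [1.0 / (1 + self.hits[u]) for u in urls]
                victim = random.choices(urls, weights=weights)[0]
                del self.data[victim]
                del self.hits[victim]
            self.data[url] = content
            self.hits.setdefault(url, 0)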

A small fraction also makes graceful degradation possible, as
the data is inherently there many times over, in a randomly
distributed fashion.

If you used massive overcaching with Graydon's log map and an
algorithm which only admits to holding data after it has
ensured secondary copies, the censors could chase references
around indefinitely.
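
A sketch of that admission rule in Python; the replication
factor of three and the node/peer structure are assumptions
for illustration only:

    import random

    class Node:
        def __init__(self, peers):
            self.peers = peers          # list of other Node objects
            self.held = {}              # data actually stored locally
            self.advertised = set()     # keys this node admits to holding

        def accept_copy(self, key, content):
            # secondary copies can be advertised straight away
            self.held[key] = content
            self.advertised.add(key)

        def store(self, key, content, copies=3):
            self.held[key] = content
            # ensure secondary copies exist first...
            for peer in random.sample(self.peers, min(copies, len(self.peers))):
                peer.accept_copy(key, content)
            # ...and only then admit to holding the data
            self.advertised.add(key)

        def lookup(self, key):
            return self.held.get(key) if key in self.advertised else None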

Adam

On Sun, Feb 25, 2001 at 08:10:55PM -0800, hal AT finney.org wrote:
> Gnutella searches would only try to reach a few thousand nodes regardless
> of how many are on the network, which would reduce its usefulness IMO.
>
> Are there inherent scalability issues with P2P networks? Freenet goes
> to some lengths to try to locate data while using only a small amount of
> network traffic. This is intended to avoid the need for a centralized
> database of where all data is stored.
>
> Another strategy is to use a decentralized database which indexes the
> data, which I think is how MojoNation works, using "content trackers"
> which any peer node can run. However this raises the issue of how you
> locate the index nodes. MojoNation has a MetaTracker which I think is
> there to help you find content trackers. The MetaTracker is currently
> centralized but I believe they plan to make it decentralized, but
> then how will you find them? And what will qualify a node to operate
> a MetaTracker?
>
> I'd like to see discussion of technical solutions to the problems of
> finding data without swamping the network. It would be especially good
> if this could handle a search, something that Freenet does not
> even try to tackle yet. Mojo handles searching using the same content
> trackers that store file locations.
>
> Hal
>



