
bluesky - Re: market-based data location (was Scalability)

  • From: Oskar Sandberg <md98-osa AT nada.kth.se>
  • To: Global-Scale Distributed Storage Systems <bluesky AT franklin.oit.unc.edu>
  • Subject: Re: market-based data location (was Scalability)
  • Date: Tue, 6 Mar 2001 12:25:50 +0100


On Mon, Mar 05, 2001 at 05:41:24PM -0800, Jim McCoy wrote:
> At 03:02 AM 2/28/01 +0100, Oskar Sandberg wrote:
> >[...]
> >I approached the Mojonation people regarding my concern that the whole
> >content tracking system was not really decentralized at all at the Peer to
> >Peer conference.
>
> Actually our opinion is that our content tracking system is _too_
> decentralized. It would have served us better to start off with a few
> centralized content trackers (a la Napster) that were specialists in a
> particular content type rather than the Gnutella-like indexing structure
> which we currently employ. A few handicappers exist within our current
> code to do keyword-hash based lookups (woe to the poor slob who gets stuck
> with "britanny" or "spears" :) but these have not really been tested very
> much.

I guess that with MN's technology research emphasizing the monetary
system, you can justify doing a rough job of searching - but I think you
can understand why those of us whose emphasis is on data sharing
architectures are somewhat taken aback by that, especially in light of MN
often being presented as a decentralized data sharing network.
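
For what it's worth, I read the keyword-hash handicappers as amounting to
something like the following (a Python sketch with names of my own, not
your actual code):

import hashlib

def keyword_hash(keyword):
    # SHA1 of the lowercased keyword, as a big integer.
    return int(hashlib.sha1(keyword.lower().encode("utf-8")).hexdigest(), 16)

def handicapper_for(keyword, handicappers):
    # Every query for the same keyword lands on the same tracker, chosen
    # by reducing the keyword hash onto the list of known handicappers.
    return handicappers[keyword_hash(keyword) % len(handicappers)]

# handicapper_for("spears", handicappers) -> the poor slob in question.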

> >It turned out however, that Mojonation's basic architecture isn't scalable
> >anyways. The content-trackers in MN do not map metadata onto a physical
> >address, they map metadata onto UIDs (the SHA1 hash for each part) of the
> >data. Apparently, to then find the data, MN simply expects each node to
> >keep a list of the IDs of every other node in the network [...]
>
> This is incorrect. Each node keeps a set of phonebook entries for other
> brokers that it has contacted in the past (not the entire list, just the
> nodes you have communicated with). These entries contain information such
> as broker ID (to link to reputation information about that broker), contact
> information and connection strategies (e.g. to contact Broker X send the
> message to relay server Y), and specific service info such as the block
> mask for a storage service agent. It is these masks which are used to
> determine who you will send a request to when looking for a blob of data.
>
> If you can't find the blob you are looking for at any of the brokers you
> know who cover that range of the SHA1 address space then the Broker goes
> back to the centralized metatracker and asks it for a few more phonebook
> entries which cover the range of the address space it is interested in and
> caches these entries. This scales sufficiently well for our current
> prototype and when this starts to show scaling problems we will drop in the
> gossip-based mechanism to limit reliance on metatrackers for new block
> server phonebook entries.
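
Let me restate that, to be sure I have it right this time. In rough
Python (mask_covers, request_blob and metatracker.lookup are placeholder
names of mine, not your API), the lookup seems to boil down to:

import hashlib

def blob_id(data):
    # Each block is addressed by the SHA1 hash of its contents.
    return hashlib.sha1(data).hexdigest()

def find_blob(wanted_id, phonebook, metatracker):
    # First try every broker we already know whose block mask covers the id.
    for entry in [e for e in phonebook if e.mask_covers(wanted_id)]:
        blob = entry.request_blob(wanted_id)
        if blob is not None:
            return blob
    # Nobody we know covers that range (or has the blob): fall back to the
    # centralized metatracker, cache the new phonebook entries, and retry.
    fresh = metatracker.lookup(wanted_id)
    phonebook.extend(fresh)
    for entry in fresh:
        blob = entry.request_blob(wanted_id)
        if blob is not None:
            return blob
    return None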

Obviously, I don't think I need to spell out my opinion of this model.
Are you guys not concerned about waking up one morning with a friendly
letter from the RIAA in your mailbox (something that has already happened
to several Opennap admins)?

> In our gossip-based lookup system your Broker would send a phonebook lookup
> request to block servers that a Broker knows about which were closest to
> the desired range. These block servers would either answer the request as
> best they could with entries out of their cache or direct the originator of
> the request to a few other block servers which were even closer to the
> correct mask range in the hope that they could provide useful phonebook
> info. Each Broker will hold on to phonebook entries that overlap or are
> near its mask range longer than it will for other ranges of the block
> address space. Each block server within a Broker will become a specialist
> in the metainfo about other block servers whose mask is a near-match,
> allowing us to replicate data blocks within the network more easily and
> assisting other Brokers by being able to provide useful phonebook info.

This is closer to the sort of self-sorting we are using for Freenet,
although we are only interested in comparing the data key (aka GUID or
whatever) values, since nodes have no static range to cover (let alone one
they can choose themselves, since we consider that an unacceptable
opportunity for targeted attacks on parts of the keyspace). I can warn
you, however, that making this sort of system work is not easy - and even
when it does work, you have given up on finding results deterministically.
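
To illustrate what I mean by comparing key values only, here is a cartoon
of the routing decision (Python, and simplified well past what Freenet
actually does):

def key_distance(a, b):
    # Closeness is just a comparison of key values; here, the plain
    # numeric distance between two hashes treated as integers.
    return abs(a - b)

def next_hop(request_key, neighbours):
    # neighbours maps node -> key values we have previously seen it serve.
    # Forward to whichever node has handled the closest key so far; over
    # time each node drifts toward one neighbourhood of the keyspace
    # without ever being assigned (or choosing) a static range.
    def closeness(node):
        return min(key_distance(request_key, k) for k in neighbours[node])
    return min(neighbours, key=closeness)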

In light of the fact that you are using the "rigid" model of matching
data IDs with node IDs anyway, I would strongly recommend you look at the
Plaxton model for routing in this type of system. It's very fast for
searching (you can keep the worst case a factor of 4 or 8 below
logarithmic) and it also has some interesting characteristics for finding
the best-located data (though you needn't employ those).
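
Very roughly: each node keeps a table indexed by (length of the prefix
shared with its own ID, next digit of the key), and every hop resolves at
least one more digit, which is where the logarithmic bound comes from
(with the base set by how large you make the digits). A sketch, with a
data layout of my own invention:

def shared_prefix_len(a, b):
    # Number of leading digits two IDs have in common.
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def next_hop(key, my_id, routing_table):
    # routing_table[row] maps a digit d to some node whose ID shares its
    # first `row` digits with ours and has d as the digit after that.
    # Each hop therefore matches at least one more digit of the key.
    row = shared_prefix_len(key, my_id)
    if row == len(key):
        return None                 # this node is the root for the key
    return routing_table[row].get(key[row])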

> Next time perhaps you could try to make sure that you actually understand
> how another system works before you start throwing stones...

Not a chance.

>
> jim mccoy
> AZI/Mojo Nation

--
'DeCSS would be fine. Where is it?'
'Here,' Montag touched his head.
'Ah,' Granger smiled and nodded.

Oskar Sandberg
md98-osa AT nada.kth.se



