bluesky - Re: Grapevine Technical Overview

bluesky AT lists.ibiblio.org

  • From: Jim McCoy <mccoy AT io.com>
  • To: Global-Scale Distributed Storage Systems <bluesky AT franklin.oit.unc.edu>
  • Subject: Re: Grapevine Technical Overview
  • Date: Fri, 10 May 2002 11:50:28 -0700


On 5/10/02 4:19 AM, "Stephen Blackheath" <stephen AT blacksapphire.com> wrote:
> What technical problems does it claim to solve?
[...]

Part of my curiosity about Grapevine is that it seems to be a weird hybrid
of MojoNation/Mnet and Freenet. I am still trying to figure out what
problems it solves that are not already solved by one or the other system.
As each of these two parent systems keeps incorporating features from the
other as development continues, I can't quite see what niche this system is
supposed to fill. To make the security/anonymity claims you are making here
you need a large pool of hosts running the software, but unless there is a
significant win over the existing systems there seems to be little
compelling reason for anyone to run it. I guess the big question here is
this: outside of a cute university research project, what is the market for
such a system? Who needs this and why?

Having gotten that question out of the way, I will now pick a bit at some
of the claims you are making, because my morning mocha was delivered late
and cold and so I am a bit cranky (and I am not going to risk disrupting my
caffeine supply by bitching at the surly barista :)

> We claim that the Grapevine solves the following specific problems:
> * Scalability
> * Traffic efficiency
> * "Karmic debt"
> * Response time
> * Robustness
> * Resistance to the [15]"Slashdot Effect"
> * Resistance to Denial-Of-Service attacks
> * Firewalls
> * Dynamic IP addresses
> * High throughput

All of these problems are already solved (and implemented and tested) by
features in MojoNation/Mnet. Given that you are walking a path this system
has already explored, you might really want to take a look at how it worked
and ask around to see what problems you are going to run into here...

> * Search

This is the hard part for decentralized networks, and one which I think the
Freenet work solved better than most others. Information about proposed
solutions to the decentralized search problem would be really appreciated.

> * Retrieval of data nearby

Freenet did this well, but I think this represents a fundamental conflict
with your anonymity claims. If I know what information is held by my
neighbors, then that is the first step in attacking either those sites or
the availability of the information itself. It will be hard to re-create
the balance between privacy/anonymity and pulling from neighbors that
Freenet struck without just copying the Freenet mechanism.

> * Mix-netting

What is the cost of your solution here? Any scheme which claims to provide
strong anonymity at the packet level and mixnet features will _always_
suffer from latency problems and an increase in bandwidth costs. This is
not negotiable. How does this feature not stomp all over your previous
claims regarding response time and retrieval of data from neighbors?
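
To make the arithmetic concrete, here is a rough sketch of what relaying a
fetch through a mix path costs. The hop count, per-hop latency, and padding
overhead below are numbers I pulled out of the air for illustration, not
anything taken from your write-up:

    # Back-of-the-envelope cost of relaying a fetch through k mix hops.
    # All parameter values are invented for illustration only.

    def mixnet_cost(hops, per_hop_latency_ms, payload_kb, padding_factor):
        """Return (total latency in ms, total KB moved across the network)."""
        latency = hops * per_hop_latency_ms            # serial delays add up
        traffic = hops * payload_kb * padding_factor   # each relay re-sends the padded payload
        return latency, traffic

    direct = mixnet_cost(1, 80, 256, 1.0)  # fetch straight from the holder
    mixed  = mixnet_cost(5, 80, 256, 1.3)  # same fetch via a 5-hop mix, 30% padding

    print("direct: %d ms, %.0f KB on the wire" % direct)
    print("mixed : %d ms, %.0f KB on the wire" % mixed)

Whatever the real numbers turn out to be, both latency and traffic grow at
least linearly with the number of hops, and that is exactly where the
tension with your response-time and nearby-retrieval claims comes from.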

> * Plausible deniability

Plausible deniability claims are not a panacea for legal liability.
Specifically you would be wise to look around at concepts like "attractive
nuisance" and "vicarious copyright infringement", especially before you plan
on any trips to the US or Europe :) Seriously, if you are going to make any
claims about the legal protections your system might offer I would suggest
that you talk to some good lawyers. If you want I can direct you to a few
here in the US who have already gone through the learning curve of
understanding P2P systems because we (MojoNation) had to walk them through
it ourselves.

[...]
>
> When a node receives a request for a file, then if it does not have
> that file, it forwards the request on to its neighbour which is
> "nearest to" that file.

Ouch. Wave goodbye to that fast response time claim you made earlier. The
choice between forwarding requests and fetching data directly seems to be a
fundamental trade-off that P2P data systems have to decide one way or the
other. Systems that favor privacy/anonymity, like Freenet, gain privacy
benefits by forwarding requests at the cost of increased latency. How is it
that Grapevine will avoid these costs?
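
For what it is worth, here is a minimal sketch of what I assume "forwards
the request on to its neighbour which is nearest to that file" means, i.e.
Freenet-style greedy routing on key distance; the key derivation and the
distance metric are my own guesses, not your protocol:

    import hashlib

    def file_key(name):
        """Hypothetical: derive a routing key for a filename (SHA-1 stand-in)."""
        return hashlib.sha1(name.encode()).hexdigest()

    def key_distance(a_hex, b_hex):
        """Toy closeness metric: numeric distance between two key values."""
        return abs(int(a_hex, 16) - int(b_hex, 16))

    class Node:
        def __init__(self, node_id, store=None, neighbours=None):
            self.node_id = node_id
            self.store = store or {}            # key -> data held locally
            self.neighbours = neighbours or []  # the only peers this node knows

        def lookup(self, key, hops=0, max_hops=20):
            if key in self.store:
                return self.store[key], hops    # found locally
            if hops >= max_hops or not self.neighbours:
                return None, hops               # dead end
            # Forward to the neighbour whose id is "nearest to" the key.
            nxt = min(self.neighbours, key=lambda n: key_distance(n.node_id, key))
            return nxt.lookup(key, hops + 1, max_hops)

Every level of that recursion is another network round trip before the
requester sees a single byte, which is the latency cost I am pointing at
above.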

> Joining the network
>
> Nodes are only permitted to learn about their most immediate
> neighbours. The network protects itself by keeping all nodes ignorant
> of the IP addresses of any nodes further afield. In order to complete
> this protection, we also need a way to make sure a node cannot just
> join the network repeatedly in different locations to eventually find
> the IP addresses of all nodes.
>
> We do this with a strategy called Solve A Hard Problem [...]

How many nodes does the network need to have to prevent me from running a
batch of hosts in parallel, solving the hard problems, and then mapping out
the network? Unless the network is rather large I think you are
underestimating how easy it will be to map out the connections between
nodes. If the hard problem only needs to be solved once by a node then I
bet I could outrun your real new-node growth rate with a good cluster of
PCs doing nothing but creating virtual nodes to map out the mesh.
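
For the sake of argument, assume SAHP is something like a hashcash-style
proof of work (your overview does not say, so this is purely my
assumption). The trouble is that the puzzles are independent of each other,
so they parallelize perfectly across an attacker's cluster:

    import hashlib, itertools

    def solve_puzzle(challenge: bytes, difficulty_bits: int) -> int:
        # Hashcash-style stand-in for SAHP: find a nonce whose SHA-256 hash
        # has `difficulty_bits` leading zero bits.
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    # Each fake identity needs one solution, and the solutions do not depend
    # on one another, so a cluster of M machines mints identities M times
    # faster than one PC; the defence raises the mapper's cost only linearly.

If joining costs, say, one CPU-day, then a hundred-box cluster mints a
hundred virtual nodes a day, and the question becomes whether honest node
growth can outrun that.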

If the hard problem needs to be re-calculated after a disconnect then how
much damage could I do to the network by mapping it partially and then
taking out those nodes I find which are well-connected? If the cost of
re-balancing/re-connecting the network when key nodes disappear is large
enough then I can use the SAHP protection mechanism against the network
itself (e.g. bounce a few nodes out of the mesh and then use the delay in
their return to map the remainder of the network a little bit faster.)

> Key management and plausible deniability
[...]
> In order for someone running a node to know what the contents of a
> file stored on their system is, they have to guess the filename of the
> file it belongs to, and then look up the file map and see if that CHK
> is in the file map. Even if they know the entire contents of the file
> they suspect, this is not sufficient information, due to the CBC mode
> initialization vector. Hence plausible deniability.

Actually, you are just shifting the legal burden from the hosts with the
data to the person doing the mapping from filename to file map. This was
the solution we used for MojoNation/Mnet, and while it solves some legal
problems it creates a centralized point of attack (legal and network
attacks) which your network claims not to have. If the name->map function
is distributed, then hosts that expected to be able to deny knowledge of
what is on their servers could find themselves taking on a hidden legal
liability due to the name->map bits that are on their nodes.
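
Just so we are talking about the same mechanism, here is how I read the
CHK/CBC claim, sketched with a generic crypto library; the key handling and
the exact derivation steps are my assumptions, not your spec:

    import os, hashlib
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def store_block(plaintext: bytes, key: bytes):
        # Encrypt one block under AES-CBC with a fresh random IV, then index
        # the ciphertext by its content hash (the CHK). Assumed mechanism.
        iv = os.urandom(16)
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ciphertext = iv + enc.update(padded) + enc.finalize()
        return hashlib.sha256(ciphertext).hexdigest(), ciphertext

    key = os.urandom(32)                      # hypothetical per-file key
    chk, blob = store_block(b"suspected file contents", key)
    # Guessing the plaintext alone does not let a node operator re-derive
    # `chk` for the blob on their disk, because the random IV changes the
    # ciphertext. Whoever publishes the filename -> file-map entry can,
    # though, and that is where the legal exposure concentrates.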

The system sounds interesting from a theoretical point of view, but some of
the claims made here seem to be in conflict with others. Clearing some of
this up would be greatly appreciated.


Jim




