  • From: Stephen Blackheath <stephen AT blacksapphire.com>
  • To: "Global-Scale Distributed Storage Systems" <bluesky AT franklin.oit.unc.edu>
  • Subject: Grapevine Technical Overview
  • Date: Fri, 10 May 2002 23:19:07 +1200


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

http://www.grapevineproject.org/


The Grapevine Project
A decentralized peer-to-peer file storage network

Technical Overview

by Stephen Blackheath - 8 May 2002

This page aims to be a brief but fairly complete overview of
1. what the Grapevine does, and
2. how it works.

A high school level of mathematics and some basic cryptography
concepts may be necessary to understand some of the detail.

The more widely this information is distributed and understood, the
happier I am. Please note that it is protected by the GNU Lesser
General Public Licence. This grants you the right to distribute the
information freely for non-commercial purposes.

I am very interested in any feedback or questions you may have about
this information, so please contact me at
stephen AT blacksapphire.com.

At the time of writing, the project is not complete. We need any
support you can give us.

What does the Grapevine do?

The Grapevine is a network composed of individual computers running
Grapevine software, which collaborate to store files. (This is quite
different to "file sharing" that you may be familiar with.) Even
though each machine may be unreliable or even malicious, the network
as a whole is reliable. The network also does its best to conceal
both the locations where files are stored, and the structure and
traffic patterns of the network. The result is reliable, efficient and
anonymous publishing and retrieval of information.

What technical problems does it claim to solve?

Peer-to-peer networking is a difficult problem - a fact that
developers of the technology have learned from experience, and some
newcomers have learned the hard way. Each project attacks each problem
with varying success, so each must be taken on its individual merits.
We claim that the Grapevine solves the following specific problems:
* Scalability - The network is completely decentralized. Scalability
is of the order of N^(1/d), where N is the number of nodes and d is
the number of dimensions (around 6).
* Traffic efficiency - From the user's point of view it is more
efficient than most other peer-to-peer software, but less
efficient than web surfing, largely because half the cost of web
traffic is paid by the owner of the website. When the technology
is mature, we expect the Grapevine will be more efficient than the
Web overall because of its intelligent use of resources.
* "Karmic debt" - each person gives in proportion to what they take.
* Response time - Grapevine should perform more slowly than the Web,
but with greater consistency.
* Robustness - i.e. tolerance of the unreliability of individual
nodes. We do not rely on any centralized infrastructure (except
for Internet backbones).
* Resistance to the "Slashdot Effect" - i.e. the ability to cope
well with files that are requested extremely often.
* Resistance to Denial-Of-Service attacks - resistance to all
attempts to shut the network down, legally sanctioned or
otherwise.
* Firewalls - not our first priority, but we have a solution to
these problems.
* Dynamic IP addresses
* High throughput - we achieve fast downloads of large files through
the very simple mechanism of splitting the files into pieces, and
then downloading the pieces in parallel.
* Search - one of the biggest problems for these networks, but in
fact quite separate to the main problems. We have a solution in
development, which we haven't documented yet.
* Retrieval of data nearby - Ideally data should be retrieved
physically nearby if possible. We have a solution to this, but
implementing it is a low priority.
* Mix-netting - the problem of obscuring the workings of the network
from anyone who can analyze traffic ("traffic analysis attacks") at
the Internet backbone level. We have an almost complete solution
to this problem.
* Plausible deniability - God forbid that the Thought Police should
bash down your door and confiscate your computer, but if this
should happen, the files stored by the network are encrypted in
such a way that you can plausibly deny any knowledge of what is
stored there.

The Holy Grail of peer-to-peer is steganography - making the network
look as if it is not there at all by making the traffic look like
something else. This is just a dream at present. I am not aware of any
real work in this area, but there is sure to be some.

Co-ordinate space

The Grapevine is based upon a multi-dimensional co-ordinate space -
and we choose six dimensions. This is difficult to visualize, but not
difficult to understand.

The argument for this runs as follows: Imagine you have 1 million
people standing somewhere, and you want the ability to send a message
to any chosen one of them, where each person can only speak to their
immediate neighbours on all sides. Messages can be relayed from one
person to the next. If you stand them in a straight line, then with
each person passing the message on to the next person in line, it will
take of the order of 1 million "relays" to get to any one person.

If you stand them in a square of 1000 x 1000, then each has
approximately 6 neighbours. You can reach any individual person with
of the order of 1000 relays. Note that in the 1-dimensional
straight-line example, giving each person 6 immediate neighbours
(i.e. letting each relay skip 2 out of every 3 people) only gains a
constant factor, and does not achieve this efficiency.

We can take it one step further: If you stack the people up in a 100 x
100 x 100 cube, then it takes of the order of 100 relays for a message
to reach any person.

Since we are not dealing with real space, we can take this to as high
a dimensionality as we like. We chose six dimensions, because it fits
the size we think the network will get to, and it gives a convenient
number of immediate neighbours.
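
To make the scaling concrete, here is the back-of-the-envelope
arithmetic (illustration only, not part of the protocol): the expected
number of relays to reach any one of N nodes is roughly N^(1/d), where
d is the number of dimensions.

    # Rough illustration of the scaling argument: about N**(1/d) relays
    # are needed to reach any one of N nodes in a d-dimensional grid.
    N = 1_000_000

    for d in (1, 2, 3, 6):
        hops = N ** (1.0 / d)
        print(f"d = {d}: about {hops:,.0f} relays")

    # d = 1: about 1,000,000 relays
    # d = 2: about 1,000 relays
    # d = 3: about 100 relays
    # d = 6: about 10 relays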

Routing

Each node has a certain number of immediate neighbours, determined by
the number of dimensions. In two-dimensional space, it is 6. In
three-dimensional space, it is 15. In six-dimensional space, it is 118. A
neighbour is defined as a node for which we know the IP address, port,
and session key which are necessary to talk to it.

Nodes and files both have a "location" in our co-ordinate space. Of
course files must actually reside on nodes, but their "location"
(which is an artificial concept) lies between those of the nodes. Files
are stored on the nodes "nearest" to them. (I will explain in more
detail later.)

When a node receives a request for a file, then if it does not have
that file, it forwards the request on to its neighbour which is
"nearest to" that file. For this calculation, we use Pythagoras's
Theorem. When a node does this, it does not give the requesting node
any information about where it has routed the request. It only tells
it that it has done so.

If the node to which we route the request is not responding, then we
try the next closest node.
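
The following is a minimal sketch of this greedy routing rule in
Python. The node records and the is_responding callback are
hypothetical stand-ins, not the actual Grapevine data structures:

    import math

    def forward_request(file_location, neighbours, is_responding):
        """Forward a request to the neighbour nearest the file's location.

        file_location: the file's co-ordinates (e.g. a tuple of 6 floats).
        neighbours:    list of (neighbour_id, location) tuples.
        is_responding: callable saying whether a neighbour is answering.
        """
        # Sort neighbours by straight-line (Pythagorean) distance
        # from the file's location.
        by_distance = sorted(
            neighbours,
            key=lambda n: math.dist(n[1], file_location),
        )
        for neighbour_id, _ in by_distance:
            if is_responding(neighbour_id):
                return neighbour_id   # forward the request here
        return None                   # no responsive neighbour was found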

Joining the network

Nodes are only permitted to learn about their most immediate
neighbours. The network protects itself by keeping all nodes ignorant
of the IP addresses of any nodes further afield. In order to complete
this protection, we also need a way to make sure a node cannot just
join the network repeatedly in different locations to eventually find
the IP addresses of all nodes.

We do this with a strategy called Solve A Hard Problem or SAHP. I will
not give the detail here - it is available elsewhere on the website.
It has the following properties:
1. Each attempt takes a fixed time, and gives a certain "power".
2. The average time to calculate a solution with a certain power is
proportional to that power.
3. One outcome of the solution is a location in co-ordinate space,
and this becomes the location of the joining node.
4. Because of the above, it is possible to prove that you spent a
certain amount of time calculating the solution.

Here is an analogy: Imagine you have a telescope with a very high
power, and want to search the sky for the brightest stars. Dim stars
are easy to find, because they are common, but it takes a long time to
find bright stars, since you have to painstakingly search a large area
of sky. If you find a very bright star, then you can convincingly
argue that it took you a long time to find it.

The simplest approach to protecting the network is just to have a
minimum "power" requirement before a new node can join. (We also have
more elaborate strategies.) This ensures that a node does not get much
control over what location in space it has. Of course any node can
attempt to attack its immediate neighbours, but it is computationally
difficult for any attacker to get a concentration of "cancer nodes" in
any one region of the network.
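
The actual SAHP construction is documented elsewhere on the website.
Purely as an illustration of a scheme with the four properties listed
above, here is a hashcash-style sketch in which both the "power" and
the location fall out of a hash of the solution; every name and
parameter in it is invented for the example:

    import hashlib
    import os

    DIMENSIONS = 6

    def sahp_attempt(node_public_key: bytes):
        """One fixed-cost attempt at an illustrative hashcash-style
        puzzle (NOT the Grapevine's actual SAHP scheme)."""
        nonce = os.urandom(16)
        digest = hashlib.sha1(node_public_key + nonce).digest()

        # "Power" is 2**(leading zero bits). Each extra zero bit doubles
        # the expected number of fixed-cost attempts, so the average time
        # to reach a given power is proportional to that power.
        value = int.from_bytes(digest, "big")
        zero_bits = len(digest) * 8 - value.bit_length()
        power = 2 ** zero_bits

        # The solution also fixes a location in co-ordinate space, so the
        # joining node gets essentially no choice over where it ends up.
        location = tuple(digest[i] / 255.0 for i in range(DIMENSIONS))
        return nonce, power, location

    def solve(node_public_key: bytes, minimum_power: int):
        """Keep trying until the solution meets the network's minimum
        power requirement for joining."""
        while True:
            nonce, power, location = sahp_attempt(node_public_key)
            if power >= minimum_power:
                return nonce, power, location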

Storage of files

Each node stores files in two ways:
1. The "permanent store". If a node is within a certain distance of
the location of a file, then it is considered to be inside the
"definitive zone" for that file. The node will store any such
files it retrieves for a long time.
2. The "cache store". If a node is outside the definitive zone for a
file, but it receives a file while processing a request for
someone else, then it stores it for a shorter period than it would
for the permanent store.

These could be implemented as storage areas of a fixed size, where the
least recently accessed files are deleted once the size is exceeded.
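
For example, each store could be a size-bounded,
least-recently-accessed structure along these lines (a sketch only;
the sizes are made up for illustration):

    from collections import OrderedDict

    class Store:
        """A fixed-size store that drops the least recently accessed
        files once its size limit is exceeded."""

        def __init__(self, max_bytes):
            self.max_bytes = max_bytes
            self.used = 0
            self.files = OrderedDict()   # key -> data, oldest access first

        def get(self, key):
            data = self.files.pop(key, None)
            if data is not None:
                self.files[key] = data   # mark as most recently accessed
            return data

        def put(self, key, data):
            if key in self.files:
                self.used -= len(self.files.pop(key))
            self.files[key] = data
            self.used += len(data)
            while self.used > self.max_bytes:
                _, evicted = self.files.popitem(last=False)  # least recent
                self.used -= len(evicted)

    # A node keeps one of each (illustrative sizes):
    permanent_store = Store(max_bytes=500 * 1024 * 1024)
    cache_store = Store(max_bytes=100 * 1024 * 1024)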

Advertising and Mix-Netting in one

Each node advertises its presence periodically to its neighbours.
Because the number of neighbours is small, any node can be pretty sure
which of its neighbours are up or down at any given moment, at very
little cost.

Neighbours also advertise what files they have in both their permanent
and cache stores. These advertisements are used to pad out all traffic
other than the contents of files to a fixed size. This helps protect
against traffic analysis attacks.

To further protect against traffic analysis attacks, we can introduce
a delay in the forwarding of requests. At a cost of increasing the
response time, this - combined with a judicious method of periodic
random advertising - should allow the traffic to look almost
completely random to anyone with the power of traffic analysis. Some
work will be required to implement this fully.
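
As a sketch of the padding idea (the fixed message size and the format
here are invented for illustration; a real wire format would also
frame the payload and advertisements so the receiver can separate
them):

    import os

    MESSAGE_SIZE = 1024   # illustrative fixed size

    def pad_message(payload, advertisements):
        """Pad a non-file-content message to a fixed size, using pending
        advertisement entries as filler and random bytes for the rest."""
        body = payload
        for ad in advertisements:
            if len(body) + len(ad) > MESSAGE_SIZE:
                break
            body += ad
        if len(body) > MESSAGE_SIZE:
            raise ValueError("payload larger than the fixed message size")
        return body + os.urandom(MESSAGE_SIZE - len(body))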

Advertising as a way to improve routing

We can use the knowledge of what files our neighbours have as a means
of improving routing. If we are asked for a certain file, then we
might know that a certain one of our neighbours has that file. The
other routing choice is to forward the request in the optimal
direction. When we have two choices like this, we can branch the
request. The branch that goes to the neighbour known to have the file
will only go for one hop.
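
A sketch of this branching decision (the neighbour records and the
known_holders table, built from the advertisements above, are
hypothetical structures):

    import math

    def route_with_branch(file_key, file_location, neighbours,
                          known_holders):
        """Return the set of neighbours to forward a request to.

        neighbours:    list of (neighbour_id, location) tuples.
        known_holders: dict of neighbour_id -> set of advertised keys.
        """
        targets = set()

        # Branch 1: a neighbour that has advertised this very file.
        # This branch only goes for one hop.
        for neighbour_id, _ in neighbours:
            if file_key in known_holders.get(neighbour_id, set()):
                targets.add(neighbour_id)
                break

        # Branch 2: the usual greedy choice - the neighbour nearest to
        # the file's location in co-ordinate space.
        nearest = min(neighbours,
                      key=lambda n: math.dist(n[1], file_location))
        targets.add(nearest[0])
        return targets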

Relaying of file contents

Once a file is found on a certain node, the procedure is this:
* A notification is sent back to the immediately requesting node.
* If the immediately requesting node is within the "definitive zone"
for this file, then the contents of the file are also sent. This
ensures that commonly requested files are spread widely within the
definitive zone. (For less commonly requested files, we should
ideally add a mechanism to make sure there is always a minimum
number of copies available on the network at any one time.)
* The reply hops backwards along the request path.
* The first node outside the "definitive zone" that does not already
have the file designates itself as the "relay". The relay sends
its IP address and port (but not its SAHP credentials, so it
cannot be treated as a neighbour) along to the requesting node. As
an alternative, it might be better for the requesting node to send
its IP address and port, and for the relay to contact it directly.
(The relay selection is sketched below.)
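
This sketch shows one way the relay could be chosen as the reply walks
back along the request path; the zone test, the threshold and the node
records are placeholders rather than the real protocol objects:

    import math

    DEFINITIVE_RADIUS = 0.05   # illustrative threshold only

    def in_definitive_zone(node_location, file_location):
        """A node is in the definitive zone if it is within a fixed
        distance of the file's location (the real test may differ)."""
        return math.dist(node_location, file_location) <= DEFINITIVE_RADIUS

    def choose_relay(request_path, file_location, has_file):
        """Walk backwards along the request path, from the node where
        the file was discovered towards the requestor, and return the
        first node outside the definitive zone that does not already
        have the file.

        request_path: list of (node_id, location), requestor first and
                      discovering node last.
        has_file:     callable saying whether a node holds the file.
        """
        # Skip the discovering node and the requestor at the two ends.
        for node_id, location in reversed(request_path[1:-1]):
            outside = not in_definitive_zone(location, file_location)
            if outside and not has_file(node_id):
                return node_id
        return None   # no suitable relay on this path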

We now have three nodes:
1. The node on which the file was discovered
2. The relay
3. The requestor (the node of the person who wants the file)

We have to get the file from the "discovered" node to the requestor
via the relay. The reasons for this approach are as follows:
1. The relay acts as a cache for the network. Next time the file is
requested, there will be a copy further out from the definitive
zone. This means the network does not suffer from the
[16]"Slashdot Effect".
2. The relay conceals the source of the file. We could choose
multiple relays to increase this, but at the cost of overall
network efficiency. One relay is the compromise we have chosen.
3. It allows us to deal with firewalls, as long as either 1. the
relay, or 2. both the requestor and "discovered" node, are not
firewalled. (We will also need a "buddy" system to allow
firewalled nodes to receive incoming connections.)

Publishing

The routing for file storage (i.e. publishing) is the same as file
retrieval. Once we reach the "definitive zone" for the file, we can
put several copies of the file on the nodes there.

We will use relaying to get the file there as with requesting. The
identification of which nodes are involved will be the same, but the
relaying will happen in reverse.

Key management and plausible deniability

First, we split files up into pieces of a standard size. This means
that the size will not give away the contents of the file.

We use two types of keys to store files:
1. Named keys - Keys calculated from a filename.
2. CHKs - Content Hash Keys, where the key of the file is the hash of
the file contents. This allows nodes to easily check the
authenticity of a file.

Keys are translated into a location in our co-ordinate space.

When we store a file, we do this (a code sketch follows the list):
* We calculate the encryption key from the hash of its filename.
* We encrypt the file with the encryption key (in CBC mode with a
random initialization vector [IV] - have a look at any
cryptography reference).
* We calculate the storage key from the hash of the encryption key.
* We store a 'file map' under the storage key. This is a list of
CHKs, which are nothing more than the hash values of the contents
of the blocks of the encrypted file.
* We store each block under its CHK.
* We pad all files to a standard block size for security reasons,
e.g. 32K.
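
Here is a condensed sketch of these steps. The hash and cipher choices
(SHA-256 for the keys and CHKs, AES in CBC mode via the third-party
'cryptography' package), the helper names and the store object are all
assumptions for the example, not the project's documented choices:

    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers import (
        Cipher, algorithms, modes)

    BLOCK_SIZE = 32 * 1024   # the standard block size, e.g. 32K

    def store_file(filename, contents, store):
        """Encrypt and store a file as described in the list above.
        'store' is anything with a put(key, data) method, such as the
        Store sketch from the storage section."""

        # Encryption key from the hash of the filename.
        encryption_key = hashlib.sha256(filename.encode("utf-8")).digest()

        # Encrypt in CBC mode with a random initialization vector, after
        # padding the plaintext out to a whole number of blocks.
        iv = os.urandom(16)
        padded = contents + b"\0" * (-len(contents) % BLOCK_SIZE)
        encryptor = Cipher(algorithms.AES(encryption_key),
                           modes.CBC(iv)).encryptor()
        ciphertext = encryptor.update(padded) + encryptor.finalize()

        # Storage key from the hash of the encryption key.
        storage_key = hashlib.sha256(encryption_key).digest()

        # Split into fixed-size blocks, store each under its CHK (the
        # hash of the encrypted block), and collect the file map.
        chks = []
        for i in range(0, len(ciphertext), BLOCK_SIZE):
            block = ciphertext[i:i + BLOCK_SIZE]
            chk = hashlib.sha256(block).digest()
            store.put(chk, block)
            chks.append(chk)

        # The file map (here, the IV followed by the CHK list) is stored
        # under the storage key.
        store.put(storage_key, iv + b"".join(chks))
        return storage_key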

For someone running a node to know the contents of a block stored on
their system, they would have to guess the filename of the file it
belongs to, then look up that file's map and check whether the block's
CHK appears in it. Even if they know the entire contents of the file
they suspect, that is not sufficient, because of the random CBC-mode
initialization vector. Hence plausible deniability.

Retrieval of data nearby

We have a strategy for this. It involves connecting to several
(perhaps three or four) separate network "bands", each of which has a
restriction on the maximum response time of its nodes. A request starts
at the band with the fastest-responding nodes, and jumps to the next
band if it is not successful there. Retrieved files can be cached on
each band, but this need not happen every time.

The detail is in "Searching Physically Nearby" on the website.
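
A sketch of the band-hopping order (the band objects and their
request method are placeholder APIs):

    def fetch(file_key, bands):
        """Try each network "band" in turn, fastest-responding first.

        bands: band objects ordered fastest to slowest, each with a
        request(file_key) method returning the data or None.
        """
        for band in bands:
            data = band.request(file_key)
            if data is not None:
                return data        # optionally also cache on this band
        return None                # not found on any band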

Karmic Debt

To solve this problem, we introduce the concept of logical network
"interfaces" which are analogous to TCP/IP interfaces. Each interface
has a separate SAHP solution, a separate location and a separate set
of neighbours. Your node "appears" logically in multiple places in the
network.

Each interface has three states: disconnected, freeloading and
participating.

When we download data, we adjust the rate of requests so that we
receive a fixed bit-rate (including overheads) from each interface,
for example 1K bytes/sec. If the user has a 56K bits/sec modem, for
example, then this is about 5K bytes/sec, and so we would need to use
5 interfaces to achieve this download rate. (Remember that we download
file chunks in parallel.)

An interface must either be in the freeloading or participating state
when it is being used for downloading.

Nodes only send and forward requests and other traffic to nodes that
are in the "participating" state. Only the barest minimum demands are
made of "freeloading" nodes.

A node ensures that it pays its way: the total of (time spent
downloading) x (number of interfaces used) must ultimately be repaid
by an equal total of time x interfaces spent in the "participating"
state.

Note that the node is not compelled to pay back its karma. It is quite
easy to cheat the system by using modified software. (Though it is
still better than most file-sharing software, where repayment of debt
is under the direct control of the user.) We will do more research in this
area.

Search

We have a solution to the Napster-style keyword search problem, which
we have not documented yet.

Worms

Last, but by no means least, we come to worms. Until peer-to-peer
technology is implemented on secure hardware, it is especially
vulnerable to worm attacks. This is a very similar problem to email
viruses, but worse. Ultimately this technology will be implemented on
Internet routers. Until that time, it must never be used for
life-critical purposes.

The problem is this: Microsoft Windows is not a very secure operating
system, as evidenced by the ongoing problems with email worms. No
software can protect itself from invasion of the machine on which it
resides. Peer-to-peer is especially vulnerable, because it establishes
an easily exploitable connectivity from every point in the network to
every other.

"Freedom of expression - Everyone has the right to freedom of
expression, including the freedom to seek, receive, and impart
information and opinions of any kind in any form."
-- Section 14, New Zealand Bill of Rights Act 1990.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE826yz7I0ehz47OHERAqQSAJ9zlNxkHseSBn7G12gDHuS7TbKAWQCfRdzM
u+whaKmU/0WXrJPUcaGvBEI=
=SeO2
-----END PGP SIGNATURE-----



