[bittorrent] Avalanche

Joseph Ashwood ashwood at msn.com
Wed Jun 22 22:09:27 EDT 2005


Since it has garnered so much media coverage, and some very strange debates 
in unusual places, I figured I'd put some views up here.

I have had the opportunity to consider this problem for quite some time. Even 
on this list we have discussed online and tornado codes on a number of 
occasions, and Avalanche is really nothing more than this concept given a 
fancy name.

So is this concept useful?

That depends entirely on what your swarm looks like. If you have a greedy 
swarm (one where peers disconnect as soon as they reach 100%) then the answer 
is yes, but if your swarm is generous it is actually harmful. My firmest 
stand, though, is that the Avalanche team measured the wrong thing: they 
measured network throughput "as the total number of blocks transferred in 
a unit of time" (page 6 of the Avalanche paper), when the real measure of 
performance in this situation is the amount of final file data that is 
transferred in a unit of time. This is a very important distinction.
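
To make the distinction concrete, here is a toy calculation. The numbers 
(block size, block rate, overhead fraction) are hypothetical, not taken from 
the paper; the point is only that the two metrics diverge once some of the 
transferred blocks carry no new file data:

    # Python sketch: raw block throughput vs. useful file goodput
    BLOCK_SIZE = 256 * 1024          # bytes per block (assumed)

    def goodput(blocks_per_sec, redundant_fraction):
        """Final-file bytes per second, given the fraction of transferred
        blocks that carry no new file data (coding overhead, linearly
        dependent combinations, duplicates, ...)."""
        return blocks_per_sec * BLOCK_SIZE * (1.0 - redundant_fraction)

    print(goodput(40, 0.00))   # plain swarm: every block is file data
    print(goodput(40, 0.05))   # coded swarm moving 5% redundant blocks:
                               # same "block throughput", lower goodput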

In a greedy swarm the extra (coded) pieces create a situation where the 
rarest pieces are unnecessary (in bad cases of a plain swarm the rarest 
pieces may disappear forever, leaving the download going absolutely nowhere). 
This is what delivers the extra throughput that the Avalanche team uncovered, 
and in a greedy swarm this extra throughput will lead to file saturation 
faster, and as a result faster delivery of the final file. This kind of swarm 
is often seen in the wild; it is there anytime the situation has 1 seed and a 
million peers, or a less extreme imbalance, but the idea holds.
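
For anyone who has not looked at how these codes remove the rarest-piece 
problem, here is a minimal sketch of random linear coding over GF(2). 
Avalanche itself uses a larger field; this is an illustration of the idea, 
not their implementation. Every coded block is a random XOR of original 
blocks, so a downloader never hunts for one specific rare piece; it just 
collects coded blocks until it has enough independent ones to decode:

    import random

    K = 8                                   # original blocks in the file
    random.seed(1)
    original = [random.getrandbits(32) for _ in range(K)]  # toy "blocks"

    def coded_block():
        """A random non-empty XOR of the original blocks, plus the
        coefficient vector (one bit per block) saying which went in."""
        coeffs = 0
        while coeffs == 0:
            coeffs = random.getrandbits(K)
        data = 0
        for i in range(K):
            if (coeffs >> i) & 1:
                data ^= original[i]
        return coeffs, data

    # Receive coded blocks until K of them are linearly independent.
    pivots = {}        # pivot bit -> (coeffs, data), kept in echelon form
    received = 0
    while len(pivots) < K:
        coeffs, data = coded_block()
        received += 1
        while coeffs:
            p = coeffs.bit_length() - 1     # highest set coefficient
            if p not in pivots:
                pivots[p] = (coeffs, data)  # genuinely new information
                break
            rc, rd = pivots[p]              # already have this pivot: reduce
            coeffs ^= rc
            data ^= rd
        # if coeffs reached 0, the block was dependent and is discarded

    # Back-substitute (lowest pivot first) to recover the original blocks.
    decoded = [0] * K
    for p in sorted(pivots):
        coeffs, data = pivots[p]
        for i in range(p):
            if (coeffs >> i) & 1:
                data ^= decoded[i]
        decoded[p] = data

    assert decoded == original
    print("recovered %d blocks from %d coded blocks" % (K, received))

In a run like this, received usually comes out only a little above K; that 
gap, plus the coefficient vectors themselves, is the overhead that matters 
in the next case.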

The generous swarm is an entirely different concept. A generous swarm is one 
where the peers stick around after becoming seeds; these swarms often have 
more seeds than downloading peers. The online codes become a penalty in these 
swarms because they require the transfer of comparatively more data. In these 
situations the swarm provides the redundancy WITHOUT any file-based 
redundancy. The result is that a generous swarm will saturate any pipe 
between a given peer and the rest of the network. This saturation means that 
the network coding overhead reduces the file transfer rate, resulting in 
lower performance for all peers.
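
To put a rough number on that penalty, here is a back-of-the-envelope sketch. 
Every figure in it is an assumption for illustration (generation size, 
coefficient width, block size, expected dependent blocks), not a number from 
the Avalanche paper; the point is only that once the pipe is already 
saturated, every byte of coding overhead comes straight out of the file 
transfer rate:

    # Python sketch: file goodput on an already-saturated link,
    # plain pieces vs. coded blocks.  All constants are assumptions.
    GENERATION  = 64            # blocks combined per coded block
    COEFF_BYTES = 2             # bytes per coefficient in the block header
    BLOCK_BYTES = 256 * 1024    # payload bytes per block
    DEPENDENT   = 1.02          # coded blocks received per useful block,
                                # counting linearly dependent ones that
                                # must be discarded

    # Bytes that must cross the wire per byte of final file data:
    coeff_overhead = GENERATION * COEFF_BYTES / BLOCK_BYTES
    bytes_per_useful_byte = DEPENDENT * (1 + coeff_overhead)

    link = 10e6 / 8             # a saturated 10 Mbit/s pipe, in bytes/sec
    print("plain pieces: %.0f file bytes/sec" % link)
    print("coded blocks: %.0f file bytes/sec" % (link / bytes_per_useful_byte))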

Eventually we end up back at the same decision that has been made a number 
of different ways throughout the years. Whether the question is whether or 
not to use redundancy in RAID, or how many backups to keep, or the new one 
of how to encode the data for transfer, this decision has been made over and 
over, and it is necessary for the developers to realize this so we don't 
reinvent the wheel every 3 days.
                Joe 




