[bittorrent] Introductory/endgame algorithms

Elliott Mitchell ehem at m5p.com
Fri Sep 23 20:23:58 EDT 2005


From: Olaf van der Spek <olafvdspek at gmail.com>
> On 9/23/05, Andreas Aardal Hanssen <bittorrent at andreas.hanssen.name> wrote:
> > The warming-up algorithm I've got is that all connections download the
> > same piece initially, so that the client gets a full piece to share as
> > soon as possible. This works quite well, and currently they're all asking
> > for one piece. Does anyone on this list implement a similar algorithm?
> 
> XBT Client does the same by preferring partially downloaded pieces
> when less than four (4) pieces have been completed.
> How do you choose the initial piece though?

It's as yet unfinished, but I've got a WIP paper that brings this up. Think
about what you're trying to accomplish; the criteria should be obvious.

The first crucial criterion is that you want pieces that a number of peers
/don't/ have, as otherwise no one will be interested in downloading them
from you once you complete them. The second criterion is that you want
pieces that a couple of peers /do/ have, so you can guarantee completion
even if one peer goes offline or decides to choke you.

The result is that you want pieces of slightly above-average rarity. The
one caveat is that once more than piece_size/16KB peers have a piece (one
peer per block), it is already common enough that you could request every
block of the piece from a different peer; additional copies beyond that
gain you no safety.
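Here is a minimal sketch of that selection rule in Python (mine, not from
any client; the availability counts and names are hypothetical):

    # Hedged sketch of the "slightly above-average rarity" warm-up pick.
    # availability[i] = number of connected peers that have piece i.

    PIECE_SIZE = 256 * 1024                      # illustrative piece size
    BLOCK_SIZE = 16 * 1024                       # protocol block size
    BLOCKS_PER_PIECE = PIECE_SIZE // BLOCK_SIZE  # copies beyond this add no safety

    def pick_warmup_piece(availability):
        # Require at least two sources so one choke/disconnect can't strand us.
        candidates = [i for i, n in enumerate(availability) if n >= 2]
        if not candidates:
            return None
        avg = sum(availability) / len(availability)
        def score(i):
            # Never credit availability beyond one peer per block, and
            # prefer pieces slightly rarer than the average.
            n = min(availability[i], BLOCKS_PER_PIECE)
            return abs(n - (avg - 1))
        return min(candidates, key=score)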

> > Also, what do other clients use to determine when to enter the endgame
> > mode?
> 
> It depends on how exactly you implement end-mode, but I'd think a good
> starting point is if all chunks/pieces have already been requested.
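For illustration, that trigger might look like the following sketch (my
names and per-block state, not XBT's actual code):

    from collections import namedtuple

    Block = namedtuple("Block", "have requested")  # hypothetical per-block state

    def should_enter_endgame(blocks):
        # Enter end-mode once every block is either held or already
        # requested from some peer -- the heuristic suggested above.
        return all(b.have or b.requested for b in blocks)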

It was stated at one point that the overhead of end-game mode was 30%.
That seemed high, but even if it isn't /that/ high, I'm dubious of its
usefulness.

Of note, you cannot cancel a block once the other end has started sending
it. If you're doing your queueing correctly, the other end will have a
very short queue of outstanding requests; ideally, zero. At that point
cancels are useless, as they don't save bandwidth. What *does* make sense
is to download different blocks of a piece from different peers. The worst
case then becomes waiting on one 32KB block from a single peer, and even a
modem won't take long to send 32KB. The big issue is assigning blame if
the piece hash turns out incorrect.
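A hedged sketch of that block-spreading approach (peers and send_request
are hypothetical stand-ins, not any client's real API):

    # Spread the blocks of one piece across different peers, so the
    # worst case is a single outstanding block per peer.

    def spread_piece(piece_index, piece_size, peers, block_size=16 * 1024):
        num_blocks = (piece_size + block_size - 1) // block_size
        for b in range(num_blocks):
            offset = b * block_size
            length = min(block_size, piece_size - offset)
            # Round-robin if there are fewer peers than blocks.
            peers[b % len(peers)].send_request(piece_index, offset, length)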


From: Andreas Aardal Hanssen <bittorrent at andreas.hanssen.name>
> That does sound like a good approach. I have seen serious thrashing when
> I've got something like 100+ connections downloading the same piece, but
> solved that problem by randomizing the blocks I download in warmup/endgame
> mode. Tests show that one peer usually gives me everything fairly quickly,
> but when several are giving me the same pieces concurrently, randomness
> ensures that I get one full piece with only about 3-4x overhead. This
> works very well for warming up also.
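As a sketch of what the quoted poster describes (not their actual code;
peer.send_request is again a hypothetical stand-in): each connection walks
the missing blocks in its own random order, so concurrent downloaders
rarely stall on the same block.

    import random

    def request_blocks_randomized(peer, piece_index, missing_blocks,
                                  block_size=16 * 1024):
        # Independent random order per peer; the duplicated requests
        # across peers are the quoted ~3-4x overhead.
        order = list(missing_blocks)
        random.shuffle(order)
        for b in order:
            # Assumes full-size blocks for brevity.
            peer.send_request(piece_index, b * block_size, block_size)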

WTF are you doing with more than 100 peers? You've pushed the minimal
BitTorrent protocol overhead above 5% of your entire bandwidth right
there. The mainline peer counts are quite good for most circumstances.
What clients should make easily changeable is the queue depth; *that*
will help bandwidth far more often than adding peers will.
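
To see why overhead scales with peer count, a rough back-of-envelope
(illustrative numbers, not from this post): every piece any peer completes
costs a 9-byte HAVE on every open connection.

    def have_overhead_fraction(num_peers, piece_size, have_len=9):
        # A HAVE is a 4-byte length prefix + 1-byte id + 4-byte index.
        # Assuming every peer eventually announces every piece, incoming
        # HAVEs alone cost num_peers * 9 bytes per piece_size downloaded.
        return num_peers * have_len / piece_size

    print(have_overhead_fraction(100, 64 * 1024))  # ~0.014 from HAVEs alone

Requests, bitfields, keepalives, and choke/interest churn across 100
connections only add to that.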


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/
