[bittorrent] Introductory/endgame algorithms

Elliott Mitchell ehem at m5p.com
Tue Sep 27 01:53:31 EDT 2005


From: Andreas Aardal Hanssen <bittorrent at andreas.hanssen.name>
> On Fri, 23 Sep 2005, Elliott Mitchell wrote:
> >> It depends on how exactly you implement end-mode, but I'd think a good
> >> starting point is if all chunks/pieces have already been requested.
> >It was stated at one point that the overhead of end-game mode was 30%.
> >This seemed high, but even without it being /that/ high I'm dubious of
> >the usefulness.
> 
> I don't follow you at all here - the problem as I understand it is that
> the last pieces you're downloading are likely to come from a very slow
> link; not necessarily that the piece is uncommon.

That is the cited reason for the existence of end-game mode. I'm
suggesting that though that may be a legitimate reason, it isn't good
enough: the cost outweighs the benefit, and the cost is large. Also
note that in the paper mentioned, the time spent in endgame is minimal,
so those gains are minimal even in an optimistic situation.

> >Of note, you cannot cancel a block once the other end has started sending
> >it. If you're doing your queueing correctly, the other end will have a
> >very short queue of pieces; ideally, zero. At this point cancels are
> >useless, as they don't save bandwidth. What *does* make sense is to
> 
> Trying to tune the remote queue size to zero is silly. It's much better to
> request as many blocks as you need to fill the pipeline, with some
> reasonable upper limit. Then, cancels make a lot of sense. Of course you
> can't cancel blocks in progress (that's pretty obvious), but if the host
> is fast enough to send you all you request, then that host is likely to be
> your #1 uploader, and so you won't be sending him cancels at all.

I think we're partially on the same page. By "pipeline" I'm guessing
you mean everything in transit (including requests in queues on the
remote end). I'm saying that ideally, all those requests will be in
transit _on_ _the_ _network_, rather than in queues on the remote host.
If they're queued on the remote host they're wasted, because they don't
get handled until the in-progress requests get handled. So ideally the
remote queue will always be /almost/ empty (a couple present just in
case out-of-order packets cause a momentary slowdown, but otherwise
empty).
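
To be concrete, here's the kind of sizing I have in mind (a rough
sketch with made-up names, not code from any client):

    # Rough sketch, mine, not any client's: size the request pipeline
    # to the bandwidth-delay product so blocks arrive back-to-back
    # while the remote queue stays nearly empty.
    BLOCK_SIZE = 32 * 1024  # bytes per REQUEST

    def target_outstanding(download_rate, rtt, slack=2):
        """download_rate in bytes/sec, rtt in seconds. Keep enough
        requests in flight to cover one round trip, plus a couple of
        spares in case of out-of-order packets."""
        return int(download_rate * rtt / BLOCK_SIZE) + slack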

Requests queued on the remote peer can be canceled. Requests in transit
towards the remote peer can be canceled when they hit the peer's queue
/if/ they stay there long enough for a CANCEL to catch up to them. Pieces
returning to the local peer cannot be canceled.

The issue is that ideally requests won't spend _any_ time in the remote
queue. In that case there isn't any window of opportunity for the
CANCEL message to overtake the REQUEST message, so the CANCEL message
becomes worthless: it can never do its job.
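
Put as a timing check (hypothetical names): a CANCEL sent t seconds
after its REQUEST arrives at the remote peer roughly t seconds behind
it, so it only helps if the REQUEST sits in the queue at least that
long:

    # Illustration only (names are mine): when can a CANCEL do its job?
    def cancel_is_useful(queue_wait, cancel_lag):
        """queue_wait: how long the REQUEST sits in the remote queue.
        cancel_lag: how far behind it the CANCEL trails on the wire.
        With the remote queue kept empty, queue_wait is ~0 and this
        is never true."""
        return queue_wait > cancel_lag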

> >download different blocks of a piece from different peers. The worst case
> >then becomes you wait for one 32KB block from a particular peer, even a
> >modem won't take long to send 32KB. The big issue is assigning blame if
> >the piece hash turns out incorrect.
> 
> Indeed; that is the caveat of the endgame mode; if you get a single junk
> block, it's hard to know who to blame.

Though at worst you only need to download evil+1 copies of the piece
(evil being the number of evil attacking clients). The moment two peers
give you different data for the same block, you *know* that at least
one of those peers is evil. Run the hash using blocks from one of the
two; if the hash checks out, you know the other peer is the evil one.
Otherwise run the hash using the other peer's blocks; if it verifies,
you can declare the first peer to be evil. If neither verifies, then
you've got multiple evil peers.
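
A sketch of that check (my function names; the piece hash really is
SHA-1 in BitTorrent):

    import hashlib

    def find_liar(blocks, idx, data_a, data_b, expected_sha1):
        """blocks: the piece's blocks as received; idx: the disputed
        block; data_a/data_b: the versions peers A and B sent.
        Returns 'A', 'B', or 'multiple'."""
        for other, data in (('B', data_a), ('A', data_b)):
            trial = list(blocks)
            trial[idx] = data
            if hashlib.sha1(b''.join(trial)).digest() == expected_sha1:
                return other  # this version verified; the other peer lied
        return 'multiple'  # neither verifies: more than one evil peer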

> >WTF are you doing with more than 100 peers? You've pushed the minimal
> >BitTorrent protocol overhead above 5% of your entire bandwidth right
> >there. The mainline peer counts are quite good for most circumstances.
> >What clients should make easily changeable is the queue depth, *that*
> >will help bandwidth far more often than more peers helps.
> 
> No, you misunderstand. Firstly, there's nothing wrong with keeping over
> 100 peers alive. Secondly, the point was that sending endgame requests to
> them all will trash your bandwidth, and so I am interested in hearing how
> other client authors have implemented their endgame algorithms.

The above details why attempting to cancel requests is worthless. Looking
at mainline 4.0.4, the queue depth is auto-tuned, a very important
enhancement.
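
I haven't studied mainline's tuning code, but I'd guess the flavor is
something like this hill-climbing sketch (entirely my speculation, NOT
mainline's actual algorithm):

    def tune_depth(depth, rate_now, rate_before):
        """Grow the queue while doing so improves throughput;
        otherwise back off, keeping a small floor as slack."""
        if rate_now > rate_before:
            return depth + 1
        return max(2, depth - 1)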

You get major performance damage if you keep 100 peers alive. The
number of HAVE messages is directly proportional to the number of
peers. At 30 peers the HAVE messages account for 50% of the BitTorrent
protocol overhead, or 1% of the payload size. At 100 peers, HAVE
messages account for 75% of the overhead, or 3% of the size of the
payload.
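
A back-of-envelope check (the 9-byte HAVE size comes from the wire
format; the 32KB piece size is my assumption to make the figures line
up):

    # 9-byte HAVE: 4-byte length prefix + 1-byte ID + 4-byte piece index
    HAVE_LEN = 9
    PIECE = 32 * 1024  # assumed piece size
    for peers in (30, 100):
        pct = 100.0 * peers * HAVE_LEN / PIECE
        print("%d peers: ~%.1f%% of payload in HAVEs" % (peers, pct))
    # -> ~0.8% at 30 peers, ~2.7% at 100; roughly the 1% and 3% above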

Though 3% isn't a huge percentage, considering the sizes payloads run
to, 3% is likely to be several megabytes. Do you see a reason that
justifies an additional 2% overhead?

I'm surprised Bram increased the defaults in 4.0.4, though the
performance gain from auto-tuning the queue depth likely more than
makes up for this loss.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/




