[bittorrent] Queuing algorithm section

Elliott Mitchell ehem at m5p.com
Sun Aug 5 01:12:04 EDT 2007


> From: Alan McGovern <alan.mcgovern at gmail.com>
> I'm just going to give a heads up that I'm going to rewrite the 'Queuing
> Algorithm' section on the bittorrent spec page. It's been under dispute
> long enough that I'm going to resolve it.

Needed. Two people can argue, hopefully a third party is neutral.  :-)

> The text will change to something along the lines of this:
> 
> 1) A static queue is a bad idea.
> a) Slow peers will have lots of unnecessary pending requests with a
> static queue
> b) Fast peers won't have enough pending requests
> 2) A dynamic queue is a good idea:

I'd agree with this.

> a) The easiest way to do this is to give each peer a static queue depth
> of at least 2 blocks (2x16kB). For each 10kB/sec upload the peer has,
> add one extra pending request. So, the pending requests can be
> calculated as: 2 + (downloadSpeedInKilobytes / 10).

Thinking about it, since the algorithm for queue depth does not affect
protocol conformance, merely performance, perhaps this should be a
sub-section "sample queuing algorithms". *Something* definitely needs to
be said, since this is a *crucial* performance issue. You've also made a
critical mistake in your sample algorithm here: queue depth needs to be
affected by both bandwidth (which you've accounted for) and RTT (which
you've missed). The moment you hit a link with high latency (across the
world or perhaps a satellite link), performance will fall apart.
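
To make that concrete, here's a rough sketch (Python, with names I've
invented; no real client uses this exact code) of a depth calculation
driven by the bandwidth-delay product rather than by bandwidth alone:

    BLOCK_SIZE = 16 * 1024  # standard 16kB request size

    def queue_depth(rate_bytes_per_sec, rtt_seconds, floor=2):
        """Keep roughly one bandwidth-delay product of requests
        outstanding, never fewer than `floor`."""
        # Bytes that can be in flight between us and the peer.
        bdp = rate_bytes_per_sec * rtt_seconds
        # Always keep a couple of requests queued so the peer never
        # sits idle waiting for our next request to arrive.
        return max(floor, int(bdp / BLOCK_SIZE) + 1)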

> b) The pending requests should have an upper limit of 100. You shouldn't
> have more than 100 pending requests off a single peer.
> 
> 
> So, does anyone have any comments about that before I go ahead and make
> that change? The 10kB/sec figure is one I chose from testing different
> values. The choice of 10kB/sec means that for peers with large upload
> bandwidth, you'll end up having a lot more data requested than they can
> supply in one second. For example, with 500kB/sec available, there would
> be 832kB of data requested. For a peer with 2000kB/sec available, you'll
> have 3200kB of pending requests. This should avoid most issues caused by
> a high latency connection. Latencies of up to 1 second can be handled
> with this algorithm (I think, feel free to correct me ;) ).

Works as long as one's latency is less than 1 second. OTOH if someone
has a 33ms link, the queue depth will be excessive. Problem is, both of
these situations exist in Real Life. Satellite links (Antarctica? places
without wired high speed) are known to have high latencies. Where I am,
I can get to many sites in well under 50ms. I'm unsure whether it
affects current generation cable modems, but they used to cause huge
latency as you approached their maximum bandwidth.
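
To put numbers on that, using the queue_depth sketch above for a peer
supplying 500kB/sec:

    queue_depth(500 * 1024, 0.033)  # 33ms link    -> 2 requests
    queue_depth(500 * 1024, 1.0)    # 1s satellite -> 32 requests

The proposed static formula hands that peer 52 requests in both cases:
enough to cover the satellite link, but wildly excessive at 33ms.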

The problem isn't simple. You need to know what the limits of each
individual connection are, and queue appropriately. Attempting to probe
each connection by slowly increasing depth seems a reasonable approach.
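
A minimal sketch of that probing idea (again my own invention, in the
spirit of TCP slow start), run once per measurement interval with the
download rate observed from the peer during that interval:

    def adjust_depth(depth, rate, prev_rate, floor=2):
        if rate > prev_rate * 1.05:
            # The deeper queue produced a real gain; probe upward.
            return depth + 1
        if rate < prev_rate * 0.95:
            # Rate fell; the extra depth is only adding buffering.
            return max(floor, depth - 1)
        return depth  # within noise; hold steady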

> Also, is it worth putting in a hard limit of 100? Should it be less? More?
> Not there?

No. Once you get gigabit to the home, that will be too shallow.  :-)
There have been experiments on backbone links that make that halfway
plausible, though right now only Universities are likely to be in this
category.
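
For a sense of scale (my arithmetic, assuming a 100ms RTT): gigabit is
roughly 125MB/sec, so the bandwidth-delay product works out to

    125MB/sec * 0.1sec = 12.5MB in flight
    12.5MB / 16kB per request = 800 outstanding requests

which is an order of magnitude past a cap of 100.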


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/




