[BitTorrent] Re: bt2 protocol features

John Prevost j.prevost at gmail.com
Fri Jul 16 10:51:01 EDT 2004


On Fri, 16 Jul 2004 00:45:05 -0700 (PDT), Elliott Mitchell <ehem at m5p.com> wrote:
> There isn't any point in getting block-level hashes ahead of
> time, only piece-level hashes. You need to be able to verify
> pieces before you advertise having them, in order to avoid
> passing around bad pieces; however, since you can't advertise
> having individual blocks why bother retrieving the hashes for
> them, since once you've got all the blocks you can verify
> against the piece hash?
  {...}
> Depends on the fan-out factor. Your 2.3% sounds pretty high,
> and that isn't really that much overhead. This is inefficient
> because you're sending hashes multiple times. Not really that
> big a deal, but I cringe at the repeated copies.

Hmm.  First part: An interesting point.  My argument would be
simplicity--even though the ~2% overhead is slightly steep
(100MB-600MB is the typical size of the individual files I download
in torrents; 8GB is the ballpark for the largest batch torrents I've
seen, which are split into smaller files), this approach has several
really nice features.  The first is that it doesn't need additional
(possibly troublesome) protocol support for extra commands or for
"special" blocks or pieces that contain the hashes.  The second is
that it works without either peer needing to know any state of the
other beyond "the other peer has this block" (which, admittedly, is
only known at piece resolution).  The third is that the receiving
peer needs no state to record hash information: each received block
carries enough information to validate it in a vacuum.  And finally,
you learn at the individual block level whether the data is valid,
without having to go into some sort of recovery mode.

By that last, I mean this: if each block carries its validation data,
identifying a bad block and the peer that sent it requires these
steps (sketched in code after the list):

1) Receive the data for the block.
2) Validate the data using the hashes sent with it.
3) If it's bad, record that the sending peer sent another block of bad
data, take action against that peer if desired, and mark the block as
not yet received so that it will be requested again.
4) If it's good, mark the block as received.
When the piece is complete:
5) Send HAVE messages to appropriate peers.
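
A minimal sketch of that flow's validation step, assuming a bt2-style
SHA-1 hash tree in which each block message carries the sibling
hashes on its path up to a root the receiver already trusts (the
actual bt2 hash layout isn't settled, so the tree shape and all names
here are my own):

    import hashlib

    def sha1(data):
        return hashlib.sha1(data).digest()

    def verify_block(block_data, block_index, sibling_hashes, root_hash):
        # Walk from the leaf up to the root using only what arrived in
        # the same message as the block data.
        node = sha1(block_data)
        index = block_index
        for sibling in sibling_hashes:
            if index % 2 == 0:            # we are the left child
                node = sha1(node + sibling)
            else:                         # we are the right child
                node = sha1(sibling + node)
            index //= 2
        return node == root_hash          # trusted root from the metainfo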

If instead we've received and validated the per-piece hashes ahead of
time, and fetch block hashes only on failure, these are the steps
required (bookkeeping sketched after the list):

1) Receive the data for the block.
2) Record that peer X sent block Y of piece Z.
3) Mark the block as received.
Once the piece is complete:
4) If it's bad, request the block hashes for piece Z from some peer.
5) Validate the block hashes of Z against the piece hash.  If they do
not validate, mark the peer that sent the block hashes as bad and try
another one, if possible.
6) For each block, validate the block.
7) For each block that did not validate, mark that block as
unreceived, mark that the peer who sent it has sent another block of
bad data, take action against the peer if desired.
8) If there were any bad blocks, put this piece back in circulation
so that its missing blocks are downloaded again.
9) If everything validated, mark the piece as received, send HAVEs, etc.
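
For contrast, here is roughly the bookkeeping the lazy-hash scheme
forces on the receiver (again just a sketch; the class and method
names are invented, and step numbers refer to the list above):

    import hashlib

    class LazyHashPiece:
        # State kept per piece when block hashes are fetched only after
        # the piece hash fails.
        def __init__(self, num_blocks):
            self.blocks = [None] * num_blocks  # block data, once received
            self.sender = [None] * num_blocks  # step 2: who sent each block

        def on_block(self, index, peer, data):
            self.sender[index] = peer          # step 2
            self.blocks[index] = data          # step 3

        def complete(self):
            return all(b is not None for b in self.blocks)

        def reject_bad_blocks(self, block_hashes):
            # Steps 6 and 7: find blocks failing their lazily fetched
            # hashes, clear them for re-request, return the guilty peers.
            bad_peers = []
            for i, data in enumerate(self.blocks):
                if hashlib.sha1(data).digest() != block_hashes[i]:
                    bad_peers.append(self.sender[i])
                    self.blocks[i] = None
                    self.sender[i] = None
            return bad_peers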

Which reminds me of the other nice feature of having each block
message carry both the data and all of the information needed to
validate it: the block appears bad if either the hashes or the data
have been tampered with.  In short, there's pretty much no
bookkeeping in this scenario, whereas validating at the block level
only once a problem is known requires lots of bookkeeping and
possibly repeated tries to fetch hashes from different peers, etc.

  {... suggestion that disconnecting peers is dangerous ...}
> Why is it dangerous? You're going to expend bandwidth sending
> keepalives. They're going to be unable to keep their interested
> state correct if you don't update them. Certainly you want to 
> keep a few inactive peers available for use if an active peer
> drops out, but not very many (as they otherwise roast your 
> bandwidth).

Well, let's presume for the moment that 35 is a healthy number: that
more than 35 means the trade-off of a larger pool for more bandwidth
isn't worth it, and that fewer than 35 leaves the pool too small for
your connection to be healthy.  (In reality, I think it's trickier
than that: pool size is a useful knob to expose to users--while some
will foolishly adjust it the wrong way for their connection, it's a
fairly important parameter for managing the trade-off between
overhead and pool size.)

If we presume that, then there's little reason to consider
aggressively disconnecting peers: a user who has set this number
higher or lower than is appropriate is only hurting themselves.  The
potential benefit of preventing many peers from connecting is then
overall network health: it's possibly a good idea if the BT protocol
is pissing off network admins by using too much bandwidth.

Out of curiosity, how much bandwidth would automatic TCP keepalives
require, assuming that BT protocol keepalives were removed?
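
(My own back-of-envelope guess, with assumed numbers: a BT keepalive
is a 4-byte length prefix, so roughly 44 bytes on the wire after ~40
bytes of TCP/IP headers, sent about every two minutes; a TCP
keepalive is an empty ~40-byte segment sent, by default, only every
two hours, though that interval is tunable.)

    # Rough per-connection cost, assuming ~40 bytes of TCP/IP headers.
    def keepalive_bytes_per_sec(payload_bytes, interval_secs):
        return (payload_bytes + 40) / interval_secs

    bt  = keepalive_bytes_per_sec(4, 120)    # ~0.37 bytes/sec per peer
    tcp = keepalive_bytes_per_sec(0, 7200)   # ~0.006 bytes/sec per peer
    print(bt * 35, tcp * 35)                 # totals over a 35-peer pool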

As for why it's unhealthy to disconnect peers based on some metric of
how "good" they are to you: if you have a client like Azureus that
lets you watch the peer pool state in action, take a look at it.  Out
of your pool of 30+ peers, you're generally only seriously talking to
a few of them.  Say your number of simultaneous uploads is set to 5:
you're seriously talking back and forth with four peers, plus a
fifth, optimistic peer that changes every thirty seconds, to which
you send a bit of data to see whether it will treat you better than
one of those four.  You're also receiving data from a handful of
other peers that have chosen you for their optimistic slot.  Out of
the four serious unchokes, it's not unusual to swap one out every
thirty seconds or so.  It really depends on your network: if you find
a peer that's *really* good to you, you're going to keep it.  If all
of the peers have similar upload and download bandwidth, though,
things are more likely to shift around over time.
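
(That's essentially the standard choke/unchoke behavior.  A
simplified sketch of how the five slots get filled--my own
simplification, not any client's actual code:)

    import random

    def pick_unchoked(peers, download_rate, optimistic, slots=5):
        # Keep the (slots - 1) peers we've downloaded from fastest; the
        # caller re-rolls `optimistic` to a random choked peer roughly
        # every thirty seconds to probe for someone better.
        ranked = sorted(peers, key=lambda p: download_rate.get(p, 0),
                        reverse=True)
        serious = ranked[:slots - 1]
        if optimistic is None or optimistic in serious:
            rest = [p for p in peers if p not in serious]
            optimistic = random.choice(rest) if rest else None
        return serious, optimistic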

Now, let's imagine that we disconnect peers that we haven't received
any data from in the past, oh, five minutes.  My estimate would be
that this would be about half of the peer connections.  And then you
have to realize that the peers that *have* sent you data using
optimistic unchoke but which you haven't sent data to (because they
didn't do well enough to get into your serious unchoke set) will now
also disconnect you.

I imagine that it's possible to design things so that disconnecting
peers would work, but it's going to be quite a balancing act to figure
out what the strategy should be--and on top of that, you'll probably
want to then connect to more peers to get back up to a healthy-sized
peer pool after a short time.

The big key to me is that if two peers are connected, they *will*
exchange data.  It's sort of a contract between high-upload-bandwidth
peers and low-upload-bandwidth peers: "I may have enough upload
bandwidth that nobody will want to talk to you, but you can rest
assured that all of us *will* eventually talk to you."  Disconnecting
peers that can't pump as much data will only hurt that.

And note that I say this as someone who has an atypically high
upstream to downstream bandwidth ratio.

This is why I think it's more worthwhile to look at ways to reduce the
amount of overhead from sending the individual HAVE messages to every
peer at the moment pieces are complete.  And even then, I'd be willing
to accept Bram saying "this isn't worth losing up-to-date availability
data."  (Although I am convinced that preferring to download rare
pieces does not perform appreciably better than simple random
selection.)
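
For example, one could queue completed-piece announcements and flush
them on a timer instead of sending one HAVE per piece per peer.
Purely a sketch of the sort of thing I mean--the interval and the
multi-HAVE message are invented, and the cost is exactly that
availability data goes up to ten seconds stale:

    import time

    class HaveBatcher:
        # Queue completed-piece announcements and flush them at most
        # once per `interval` seconds.
        def __init__(self, send_haves, interval=10.0):
            self.send_haves = send_haves  # takes a list of piece indices
            self.interval = interval
            self.pending = []
            self.last_flush = time.monotonic()

        def piece_done(self, index):
            self.pending.append(index)
            now = time.monotonic()
            if now - self.last_flush >= self.interval:
                self.send_haves(self.pending)
                self.pending = []
                self.last_flush = now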

John.

