[BitTorrent] Re: bt2 protocol features

John Prevost j.prevost at gmail.com
Tue Jul 13 17:55:35 EDT 2004


Huge message here.  I apologize in advance.  Short form:
 * BT2 sounds like it should have a lot of neat potential!
 * Some thoughts about how it might actually work, and an appeal
   for more info from Bram.
 * Some thoughts about the open questions Bram mentioned.

--- In BitTorrent at yahoogroups.com, Bram Cohen <bram at b...> wrote:
> After much arguing, cogitating, arguing, and cogitating, the bt2
> protocol designs are now further along than they were before.

The changes sound very intriguing and good!  I'm going to re-order
your post a bit to group the related bits together for my
questions...

> Features planned for bt2 -

> merkle hash trees - this is by far the most compelling reason to
> break compatibility. Files in a multi-file will each have their own
> hash root.
  {...}
> a beefed up peer protocol - there are some subtle changes to the
> state machine planned, which are strict improvements but kind of
> involved so I'll skip over them now. Also peers will announce which
> files they want to enable cross-torrent trading. Finally,
> announcements of having parts of pieces will be added, since that's
> enabled by hash trees. Doing that last part well requires some
> smarts, but again the smarts don't change the protocol so I'm
> punting.
  {...}
> The main sticking point left is how to deal with piece sizes -
> ideally all peers should be using the same piece size, but whether
> that's a good thing to require and if so (or even if not) how to set
> it I'm not sure of yet.

I'm very interested in learning what specific changes you have in
mind for the protocol.  It seems to me (and I must admit that
I'm significantly more conservative in my thoughts than the
folks bandying around ideas of purely hash-based designs) that
switching to merkle hashes and "cross-torrent" trading can have some
pretty big impacts.  Here are my immediate thoughts:

   By using Merkle hash trees, it would be possible to remove the
   distinction between pieces and piece-parts.  Lately, I've been
   using Azureus, which always requests parts in 16kB chunks.  The
   torrents I've been on recently have piece sizes varying from 32kB
   to 2MB.

   Unfortunately, having thought about that quite a bit, I don't
   think it works very well.  Specifically, with a very small piece
   size, the number of HAVE messages sent grows nastily, and the
   amount of data you need to track all of your peers also becomes
   awkward.  (In the specific example I was thinking about, you'd
   need almost 68k per peer just to track that peer's availability
   data.)

   Which, I guess, is where the idea of "partial have" that you
   mention comes in.  I'm not really sure that's a great idea either,
   though.  It introduces a lot of logic that only really helps
   when downloading the first couple of pieces.

   I guess my approach to this would be to say that pieces are
   always 128kB, and blocks are always 16kB.  At that level, it
   only requires 8 blocks to complete a piece (each of which can
   be checked for poison independently), and a large torrent
   still only needs about 1kB per GB of data per peer in order to
   track availability, or 800kB for 100 peers on an 8GB torrent.
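
   (To make the tree concrete, here's roughly the construction I
   have in mind, as a sketch in Python.  I'm assuming SHA-1 leaf
   hashes over fixed 16kB blocks and plain pairwise concatenation up
   the tree; Bram hasn't said what the real scheme will be.)

import hashlib

BLOCK = 16 * 1024  # 16kB blocks are the leaves of the tree

def merkle_root(data):
    # Hash each 16kB block to get the leaf hashes.
    level = [hashlib.sha1(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    # Fold pairs of hashes until a single root remains.
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            # An odd trailing node is promoted unchanged here; the
            # real scheme might pad instead.
            nxt.append(hashlib.sha1(pair[0] + pair[1]).digest()
                       if len(pair) == 2 else pair[0])
        level = nxt
    return level[0]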

   Some other related issues (and I'm very interested to hear the
   details you have in mind; this is just my best guess): with
   the cross-torrent trading, I'm thinking you intend to make
   it possible for a batch torrent and an individual torrent for
   one file from that batch to share data with each other.
   Additionally, a smart peer could be uploading and downloading
   data for completely independent files all on the same connection.

   Based on that, I figure that many messages will be per-file
   (identified by the root hash).  So the protocol might look
   (without thinking about what other things you have in mind)
   something like this:

KEEPALIVE  len(0)
CHOKE      len(1)      choke
UNCHOKE    len(1)      unchoke
INTEREST   len(1)      interest
UNINTEREST len(1)      uninterest
HAVE       len(25)     have       root-hash*20 piece*4
BITFIELD   len(n+21)   bitfield   root-hash*20 bitfield*n
REQUEST    len(29)     request    root-hash*20 piece*4 block*4
PIECE      len(29+16k) piece      root-hash*20 piece*4 block*4 data*16k
CANCEL     len(29)     cancel     root-hash*20 piece*4 block*4

WANT       len(21)     want       root-hash*20
UNWANT     len(21)     unwant     root-hash*20

(This is my best guess about the right way to do multiple files on
one connection.  The WANT message means that the peer sending it wants
the file identified by that root hash.  Any peer that dynamically
stops and starts downloading and uploading files needs to keep track
of what its peers want, so that it can send a BITFIELD message
if a file a peer wants suddenly becomes available.  A typical
interaction would follow the handshake with a sequence of WANT
and BITFIELD or HAVE messages--WANT messages identifying what the peer
wants to download, BITFIELD and HAVE messages indicating that pieces
of the given file are available.  If both peers are working on single
files, the conversation would go like: p1: WANT x, p2: WANT x, p1:
BITFIELD x ..., p2: BITFIELD x ...)
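
   (A client would also need some per-neighbor bookkeeping to make
   that work.  Here's a sketch of what I imagine, with all of the
   names being my invention:)

class Neighbor:
    def __init__(self):
        self.wants = set()      # root hashes seen in WANT messages
        self.haves = {}         # root hash -> that peer's piece bitfield
        self.choked = True      # one flag per connection, not per file
        self.interested = False

def owed_bitfields(neighbors, root_hash):
    # When a file first becomes available locally, every neighbor
    # that has WANTed it is owed a BITFIELD for that root hash.
    return [n for n in neighbors if root_hash in n.wants]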

GETHASH    len(25)     gethash    root-hash*20 piece*4
HASHES     len(185)    hashes     root-hash*20 piece*4 data*160

(Presuming you can only get the set of 8 hashes for any one specific
piece.  There might be more efficient ways to do this; this one is
just easy to reason about quickly.  One can presume that a peer should
always have enough data to validate any chunk from any piece it has
all the way up to the root.)
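
(And here's roughly how a client might check a received 16kB block
against a HASHES reply, assuming the same pairwise construction as
the tree sketch above, and that the piece's own hash has already
been validated up to the root:)

import hashlib

def verify_block(data, block_num, leaf_hashes, piece_hash):
    # leaf_hashes: the 8 twenty-byte hashes from a HASHES reply.
    # piece_hash: this piece's hash, already checked against the root.
    if hashlib.sha1(data).digest() != leaf_hashes[block_num]:
        return False  # corrupt or poisoned block
    # Fold the 8 leaves pairwise and confirm they reproduce the
    # piece hash we already trust.
    level = list(leaf_hashes)
    while len(level) > 1:
        level = [hashlib.sha1(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0] == piece_hash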

   In that picture, I'm assuming that all requested blocks are of
   a 16kB fixed size (which works well with the merkle tree), and
   that choke/unchoke interest/uninterest is per connection, not per
   file (which only makes sense.)
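
   To make the wire format concrete, here's how a few of those
   messages might get packed, with made-up message id numbers
   (Bram hasn't published any) and the length framing left out:

import struct

# Hypothetical id numbers -- nothing official exists yet.
HAVE, REQUEST, WANT = 4, 6, 10

def pack_have(root_hash, piece):
    # 1-byte id + 20-byte root hash + 4-byte piece number = len(25)
    return struct.pack(">B20sI", HAVE, root_hash, piece)

def pack_request(root_hash, piece, block):
    # 1-byte id + root hash + piece + block = len(29)
    return struct.pack(">B20sII", REQUEST, root_hash, piece, block)

def pack_want(root_hash):
    # 1-byte id + 20-byte root hash = len(21)
    return struct.pack(">B20s", WANT, root_hash)

assert len(pack_have(b"\x00" * 20, 0)) == 25
assert len(pack_request(b"\x00" * 20, 0, 0)) == 29
assert len(pack_want(b"\x00" * 20)) == 21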

   So let's look at that, assuming 100 peers and 8GB of data to
   transfer in 20 files.

   Each peer needs to send a BITFIELD for each file it has to each
   other peer.  Each file is about 410MB, or 3,280 pieces, so its
   bitfield is 410 bytes and the whole message is 432 bytes.
   Multiply that by 20 files (65,600 pieces in all) and by 100
   peers.  Sent: 864,000, Received: 864,000.

   Every piece's set of block-hashes needs to be retrieved once.
   Counting one byte of framing per message, that's 26 bytes for
   every request and 186 for every receipt, times 65,600 pieces.
   Sent: 1,705,600, Received: 12,201,600

   If we assume that due to choking and unchoking, each block ends
   up being requested about twice (pessimistic, I think), that's
   30 bytes per request times two requests times 524,800 blocks
   (65,600 pieces times 8 blocks).  Sent: 31,488,000

   Each block received carries 30 bytes of overhead.  That's 30
   bytes per block times 524,800 blocks.  Received: 15,744,000.

   Each piece completed requires a HAVE message be sent to every
   peer.  That's 26 bytes times 65,600 pieces times 100 peers.  Sent:
   170,560,000.  And call it Received: 170,560,000, as well.

   So:
                       Send                Receive
   Bitfields:      864,000 (0.010%)      864,000 (0.010%)
   Hashes:       1,705,600 (0.020%)   12,201,600 (0.142%)
   Requests:    31,488,000 (0.367%)   15,744,000 (0.183%)
   Haves:      170,560,000 (1.986%)  170,560,000 (1.986%)
               -----------           -----------
   Total:      204,617,600 (2.382%)  199,369,600 (2.321%)
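
   (For anyone who wants to check me, the arithmetic is easy to
   reproduce; each message costs its len() from the tables above
   plus the one byte of framing I assumed:)

GB = 2 ** 30
data = 8 * GB
peers = 100
pieces = 410 * 8 * 20       # 410-byte bitfields times 20 files: 65,600
blocks = pieces * 8         # 524,800 16kB blocks

sent = (432 * 20 * peers        # bitfields
        + 26 * pieces           # gethash requests
        + 30 * 2 * blocks       # requests, each block asked for twice
        + 26 * pieces * peers)  # haves
received = (432 * 20 * peers    # bitfields
            + 186 * pieces      # hashes replies
            + 30 * blocks       # per-block piece overhead
            + 26 * pieces * peers)  # haves

print(sent, 100.0 * sent / data)          # 204617600, about 2.38%
print(received, 100.0 * received / data)  # 199369600, about 2.32%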

   Even though the numbers look kind of scary and big, even on
   a really large torrent with a really large number of peers,
   the BT protocol overhead would be under 2.5% of the data size.
   It's interesting to note that even with a pessimistic look at a
   naive approach, the hash data doesn't take a huge amount of the
   overhead.  The bulk is in the haves and requests (which is
   perhaps another reason to avoid partial haves.)

   Based on the above, it's my feeling that 128kB pieces with
   16kB blocks provide enough resolution to prevent the kind
   of incredibly bad performance I see on 2MB piece-size torrents
   without increasing the overhead beyond a reasonable level.
   And I think that even without being able to say you HAVE
   a smaller segment than a piece, 128kB pieces should be
   completed quickly enough to get things rolling.  Finer
   resolution for HAVEs would both increase the overhead to
   a worrisome level and make the system more complicated than
   it really needs to be.

Anyway, is this even vaguely like what you have in mind?  Using this
model, the .torrent file is much smaller in most cases than it is
now.  Piece and block sizes would become more consistent.  Multiple
torrents (including the same file being shared as both part of a
multi-file torrent and individually) can be shared on a single pipe. 
If one peer is seeding file A and downloading file B, and another is
seeding file B and downloading file A, they should be able to talk to
each other happily tit-for-tat.

> udp-based tracker protocol (with http-based alternative for those
> who care less about bandwidth than convenience)
  {... other tracker things ...}

This seems like a good thing.  I'd also be interested to hear what
you imagine the tracker conversation will look like.  Elsewhere,
you said something about imagining that a client identifies
all of the files it's interested in to the tracker, and the tracker
sends back peers that are also interested in those files.  Could
you expand on that a bit?

I don't think you intend to go that far--but the new single-connection
many-files stuff makes the world of trackers a much more interesting
place.  Right now, the majority of trackers control what
torrents they're willing to track.  But with multiple files going
over single connections, things get a little more complicated.  Should
a peer identify to the tracker only the files it knows that tracker
is managing?  If it can identify others, that could make things
less controlled for the people who run trackers (which would be bad.)

On the other side, though, it would allow trackers to provide better
information on peers.  So I wonder, because this is a pretty rich
area to explore.

> Also I'm not sure how to make trackers support scrape functionality
> (or more to the point, I'm not sure how much scrape functionality to
> carry over, and in what way). Other than those issues all that's
> left is a whole mess of details.

I wanted to address this one specifically--I think it's important to
keep the scrape functionality around.  Somehow.  The key thing for me
is that it's important for a multi-torrent client to be able to
discover what files need to be re-seeded.  This is the primary reason
I use Azureus: it's a bit bloaty, and some of its behavior makes me
wince uncontrollably, but it's the best there is for re-seeding old
torrents automatically.

If anything, having some kind of scrape-like functionality built into
the core tracker protocol ought to cut down on the occasional excesses
of clients like Azureus.  It will then be clear exactly how a client
is supposed to interact with the tracker when it's discovering
information.  The way I figure it, the easy picture is that the
tracker is acting as a sort of cache for the "HAVE" and "WANT"
messages of a client.  The resolution is reduced to "complete" or
"incomplete" only, but that's what it's doing.  As such, it's a core
job for it to answer both "give me some peers" queries and "what's
the status of this torrent?" messages.
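
Concretely, I picture the tracker's job as something like this toy
model (the names and shapes are all my invention):

from collections import defaultdict

class Tracker:
    def __init__(self):
        # root hash -> peer id -> True if that peer has it complete
        self.files = defaultdict(dict)

    def announce(self, peer, root_hash, complete):
        # The reduced-resolution cache of a peer's HAVE/WANT state.
        self.files[root_hash][peer] = complete

    def peers_for(self, root_hash):
        # The "give me some peers" query.
        return list(self.files[root_hash])

    def scrape(self, root_hash):
        # The "what's the status of this torrent?" query.
        seeds = sum(1 for c in self.files[root_hash].values() if c)
        return {"complete": seeds,
                "incomplete": len(self.files[root_hash]) - seeds}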

And to address one more thought: you could argue that a client should
just fetch peers for all of the files it wants and has available, all
together.  But I think this is both bad for users (it reduces your
ability to queue up downloads rather than downloading everything
simultaneously) and bad for the health of the torrents.  The key
thing I want the scrape data for is so that I can focus my efforts
on "needy" files.  If I (or my client) look and see that file1 has
50 seeds and file2 has 0 seeds, I want to spend my bandwidth seeding
file2, and completely ignore file1.
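
In code, the policy I want is no more complicated than this sketch
(taking scrape results keyed by root hash):

def neediest_first(scrapes):
    # scrapes: root hash -> {"complete": n, "incomplete": m}
    return sorted(scrapes, key=lambda h: scrapes[h]["complete"])

# file2, with zero seeds, sorts ahead of file1 and its fifty:
order = neediest_first({"file1": {"complete": 50, "incomplete": 3},
                        "file2": {"complete": 0, "incomplete": 12}})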

In a way, that's sort of a loose extension of super-seed mode.  Don't
bother sending stuff people already have if you can send stuff that
people don't have.  It's up to the client or the user to manage it,
but if the trackers don't provide enough information to do this,
it hurts the whole system.

John.



