[BitTorrent] Re: bt2 protocol features

Elliott Mitchell ehem at m5p.com
Thu Jul 15 03:55:40 EDT 2004


>From: John Prevost <j.prevost at gmail.com>
>    I guess my approach to this would be to say that pieces are
>    always 128kB, and blocks are always 16kB.  At that level, it
>    only requires 8 blocks to complete a piece (each of which can
>    be checked for poison independently), and a large torrent
>    still only needs about 1kB per GB of data per peer in order to
>    track availability, or 800kB for 100 peers on an 8GB torrent.

No one has commented on this. These sizes appear reasonable, though both
could (I'm not sure whether they should) be specified in metafiles. It
should be noted that these numbers come from different places: the piece
size is the chunkiness of BITFIELD/HAVE, whereas the block size is the
chunkiness of REQUEST/PIECE. Given that these are two separate things,
I think I can reject the complaints about the inelegance of having two
sizes in the protocol, partly on engineering grounds: forcing them to be
the same compromises one or both.

Looking at these, fixing the block size at 16KB or 32KB might indeed be
the way to go, though I have no idea which is better. This also has to be
the finest level of the Merkle tree. You really need to be able to verify
data at this granularity, otherwise leech peers can steal too much
bandwidth before being caught, and you have to discard too much data on a
bad block.
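
To illustrate (a Python sketch; the exact Merkle node rule is my
assumption here, the thread hasn't pinned one down):

    import hashlib

    BLOCK_SIZE = 16 * 1024

    def block_ok(block, index, block_hashes, piece_hash):
        # Leaf check: the downloaded 16KB block must match its hash, so
        # a poisoned block costs 16KB of discarded data, not 128KB.
        if hashlib.sha1(block).digest() != block_hashes[index]:
            return False
        # Node check: the 8 leaf hashes must roll up to the piece hash
        # (assuming piece hash = SHA1 over the concatenated block hashes).
        return hashlib.sha1(b"".join(block_hashes)).digest() == piece_hash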

I suspect specifying the full piece size in the metafile might be the way
to go. With smaller torrents you need smaller pieces so peers that are
starting can begin uploading sooner; with larger torrents you need to
keep the bitfield messages small and reduce the absolute amount of
bandwidth spent announcing pieces. If this size is to be fixed, I suspect
the larger 256KB is better. If variable, 128KB might be the absolute
minimum allowable value.
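
For concreteness, the arithmetic behind the tradeoff (Python, using the
8GB example from later in this message):

    # Bigger pieces shrink the bitfield (and the HAVE traffic); smaller
    # pieces let a starting peer complete and announce a piece sooner.
    torrent = 8 * 2**30
    for piece in (128 * 2**10, 256 * 2**10):
        pieces = torrent // piece
        print(f"{piece // 1024}KB pieces: {pieces} pieces, "
              f"{pieces // 8}B bitfield per peer")
    # 128KB pieces: 65536 pieces, 8192B bitfield per peer
    # 256KB pieces: 32768 pieces, 4096B bitfield per peer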


> KEEPALIVE  len(0)

This can be thought of as one piece of the protocol, a physical layer of
sorts. I'd tend to ax this and use TCP-level keepalives, as those use
less bandwidth. Since this message is simply a consequence of the lowest
level of the protocol, axing it gains nothing unless the protocol becomes
completely synchronous and relies on the known sizes of the messages to
stay synchronized, rather than sending a size with every message.
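
For reference, the framing being discussed looks like this (sketch; the
type byte value is illustrative, not from the draft):

    import struct

    def frame(payload=b""):
        # Every message is a 4-byte length prefix plus payload; KEEPALIVE
        # is just len(0) with no payload, 4 bytes on the wire.
        return struct.pack(">I", len(payload)) + payload

    keepalive = frame()        # b'\x00\x00\x00\x00'
    choke = frame(b"\x00")     # len(1) + one type byte (id assumed)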

> CHOKE      len(1)      choke
> UNCHOKE    len(1)      unchoke
> INTEREST   len(1)      interest
> UNINTEREST len(1)      uninterest

And this is another grouping of protocol messages. They exist, they work,
no comment.

> HAVE       len(25)     have       root-hash*20 piece*4
> BITFIELD   len(n+21)   bitfield   root-hash*20 bitfield*n

Another group. Any structural change to one affects the other. If hash
mode were used, ordering by the arithmetic order of the hashes would make
more sense; otherwise no real comment. I like grabbing a byte in the
BITFIELD message to designate the hash type in use, as opting for SHA256
is a foreseeable future choice. Given the size of the HAVE message you
can infer the size of the hash, and differing hashes aren't likely to
collide.
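
The inference is trivial (sketch, assuming the message layout above):

    def hash_len_from_have(msg):
        # HAVE is <1B type><hash><4B piece#>, so the hash length falls
        # out of the message length.
        return len(msg) - 1 - 4

    assert hash_len_from_have(bytes(25)) == 20    # SHA1
    assert hash_len_from_have(bytes(37)) == 32    # a future SHA256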


There is a weakness here. This is the single BitTorrent message that can
reasonably exceed 64KB. If BITFIELD messages were forced to obey a 64KB
limit, then all BitTorrent messages would be under 64KB and the physical
layer could be changed to use only 2 bytes to designate a packet length,
saving 2 bytes on nearly every packet. In that case files would have to
approach 64GB before BITFIELD messages needed to be fragmented.
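
Checking that claim (Python):

    # With a 2-byte length field the largest message is 65,535 bytes;
    # after the type byte and 20-byte root hash the bitfield itself can
    # be 65,514 bytes, i.e. 524,112 pieces.
    max_bitfield = 2**16 - 1 - 1 - 20
    pieces = max_bitfield * 8
    print(pieces * 128 * 2**10 / 2**30)    # ~63.98GB at 128KB pieces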


> REQUEST    len(33)     request    root-hash*20 piece*4 block*4
> PIECE      len(28+16k) piece      root-hash*20 piece*4 block*4 data*16k
> CANCEL     len(33)     cancel     root-hash*20 piece*4 block*4

REQUEST and CANCEL come to 33 bytes only if you include both the packet
type byte and the 4-byte length. Everywhere else you include the type
byte but not the length, so these should be 29. The PIECE message also
needs a type byte, so 29+16K.

Ah, now the overhead of hash mode is zero. Instead of using the root hash
plus piece number, if by-hash mode were implemented, the hash of the
piece alone is sufficient to identify it. I would propose using the hash
to identify the block as well, but this would mean the client would have
to retrieve the hashes of all the 16KB blocks; due to their greater
number this would be an expensive proposition. If the fixed block size is
implemented, only a single byte would be needed to designate which block
was requested or returned (and the extra bits could be reserved).
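
Byte accounting for the variants (Python, counting everything handed to
TCP; the 1-byte block index and hash mode are the proposals above, not
the draft):

    LEN, TYPE, HASH, PIECE_NO, BLOCK_NO = 4, 1, 20, 4, 4

    as_drafted = LEN + TYPE + HASH + PIECE_NO + BLOCK_NO    # 33 bytes
    one_byte_block = LEN + TYPE + HASH + PIECE_NO + 1       # 30 bytes
    hash_mode = LEN + TYPE + HASH + 1    # 26: the piece hash alone
                                         # identifies the piece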


There is also the concern that the value of CANCEL messages is dubious.


A bandwidth optimization here. I'd like to propose that short PIECE
messages be allowed in the case of a block partially or fully filled with
zeros. This also solves the issue of a helper downloading the tail piece
of a file without knowing where the end is: simply send empty PIECE
messages for the phantom blocks.
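
A sketch of the receiving side (Python; assumes the receiver pads short
payloads back out to the fixed block size):

    BLOCK_SIZE = 16 * 1024

    def inflate(data):
        # A PIECE payload may be truncated; trailing zeros are implied,
        # so a zero-filled or phantom tail block costs almost nothing
        # on the wire.
        return data + b"\x00" * (BLOCK_SIZE - len(data))

    assert inflate(b"") == b"\x00" * BLOCK_SIZE    # empty phantom block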


> WANT       len(21)     want       root-hash*20
> UNWANT     len(21)     unwant     root-hash*20
> 
> (This is my best guess about the right way to do multiple files on
> one connection.  The WANT message means that the peer sending it wants
> the file identified by that root hash.  Any peer that dynamically
> stops and starts downloading and uploading files needs to keep track
> of what its peers want, so that it can send a BITFIELD message
> if a file a peer wants suddenly becomes available.  A typical
> interaction would follow the handshake with a sequence of WANT
> and BITFIELD or HAVE messages--WANT messages identifying what the peer
> wants to download, BITFIELD and HAVE messages indicating that pieces
> of the given file are available.  If both peers are working on single
> files, the conversation would go like: p1: WANT x, p2: WANT x, p1:
> BITFIELD x ..., p2: BITFIELD x ...)

A subtlety here: in a case like this, a peer should not send HAVE or
BITFIELD messages for a file to a peer unless that peer has sent a
message saying it wants it. This would solve an issue I'd been pondering
with hash mode: a helper's bitfield might indicate it was missing pieces,
but as others had already collected them, the helper wouldn't be
interested in receiving HAVE or BITFIELD messages.

> GETHASH    len(25)     gethash    root-hash*20 piece*4
> HASHES     len(185)    hashes     root-hash*20 piece*4 data*160
> 
> (Presuming you can only get the set of 8 hashes for any one specific
> piece.  There might be more efficient ways to do this, this is just
> easy to reason about quickly.  One can presume that a peer should
> always have enough data to validate any chunk from any piece it has
> all the way up to the root.)

Or an appropriate number to validate all the individual blocks. Again,
hash mode would merely need the piece hash to identify everything. I'd
tend to name these "GETBLKHASHES" and "BLKHASHES". I'd also tend to
allocate a standard piece to contain all the piece hashes, and only use
these messages for the case of a block failing to verify. This means much
smaller overhead for transferring the hashes used in a large file.
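
The savings are easy to see (Python, 8GB example):

    # All the piece hashes for an 8GB file fit in ten ordinary 128KB
    # pieces, fetched with the normal REQUEST/PIECE machinery.
    pieces = 8 * 2**30 // (128 * 2**10)    # 65,536
    hash_bytes = pieces * 20               # 1,310,720 bytes of SHA1
    print(hash_bytes / (128 * 2**10))      # 10.0 pieces worth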


For the goals of hash mode there is an interesting dilemma here: this
makes it quite possible for a helper to obtain details of the piece
reassembly. One option would be to designate a file by two hashes: the
straight SHA1 hash of the file, used in BITFIELD and WANT messages, and
the SHA1 root of the Merkle tree, used to figure out which piece holds
the piece hashes.


>    In that picture, I'm assuming that all requested blocks are of
>    a 16kB fixed size (which works well with the merkle tree), and
>    that choke/unchoke interest/uninterest is per connection, not per
>    file (which only makes sense.)
> 
>    So let's look at that, assuming 100 peers and 8GB of data to
>    transfer in 20 files.

I'm not going to bother with receive versus send, as under reasonable
circumstances these will be close to identical.

>    Each peer needs to send a BITFIELD for each file it has to each
>    other peer.  Each bitfield will be for 410 pieces, so the whole
>    message will be 432 bytes.  Multiply that by 20 files (total 8200
>    pieces.)  Sent: 864,000, Received: 864,000.

The _bitfields_ will be 410 _bytes_, not 410 pieces: 8GB over 20 files is
409.6MB per file, about 3,277 pieces at 128KB. Slightly idealized; since
3,276.8 isn't whole, some files' bitfields will run a little over 410
bytes. The total packet size would be 435 bytes, not 432 (410B bitfield,
20B hash, 1B packet type, 4B packet length).

Additionally, with your proposal you'd need WANT messages for each of
these, another 25 bytes, for a total of 460 bytes per peer per file, or
920,000 bytes sent/received overall.
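
The arithmetic (Python):

    files, peers = 20, 100
    print(8 * 2**30 / files / (128 * 2**10))    # 3276.8 pieces per file
    bitfield_msg = 410 + 20 + 1 + 4    # bitfield + hash + type + length
    want_msg = 20 + 1 + 4              # 25 bytes at the physical layer
    print((bitfield_msg + want_msg) * files * peers)    # 920,000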

>    Every piece's set of block-hashes needs to be retrieved once.
>    That's 26 bytes for every request, 186 for every receipt.  Times
>    8200 pieces.  Sent: 213,200, Received: 1,525,200

You seem to be adding an extra byte to your messages and forgetting the
length integer (or truncating it); the messages are 29/189 bytes. I also
wonder whether you really meant 8,200 pieces above; that's incorrect,
it's 65,536 pieces (8GB at 128KB per piece).

However, the first layer of hashes doesn't get retrieved individually,
one piece at a time; the hashes are packed 8 to a message, so the 65,536
piece hashes take 8,192 messages. You've also got the second layer of
8,192 hashes, the third layer of 1,024 hashes, the fourth layer of 128
hashes, the fifth layer of 16 hashes, and the sixth layer of 2 hashes.
Got the picture of why I'd favor packing the hashes into an extra piece?

So a total of 9,362 of these message pairs, for 2,040,916 bytes
sent/received.
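
The layer arithmetic (Python; strictly the final 2 hashes need one more
message, which I'm ignoring):

    # Messages per layer, 8 hashes to a HASHES message, starting from
    # the 65,536 piece hashes and going up the tree.
    msgs = [65536 // 8**k for k in range(1, 6)]  # [8192, 1024, 128, 16, 2]
    print(sum(msgs), sum(msgs) * (29 + 189))     # 9362  2040916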

>    If we assume that due to choking and unchoking, each block ends
>    up being requested about twice (pessimistic, I think), that's
>    34 bytes per block times two requests times 8200 pieces times 8
>    blocks.  Sent: 4,460,800

Very pessimistic. Uh, 34? Hash=20B + Piece#=4B + Block#=4B +
PacketType=1B + PacketLength=4B: 33 bytes total for what is handed to TCP
under your message design. Once again it's 65,536 pieces times 8 blocks,
times two requests (or directly 2^33 (8GB) / 2^14 (16KB) * 2 = 2^20
requests): 33MB, or 34,603,008 bytes.

We can save 3 bytes per packet by using only a single byte to designate
the block#, dropping the bandwidth by 3MB. Hash mode would avoid sending
the piece#, saving another 4MB and reducing it to 27,262,976 bytes.

>    Each block received carries 29 bytes of overhead.  That's 29 bytes
>    per block times 8200 pieces times 8 blocks.  Received: 1,902,400.

65,536 pieces, 524,288 blocks. At least your size is correct for what
goes down to the BT physical layer; since I'm counting at that layer
there are 33 bytes of overhead per block. Total of 16.5MB, or 17,301,504
bytes.

Again, saving 3 bytes per packet by using only a byte to designate the
block# saves 1.5MB. Hash mode would avoid sending the piece#, saving
another 2MB and reducing it to 13,631,488 bytes.

>    Each piece completed requires a HAVE message be sent to every
>    peer.  That's 26 bytes times 8200 pieces times 100 peers.  Sent:
>    21,320,000.  And call it Received: 21,320,000, as well.

And a final time: 65,536 pieces. Also, the packet is 25 bytes, 29 at the
BT physical layer: 65,536 pieces times 100 peers times 29 bytes is
190,054,400 bytes. This can be optimized by only sending HAVE messages to
peers that don't already have the piece (the ones who do won't be
interested). In a minimally seeded torrent this removes 50% of these
packets, dropping it to 95,027,200 bytes.


>    So:
>                     Send             Receive
>    Bitfields:    864,000 (0.010%)    864,000 (0.010%)
>    Hashes:       213,200 (0.002%)  1,525,200 (0.018%)
>    Requests:   4,460,800 (0.052%)  1,902,400 (0.022%)
>    Haves:     21,320,000 (0.248%) 21,320,000 (0.248%)
>               ----------          ----------
>    Total:     26,858,000 (0.313%) 25,611,600 (0.298%)

Total payload:	8,589,934,592

BITFIELD/WANT	      920,000	0.011%
GETHASH/HASHES	    2,040,916	0.024%
REQUEST		   34,603,008	0.403%
PIECE		   17,301,504	0.201%
HAVE		  190,054,400	2.212%
Total:		  244,919,828	2.851%

Using the HAVE optimization:
Total:		  149,892,628	1.745%
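
For anyone who wants to check my work (Python, everything counted at the
BT physical layer, i.e. including the 4-byte length on every message):

    payload = 8 * 2**30
    pieces = payload // (128 * 2**10)    # 65,536
    blocks = payload // (16 * 2**10)     # 524,288
    peers = 100

    rows = {
        "BITFIELD/WANT": 460 * 20 * peers,      #    920,000
        "GETHASH/HASHES": 9362 * (29 + 189),    #  2,040,916
        "REQUEST": blocks * 2 * 33,             # 34,603,008
        "PIECE": blocks * 33,                   # 17,301,504
        "HAVE": pieces * peers * 29,            # 190,054,400
    }
    for name, n in rows.items():
        print(f"{name:<15}{n:>12,}  {n / payload:.3%}")
    print(f"{'Total:':<15}{sum(rows.values()):>12,}  "
          f"{sum(rows.values()) / payload:.3%}")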


>    Even though the numbers looks kind of scary and big, even on
>    a really large torrent with a really large number of peers,
>    the BT protocol overhead would be under 0.5% of the data size.
>    It's interesting to note that even with a pessimistic look at a
>    naive approach, the hash data doesn't take a huge amount of the
>    overhead.  The bulk is in the haves and requests (which is
>    perhaps another reason to avoid partial haves.)

Well, your 0.5% is way off. The overhead is dominated by the HAVE
messages; everything else is just gravy. If you reduce the number of
peers, the REQUEST and PIECE message overhead starts to become a
noticeable player.

>    Based on the above, it's my feeling that 128kb pieces with
>    16kb blocks provides enough resolution to prevent the kind
>    of incredibly bad performance I see on 2MB block size torrents
>    without increasing the overhead beyond a reasonable level.
>    And I think that even without being able to say you HAVE
>    a smaller segment than a piece, 128kb pieces should be
>    completed quickly enough to get things rolling.  Finer
>    resolution for HAVEs would both increase the overhead to
>    a worrisome level and make the system more complicated than
>    it really needs to be.

My feeling is the mainline 256K is closer to the mark, though 128K isn't
too painful. *Maybe* allow for a *single* partial piece during client
startup (even then this becomes a bandwidth attack for an evildoer).


There is a large monster lurking here: the TCP/IP overhead. This runs 40
bytes for packets with no options (99% of packets). The thing is, this is
fixed whether the packets are large or small, so large packets make sense
since they keep the proportional TCP cost down. Only the PIECE messages
are likely to fill an entire TCP packet, so an idle connection that is
only sending HAVEs and keepalives is a very good candidate for
disconnection (unless it has precious pieces).
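
The per-message cost (Python; 40B of TCP/IP header per segment and
~1460B of payload per segment assumed):

    MSS = 1460
    for name, size in (("keepalive", 4), ("HAVE", 29),
                       ("PIECE", 16 * 2**10 + 33)):
        segments = -(-size // MSS)    # ceiling division
        print(f"{name}: "
              f"{segments * 40 / (size + segments * 40):.1%} overhead")
    # keepalive: 90.9%,  HAVE: 58.0%,  PIECE: ~2.8%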

Stealing some numbers from the network monitoring at a local university
(they're handy, and as the group does network research they have the
right ones): a computer in a dorm was observed running BitTorrent (well,
presumed; the port and bandwidth consumption were right). 26% of its
bandwidth was spent on TCP/IP overhead.

Idle peers, and those HAVE and keepalive messages, are hugely expensive.


If this discussion reflects what V2 will look like, then it's mostly good.



> And to address one more thought: you could argue that a client should
> just fetch peers for all of the files it wants and has available, all
> together.  But I think this is both bad for users (it reduces the
> amount of ability you have to queue up downloads rather than download
> simultaneously) and bad for the health of the torrents.  The key
> thing I want the scrape data for is so I can focus my efforts on
> "needy" files.  If I (or my client) looks and sees file1 has 50 seeds
> and file2 has 0 seeds, I want to spend my bandwidth seeding file2,
> and completely ignore file1.
> 
> In a way, that's sort of a loose extension of super-seed mode.  Don't
> bother sending stuff people already have if you can send stuff that
> people don't have.  It's up to the client or the user to manage it,
> but if the trackers don't provide enough information to do this,
> it hurts the whole system.

I agree, the scrape functionality is useful.

Multiple peers working together makes this interesting, though: together
they may have a seed (and should be deliberately aware of this), but
individually they may not.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \   (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_  \   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/




