[BitTorrent] Have maps (was Merkle, URLs, etc)

Elliott Mitchell ehem at m5p.com
Thu Mar 10 18:16:13 EST 2005


>From: Joseph Ashwood <ashwood at msn.com>
> From: "Elliott Mitchell" <ehem at m5p.com>
> 
> >>From: Joseph Ashwood <ashwood at msn.com>
> >> From: "Konstantin 'Kosta' Welke" <kosta at fillibach.de>
> >> > On Sun, 6 Mar 2005 17:36:43 -0800, Joseph Ashwood <ashwood at msn.com> 
> >> > wrote:
> >>
> >> [Optimal case for binary trees?]
> >> > In the case of "I need to verify this one piece to be able to share 
> >> > it".
> >>
> >> Actually the optimum case for that is having the verification in the 
> >> node,
> >> regardless of branching. this then leads to the overhead to verification 
> >> =
> >> depth, binary trees will be deepest, they are not optimal.
> >
> > Incorrect.
> >
> > The binary tree will be deeper, however you only need to send one hash
> > per level.
> 
> Actually you will need 2, otherwise you cannot complete the hash computation 
> for the next level.

Olaf responded first on this.

> > With a non-binary tree you will need to send all the /other/
> > hashes for verification at each level.
> 
> For binary you will as well, or did you forget that the hash actually has to 
> be computed?

See above. For a b-ary tree, b-1 hashes need to be sent for every level.
In the binary case that is just the one sibling, because the remaining
hash is computed during the verification process.
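
To make that concrete, here is a rough Python sketch (the names, and the
use of SHA-1, are mine, not anything from a spec): the peer ships one
sibling hash per level, and the verifier recomputes the other hash at
each level on the way up to the root.

    import hashlib

    def sha(data):
        return hashlib.sha1(data).digest()

    def verify_leaf(block, sibling_hashes, sibling_is_right, root_hash):
        # Walk from the leaf up to the root. At each level the peer only
        # sends the sibling hash; the other hash on that level is the one
        # we just computed ourselves.
        h = sha(block)
        for sib, on_right in zip(sibling_hashes, sibling_is_right):
            h = sha(h + sib) if on_right else sha(sib + h)
        return h == root_hash

A b-ary tree does the same walk, but the peer has to ship b-1 siblings
at every level instead of one.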

Sending all siblings allows you to verify the node before verifying the
data, but I don't see this as an improvement. In particular, if one
(either the data block or the verification hashes) is wrong, do you think
there is a significant likelihood that the other is correct?

> > This means with the flat model
> > you're sending all but one hash every time to verify the node.
> 
> Completely incorrect. Each hash only needs to be known once, so the transfer 
> overhead is necessarily linear in the number of internal nodes. The n-ary 
> tree has fewer internal nodes, and hence will have lower cost.

You are correct that each hash is only /required/ to be transferred once
to do verification. You've been suggesting "in piece verification", which
sounds like you're suggesting PIECE messages be a block of payload plus
some number of hashes for verification. For that situation it is simpler
to have all PIECE messages be the same size.

If you do incremental verification (making sure hashes are only sent
once), the binary tree will only need to transfer half of the hashes. The
other half will be computed by hashing the corresponding blocks and
verifying up the tree. For b-ary trees you'll have to transfer (b-1)/b
of the hashes. While higher branching factors do use fewer nodes, you
lose because you need to transfer many more hashes before you can verify
the first piece.
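
As a back-of-the-envelope check on that first-piece cost (counting only
the sibling hashes a peer has to ship, and treating the root as already
known; this little loop is mine, not anything from the protocol):

    def hashes_for_first_piece(n_leaves, b):
        # Sibling hashes that must arrive before the very first block
        # can be checked against the root: (b - 1) siblings at each
        # level of the b-ary tree.
        depth = 0
        nodes = n_leaves
        while nodes > 1:
            nodes = (nodes + b - 1) // b   # nodes on the next level up
            depth += 1
        return (b - 1) * depth

    for b in (2, 4, 16, 256):
        print(b, hashes_for_first_piece(2**15, b))   # -> 15, 24, 60, 510

So for, say, a 512MB payload in 16KB blocks (2^15 leaves), the deeper
binary tree actually needs the fewest hashes in hand before the first
block can be verified.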

> > If you transfer the verification hashes with each piece (in node
> > verification), you're expending a total of nlog2(n) bandwidth over the
> > entire payload while flat will cost n^2. Guess which is better.
> 
> I seriously hope you were half-asleep when you wrote this. In the binary 
> tree case you will have nlog2(n) bandwidth, in the K-ary tree case you will 
> have nlogK(n) bandwidth. Guess which is better.

I was quite awake, and I stand by those results under the assumptions I
had to make.

I suggest you define what you mean by "in node verification". I took it
to mean that when you send a data block, you also send sufficient data to
verify it without reference to any other data (other than the root hash).

For the binary tree, log2(n) hashes are needed to verify; as n nodes will
be transferred, the total cost over the entire payload will be n*log2(n).
For the flat arrangement, n-1 hashes are needed to verify an arbitrary
node; as n nodes will be transferred, the total cost over the entire
payload will be roughly n^2.

This is more a commentary on transferring hashes /with/ the pieces than
on binary versus flat.
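
For concrete numbers, take a 512MB payload in 16KB pieces (2^15 of them)
and assume a full verification path is shipped with every PIECE message,
nothing cached:

    import math

    n = 2**15                              # pieces
    binary_total = n * int(math.log2(n))   # ~ n*log2(n) hashes shipped
    flat_total   = n * (n - 1)             # ~ n^2 hashes shipped
    print(binary_total, flat_total)        # 491520 vs 1073709056

At 20 bytes per SHA-1 hash that is roughly 9MB of hash traffic for the
binary tree against roughly 20GB for the flat list.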

> I believe your misunderstanding is the belief that each child node needs to 
> have the hashes of all it's direct siblings in order to verify, that is 
> incorrect, each parent node needs to hashes of it's children in order to 
> verify, greater branching = lower overhead = faster verification.

True, but the computation overhead is small. Bandwidth overhead is far
more important, and there the difference is still small (hashes are much
smaller than the payload).

> > This is why I suggest handling of blocks of hashes similarly to payload
> > hashes at the lowest layer. The (possibly large) cost of transfering of
> > hashes will be accounted for with the rest of the major data transfer.
> > This also means hashes are transfered *once*, rather than multiple times
> > with every node.
> 
> I will grant that there are ways to transfer the hash once instead of the 
> twice that I have proposed, but hose methods also require downloading the 
> siblings before verification of a node. If we really want to take this as 
> far as possible it is also possible to compute the Merkle tree without any 
> transferred hashes, but that is more wasteful than even the binary trees.

This is so obviously wrong I'm having difficulty figuring out how to
respond. What the heck do you mean?

Transferring blocks of hashes the same way as payload blocks makes the
lowest layer simpler, while at the same time making bandwidth accounting
much easier.

> >> Modern hashes have substantial overhead in the finalization operations, 
> >> by
> >> having the smallest nodes possible the finalization code is executed the
> >> maximum number of times. As the size of the nodes shrinks linearly, the
> >> number of internal nodes increases super-linearly. As the number of nodes
> >> increases the number of times it is necessary to run the finalization 
> >> code
> >> increases. I did have a misstep there, I believe it is only a polynomial
> >> increase, not exponential.
> >
> > Even with finalization being expensive, the more than two orders of
> > magnitude more data being processed at the leaves overwhelms the cost of
> > internal node computation.
> 
> Here we have another fallacy on your part. 2 orders of magnitude will not 
> overcome the finalization cost in the sizes that are typically discussed 
> (4KB seems the most common). The choice still comes down to the number of 
> hashes per file size. In the example I gave (4KB blocks, 478 MB) this was a 
> difference of 7 fold, even if your argument held, that would still leave 
> N-ary trees more efficient, two orders of magnitude would only bring the 
> difference down to 1.75x, still well above being equal to N-ary trees.

4KB is the smallest size anyone has seriously proposed; 16KB or 32KB is
more likely to be what actually gets used. I don't have much to respond
to here, as you haven't given me a specific number.

> > The number of times the hash function is run relates to the tree depth.
> 
> That is correct.
> 
> > The amount of data run through the hash function relates to the branching
> > factor. You are decreasing the number of times the hash function is run,
> > but increasing the amount of data run through the hash each time it is
> > run.
> 
> And due to the finalization even if your 2 orders of magnitude was correct, 
> the N-ary tree would still be better.

True, but the point is this difference is small.

> > If you do verification once (either piecewise, or as the whole tree),
> > both methods are similar in cost because the node verification is
> > overwhelmed by the much greater data size of leaf verification.
> 
> Incorrect. At the sizes being discussed the dominant factor is the 
> finalization of the hash (e.g. IIRC finalization of SHA-512 takes 20 times 
> the number of computations of inserting 1024-bits), and as such the smaller 
> the number of hashes, the faster it will be. I will admit that as filesize 
> approaches blocksize the n-ary advantage disappears, but since we are 
> discussing blocks in the KB range, and files in the GB range, this is no 
> where near reality.

"finalization of SHA-512 takes 20 times the number of computations of
inserting 1024-bits"

Giving a single number would have been easier to work with. So
finalization of SHA-512 is equivalent to sending 2560 bytes of data
through it? That sounds awfully high; I'll take it at face value, though
it seems extreme.

So, what will the cost be? For the verification of a 4K leaf node the
cost will be 4096+2560 or 6656 byte equivalents. For 16KB/32KB the
cost will be 18944/35328 byte equivalents.

For binary trees, there will be one internal node for every leaf node.
Each internal node costs 2560 byte equivalents of finalization overhead
plus 64 for each of the two child hashes run through it, so 2560+128 or
2688 byte equivalents. That is about 40% overhead if your number is
correct and 4KB blocks are used, only about 14% with the more likely
16KB, and just under 8% for 32KB.

Well, with your overhead number (which is rather extreme) binary trees
lose out. Even with this, note that at 32KB the data hash computation is
overwhelming the internal node computation.
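
The same arithmetic as a quick Python check, taking your 20x finalization
figure at face value:

    FINAL = 20 * 128      # SHA-512 finalization, in byte equivalents (your figure)
    HASH  = 64            # one SHA-512 output, in bytes

    internal = FINAL + 2 * HASH            # one internal node per leaf, two child hashes
    for block in (4096, 16384, 32768):
        leaf = block + FINAL               # hash the block, plus finalization
        print(block, leaf, internal, "%.1f%%" % (100.0 * internal / leaf))
        # -> 40.4%, 14.2%, 7.6% overhead respectively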

> In addition you are making heavily flawed assumptions, you are assuming that 
> the binary tree only has to verify once, but assuming that the n-ary tree 
> must verify multiple times. In truth the n-ary tree will have to verify 
> fewer times than the binary.

Incorrect. I'm not making that assumption; in the best circumstances
either way needs only a single verification of the entire tree.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \   (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_  \   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/




 