[BitTorrent] Back to Merkle Hash Trees...

Elliott Mitchell ehem at m5p.com
Sat Feb 5 18:44:45 EST 2005


From: Olaf van der Spek <OvdSpek at LIACS.NL>
> Elliott Mitchell wrote:
> > One you might think about helping me murder is the "THEX" design various
> > folks have been advocating. THEX distinguishes leaves from internal nodes
> > by prepending a byte depending on whether the node is a leaf or internal;
> > *that* is bogus!  (kills alignment, makes testing expensive)
> 
> It's not. You do not have to put the byte physically in front of the 
> block. You can 'hash' the byte and then the block.
> 
> And it's required in general because otherwise you couldn't tell the 
> difference between hash data and user data (if you only know the root 
> hash and not the file size).

You completely missed my point.

I agree *some* form of verification is required. You've pointed to one
alternative: knowing the file size. If the file size is known I can
easily build the tree and verify. I doubt a torrent file would ever
omit the size, so this verifier should be acceptable. Adding extra data
to the hash input will also work, though.

_Prefixing_ the marking byte (or word) is *totally* bogus.

Justin already mentioned that SHA1 is a block-based hash; it consumes
input in 64-byte blocks. Prefixing a byte therefore shifts the entire
payload off that alignment, hurting performance (perhaps not a lot, but
some).


Well, you still haven't gotten my primary point despite my saying it
repeatedly, so it is time to be very explicit. For these purposes, let us
pretend we're poor C.S. students back at some University. This week a
professor issues an assignment where we are going to have to compute the
SHA1 hash of a block of data. Being reasonably intelligent students, we
notice the presence of OpenSSL's libcrypto on the system so we choose to
use that. We look at the man page and see the function SHA1(). It turns
out to be nice and easy to use: simply SHA1(somestr, strlen(somestr),
NULL); this week's homework is done, let's party.
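For the curious, a minimal sketch of that first assignment, assuming
OpenSSL's libcrypto is installed (compile with -lcrypto); the string and
output format are, of course, just illustrative:

    /* One-shot hashing: the whole input must be in memory at once. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    int main(void)
    {
        const char *somestr = "this week's homework";
        unsigned char md[SHA_DIGEST_LENGTH];

        /* Passing our own buffer instead of NULL avoids the static
         * buffer SHA1() otherwise returns a pointer into. */
        SHA1((const unsigned char *)somestr, strlen(somestr), md);

        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", md[i]);
        printf("\n");
        return 0;
    }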

Next week comes, and it's time to have more fun with SHA1. This time
we're going to need the SHA1 hash of an arbitrary blob of data read from
standard input. The professor has stated this will be tested on a blob
of data at least 100GB in size, and we're assured that no machine in the
department has that much memory (even accounting for swap; on ia32 we
couldn't address it anyway). Well,
obviously SHA1() won't work since we can't fit the data into memory. So
we go searching the man pages again...

Oh, on the very same man page as SHA1() there turn out to be three other
functions: SHA1_Init(), SHA1_Update(), and SHA1_Final(). Looks like the
OpenSSL folks and the designers of SHA1 thought ahead. They designed
things in such a way that you don't have to have everything in memory;
in fact, they don't even require you to know the data size ahead of
time. SHA1_Init() initializes a SHA_CTX structure for use with the other
two. The SHA_CTX structure is opaque, but known to be of static size;
notably, it has no pointers. We'd need to use memcpy() to move it around
in memory, but otherwise it is pretty basic. SHA1_Update() simply hashes
the blobs of data you give it, mixing them into the SHA_CTX structure.
SHA1_Final() simply extracts the SHA1 value from the structure. Pretty
easy to go from there; this week's homework is done, more party time.
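
Again for the curious, a minimal sketch of the streaming version; the
chunk size is arbitrary:

    /* Incremental hashing: read stdin in chunks, so the input never
     * has to fit in memory. Assumes OpenSSL's libcrypto. */
    #include <stdio.h>
    #include <openssl/sha.h>

    int main(void)
    {
        SHA_CTX ctx;
        unsigned char buf[65536], md[SHA_DIGEST_LENGTH];
        size_t n;

        SHA1_Init(&ctx);                /* set up the running context */
        while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
            SHA1_Update(&ctx, buf, n);  /* mix this chunk into the context */
        SHA1_Final(md, &ctx);           /* extract the finished digest */

        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", md[i]);
        printf("\n");
        return 0;
    }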


There are two points here. First, all hashes can be done incrementally;
the data doesn't need to fit into memory, and going with this is a
somewhat more advanced API that you may not have been aware of. The
crucial point here is that SHA_CTX structure. We're not supposed to peek
inside it, but we can do pretty well anything else we want with it. Of
note, we can freely move it around if we wish to do so. The key issue is
that we can freely *copy* SHA_CTX structures; there is no state kept
outside the structure, nor any pointers to other structures. All
interfaces to SHA1 computation (notably Python's) are similar: there is
a blob of context and you can freely move and copy it.

Imagine you're handed a 16K block of data and a hash value. You're
unsure whether the block holds internal nodes or leaf payload (or even
whether it is valid at all). Since copy avoidance is a good thing, you
create an SHA_CTX structure, use SHA1_Update() to feed in the single
marker byte for an internal node, and then call SHA1_Update() again to
feed in the 16K of payload. Finally, call SHA1_Final() and compare with
the hash. No dice, this isn't a valid internal node. So now we try
almost the exact same set of steps again, only using the leaf marker
byte. For my point it doesn't matter whether it turns out to be valid or
not; more than likely it is, though. See why I consider THEX bogus yet?
A sketch of the procedure follows.
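
A sketch under the prefix scheme, assuming OpenSSL's libcrypto; the
function names are mine, and the 0x00/0x01 marker values follow the
THEX draft:

    /* With a *prefixed* marker, classifying one blob means running
     * the full 16K through SHA1 once per candidate marker. */
    #include <string.h>
    #include <openssl/sha.h>

    static int try_prefixed(unsigned char marker, const unsigned char *blob,
                            size_t len, const unsigned char *want)
    {
        SHA_CTX ctx;
        unsigned char md[SHA_DIGEST_LENGTH];

        SHA1_Init(&ctx);
        SHA1_Update(&ctx, &marker, 1); /* the marker comes first... */
        SHA1_Update(&ctx, blob, len);  /* ...so the payload must follow */
        SHA1_Final(md, &ctx);
        return memcmp(md, want, SHA_DIGEST_LENGTH) == 0;
    }

    /* Returns the marker that matched, or -1 for a bogus block.
     * Testing both possibilities hashes the 16K payload twice. */
    int classify_prefixed(const unsigned char *blob, size_t len,
                          const unsigned char *want)
    {
        if (try_prefixed(0x01, blob, len, want)) return 0x01; /* internal */
        if (try_prefixed(0x00, blob, len, want)) return 0x00; /* leaf */
        return -1;
    }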

I want THEX dead, so I'm going to presume I haven't made it obvious
enough yet. My main issue is that it is a byte _prefix_. Going with
this, let us again imagine we've got that 16K blob and the hash, only
this time the marking byte is appended to the blob rather than prefixed.
Now the steps are slightly different. Once again we create an SHA_CTX
structure and use SHA1_Update() to feed in the 16K blob of data. We then
use SHA1_Update() to append the marker byte and SHA1_Final() to get the
hash to compare. The key is that once you've fed in the 16K blob with
SHA1_Update(), you can copy the SHA_CTX structure, then use
SHA1_Update()/SHA1_Final() on the original to test for the blob being
payload, and again on the copy to test for it being internal nodes
versus bogus.
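
The same test with an appended marker; only the context copy costs
anything beyond the single pass over the payload (again, names mine):

    /* With an *appended* marker, the payload is hashed exactly once;
     * plain struct assignment copies the context, which is safe
     * because SHA_CTX holds no pointers. */
    #include <string.h>
    #include <openssl/sha.h>

    int classify_suffixed(const unsigned char *blob, size_t len,
                          const unsigned char *want)
    {
        SHA_CTX base, ctx;
        unsigned char marker, md[SHA_DIGEST_LENGTH];

        SHA1_Init(&base);
        SHA1_Update(&base, blob, len); /* 16K payload, hashed once */

        ctx = base;                    /* copy the saved context */
        marker = 0x00;                 /* test: leaf? */
        SHA1_Update(&ctx, &marker, 1); /* one extra byte */
        SHA1_Final(md, &ctx);
        if (memcmp(md, want, SHA_DIGEST_LENGTH) == 0) return 0x00;

        ctx = base;                    /* fresh copy, one more byte */
        marker = 0x01;                 /* test: internal? */
        SHA1_Update(&ctx, &marker, 1);
        SHA1_Final(md, &ctx);
        if (memcmp(md, want, SHA_DIGEST_LENGTH) == 0) return 0x01;

        return -1;                     /* bogus block */
    }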


Justin is correct: secure hashes are relatively expensive. The problem
is that THEX *forces* you to run the hash over the *entirety* of the
payload twice to identify a bogus block. That is 32K + 2B of data going
through the hash function for every block. By moving the marker to the
end, we can save the hash context after running the payload through and
rehash only the marker: 16K + 2B of data through the hash function for
every block.

The THEX folks have *doubled* their overhead by using an utterly bogus
design. I'm not sure I can call incompetence on the THEX designer for
this mistake, but I can STRONGLY RECOMMEND AVOIDING IT!


From: Justin Cormack <justin at street-vision.com>
> > > Higher overhead...: Computing the root hash using the current method is
> > > O(n) time. Computing the root hash for Merkle is O(n log n) time.
> > 
> > Isn't that limited by disk transfer rate instead of CPU time?
> 
> sha1 (let alone sha256 - don't have an implementation lying around to test)
> is pretty CPU intensive...
> 
> OK, my desktop can manage 130MB/s, though that's only the speed of a couple
> of disks. A VIA C3 I have lying around can only manage 10MB/s, much slower
> than its disk.
> 
> Merkle SHA256 is a significant amount more CPU time; it's a question of
> whether it is worth it.

I already mentioned this in my first message on this subject. Assuming
you provide for 4K block verification and a branching factor of 2, the
amount of data that gets hashed for the tree is less than 1% of the size
of the payload. This is measurable, but insignificant. If 16K block
verification is used with a branching factor of 512 (again, the minimum
block size is likely to be 16K, and 512 SHA256 hashes fit into 16K), the
tree overhead is much less than 0.1% for a 4GB payload.

The main factor is the block size. No reasonable block size will make
the tree verification expensive; hashing the payload is incomparably
more costly.
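
For anyone who wants to check the numbers, a back-of-the-envelope
sketch, assuming 32-byte SHA256 digests, 16K leaf blocks, and a
branching factor of 512; it totals the digest bytes the internal nodes
feed through the hash function. Exactly what to count is a judgment
call, so treat the printed percentage as an order of magnitude; under
any accounting it is a tiny fraction of hashing the 4GB payload itself.

    /* Sum the digest bytes hashed at each internal level of the tree.
     * All parameter values here are assumptions, as stated above. */
    #include <stdio.h>

    int main(void)
    {
        const double payload = 4.0 * 1024 * 1024 * 1024; /* 4GB */
        const double block   = 16.0 * 1024;              /* 16K leaves */
        const double digest  = 32.0;                     /* SHA256 */
        const double branch  = 512.0;

        double level = (payload / block) * digest; /* leaf digests */
        double tree  = 0.0;
        while (level > digest) {   /* stop at the root's own digest */
            tree += level;         /* this level's digests get hashed */
            level /= branch;       /* parents emit 1/512 as much */
        }
        printf("tree hashing: %.1f MB (%.3f%% of payload)\n",
               tree / (1024 * 1024), 100.0 * tree / payload);
        return 0;
    }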


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \   (    |         EHeM at gremlin.m5p.com PGP 8881EF59         |    )   /
  \_  \   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
    \___\_|_/82 04 A1 3C C7 B1 37 2A*E3 6E 84 DA 97 4C 40 E6\_|_/___/




 