[bittorrent] UDP

Joseph Ashwood ashwood at msn.com
Tue Mar 29 20:34:32 EST 2005


----- Original Message ----- 
From: "Mike Ravkine" <krypt at mountaincable.net>
Subject: Re: [bittorrent] UDP


> Kenneth Porter wrote:
>
>> --On Saturday, March 26, 2005 10:58 AM -0500 Mike Ravkine 
>> <krypt at mountaincable.net> wrote:
<On using UDP for P2P systems>


> It seems to me like you're trying to re-invent the transmission integrity 
> scheme that's already offered to us with TCP..

I believe the idea was to improve on TCP. TCP only offers detection of 
transmission errors; it does not deal with active attacks.

>
>> You don't need to ack the sender to confirm transmission, because you can 
>> always retry individual packets to any member of the swarm containing 
>> that packet's content.
>
> How big are these packets?

Packet size is fairly irrelevant if everything is built correctly.

> The above only makes sense on a macroscopic level..

I don't quite follow you. I believe it is well established that the goal is 
to have some structure in the overall system to verify the collections of 
packets, and so by extension (and assumption) the packets themselves. From 
the serving peer's view the packet ACK serves no purpose. If the packet did 
not arrive, the receiving peer can find a better way to acquire the piece (a 
resend over the same path is likely to be lost again); if the packet did 
arrive, there is no need to resend. Best case the ACK is redundant, worst 
case it is wasted bandwidth. If (and I stress IF) the protocol is designed 
correctly, the ACK is unnecessary in the p2p protocol.
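To make that concrete, here is a minimal sketch (Python, with a made-up 
message format, an assumed 16KB block size, and a hypothetical list of peer 
addresses) of a receiver-driven fetch loop over UDP: the receiver never 
sends an ACK, it simply times out and re-requests the same block from a 
different member of the swarm.

    import socket, struct

    BLOCK_LEN = 16384          # assumed block size
    TIMEOUT   = 2.0            # seconds before trying another peer

    def fetch_block(peers, piece_index, offset):
        """Request one block; on timeout move on to the next peer.
        No ACK is ever sent back -- verification happens on the
        received data instead."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT)
        # made-up request format: piece index, offset, length
        request = struct.pack(">III", piece_index, offset, BLOCK_LEN)
        for addr in peers:                 # rotate through the swarm
            sock.sendto(request, addr)
            try:
                data, _ = sock.recvfrom(BLOCK_LEN + 12)
                return data[12:]           # strip the echoed header (assumed format)
            except socket.timeout:
                continue                   # lossy peer: ask someone else
        return None

The sender in this picture never learns whether the block arrived; the 
receiver just asks someone else, which is exactly the "ACK is wasted 
bandwidth" argument above.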

> 256kb at least.  Within that, we DO need to confirm transmission.

All we need is for the receiving peer to be able to verify the received 
pieces. If a piece was not received, it is best to acquire it from another 
source anyway, because a repeated piece is likely to be lost as well; lossy 
connections tend to remain lossy.
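For reference, the verification the receiving peer already does in 
BitTorrent is just a SHA-1 comparison against the hash stored in the 
metainfo; a failed piece can then be re-requested from a different peer. A 
small sketch, where 'pieces' is the concatenated 20-byte hashes from the 
.torrent file:

    import hashlib

    def piece_ok(piece_data, piece_index, pieces):
        """Compare a completed piece against its 20-byte SHA-1 from
        the .torrent metainfo ('pieces' is all hashes concatenated)."""
        expected = pieces[piece_index * 20:(piece_index + 1) * 20]
        return hashlib.sha1(piece_data).digest() == expected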

>> This suggests that the protocol be extended to allow request of 
>> sub-fragment content, such as one can do with HTTP and FTP in requesting 
>> part of the available content with an offset/length pair.
>>
> This adds complexity and overhead.

I partly agree. Full HTTP or FTP work-alikes aren't necessary; their 
overhead is substantial for small packets. But the ability to request a 
small portion of the final set of packets is necessary, otherwise we are 
basically back to the old problem of being unable to distribute partial 
files.
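The overhead need not be anywhere near HTTP's. The existing TCP wire 
protocol already does this with a 13-byte request message carrying (index, 
begin, length); a UDP equivalent could be just as compact. A sketch, with 
an assumed 4-byte message-type field:

    import struct

    REQUEST = 6   # message type, mirroring the existing request id

    def build_request(piece_index, begin, length):
        """Fixed 16-byte request: type, piece index, offset within
        the piece, and requested length -- no headers, no text."""
        return struct.pack(">IIII", REQUEST, piece_index, begin, length)

    def parse_request(msg):
        return struct.unpack(">IIII", msg)   # (type, index, begin, length)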

>> There's a problem here with a malicious peer injecting bad sub-fragment 
>> packets (with good transmission checksums), because it becomes harder to 
>> tell which peer is corrupting your fragments. Perhaps you could request 
>> packet-level checksums for the failing fragment from several peers to 
>> isolate which packets were maliciously corrupted, and by whom.
>>
> Wow, lets add even MORE complexity and overhead.  The way final content 
> integrity is currently handled (at the block level), and the problem of 
> transmission integrity left to the network layer is very robust.  What you 
> (and others that think ditching TCP is a good idea) describe would not 
> only be to inevitably re-invent the wheel, it would be detrimental to that 
> robustness.

While I believe I was the most recent cause of the UDP discussion, I am 
uncertain that moving away from TCP is a good idea.

Reducing network complexity is important, as is reducing protocol 
complexity. Recovering from the bad-peer problem is potentially difficult. I 
see two potential solutions: 1) build the verification down to the minimum 
request size, or 2) use computational methods combined with (semi-)random 
piece replacement.

1 is fairly self-evident in structure: verify at the same granularity as the 
request, so a bad block points directly at the peer that sent it.
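A sketch of what 1 could look like: publish a hash per minimum-size block 
rather than only per piece. The per-block hash list here is hypothetical 
(it is not part of the current metainfo format), and the 16KB block size is 
an assumption.

    import hashlib

    BLOCK_LEN = 16384   # assumed minimum request size

    def block_hashes(piece_data):
        """Hash every block of a piece; this list would have to be
        published alongside (or instead of) the per-piece hash."""
        return [hashlib.sha1(piece_data[i:i + BLOCK_LEN]).digest()
                for i in range(0, len(piece_data), BLOCK_LEN)]

    def bad_blocks(piece_data, expected):
        """Return the indices of blocks that fail verification, each
        of which identifies exactly one sending peer."""
        return [i for i, h in enumerate(block_hashes(piece_data))
                if h != expected[i]]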

2 only seems complex until it is explained. Create two memory structures to 
hold the packets. Request replacement packets for some subset of the 
originals; the pieces that match are assumed correct for now. For the pieces 
that differ, compute the verification information over combinations of the 
packets until one combination checks out. This has big downsides: 1) it is 
very much exponential in computation cost, 2) it potentially doubles (or 
triples, etc.) the bandwidth cost of each verifiable segment, and 3) this 
resolution is positively monstrous against a dedicated enemy: the cost to 
the adversary is linear while the cost to the target is exponential.
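A sketch of 2, to show where the exponential cost comes from: for the k 
blocks where the two downloads disagree, up to 2^k mixtures of them have to 
be hashed before one matches the published piece hash, while the attacker 
only has to corrupt more blocks (linear work) to make k grow. The function 
and its arguments are hypothetical; 'first' and 'second' are the two 
downloads of one piece split into equal-size blocks.

    import hashlib
    from itertools import product

    def resolve(first, second, piece_hash):
        """Blocks that agree between the two downloads are assumed
        correct; for the k disagreeing blocks, try up to 2**k ways
        of mixing them until the piece hash matches."""
        diff = [i for i in range(len(first)) if first[i] != second[i]]
        for choice in product((0, 1), repeat=len(diff)):   # 2**k cases
            candidate = list(first)
            for idx, pick in zip(diff, choice):
                candidate[idx] = second[idx] if pick else first[idx]
            if hashlib.sha1(b"".join(candidate)).digest() == piece_hash:
                return b"".join(candidate)
        return None   # neither copy contains a full correct piece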

I would say the safest option is to set a minimum request size, either per 
file or universally. This of course has the downside that, for very large 
files and dependable networks, it wastes an absurd amount of overhead. That 
can be improved by voting on pieces: the target requests the piece in 
question from a (small) number of peers and takes a vote on the correct 
packet. This substantially increases network overhead, and so becomes a 
potential option 3 for resolution. Importantly, TCP does not correct this 
problem either.
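A sketch of that voting option: fetch the disputed block from several peers 
and keep the value the majority agrees on. 'fetch_block' is the hypothetical 
request function from the earlier sketch, and the real cost is that every 
vote multiplies the traffic for that block.

    from collections import Counter

    def vote_on_block(peers, piece_index, offset, votes=3):
        """Ask 'votes' different peers for the same block and return
        the most common answer; ties or no replies return None."""
        replies = []
        for addr in peers[:votes]:
            data = fetch_block([addr], piece_index, offset)  # earlier sketch
            if data is not None:
                replies.append(data)
        if not replies:
            return None
        value, count = Counter(replies).most_common(1)[0]
        return value if count > len(replies) // 2 else None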

This is a substantial trade-off that I believe should be dealt with on a 
protocol-by-protocol basis.
                Joe 



