How to divide a block across pieces when they overlap - BitTorrent

Some input: I'm looking to build a simple, minimal BitTorrent client.
I've been reading the protocol spec for 2-3 days now.
Here is my understanding of it thus far. Assume the torrent has a piece length of 26000 bytes and, according to the unofficial spec, a block size of 16384 bytes.
Now the request message for the first block of a piece would look like this:
piece 0
block offset 0
block length 16384
So far so good.
Now, for the next block, which overlaps pieces 0 and 1, what should the request look like?
piece 0 ## since the starting byte is in piece 0, use piece 0 instead of piece 1
block offset 16384
block length 16384
Now, on the receiving end, I need to recreate the 26000-byte piece so that I can compare its hash with the corresponding entry in pieces, to check the piece for correctness.
Is my understanding correct?
Also, let's suppose piece verification fails, and maybe it's because of the first block, i.e. Block 0 (which is faulty or corrupt).
Then I should requeue both Block 0 and Block 1 (which was valid, by the way, and also part of piece 1) for retransmission.
Suddenly the piece and block bookkeeping becomes more complex than I assumed it would be, and I'm hoping there is a simpler solution to this.
Any thoughts?

I will use the more distinct term 'chunk' instead of the ambiguous 'block'.
A torrent is divided into pieces.
A piece is divided into chunks.
A chunk is cut from one piece.
A torrent is divided into pieces when it's created. With the Request message, a piece is in turn further divided into chunks by the downloading BitTorrent client.
How the client cuts the chunks out of a piece doesn't matter, as long as no single chunk is larger than 16 KB (16384 bytes).
The simplest and most rational way to divide a piece is into as few chunks as possible: use 16 KB chunks and let the last chunk of the piece be smaller if necessary.
The Request message format: <len=0013><id=6><Piece_index><Chunk_offset><Chunk_length>
<Piece_index> integer specifying the zero-based piece index
<Chunk_offset> integer specifying the zero-based byte offset within the piece
<Chunk_length> integer specifying the requested number of bytes
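For illustration, a minimal sketch of serializing that message in Python (the function name is mine; the 4-byte big-endian length prefix of 13 counts the id byte plus the three 4-byte integers):

import struct

def build_request(piece_index, chunk_offset, chunk_length):
    # <len=0013><id=6><Piece_index><Chunk_offset><Chunk_length>
    # The length prefix (13) covers the one id byte plus the
    # three 4-byte big-endian payload integers.
    return struct.pack(">IBIII", 13, 6, piece_index, chunk_offset, chunk_length)

# Second chunk of a 26000-byte piece 0: offset 16384, length 26000 - 16384 = 9616
msg = build_request(0, 16384, 9616)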
When requesting a chunk:
the whole chunk must be within the piece specified by Piece_index,
i.e. Chunk_offset + Chunk_length must be less than or equal to the size of that specific piece*,
Chunk_length cannot be larger than 16 KB (16384 bytes) and must be at least 1 byte,
the peer that gets the request must have the piece specified by Piece_index.
If any of these conditions is not met, the peer receiving the request will close the connection.
* For all pieces except the very last one, that is the 'piece length' defined in the info-dictionary.
The size of the last piece can by calculated as:
size_last_piece = size_of_torrent - (number_of_pieces - 1) * 'piece length'
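A minimal sketch of that division in Python, using the 26000-byte piece from the question (helper names are mine):

CHUNK_SIZE = 16384  # 16 KB, the de facto maximum chunk size

def chunk_requests(piece_index, piece_size):
    # Yield (piece_index, offset, length) tuples covering one piece;
    # the last chunk is smaller if the piece size is not a multiple of 16 KB.
    offset = 0
    while offset < piece_size:
        length = min(CHUNK_SIZE, piece_size - offset)
        yield (piece_index, offset, length)
        offset += length

def last_piece_size(size_of_torrent, number_of_pieces, piece_length):
    return size_of_torrent - (number_of_pieces - 1) * piece_length

# A 26000-byte piece is covered by two requests, both within piece 0:
print(list(chunk_requests(0, 26000)))  # [(0, 0, 16384), (0, 16384, 9616)]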

The maximum block size commonly accepted by clients is 16 KiB. Clients are free to make smaller requests.
Pieces are commonly a multiple of 16 KiB, but the current spec does not require it (this changes with BEP 52), and some people use prime numbers or similar things for fun, so such torrents do exist in the wild.
Blocks only exist in the sense that you need multiple requests to get a complete piece that is larger than 16 KiB. In other words, blocks are whatever you decide to request. You could request 500 bytes, then 1017 bytes, then 13016 bytes, ... until you have a complete piece. They are arbitrary subdivisions within a piece - there is no overlap - that you need to keep track of between starting to download a piece and finishing it.
They do not participate in hashing, and they do not factor into the HAVE or BITFIELD messages. Only the REQUEST, PIECE, CANCEL and REJECT messages concern themselves with blocks. Instead of blocks, you could also call them sub-piece offset-length tuples or something to that effect.

The last block in a piece may be smaller than the transfer block size; i.e. 26000 - 16384 = 9616 bytes should be requested in the second Request message. As soon as all 26000 bytes have been received, the SHA-1 hash should be calculated and compared with the corresponding checksum from the pieces section of the metainfo dictionary. If the checksum does not match, you have no way of knowing which block contained invalid data, and you should re-download all blocks of this piece.
My advice would be not to depend on any particular partitioning of the piece, because:
1) peers may use a different transfer block size when requesting data
2) the SHA-1 algorithm is block-based, and the digester is better off using a bigger block size (otherwise the calculation takes more time)
A proper abstraction for a piece would be a generic data range with the following methods:
read(from:int, length:int):byte[]
write(offset:int, block:byte[]):()
Then you'll be able to read/write arbitrary subranges of data.
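A minimal Python sketch of such an abstraction (class and method names are mine; a real client would also track which subranges have been filled before hashing):

import hashlib

class PieceBuffer:
    # A fixed-size byte range that accepts arbitrary subrange reads/writes.
    def __init__(self, size):
        self._data = bytearray(size)

    def read(self, offset, length):
        return bytes(self._data[offset:offset + length])

    def write(self, offset, block):
        self._data[offset:offset + len(block)] = block

    def verify(self, expected_sha1):
        # expected_sha1 is the 20-byte entry for this piece taken from
        # the 'pieces' string in the metainfo dictionary.
        return hashlib.sha1(self._data).digest() == expected_sha1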

Related

Confusion around Bitfield Torrent

I'm a bit confused about the bitfield message in BitTorrent. I have noted my confusion in the form of questions below.
Optional vs Required
Bitfield to be sent immediately after the handshaking sequence is
completed
I'm assuming this is compulsory, i.e. after the handshake there must follow a bitfield message. Correct?
When to expect bitfield?
The bitfield message may only be sent immediately after the
handshaking sequence is completed, and before any other messages are
sent
Assuming I read this correctly: although it is an optional message, a peer can still send the bitfield message prior to any other message (like request, choke, unchoke, etc.). Correct?
The high bit in the first byte corresponds to piece index 0
If I'm correct, the bitfield represents state, i.e. whether or not the peer has a given piece.
Assuming that my bitfield is [1,1,1,1,1,1,1,1,1,1, ...], I establish the fact that the peer has the 10th piece missing, and if the bitfield looks like [1,1,0,1,1,1,1,1,1,1, ...], the peer has the 3rd piece missing. Then what does 'the high bit in the first byte corresponds to piece index 0' mean?
Spare bits
Spare bits at the end are set to zero
What does this mean? If a bit at the end is 0, doesn't that mean the peer is missing that piece? Why are the spare bits used?
Most important of all: what is the purpose of the bitfield?
My hunch is that the bitfield makes it easier to find the right peer for a piece, by knowing what is available from each peer. Am I correct on this?
@Encombe
Here is how my bitfield payload looks:
\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFE
I'm assuming this is compulsory, i.e. after the handshake there must follow a bitfield message. Correct?
No, the bitfield message is optional, but if a client sends it, it MUST be the first message after the handshake.
Also, both peers must have sent their complete handshakes (i.e. the handshaking sequence is completed) before either of them starts to send any type of regular message, including the bitfield message.
Assuming I read this correctly: although it is an optional message, a peer can still send the bitfield message prior to any other message (like request, choke, unchoke, etc.). Correct?
Yes, see above. If a client sends a bitfield message anywhere else, the connection must be closed.
Assuming that my bitfield is [1,1,1,1,1,1,1,1,1,1, ...], I establish the fact that the peer has the 10th piece missing
No. It's unclear to me whether your numbers are bits (0b1111111111) or bytes (0x01010101010101010101).
If they're bits (0b1111111111): it means the peer has pieces 0 to 9.
If they're bytes (0x01010101010101010101): it means the peer has pieces 7, 15, 23, 31, 39, 47, 55, 63, 71 and 79.
if the bitfield looks like [1,1,0,1,1,1,1,1,1,1, ...], the peer has the 3rd piece missing.
No, pieces are zero-indexed. 0b1101111111 means piece 2 is missing.
Then what does 'the high bit in the first byte corresponds to piece index 0' mean?
It means that the piece with index 0 is represented by the leftmost bit (the most significant bit, in big-endian).
eight bits = one byte
0b10000000 = 0x80
  ^ high bit set, meaning that the client has piece 0
0b00000001 = 0x01
         ^ low bit set, meaning that the client has piece 7
why are the spare bits used
If the number of pieces in the torrent is not evenly divisible by eight, there will be bits left over in the last byte of the bitfield that don't represent any pieces. Those bits must be set to zero.
The size of the bitfield in bytes can be calculated this way:
size_bitfield = math.ceil( number_of_pieces / 8 )
and the number of spare bits is:
spare_bits = 8 * size_bitfield - number_of_pieces
what is the purpose of the bitfield
The purpose is to tell the other peer which pieces the client has, so it knows which pieces it can request.
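For completeness, a sketch of decoding and building such a bitfield in Python (helper names are mine; the spare bits in the last byte stay zero):

import math

def pieces_from_bitfield(payload, number_of_pieces):
    # Return the set of piece indices the peer claims to have.
    have = set()
    for index in range(number_of_pieces):
        # Piece 0 is the high (most significant) bit of the first byte.
        if payload[index // 8] & (0x80 >> (index % 8)):
            have.add(index)
    return have

def bitfield_from_pieces(have, number_of_pieces):
    payload = bytearray(math.ceil(number_of_pieces / 8))  # spare bits stay zero
    for index in have:
        payload[index // 8] |= 0x80 >> (index % 8)
    return bytes(payload)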

xentop VBD_RD & VBD_WR output

I'm writing a Perl script that tracks the output of the xentop tool, and I'm not sure what the meanings of VBD_RD and VBD_WR are.
Following http://support.citrix.com/article/CTX127896
VBD_RD number displays read requests
VBD_WR number displays write requests
Does anyone know how read and write requests are measured: in bytes, kilobytes, megabytes?
Any ideas?
Thank you
As far as I understand, xentop shows you two different measures each for read and write.
VBD_RD and VBD_WR are plain counts: the number of times the block device was accessed. They say nothing about the number of bytes read or written.
The second pair, read (VBD_RSECT) and write (VBD_WSECT), are measured in "sectors". You can find the size of a sector by using xenstore-ls (https://serverfault.com/questions/153196/xen-find-vbd-id-for-physical-disks); in my case it was 512.
The size of a sector is given in bytes (http://xen.1045712.n5.nabble.com/xen-3-3-testing-blkif-Clarify-units-for-sector-sized-blkif-request-params-td2620172.html).
So if the VBD_WR value is 2, the VBD_WSECT value is 10, and the sector size is 512, then 10 * 512 bytes were written across two different requests (the block device was accessed twice, but we only know the total number of bytes written, not how much each request wrote). To measure disk I/O, you can sample these values periodically and take the difference between consecutive samples.
I suppose the sector size might differ per block device, so it might be worth checking the xenstore-ls output for each domain, but I'm not sure. You can probably define it in the cfg file too.
This is what I have found out and understood so far. I hope this helps.
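A minimal sketch of that rate calculation in Python (names are mine; the sector size is whatever xenstore-ls reports, 512 in the case above):

SECTOR_SIZE = 512  # bytes, as reported by xenstore-ls for this VBD

def write_throughput(wsect_before, wsect_after, interval_s):
    # Bytes written per second between two xentop samples of VBD_WSECT.
    return (wsect_after - wsect_before) * SECTOR_SIZE / interval_s

# Two VBD_WSECT samples taken 5 seconds apart:
print(write_throughput(1000, 1010, 5.0))  # 10 sectors * 512 B / 5 s = 1024.0 B/s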

Why is string manipulation more expensive?

I've heard this so many times that I have taken it for granted. But thinking back on it, can someone help me see why string manipulation, say comparison, is more expensive than the same operation on an integer or some other primitive?
8-bit example:
1 bit can be 1 or 0. With 2 bits you can represent 0, 1, 2, and 3. And so on.
With a byte you have 2^8 possibilities, from 0 to 255.
In a string a single letter is stored in a byte, so "Hello world" is 11 bytes.
If I want to compute 100 + 100, each 100 is stored in 1 byte of memory, so I need only two bytes to sum the two numbers. The result again needs 1 byte.
Now let's try it with strings: "100" + "100" is 3 bytes plus 3 bytes, and the result, "100100", needs 6 bytes to be stored.
This is over-simplified, but more or less it works in this way.
The int data type in C# was carefully selected to be a good match with processor design, which can store an int in a CPU register, a storage location that is easily a factor of 3 faster than memory, and compare two int values with a single CPU instruction. The CMP instruction runs in less than a single CPU cycle, a fraction of a nanosecond.
That doesn't work nearly as well for a string: it is a variable-length data type, and every single char in the string must be compared to test for equality. So it is automatically slower in proportion to the size of the string. Furthermore, string comparison is afflicted by culture-dependent comparison rules, the kind that make "ss" and "ß" equal in German and "Aa" and "Å" equal in Danish. Nothing subtle to deal with; it is taken care of by highly optimized table-driven code inside the CLR. But it can't beat CMP.
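The proportionality is easy to observe. A quick Python micro-benchmark as an illustration (absolute numbers will vary by machine and runtime):

import timeit

n = 100
a_str, b_str = "a" * n, "a" * n  # two distinct but equal 100-char strings

# An equal int comparison is effectively one machine-word compare;
# an equal-string comparison must scan every character.
print(timeit.timeit("a == b", globals={"a": 10**9, "b": 10**9}))
print(timeit.timeit("a == b", globals={"a": a_str, "b": b_str}))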
I've always thought it was because of the immutability of strings. That is, every time you make a change to the string, it requires allocating memory for a whole new string (rather than modifying the original in place).
Probably a woefully naive understanding, but perhaps someone else can expound further.
There are several things to consider when looking at the "cost" of manipulating strings.
There is the cost in terms of memory usage, there is the cost in terms of CPU cycles used, and there is a cost associated with the complexity of the code involved.
Integer manipulation (add, subtract, multiply, divide, compare) is most often done by the CPU at the hardware level, in a few (or even one) instructions. When the manipulation is done, the answer fits back into the same-sized chunk of memory.
Strings are stored in blocks of memory, which have to be manipulated a byte or a word at a time. Comparing two 100-character strings may require 100 separate comparison operations.
Any manipulation that makes a string longer will require either moving the string to a bigger block of memory or moving other things around in memory to allow the existing block to grow.
Any manipulation that leaves the string the same size, or smaller, could be done in place, if the language allows for it. If not, then again a new block of memory has to be allocated and the contents moved.
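The growth cost is easy to see in a sketch: growing a string one step at a time can copy the accumulated contents over and over, while collecting the parts and joining once copies each byte only once (Python shown; CPython sometimes optimizes in-place appends, so treat this as an illustration, not a guarantee):

parts = ["x"] * 100_000

# Worst case quadratic: each concatenation may allocate a new,
# larger string and copy everything accumulated so far.
s = ""
for p in parts:
    s = s + p

# Linear: one final allocation, each part copied exactly once.
s2 = "".join(parts)
assert s == s2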

CRC16 collision (two CRC values over blocks of different sizes)

The Problem
I have a text file which contains one string per line (line break \r\n). This file is protected using CRC16 in two different ways:
CRC16 of blocks of 4096 bytes
CRC16 of blocks of 32768 bytes
Now I have to modify one of these 4096-byte blocks so that it (the block)
contains a specific string
does not change the size of the text file
has the same CRC value as the original block (and the same goes for the 32K block that contains this 4K block)
Apart from those constraints, I may make any modifications to the block required to fulfill them, as long as the file itself does not break its format. I think it is best to use one of the completely filled 4K blocks, not the last block, which could be really short.
The Question
How should I start solving this problem? The first thing I would come up with is some kind of brute force, but wouldn't it take extremely long to find changes that leave both CRC values unchanged? Is there perhaps a mathematical way to solve it?
It should be done in seconds, or at most a few minutes.
There are mathematical ways to solve this, but I don't know them. I propose a brute-force solution:
A block looks like this:
SSSSSSSMMMMEEEEEEE
Each character represents a byte. S = start bytes, M = bytes you can modify, E = end bytes.
After every byte fed in, the CRC has a new internal state. You can reuse the checksum state up to the position that you modify: you only need to recalculate the checksum over the modified bytes and everything after them. So calculate the CRC of the S part only once.
You don't need to recompute the following bytes either. You just need to check whether the CRC state is the same or different after your modification. If it is the same, the CRC of the entire block will also be the same; if it is different, the CRC of the entire block is almost certainly different (not guaranteed, but you should abort that trial). So you compute the CRC of just the S+M' part (M' being the modified bytes); if its state equals the state of CRC(S+M), you have won.
That way you have much less data to go through, and a recent desktop or server can do the 2^32 trials required in a few minutes. Use parallelism.
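A sketch of that state reuse in Python, with a simple bitwise CRC-16 (I'm assuming the common reflected polynomial 0xA001 here; your file format may use a different CRC-16 variant):

def crc16_update(state, data):
    # Feed bytes into a CRC-16 (reflected polynomial 0xA001),
    # returning the new internal state.
    for b in data:
        state ^= b
        for _ in range(8):
            state = (state >> 1) ^ 0xA001 if state & 1 else state >> 1
    return state

def find_modification(prefix, original_mid, candidates):
    state_s = crc16_update(0, prefix)             # CRC state after S, computed once
    target = crc16_update(state_s, original_mid)  # state after S+M
    for mid in candidates:                        # each candidate M'
        if crc16_update(state_s, mid) == target:
            return mid  # same state after S+M': the whole block's CRC is unchanged
    return None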
Take a look at spoof.c. That will directly solve your problem for the CRC of the 4K block. However you will need to modify the code to solve the problem simultaneously for both the CRC of the 4K block and the CRC of the enclosing 32K block. It is simply a matter of adding more equations to solve. The code is extremely fast, running in O(log(n)) time, where n is the length of the message.
The basic idea is that you will need to solve 32 linear equations over GF(2) in 32 or more unknowns, where each unknown is a bit location that you are permitting to be changed. It is important to provide more than 32 unknowns with which to solve the problem, since if you pick exactly 32, it is not at all unlikely that you will end up with a singular matrix and no solution. The spoof code will automatically find non-singular choices of 32 unknown bit locations out of the > 32 that you provide.
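The reason a linear-equation solver works at all is that CRCs are affine over GF(2): for equal-length messages, crc(a ^ b ^ c) == crc(a) ^ crc(b) ^ crc(c), so each candidate bit flip contributes a fixed, position-dependent XOR difference to the final CRC. A quick check, reusing the crc16_update sketch above:

import os

a, b, c = os.urandom(64), os.urandom(64), os.urandom(64)
x = bytes(p ^ q ^ r for p, q, r in zip(a, b, c))

lhs = crc16_update(0, x)
rhs = crc16_update(0, a) ^ crc16_update(0, b) ^ crc16_update(0, c)
assert lhs == rhs  # holds for any three equal-length messages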

To pad or not to pad - creating a communication protocol

I am creating a protocol for two applications to talk over a TCP/IP stream and am figuring out how to design the header for my messages. Using the TCP header as an initial guide, I am wondering whether I will need padding. I understand that when we're dealing with a cache, we want to make sure that stored data fits in a cache line so that it is retrieved efficiently. However, I do not understand how it makes sense to pad a header, considering that an application will parse the stream of bytes and store it however it sees fit.
For example: I want to send a message header consisting of a 3-byte field followed by a 1-byte padding field for 32-bit alignment; then I will send the message data.
In this case, the receiver will just take 3 bytes from the stream and throw away the padding byte, and then start reading the message data. As I see it, the receiver will store the 3 bytes and the message data however it wants. The whole point of byte alignment is that data can be retrieved efficiently; but if the receiver doesn't care about the padding, how is the data retrieved efficiently?
Without the padding, the receiver just takes the 3 header bytes from the stream and then takes the data bytes. Since the receiver stores these bytes however it wants, how does it matter whether or not the padding is there?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said, you can probably point out my misconceptions.
Please let me know what you think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance xml, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol goes, you should probably consider using a plain-text protocol with compression (including the header), which will probably use less bandwidth than any hand-designed binary protocol you could possibly invent.
I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on, I can treat portions of this array as if they were words and double-words (not just bytes) ... provided that they're suitably aligned (which is where padding may be required).
Here's an example of a function which reads a DWORD from an offset in a byte buffer:
#include <stdint.h>

typedef uint8_t byte;
typedef uint32_t DWORD;

DWORD getDword(const byte* buffer)
{
    //we want the DWORD which starts at byte-offset 8
    buffer += 8;
    //dereference as if it were pointing to a DWORD
    //(this would fail on some machines if the pointer
    //weren't pointing to a DWORD-aligned boundary)
    return *((const DWORD*)buffer);
}
Here's the corresponding code in Intel assembly; note that it's a single instruction, i.e. quite an efficient way to access the data, more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time. Some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if lengths are always a multiple of 4 or 8, you save 2 or 3 bits off the length field.
One other good reason that TCP has padding (which probably does not apply to you) is that it allows dedicated network-processing hardware to easily separate the data from the header. As the data always starts on a 32-bit boundary, it's easier to separate the header from the data when the packet gets routed.
If you have a 3-byte header and align it to 4 bytes, then designate the unused byte as 'reserved for future use' and require its bits to be zero (rejecting messages where they are not, as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number: initially zero, and then incremented if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
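As an illustration of that layout in Python's struct module (field names are mine, not from any real protocol):

import struct

# Header: a 3-byte field plus 1 reserved byte, for 32-bit alignment.
HEADER = struct.Struct(">3sB")

def pack_header(three_byte_field):
    # three_byte_field is a bytes object of length 3.
    return HEADER.pack(three_byte_field, 0)  # reserved byte is always zero

def unpack_header(data):
    field, reserved = HEADER.unpack(data[:HEADER.size])
    if reserved != 0:
        raise ValueError("malformed header: reserved byte must be zero")
    return field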
