I'm looking to understand how file transfers work in VNC/TightVNC/RFB.
In https://github.com/rfbproto/rfbproto/blob/master/rfbproto.rst#serverinit I see there is mention of certain client messages that look relevant if using the Tight Security Type, e.g.
132 "TGHT" "FTC_UPRQ" File Upload Request
133 "TGHT" "FTC_UPDT" File Upload Data
But I don't see any detail on how these messages are used in the protocol.
At https://www.tightvnc.com/ there is lots of information on usage, but so far I haven't found anything about the protocol itself.
How do the file transfers work? As in, what are the low-level details of the messages sent in both directions to initiate and complete an upload from the client to the server?
(Ultimately I am looking to implement this, say in NoVNC, but I'm quite a few steps away from any coding at this point)
Looking in the source for UltraVNC, there is another protocol, based on a message type of 7, to initiate a file transfer. This is part of the RFB specification, although details are not given beyond noting that messages of type 7 relate to "FileTransfer".
This is a very partial answer from looking at the code of:
https://github.com/LibVNC/libvncserver/blob/5deb43e2837e05e5e24bd2bfb458ae6ba930bdaa/libvncserver/tightvnc-filetransfer/rfbtightserver.c
https://github.com/LibVNC/libvncserver/blob/5deb43e2837e05e5e24bd2bfb458ae6ba930bdaa/libvncserver/tightvnc-filetransfer/handlefiletransferrequest.c
https://github.com/TurboVNC/tightvnc/blob/a235bae328c12fd1c3aed6f3f034a37a6ffbbd22/vnc_winsrc/vncviewer/FileTransfer.cpp
I think that an upload is initiated by the client:
client -> server, 1 byte = 132: message type of file upload request
client -> server, 1 byte: compression level, where 0 is not compressed, and I don't think libvnc supports anything other than 0(?)
client -> server, 2 bytes big endian integer: the length of the file name
client -> server, 4 bytes big endian integer: the "position". I'm not sure what this is, but I suspect libvnc either ignores it, or there is a bug in libvnc where on little endian systems (e.g. Intel) this might break in some situations if it isn't zero, since there seems to be some code that assumes it's 2 bytes: https://github.com/LibVNC/libvncserver/blob/5deb43e2837e05e5e24bd2bfb458ae6ba930bdaa/libvncserver/tightvnc-filetransfer/handlefiletransferrequest.c#L401.
When uploading, TightVNC's "legacy" code also seems to set this to zero: https://github.com/TurboVNC/tightvnc/blob/a235bae328c12fd1c3aed6f3f034a37a6ffbbd22/vnc_winsrc/vncviewer/FileTransfer.cpp#L552
client -> server, "length of the file name" bytes: the file name itself
I'm not sure how the server responds to say "yes, that's fine". See below for how the server can say "that's not fine" and abort
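To make the framing concrete, here is a minimal Node.js sketch of sending this request, based purely on my reading above (untested; `sock` is assumed to be an already-connected net.Socket, and the filename encoding is a guess):

    // Frame a file upload request (type 132) per the field layout above.
    function sendFileUploadRequest(sock, fileName, position = 0) {
      const name = Buffer.from(fileName, 'latin1'); // encoding is an assumption
      const msg = Buffer.alloc(8 + name.length);
      msg.writeUInt8(132, 0);            // message type: file upload request
      msg.writeUInt8(0, 1);              // compression level: 0 = uncompressed
      msg.writeUInt16BE(name.length, 2); // length of the file name
      msg.writeUInt32BE(position, 4);    // "position" - zero in practice, it seems
      name.copy(msg, 8);
      sock.write(msg);
    }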
And then I think that uploads are done in chunks of at most 64k (sizes are limited to 16 bits). So each chunk is:
client -> server, 1 byte = 133: message type of file upload data
client -> server, 1 byte: compression level, where 0 is not compressed, and I don't think libvnc supports anything other than 0(?)
client -> server, 2 bytes big endian: the uncompressed(?) size of the upload data
client -> server, 2 bytes big endian: the compressed size of the upload data. I think for libvnc since compression isn't supported, this has to equal the uncompressed size
client -> server, "compressed size" bytes: the data of the current chunk the file itself
not sure how the server acknowledges that this is all fine
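A matching sketch for one data chunk, again assuming no compression and an already-connected `sock` (all names here are mine):

    // Send one file upload data chunk (type 133), uncompressed.
    function sendFileUploadData(sock, chunk) {
      if (chunk.length > 0xffff) throw new Error('sizes are 16-bit, max 64k');
      const header = Buffer.alloc(6);
      header.writeUInt8(133, 0);             // message type: file upload data
      header.writeUInt8(0, 1);               // compression level 0
      header.writeUInt16BE(chunk.length, 2); // uncompressed size
      header.writeUInt16BE(chunk.length, 4); // compressed size (same when uncompressed)
      sock.write(Buffer.concat([header, chunk]));
    }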
Then once all the data has been uploaded, there is a final empty chunk followed by the modification/access time of the file:
client -> server, 1 byte = 133: message type of file upload data
client -> server, 1 byte: compression level, where 0 is not compressed, and I don't think libvnc supports anything other than 0(?)
client -> server, 2 bytes = 0: the uncompressed(?) size of the upload data
client -> server, 2 bytes = 0: the compressed size of the upload data
client -> server, 4 bytes: the modification and access time of the file. libvnc sets both to be the same, and interestingly there doesn't seem to be a conversion from the endianness of the message to the endianness of the system.
And as with the other parts of the upload, I'm not sure how the server acknowledges that this has been successful.
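If the above is right, the terminating message would look something like this (again a guess from the sources, not a verified implementation):

    // End the upload: an empty chunk (both sizes 0) plus a 4-byte file time.
    function sendFileUploadEnd(sock, mtimeSeconds) {
      const msg = Buffer.alloc(10);
      msg.writeUInt8(133, 0);             // message type: file upload data
      msg.writeUInt8(0, 1);               // compression level 0
      msg.writeUInt16BE(0, 2);            // uncompressed size = 0 marks the end
      msg.writeUInt16BE(0, 4);            // compressed size = 0
      msg.writeUInt32BE(mtimeSeconds, 6); // modification/access time, big endian
      sock.write(msg);
    }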
If the client wants to cancel the upload:
client -> server, 1 byte = 135: message type of file upload failed
client -> server, 1 byte: unused(?)
client -> server, 1 byte: length of reason
client -> server, "length of reason" bytes: human readable reason of why the upload was cancelled
If the server wants to fail the upload:
server -> client, 1 byte = 132: message type of file upload cancel
server -> client, 1 byte: unused(?)
server -> client, 1 byte: length of reason
server -> client, "length of reason" bytes: human readable reason of why the upload failed
It does seem odd that there is no way for the server to acknowledge any sort of success, so the client can't really give the user a "yes, it's worked!" sign. At least, not one with high confidence that everything has indeed worked.
It also looks like at most one upload at a time is possible: there is no ID or anything like that to distinguish multiple files being uploaded at the same time. Although, given this would all be over the same TCP connection (typically), there would probably not be any speed benefit to that anyway.
Looking in the source for TightVNC, it looks like (confusingly) TightVNC itself doesn't seem to support the 132 "TGHT" / 133 "TGHT" messages.
Instead, it has a sub-protocol based on messages of type 252 (0xFC). In this sub-protocol, the message types are 4-byte integers: 252 in the first byte, and then 3 more, as per the comment in FTMessage.h:
read first byte (UINT8) as message id, but if first byte is equal to 0xFC then it's TightVNC extension message and must read next 3 bytes and create UINT32 message id for processing.
At first glance, at a high level, it looks similar to the one in libvnc, but it does appear to include more server acknowledgements. For example, in response to a request to start an upload, the server will reply with a message of type 0xFC000107 to say "yes, that's fine" (I think).
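Reading that comment literally, decoding a message id would look like this (my sketch, not TightVNC code; it assumes at least 4 bytes are buffered when the extension byte appears):

    // One byte normally; 0xFC means three more bytes follow, forming a 32-bit id.
    function readMessageId(buf) {
      const first = buf.readUInt8(0);
      if (first !== 0xfc) return { id: first, headerLength: 1 };
      return { id: buf.readUInt32BE(0), headerLength: 4 }; // e.g. 0xFC000107
    }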
I've been struggling for a while to get my messages framed correctly between my NodeJS server and my Erlang gen_tcp server. I was using {packet,line} successfully until I had to send large data messages and needed to switch to message-size framing.
I set gen_tcp to {packet,2}
and I'm using the library from:
https://github.com/davedoesdev/frame-stream
for the NodeJS tcp decode side. It is ALSO set to packet size option 2
and I have tried packet size option 4.
I saw that for any message with a length of up to 127 bytes this setup works well, but any longer message has a problem.
I ran a test by sending longer and longer messages from gen_tcp and then reading out the first four bytes received on the NodeJS side:
on message 127:
HEADER: 0 0 0 127
Frame length 127
on message 128:
HEADER: 0 0 0 239 <----- This should be 128
Frame length 239 <----- This should be 128
Theories:
Some character encoding mismatch since it's on the number 128 (likely?)
Some error in either gen_tcp or the library (highly unlikely?)
Voodoo magic curse that makes me work on human-rights day (most likely)
Data from Wireshark shows the following:
The header bytes are encoded properly by gen_tcp past 128 characters since the hex values proceed as follows:
[00][7e][...] (126 length)
[00][7f][...] (127 length)
[00][80][...] (128 length)
[00][81][...] (129 length)
So it must be that the error lies in how the library on the NodeJS side calls Node's readUInt16BE(0) or readUInt32BE(0) functions. But I checked the endianness, and both are big-endian.
If the header bytes are [A,B] then, in binary, this error occurs after
[00000000 01111111]
In other words, readUInt16BE(0) reads [00000000 10000000] as 0xEF, which is not even an endianness option...?
Thank you for any help in how to solve this.
Kind Regards
Dale
I figured it out: the problem was caused by setting the socket to receive with UTF-8 encoding, which only passes through bytes up to 127 (ASCII) unchanged. Any byte of 0x80 or above is not a valid UTF-8 sequence on its own, so Node replaces it with the replacement character U+FFFD, whose UTF-8 encoding begins with 0xEF (239), which is exactly what I was seeing.
Don't do this: socket.setEncoding('utf8').
It seems obvious now, but that one line of code is hard to spot.
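For anyone hitting the same thing, the fix is to leave the socket in binary mode and parse the length prefix from Buffers. A minimal sketch matching gen_tcp's {packet,2} framing (port and host are placeholders):

    const net = require('net');
    const sock = net.connect(5000, 'localhost'); // placeholders
    // No sock.setEncoding(...): 'data' then delivers raw Buffers.
    let pending = Buffer.alloc(0);
    sock.on('data', (data) => {
      pending = Buffer.concat([pending, data]);
      while (pending.length >= 2) {
        const len = pending.readUInt16BE(0);  // 2-byte big-endian prefix
        if (pending.length < 2 + len) break;  // wait for the whole frame
        const frame = pending.subarray(2, 2 + len);
        pending = pending.subarray(2 + len);
        console.log('frame:', frame.length, 'bytes');
      }
    });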
I am writing a BitTorrent client, where the application receives large blocks of data after requesting pieces from other peers. Sometimes the blocks are larger than the piece length of the torrent.
For example, with a torrent piece length of 524288 bytes, some piece requests result in responses 1940718596 bytes long.
Also, the message seems valid, as the length encoded in the first four bytes happens to be the same (that large number).
Question: what should I do with that data? Should I ignore the excess bytes (after the piece length)? Or should I write the data into the corresponding files? That is concerning, because it might overwrite the next pieces!
The largest chunk of a piece the protocol allows in a piece message is 16 KB (16384 bytes). So if a peer sent a 1940718596-byte (1.8 GB) piece message, the correct response is to disconnect from it.
Also, if a peer sends a piece message that doesn't correspond to a request message you have sent earlier, you shall also disconnect from it.
A peer that receives a request message asking for more than a 16 KB chunk, shall also disconnect the requester. Requesting a whole piece in a single request message is NOT allowed.
A request message that goes outside the end of the piece, is of course, also NOT allowed.
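As a concrete illustration of those limits, here is a minimal Node.js check (the function name is mine; the layout follows BEP 3: 4-byte length prefix, 1-byte id, and for piece messages a 4-byte index and a 4-byte begin before the block):

    const MAX_BLOCK = 16384; // largest block a piece message may carry
    // `header` must hold at least the first 13 bytes of the message.
    // Returns false if the peer should be disconnected.
    function pieceHeaderLooksValid(header, pieceLength) {
      const length = header.readUInt32BE(0); // length prefix (excludes itself)
      const id = header.readUInt8(4);
      if (id !== 7) return true;             // only piece messages checked here
      const begin = header.readUInt32BE(9);  // offset within the piece
      const blockLen = length - 9;           // id + index + begin take 9 bytes
      return blockLen > 0 && blockLen <= MAX_BLOCK && begin + blockLen <= pieceLength;
    }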
While it's possible that you will encounter other peers that don't follow the protocol, the most likely explanation when writing a new client is that the error is on your side.
The most important tool you can use is Wireshark. Look at how other clients behave and compare with yours.
I am currently working on a graduation project where I want to transmit a session token using BLE. On the server side I am using Node.js and Bleno to create the connection. After the client subscribes to the notification, the server will push the token.
A small part of the code is:
const buf1 = Buffer.from(info, 'utf8');
updateValueCallback(buf1);
At this step, I am using nRF Connect to check if everything is working. It works as intended, except I see that only the first 20 characters are transferred (as much as the packet size).
My question concerns the buffer size. When I finally connect from an Android app, will the whole string be transmitted? In that case the underlying protocols would cut the string and reassemble it on the other side, and the buffer size wouldn't matter. Or must I negotiate the MTU to be the size of the string? In other words, must the buffer size be the size of the transmitted packet?
If the buffer is smaller than the whole string, can the whole string still be transmitted with it?
GATT requires that a notification is at most MTU - 3 bytes long. The default MTU is 23, so the maximum notification value length is 20 bytes by default. By negotiating a larger MTU you can send longer notifications (if your BLE stack supports that).
I haven't used Bleno, but with all the stacks I have used, I needed to slice the data myself, 20 bytes at a time, and on the receiver side collect the slices and put them together again.
The stacks have been good at buffering the data and transmitting it one chunk at a time, so I have looped the send function (like your updateValueCallback()) until all the slices of my data were sent.
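Something like this, roughly (Node.js; untested with Bleno specifically, and a real sender may need to pace the notifications rather than fire them in a tight loop):

    // Slice a Buffer into MTU - 3 sized notifications (20 bytes by default).
    function sendInChunks(updateValueCallback, buf, chunkSize = 20) {
      for (let offset = 0; offset < buf.length; offset += chunkSize) {
        updateValueCallback(buf.subarray(offset, offset + chunkSize));
      }
    }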
Hope it works for you.
I'm writing a small app to test out how torrent p2p works and I created a sample torrent and am seeding it from my Deluge client. From my app I'm trying to connect to Deluge and download the file.
The torrent in question is a single-file torrent (file is called A - without any extension), and its data is the ASCII string Test.
Referring to this I was able to submit the initial handshake and also get a valid response back.
Immediately afterwards Deluge is sending even more data. From the 5th byte it would seem like it is a bitfield message, but I'm not sure what to make of it. I read that torrent clients may send a mixture of Bitfield and Have messages to show which parts of the torrent they possess. (My client isn't sending any bitfield, since it is assuming not to have any part of the file in question).
If my understanding is correct, it's stating that the message size is 2: one byte for the identifier plus the payload. If that's the case, why is it sending so much more data, and what is that data supposed to be?
Same thing happens after my app sends an interested command. Deluge responds with a 1-byte message of unchoke (but then again appends more data).
And finally, when it actually submits the piece, I'm not sure what to make of the data. The first underlined byte is 84, which corresponds to the letter T, as expected, but I cannot make much sense of the rest of the data.
Note that the link in question does not really specify in what order the clients should send messages once the initial handshake is completed. I just assumed to send interested and request based on what seemed to make sense to me, but I might be completely off.
I don't think Deluge is sending the additional bytes you're seeing.
If you look at them, you'll notice that all of the extra bytes are bytes that already existed in the handshake message, which should have been the longest message you received so far.
I think you're reading new messages into the same buffer, without zeroing it out or anything, so you're seeing bytes from earlier messages again, following the bytes of the latest message you read.
Consider checking if the network API you're using has a way to check the number of bytes actually received, and only look at that slice of the buffer, not the entire thing.
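In other words, something along these lines: append exactly the bytes the 'data' event delivers, and only consume complete length-prefixed messages (Node.js sketch; port and host are placeholders):

    const net = require('net');
    const sock = net.connect(6881, 'localhost'); // placeholders for the peer
    let acc = Buffer.alloc(0);
    sock.on('data', (data) => {            // `data` is only the bytes received
      acc = Buffer.concat([acc, data]);
      while (acc.length >= 4) {
        const len = acc.readUInt32BE(0);   // 4-byte length prefix
        if (acc.length < 4 + len) break;   // message not complete yet
        const message = acc.subarray(4, 4 + len); // [id, ...payload]; len 0 = keep-alive
        acc = acc.subarray(4 + len);
        // handle `message` here
      }
    });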
I've never worked with Bluetooth before. I have to send data via BLE, and I've found the limit of 20 bytes per chunk.
The sender is an Arduino and the receiver could be both an Android or a Node.js app on a pc.
I have to send 9 values, stored as floats, so 4 bytes * 9 = 36 bytes. I need 2 chunks for all my data via BLE. The receiving side needs both chunks to process them. If some data is lost, I don't care.
I'm not an expert in network protocols, and I think I have to give each message an incremental timestamp so that the receiver can glue together the two chunks with the same timestamp, or discard the last one if the new timestamp is higher. But I'm not sure how to do a checksum, whether I really need one, or whether, for a simple beta version of my system, I can ignore all those problems.
Can anyone give me some advice, like examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute while also giving an offset. So you can use it to read the attribute with an offset of 0; if there's more than ATT_MTU bytes, you can request again with the offset at ATT_MTU*1; if there's still more, ATT_MTU*2, etc. (You can read about it in 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you could go about detecting such a change. You could have the attribute send notifications when there's a change, to interrupt the process in case the value changes in the middle of reading it.
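If you go with the timestamp/counter idea from the question instead, the receiver side could be as simple as this sketch (Node.js; the 2-byte header layout [counter, part] and little-endian floats are my assumptions, the latter matching a typical Arduino):

    const parts = new Map(); // counter -> payload of part 0
    // Each chunk: [counter(1), part(1), up to 18 payload bytes] = max 20 bytes.
    function onChunk(chunk) {
      const counter = chunk.readUInt8(0);
      const part = chunk.readUInt8(1);
      const payload = chunk.subarray(2);
      if (part === 0) { parts.set(counter, payload); return null; }
      const first = parts.get(counter);
      parts.delete(counter);
      if (!first) return null;                      // first half lost: discard
      const full = Buffer.concat([first, payload]); // 36 bytes = 9 floats
      const values = [];
      for (let i = 0; i < 9; i++) values.push(full.readFloatLE(i * 4));
      return values;
    }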