Node.js UDP socket send with array larger than MTU size - node.js

I am currently using Node.js to send UDP packets to an Arduino with a Wiznet820io. I have successfully managed to send smallish byte arrays (500 bytes in length); however, when I try to send a byte array greater than 1470 bytes I get nothing on the Arduino. I did some research and determined that it was silently failing due to the MTU size restriction.
So I attempted to split the data array into multiple arrays not exceeding 1470 bytes and send them via a basic for loop. However, when doing this I noticed that only the first packet would get sent unless I waited ~10 ms before sending the next packet. I believe this is because the send function attempts to send new data before the previous data has finished sending, but I may be wrong in my understanding. Having the delay significantly reduces the speed of the server, which is an issue as I am attempting to stream video.
Is there a proper way to split and send packets over UDP using dgram.send? Am I correct in my understanding of why only the first packet was received? I am new to sockets so any help would be awesome :D
Cheers
Steve
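For what it's worth, one common way to avoid both the MTU limit and the need for a fixed delay is to split the buffer yourself and start each send from the previous send's callback, so the chunks go out strictly one after another. A minimal sketch, assuming a chunk size of 1400 bytes and a made-up host and port (not values from the question):

const dgram = require('dgram');

const socket = dgram.createSocket('udp4');
const CHUNK_SIZE = 1400;       // assumed; must stay below the path MTU
const HOST = '192.168.1.177';  // hypothetical Arduino address
const PORT = 8888;             // hypothetical port

// Send the buffer one chunk at a time, starting the next send only from
// the previous send's callback, so nothing is handed to the socket while
// an earlier datagram is still being written out.
function sendChunked(buf, offset = 0) {
  if (offset >= buf.length) return;
  const end = Math.min(offset + CHUNK_SIZE, buf.length);
  socket.send(buf.subarray(offset, end), PORT, HOST, (err) => {
    if (err) return console.error('send failed:', err);
    sendChunked(buf, end);
  });
}

sendChunked(Buffer.alloc(4000, 0xab)); // e.g. one 4000-byte video frame

On the Arduino side the chunks still have to be reassembled, and since UDP gives no delivery or ordering guarantees, a small sequence number per chunk is usually worth adding.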

Related

NodeJS Serialport Data Read

I have a serial data read question.
I have a serial device that responds to commands with an ACK (0x06) or an error (0x21).
To start data flow, I need to send a certain command, wait for the ACK, then send the data request command... and collect the 18-byte, non-terminated data. Then resend the data request after a couple of seconds.
Currently, I am using the 'on read' event from the port. This works... however, the event mostly fires for each byte read, and sometimes it fires with two bytes of data.
So... the question is... how can I ensure that, after I send the data request, I collect JUST the 18 bytes of the response message?
Will this require checking the size of each data read from the port and splitting it up into single-byte chunks?
From what I could tell, a parser might work, but I could not tell how to kick off the parser read after the ACK, stop after reading 18 bytes, then start again after the next ACK.
Any help or thoughts would be appreciated.
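One way to do this without a parser is to accumulate bytes yourself in the 'data' handler and drive a small state machine: wait for the ACK, send the data request, collect exactly 18 bytes, then re-request after a couple of seconds. A rough sketch, where the port path, baud rate and command bytes are placeholders, not values from the question:

// serialport v10+ import style; older versions export the constructor directly
const { SerialPort } = require('serialport');

const ACK = 0x06;
const RESPONSE_LEN = 18;                   // fixed-length, non-terminated response
const START_COMMAND = Buffer.from([0x02]); // placeholder for your start command
const DATA_REQUEST = Buffer.from([0x01]);  // placeholder for your data request

const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 9600 }); // assumed settings

let state = 'WAIT_ACK';
let pending = Buffer.alloc(0);

port.on('open', () => port.write(START_COMMAND)); // kick off the exchange

port.on('data', (chunk) => {
  // Bytes may arrive one at a time or in small clumps, so always accumulate.
  pending = Buffer.concat([pending, chunk]);
  if (state === 'WAIT_ACK' && pending.includes(ACK)) {
    pending = Buffer.alloc(0);
    state = 'COLLECT';
    port.write(DATA_REQUEST);
  } else if (state === 'COLLECT' && pending.length >= RESPONSE_LEN) {
    const message = pending.subarray(0, RESPONSE_LEN);
    pending = pending.subarray(RESPONSE_LEN);
    console.log('got 18-byte response:', message);
    setTimeout(() => port.write(DATA_REQUEST), 2000); // re-request after a couple of seconds
  }
});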

During traffic generation from client to server, why does the socket's send buffer queue get stuck at one point even when the send buffer size is larger?

I am trying to develop a socket congestion algorithm in a Diameter stack by comparing the socket send buffer size [the default maximum] with the actual number of bytes in the send buffer queue.
getsockopt(ainfo->socket,SOL_SOCKET,SO_SNDBUF,(void *)&n, &m); // Getting the max default size
retval = ioctl(ainfo->socket,TIOCOUTQ,&bytes_available); // Getting the actual bytes available
Testing scenario:
Start the client and server.
Once handshake is successful, start the traffic from client to server.
Block the packets at server's end using iptables.
Check the netstat output and check the send buffer size with help of ss command.
The send buffer size gets stuck after some time at a certain number. For example, the size of the send buffer is 87090 [tb as shown in the ss output], and the Send-Q gets stuck at some random number which is much smaller than tb [for example: 54344]. Sometimes it increases up to 135920. Ideally it should reach somewhere around 80k and then get stuck. Can someone explain this unusual behavior to me?
Any help is appreciated.
Thanks!

When using recv(n), with n greater than the MTU, are you guaranteed to read at least a whole layer 2 frame?

I was wondering: imagine there is no data to read from a TCP socket, and then a whole frame of 1492 bytes arrives (full). In your code (C or any language supporting TCP) you do a recv of, let's say, 4096 bytes. Will the OS guarantee that the recv reads the whole 1492 bytes, or is it possible that the loading of the frame into memory and the recv are "interleaved", so the recv may get less?
TCP is a stream-oriented protocol. Data are received in order, but you must not make any assumptions about how many times you have to call recv until you receive all your data.
It is up to your application to repeat the calls to recv until you know you have received what you need.
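The same rule carries over to higher-level stacks. In Node.js, for example, each 'data' event on a net socket plays the role of one recv call and can deliver any amount of data, so you accumulate until you have what your protocol expects. A small sketch, with the endpoint and expected length made up for illustration:

const net = require('net');

const EXPECTED = 1492;  // however many bytes your application protocol says a message is
let received = Buffer.alloc(0);

const socket = net.connect(9000, 'example.com'); // hypothetical endpoint

socket.on('data', (chunk) => {
  // One 'data' event may deliver less than a frame, exactly a frame,
  // or several frames glued together, so just keep accumulating.
  received = Buffer.concat([received, chunk]);
  while (received.length >= EXPECTED) {
    const message = received.subarray(0, EXPECTED);
    received = received.subarray(EXPECTED);
    console.log('complete message of', message.length, 'bytes');
  }
});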
(1) TCP is a stream-oriented protocol. This means that it accepts a stream of data from the upper layer on the sender and returns a stream of data to the upper layer on the receiver. TCP itself receives packets from the IP layer and then reconstructs the stream; that is, at some point packets cease to exist. In theory it is possible that somewhere during this reconstruction only half of an incoming packet is copied into the buffer, but it seems to me pretty unlikely that this would happen.
Now, the Linux man page states:
The receive calls normally return any data available up to the requested amount,
I would interpret that as "if one packet has arrived (correctly, in order, etc.), you will get the whole packet's worth of data". But there is no guarantee.
On the other hand, the Windows docs state:
recv will return as much data as is currently available—up to the size of the buffer specified.
Which sounds more like a guarantee.
Note, however, that the data will only be returned if the packet was received correctly and it is the next in-order packet (with the next expected sequence numbers).
(2) Now, the TCP layer works on complete packets; it is actually impossible for it to do any interleaving. Ethernet has a checksum, which cannot be computed unless the packet was received completely, and packets with an incorrect Ethernet checksum should be filtered out by the network card. TCP also has a checksum, which requires all the packet data to compute. So, if the network card has passed the packet to your OS, the data should be available.
(3) I don't think you can assume that if the packet is received, it is immediately available. A pretty common feature of network cards is TCP segmentation offload, which reconstructs part of the stream and results in the network card passing up one TCP packet that was reconstructed from multiple TCP packets. There are other mechanisms in place to reduce the number of interrupts, which more or less result in several packets coming in at once. So the more likely situation is that you will see some delay and then receive data from several packets at once.
The point is, the opposite of what you described is likely to happen. However, I still would not write an application that makes any assumptions about how large a chunk of data is available at a time. This negates the concept of a stream.

BLE connection buffersize to packet length

I am currently working on a graduation project where I want to transmit a session token using BLE. On the server side I am using Node.js and Bleno to create the connection. After the client subscribes to the notification, the server pushes the token.
A small part of the code is:
const buf1 = Buffer.from(info, 'utf8');
updateValueCallback(buf1);
At this step I am using nRF Connect to check whether everything is working. It works as intended, except that I see only the first 20 characters being transferred (as much as fits in one packet).
My question concerns the buffer size. When I finally connect from an Android app, will the whole string be transmitted? In that case the underlying protocols would cut the string up and reassemble it on the other side, and the buffer size wouldn't matter. Or must I negotiate the MTU to be the size of the string? In other words, must the buffer size be the size of the transmitted packet?
If the buffer is smaller than the whole string, can the whole string still be transmitted?
GATT requires that a notification be at most MTU - 3 bytes long. The default MTU is 23, so the maximum notification value length is 20 bytes by default. By negotiating a larger MTU you can send longer notifications (if your BLE stack supports that).
I haven't used Bleno, but with every stack I have used I needed to slice the data myself, 20 bytes at a time, and on the receiver side collect the pieces and put them together again.
The stacks have been good about buffering the data and transmitting it one chunk at a time, so I have looped the call (your updateValueCallback()) until all the slices of my data were sent.
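For illustration, the slicing loop can be as small as this; 20 is the default MTU of 23 minus the 3-byte notification header, updateValueCallback is the callback from the question, and everything else is an assumption:

const CHUNK = 20; // default ATT MTU (23) minus the 3-byte notification header

function notifyInChunks(info, updateValueCallback) {
  const buf = Buffer.from(info, 'utf8');
  for (let offset = 0; offset < buf.length; offset += CHUNK) {
    // subarray() clamps the end, so the last chunk is simply shorter.
    // The receiver concatenates the chunks again in the same order.
    updateValueCallback(buf.subarray(offset, offset + CHUNK));
  }
}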
Hope it works for you.

TCP Sockets send buffer size efficiency

When working with WinSock or POSIX TCP sockets (in C/C++, so no extra Java/Python/etc. wrapping), are there any efficiency pros/cons to building up a larger buffer (say, up to 4 KB) in user space and then making as few calls to send as possible, versus making multiple smaller calls directly with the bits of data (say 1-1000 bytes), other than the fact that for non-blocking/asynchronous sockets the single buffer is potentially easier for me to manage?
I know with recv small buffers are not recommended, but I couldn't find anything for sending.
E.g. does each send call on common platforms go into kernel mode? Could a 1-byte send actually result in a 1-byte packet being transmitted under normal conditions?
As explained in TCP/IP Illustrated, Vol. I, by Richard Stevens, TCP divides the send buffer into near-optimum segments to fit the maximum packet size along the path to the other TCP peer. That means it will never try to send segments that will be fragmented by IP along the route to the destination (when a packet would have to be fragmented at some IP router, that router sends back an ICMP fragmentation-needed packet and TCP takes it into account to reduce the MSS for this connection). That said, there is no need for a buffer larger than the maximum packet size of the link-level interfaces you'll have along the path. Having one, let's say, two or three times larger ensures that TCP will not stop sending as soon as it receives an acknowledgment from the remote peer, for lack of data in its buffer.
Consider that the normal interface type is Ethernet, which has a maximum packet size of 1500 bytes, so normally TCP doesn't send a segment greater than this size. And it normally has an internal buffer of 8 KB per connection, so there's little sense in adding buffer space in kernel space beyond that (if this is the only reason to have a buffer in kernel space).
Of course, there are other factors that force you to use a buffer in user space (for example, you want to store the data to send to your peer somewhere, and since there's only 8 KB of buffer in kernel space, you will need more room if you have other processing to do). An example: ircd (the Internet Relay Chat daemon) uses write buffers of up to 100 KB before dropping a connection because the other side is not receiving/acknowledging that data. If you only write(2) to the connection, your process will be put to sleep once the kernel buffer is full, and perhaps that's not what you want.
The reason to have buffers in user space is that TCP also does flow control, so when it's not able to send data, that data has to be put somewhere to cope with it. You'll have to decide whether you need your process to save that data up to some limit, or whether you can block on sending until the receiver is able to receive again. The buffer size in kernel space is limited and normally out of the user's/developer's control; buffer size in user space is limited only by the resources available to it.
Receiving/sending small chunks of data over a TCP connection is not advisable because of the increased overhead that the TCP and IP headers impose. Take a telnet connection in which, for each character sent, a TCP header and an IP header are added (20 bytes minimum for TCP, 20 bytes minimum for IP, 14 bytes for the Ethernet frame and 4 for the Ethernet CRC): that is 60+ bytes on the wire to transmit a single character. And normally each TCP segment is acknowledged individually, so a full round-trip time passes between sending a segment and getting its acknowledgment (only then can the buffer resources be freed and the character be considered transmitted).
So, finally, what's the limit? It depends on your application. If you can cope with the kernel resources available and don't need more buffering, you can get by without buffers in user space. If you need more, you'll need to implement buffers and feed the kernel buffer from your own buffer as space becomes available.
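To make the user-space buffering idea concrete in a higher-level setting, here is a sketch in Node.js that queues small pieces and hands the kernel one larger write; the 4 KB threshold simply mirrors the figure from the question and the endpoint is made up:

const net = require('net');

const FLUSH_AT = 4096;  // flush once roughly 4 KB has accumulated
let pieces = [];
let pending = 0;

const socket = net.connect(9000, 'example.com'); // hypothetical peer

// Queue small pieces in user space instead of handing each one to the kernel.
function queue(piece) {
  pieces.push(piece);
  pending += piece.length;
  if (pending >= FLUSH_AT) flush();
}

// Call flush() explicitly when a logical message is complete, too.
function flush() {
  if (pending === 0) return;
  socket.write(Buffer.concat(pieces)); // one larger write instead of many tiny ones
  pieces = [];
  pending = 0;
}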
Yes, a one-byte send can - under perfectly normal conditions - result in a TCP packet with only a single byte of payload being sent. Send coalescing in TCP is normally done by use of Nagle's algorithm. With Nagle's algorithm, sending data is delayed iff there is data that has already been sent but not yet acknowledged.
Conversely, data will be sent immediately if there is no unacknowledged data, which is usually true in the following situations:
The connection has just been opened
The connection has been idle for some time
The connection only received data but nothing was sent for some time
In that case the first send call that your application performs will cause a packet to be sent immediately, no matter how small. So starting communication with two or more small sends is usually a bad idea because it increases overhead and delay.
The infamous "send send recv" pattern can also cause really large delays (e.g. on Windows typically 200ms). This happens if the local TCP stack uses Nagle's algorithm (which will usually delay the second send) and the remote stack uses delayed acknowledgment (which can delay the acknowledgment of the first packet).
Since most TCP stack implementations use both, Nagle's algorithm and delayed acknowledgment, this pattern should best be avoided.
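If the write-write-read pattern cannot be avoided, most stacks let you disable Nagle's algorithm per socket via TCP_NODELAY; in Node.js, for instance, that is a one-line call, although combining the two small writes into one is usually the better fix. A small sketch with made-up header and body buffers:

const net = require('net');

const socket = net.connect(9000, 'example.com'); // hypothetical peer
socket.setNoDelay(true); // disable Nagle's algorithm (sets TCP_NODELAY)

const header = Buffer.from([0x01, 0x00, 0x10]); // made-up 3-byte header
const body = Buffer.alloc(16, 0x2a);            // made-up 16-byte payload

// Better still: send header and body as a single write, so neither Nagle
// nor delayed ACK has anything to wait for.
socket.write(Buffer.concat([header, body]));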
