How does the socket server know the client has finished sending data - web

At the socket level, once the connection is established, the server keeps reading data from the socket. My question is: how does the server know the client won't send any more data?

how does the server know the client won't send any more data
It doesn't. One option is to scan for a pre-agreed "end of message" byte sequence. When the server sees this byte sequence, it considers the message fully received. If there's more data in the buffer, it belongs to another message.
Alternatively, the client can advertise its message length ahead of time: "I'm going to send you X bytes now. Here they are: ..." The server then reads exactly X bytes from the socket and considers the message fully received.
Take a look at the Redis protocol for an example of the second scheme. It's very simple and fully functional; so simple, in fact, that a full client can be implemented in only 20 lines of Ruby.
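Both framing schemes can be sketched in a few lines of Python. This is an illustrative sketch, not production code; the function names are mine, and the delimiter and 4-byte length prefix are arbitrary choices.

```python
import struct

# Scheme 1: delimiter-based framing. A pre-agreed byte sequence
# (here b"\n") marks the end of each message.
def split_by_delimiter(buffer: bytes, delim: bytes = b"\n"):
    parts = buffer.split(delim)
    # The last element is an incomplete message (possibly empty);
    # keep it as the remaining buffer for the next read.
    return parts[:-1], parts[-1]

# Scheme 2: length-prefix framing. Each message is preceded by a
# 4-byte big-endian length, so the reader knows exactly how many
# bytes belong to it.
def encode_length_prefixed(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def decode_length_prefixed(buffer: bytes):
    if len(buffer) < 4:
        return None, buffer            # length header not complete yet
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer            # message body not complete yet
    return buffer[4:4 + length], buffer[4 + length:]
```

In both cases the leftover bytes belong to the next message, which is exactly the "if there's more data in the buffer" situation described above.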

Related

BLE Write Commands (Write Without Response)

In the Bluetooth 4.0-4.2 Specifications Vol 3 Part F, I can find this text:
Commands and notifications that are received but cannot be processed, due to
buffer overflows or other reasons, shall be discarded. Therefore, those PDUs
must be considered to be unreliable.
I wonder: who receives? For Write Commands, is it the ATT server that receives from the ATT client? Or might it also be the ATT layer of the sending host, receiving a request from the client app to send a Write Command, that drops the Write Command, i.e. it gets dropped before even being sent out over the air?
The air interface is often limited in the number of packets it can buffer for a Connection Event. Ditto for Notifications vs Indications in the other direction (server to client).
"Commands and notifications that are received..."
Commands, for example Write Commands, go from client to server, so the server is the receiver. The client sends out the Write Command; when the server receives it (the lower layer receives it first) but finds there is no buffer available (or for other reasons), it discards the Write Command, and the higher layer never sees it.
Notifications, by contrast, go from server to client, so the client is the receiver. When the client receives a notification but has no buffer for it (or for other reasons), it discards the notification.
This is about flow control in communication protocols generally, not just Bluetooth. If you understand flow control, this confusion should clear up.
I wonder, who receives? For Write Commands, is it the ATT server that receives from the ATT client,
Yes, it is the ATT server. Both ATT and GATT are unreliable; the link layer, however, is reliable. I assume a higher layer, e.g. the application, has to implement its own reliability checking.
or might it also be the ATT layer of the sender host that receives a
request to send a Write Command from the client app that drops the
Write Command, i.e. it gets dropped before even being sent out over
the air?
This is out of scope of the spec, I think; the Bluetooth stack should return a corresponding error, e.g. a failure due to no memory.

Where do messages that have not yet been received go in Node.js?

For example, we have basic Node.js server <-> client communication.
A basic Node.js server sends a message every 500ms to each connected client over its respective socket; the clients respond correctly to the heartbeat and receive all the messages in time. But imagine a client has a temporary connection lag (without the socket closing), CPU overload, etc., and cannot process anything for 2 seconds or more.
In this situation, where do all the messages that have not yet been received by the client go?
Are they stored in Node? In some buffer or similar?
And vice versa? The client sends a message every 500ms to the server (the server only listens without responding), but the server has a temporary connection issue or CPU overload for 2 or 3 seconds.
Thanks in advance! Any information or clarification will be welcome.
Javier
Yes, they are stored in buffers, primarily in buffers provided by the OS kernel. The same thing happens on the receiving end for connections incoming to a Node server.
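You can see this kernel buffering directly with a small sketch (in Python rather than Node, but the OS behavior is the same): one side writes several messages while the other side isn't reading at all, and nothing is lost.

```python
import socket

# A pair of connected stream sockets; whatever one writes, the kernel
# buffers until the other side reads it.
a, b = socket.socketpair()

# The "server" sends several messages while the "client" is busy and
# not reading anything.
for i in range(5):
    a.sendall(b"tick %d\n" % i)

# When the client finally gets around to reading, the messages are
# still there: they were sitting in the kernel's receive buffer.
data = b.recv(4096)
print(data.decode())

a.close()
b.close()
```

If the stalled side never catches up, the sender's buffer eventually fills too, and at that point writes start to block (or, in Node, `socket.write()` returns `false` to signal backpressure).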

A UDP-socket-based rateless file transmission

I'm new to socket programming and I need to implement a UDP based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets (how to encode is my own design; the encoding itself has erasure-correction capability, which is why I can transmit ratelessly via UDP) to that client. The client keeps collecting packets and tries to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message to the server, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (they are all encoded from the same set of source packets, though).
Here is what I'm thinking about the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, per the UNP book). One is to receive control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request. When it receives a Stop message from a client, that flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1. When it turns to 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I have to use multiple processes, how can the flag bit be seen by the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether any packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
Typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread would handle communication with that particular client and then end. You should keep a limit of clients (threads) that are served simultaneously. Do not spawn a new process for each client - that is inefficient and not needed as this will get you nothing that you can't achieve with threads.
Thread programming requires carefulness so do not cut corners. Otherwise you will have hard time finding and diagnosing problems.
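The thread-per-client design with a per-client flag can be sketched as below. This is a hypothetical illustration, not your actual server: `threading.Event` plays the role of the flag (set = stop), and appending to a list stands in for `sendto()` of an encoded packet.

```python
import threading
import time

stop_flags = {}                       # client address -> threading.Event

def serve_client(addr, sent_log):
    """Worker: keep 'sending' packets until the client's flag is set."""
    stop = stop_flags[addr]
    seq = 0
    while not stop.is_set():          # flag == 1 in the question's terms
        sent_log.append((addr, seq))  # stand-in for sendto() of packet #seq
        seq += 1
        time.sleep(0.001)

def handle_request(addr, sent_log):
    """Request message received: spawn a worker thread for this client."""
    stop_flags[addr] = threading.Event()
    t = threading.Thread(target=serve_client, args=(addr, sent_log))
    t.start()
    return t

def handle_stop(addr):
    """Stop message received: flip the flag to 0; the worker exits."""
    stop_flags[addr].set()

# Usage: a client requests, packets flow for a while, then it sends Stop.
log = []
t = handle_request(("10.0.0.5", 5000), log)
time.sleep(0.05)
handle_stop(("10.0.0.5", 5000))
t.join()
print(len(log), "packets sent before Stop")
```

Because threads share the process's memory, the flag question from the original post disappears: the listener and the worker see the same `stop_flags` dict, with no need for shared memory or pipes between forked processes.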
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it has missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CAN bus network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.
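The per-message struct with a sequence number and checksum suggested above might look like this minimal sketch (the header layout and the use of CRC32 are my own illustrative choices):

```python
import struct
import zlib

# Per-packet header: a 4-byte sequence number plus a 4-byte CRC32
# checksum over the payload, both big-endian.
HEADER = struct.Struct(">II")

def make_packet(seq: int, payload: bytes) -> bytes:
    """Prepend (sequence, crc32-of-payload) to the payload."""
    return HEADER.pack(seq, zlib.crc32(payload)) + payload

def parse_packet(packet: bytes):
    """Return (seq, payload), or None if the checksum doesn't match."""
    seq, crc = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:]
    if zlib.crc32(payload) != crc:
        return None                   # corrupted: caller can request a resend
    return seq, payload
```

At the end of the transfer, the receiver knows which sequence numbers it is missing (gaps) or received corrupted (checksum failures) and can ask for just those blocks again.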

Calculating the Ping of a WebSocket Connection?

small question. How can I calculate the ping of a WebSocket connection?
The server is set up using Node.js and node-websocket-server, if that matters at all.
There are a few ways. The one offered by Raynos is wrong, because client time and server time are different and you cannot compare them.
The solution of sending a timestamp is good, but it has one issue: if the server logic makes decisions and calculations based on ping, then sending a timestamp carries the risk that the client software or a MITM will modify the timestamp, feeding different results to the server.
A much better way is to send the client a packet with a unique ID that is randomized rather than an incrementing number. The server then expects a "PONG" message with this ID from the client.
The size of the ID should be fixed; I recommend 32 bits (an int).
That way the server sends a "PING" with a unique ID, stores the timestamp of the moment the message was sent, then waits until it receives the "PONG" response with the same ID from the client, and calculates the round-trip latency from the stored timestamp and a new one taken at the moment the PONG message is received.
Don't forget to implement a timeout, so a lost PING/PONG packet doesn't stall the latency-checking process.
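The random-ID scheme above can be sketched like this (the transport is abstracted away; `pending` maps each outstanding ping ID to its send timestamp):

```python
import os
import time

pending = {}                           # ping ID -> monotonic send timestamp

def send_ping():
    """Create a randomized 32-bit ID and remember when it was sent."""
    ping_id = os.urandom(4)
    pending[ping_id] = time.monotonic()
    return ping_id                     # this ID goes out in the PING frame

def on_pong(ping_id):
    """Handle a PONG: compute RTT from the stored timestamp, if any."""
    sent_at = pending.pop(ping_id, None)
    if sent_at is None:
        return None                    # unknown or timed-out ID: ignore it
    return time.monotonic() - sent_at  # round-trip latency in seconds

pid = send_ping()
time.sleep(0.01)                       # pretend this is the network round-trip
latency = on_pong(pid)
print("RTT: %.1f ms" % (latency * 1000))
```

Note the client never reports a time of its own: both timestamps are taken on the server's clock, which is what makes this scheme tamper-resistant. Entries left in `pending` past a deadline are the timeouts mentioned above and should be dropped.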
WebSockets also has a special packet opcode called PING, but the example in the post above does not use this feature. Read the official document describing this opcode; it may be helpful if you are implementing your own WebSocket protocol on the server side: https://www.rfc-editor.org/rfc/rfc6455#page-37
To calculate the latency you really should complete the round trip. You should have a ping message with a timestamp in it. When one side receives a ping, it should change it to a pong (or gnip, or whatever) but keep the original timestamp and send it back to the sender. The original sender can then compare the timestamp to the current time to see what the round-trip latency is. If you need the one-way latency, divide by 2. The reason you need to do it this way is that, without some very sophisticated time-skew algorithms, the time on one host vs. another is not comparable at small time deltas like this.
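The echo scheme just described can be sketched as follows (a minimal illustration; the JSON message shape is my own invention):

```python
import json
import time

def make_ping():
    """Sender: stamp the message with the sender's own clock."""
    return json.dumps({"type": "ping", "ts": time.monotonic()})

def handle_message(raw):
    """Either side: flip ping to pong (timestamp untouched), or
    compute the round-trip time when a pong comes back."""
    msg = json.loads(raw)
    if msg["type"] == "ping":
        msg["type"] = "pong"          # echo back, original timestamp kept
        return json.dumps(msg)
    if msg["type"] == "pong":
        return time.monotonic() - msg["ts"]   # RTT; divide by 2 for one-way
    return None

pong = handle_message(make_ping())     # what the other side sends back
rtt = handle_message(pong)
print("round trip: %.1f ms" % (rtt * 1000))
```

Only the original sender's clock is ever consulted, which is why this works despite clock skew between the two hosts.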
Websockets have a ping type message which the server can respond to with a pong type message. See this for more info about websockets.
You can send a request over the web socket with Date.now() as data and compare it to Date.now() on the server.
This gives you the time difference between sending the packet and receiving it plus any handling time on either end.

How does buffering work with sockets on Linux

How does buffering work with sockets on Linux?
i.e. if the server does not read the socket and the client keeps sending data.
So what will happen? How big is the socket's buffer? And will the client know so that it will stop sending?
For a UDP socket, the client will never know: the server side will just start dropping packets after the receive buffer is filled.
TCP, on the other hand, implements flow control. The server's kernel will gradually shrink the advertised window, so the client will be able to send less and less data. At some point the window goes down to zero. At that point the client fills up its own send buffer, and send(2) blocks (or fails with EWOULDBLOCK on a non-blocking socket).
TCP sockets use buffering in the protocol stack. The stack itself implements flow control, so if the server's buffer is full, it stops the client's stack from sending more data. Your code sees this as a blocking call to send(). The buffer size can vary widely, from a few kB to several MB.
I'm assuming that you're using send() and recv() for client and server communication.
send() returns the number of bytes that were actually sent. This doesn't necessarily equal the number of bytes you wanted to send, so it's up to you to notice this and send the rest.
recv() returns the number of bytes read into the buffer. If recv() returns 0, the other side has closed the connection.
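You can ask the kernel directly how big a socket's buffers are. A small sketch (the exact numbers vary by system; on Linux the defaults come from sysctls like net.core.wmem_default/rmem_default and are auto-tuned per connection):

```python
import socket

# Query the kernel-assigned send and receive buffer sizes for a
# freshly created TCP socket via getsockopt().
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("send buffer: %d bytes, receive buffer: %d bytes" % (sndbuf, rcvbuf))
s.close()
```

The same options can be set with setsockopt() before connecting, subject to the kernel's configured minimum and maximum (and note that Linux doubles the requested value for bookkeeping).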
