I have the following scenario:
A local PC receives data samples via Bluetooth at 50,000 bits/sec. The data is sent via UDP to a server. The server in turn distributes the data via a web page/JavaScript and WebSockets to connected browsers, where the data is processed. Eventually, the results arriving from the browsers are passed back to the local PC via UDP.
So far I'm experimenting with a strictly local setup, i.e. everything runs on one machine whose CPU has four cores. I've written server code in both Node.js and Go. In both cases there is significant data loss, i.e. not every sample that is sent via UDP is successfully received by the server, even when only one WebSocket client is connected.
Where is the bottleneck causing the loss? Is it the fact that everything runs on one local machine? Could it be that the WebSocket bandwidth is too small? Would I be better off with WebRTC? Or is it something else entirely?
It is hard to say where exactly the bottleneck is in your case.
But UDP is an unreliable protocol (it can lose data), while WebSockets (which use TCP) are not. This means that the messages are probably lost by a process which reads or writes the UDP data. Such packet loss might occur, for example, because these apps are simply too slow to read the data, or because the socket buffers are too small to absorb fluctuations in reading/writing speed caused by process scheduling or similar.
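If that is what is happening, one common mitigation on Linux is to ask the kernel for a larger UDP receive buffer with SO_RCVBUF and to keep the read loop as lean as possible. A minimal sketch in C (the port and buffer size are arbitrary examples, and Linux may cap the value at net.core.rmem_max):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Ask for a 4 MB receive buffer so bursts are absorbed while the
           reading process is scheduled out.  Linux caps this at
           net.core.rmem_max unless that sysctl is raised too. */
        int bufsize = 4 * 1024 * 1024;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9999);              /* example port */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        char buf[2048];
        for (;;) {
            /* Drain the socket as fast as possible; hand samples off to a
               queue or worker thread instead of processing them inline. */
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n < 0) { perror("recv"); break; }
        }
        close(fd);
        return 0;
    }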
Related
I have an app where a single client talks to a single server. Normally, the client does a single connect, and then calls send repeatedly, and there's no problem.
However, I need to do a version where the client sets up a connection for each individual send (a bit like HTTP with and without keep-alive). In this version, the client calls socket, connect, send once, and then close.
The problem with this is that I very quickly run out of ephemeral client ports, and the connect fails. To get around this I call setsockopt with SO_REUSEADDR, and then bind to port 0, before calling connect (see here, for example).
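For reference, a minimal sketch in C of that connect-per-send workaround (192.0.2.10:5000 is a placeholder server address):

    #include <arpa/inet.h>
    #include <stddef.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Connect, send one payload, close.  Binding to port 0 with
       SO_REUSEADDR set is the trick described above for easing
       ephemeral-port exhaustion. */
    int send_once(const char *data, size_t len) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;

        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = 0;                   /* kernel picks the port */
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0)
            goto fail;

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(5000);
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);
        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
            goto fail;
        if (send(fd, data, len, 0) < 0)
            goto fail;

        close(fd);
        return 0;
    fail:
        close(fd);
        return -1;
    }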
This works, except that the TCP connection is no longer reliable. I get occasional incorrect data, presumably because there's still data around when the TCP connection is closed.
Is there any way to make this reliable (and fast)? shutdown before close doesn't help. Maybe I can get select to tell me if the socket is ready for output, but that seems like overkill.
Do you have to use TCP? If so, you will probably have to maintain an open connection and route your messages over that one connection.
There is SCTP, which may be a good fit for your use case - a reliable datagram protocol:
Like TCP, SCTP provides reliable, connection-oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
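On Linux, a one-to-one style SCTP socket works with the ordinary socket calls; only the protocol argument changes, and the kernel must have SCTP support (e.g. the sctp module loaded). A minimal client sketch, with placeholder address and port:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        /* One-to-one style SCTP socket: the API shape is the same as TCP,
           but each send() is delivered as a discrete, reliable message. */
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
        if (fd < 0) { perror("socket (is SCTP available?)"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(6000);                       /* placeholder */
        inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* placeholder */

        if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
            perror("connect");
            return 1;
        }

        const char msg[] = "one message, boundary preserved";
        if (send(fd, msg, sizeof(msg), 0) < 0)
            perror("send");

        close(fd);
        return 0;
    }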
I am trying to write an internal transport system.
Data should be transferred from client to server using net sockets.
It works fine except for handling network issues.
If I place a firewall between the client and server, I will not see any error on either side, so data will continue to fill the kernel buffer on the client side.
And if I restart the app at that moment, I will lose all the data in the buffer.
Question:
Do we have any way to detect network issues?
Do we have any way to get data back from kernel buffers?
Node.js exposes the low-level socket API to you very directly. I'm assuming that you are using a TCP socket to send and receive data.
One way to ensure that there is an active connection between the client and server is to send heartbeat signals back and forth. If you fail to receive a heartbeat from the server while sending data, you can assume that the connection failed.
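One ready-made variant of this idea is TCP keepalive, which on Linux can be tuned per socket. A sketch in C (the probe timings are arbitrary examples):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable TCP keepalive on a connected socket so that a peer that has
       silently vanished (e.g. behind a firewall that drops packets) is
       eventually detected.  The timings are examples: first probe after
       10 s idle, then every 5 s, peer declared dead after 3 failures. */
    int enable_keepalive(int fd) {
        int on = 1, idle = 10, interval = 5, count = 3;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0)
            return -1;
        return 0;
    }

In Node.js, socket.setKeepAlive(true) enables the same mechanism, though an application-level heartbeat gives you more control over what counts as "alive".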
As for the second part of your question: There is no easy way to get data back from kernel buffers. If losing the data will be a problem, I would make sure to write it to disk.
I'm currently working on Linux network programming and I'm new to this. I am developing some stream-socket (TCP) based client-server applications in C.
Server: continuously sends the data
Client: continuously receives the data
(both run in a while(1) loop)
Suppose server.c is running on system A and client.c is running on system B. The server is sending some 100 packets/sec, but due to some network issue the client is only able to receive 10 packets/sec, i.e. the producer is producing more than the consumer can handle.
Is there any packet loss, or will all packets be delivered since TCP is a reliable connection?
If there is packet loss, how do I enable retransmission? Are there any flags or options?
Is there any mechanism or procedure to handle this producer-consumer problem?
How do the send() and recv() functions work? Is there any blocking involved?
Any help is appreciated. Thank you all.
TCP has built-in flow control. You do not have to make any special arrangements at the application level. If the sender consistently transmits more data than the receiver can consume, the TCP stack will reduce the window size to reduce the transfer rate. The effect is that the send() calls block for longer.
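Note that send() may also accept only part of the buffer before it blocks, so a robust sender loops over partial writes. A minimal sketch in C:

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Write the whole buffer, looping over partial sends.  Each call may
       block while TCP waits for the receiver's window to reopen; that is
       the flow control described above, as seen by the application. */
    ssize_t send_all(int fd, const char *buf, size_t len) {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, buf + sent, len - sent, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;              /* interrupted, just retry */
                return -1;                 /* real error */
            }
            sent += (size_t)n;
        }
        return (ssize_t)sent;
    }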
I have connected two Linux machines over WLAN using netcat in a server-client design, and I am now able to send and receive messages between them. On the server I create a UDP socket:
$ nc -u -l 3333
and on the client side I connect to it using the port number and the destination IP:
$ nc -u 192.168.178.160 3333
This gives a bi-directional connection between server and client. I can't measure it, but I would guess it is close to real time.
Now I want to extend this and try to establish a real-time speech connection between the two sides. Recording from a microphone is feasible with arecord commands, which write the speech data to a .wav file. Transmitting the .wav file is only possible after it has been fully recorded, but that is of no use, since what is desired is real-time communication. Of course, the received speech signal has to be played back instantly on the other end.
Does anyone have an idea how to make this real-time?
Fidelity means a large buffer count, to preserve sound continuity despite network latency and latency variation; low sound delay approximating real time means a small buffer count, to reduce overall latency. You cannot have both.
In my experience, you need to keep ~250 ms max. of sound buffered at both ends to maintain an illusion of 'real time' speech. This queue of buffers needs to be emptied at the fixed rate necessary to reproduce the speech, and kept topped up by the network protocol as necessary. If the network cannot reliably top up buffer pools of that size, the buffer pool has to be made larger, the queue longer, and the perceived real-time performance will suffer.
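To make the ~250 ms figure concrete, here is how it works out for telephone-quality audio; the format and packet size are example choices (8 kHz, 16-bit mono, 20 ms per packet):

    /* Example sizing for a ~250 ms jitter buffer, 8 kHz 16-bit mono
       (telephone quality).  Other formats scale linearly. */
    #define SAMPLE_RATE    8000                 /* samples per second    */
    #define BYTES_PER_SMP  2                    /* 16-bit samples        */
    #define CHUNK_MS       20                   /* one network packet    */
    #define CHUNK_BYTES    (SAMPLE_RATE * BYTES_PER_SMP * CHUNK_MS / 1000) /* 320 */
    #define POOL_CHUNKS    (250 / CHUNK_MS)     /* ~250 ms -> 12 chunks  */
    #define POOL_BYTES     (POOL_CHUNKS * CHUNK_BYTES)                    /* 3840 */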
The TCP/UDP issue is a red herring on most network connections.
Just be thankful that you are not streaming video:)
I'm new to socket programming and I need to implement a UDP-based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, a peer sends a Request message to the server to initiate the transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets to that client (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP). The client keeps collecting packets and tries to decode them. When it has finally decoded all packets and reconstructed the file successfully, it sends back a Stop message, and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking about for the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess it and see whether it is feasible and efficient.
I'm going to implement the server as a concurrent UDP server with two socket ports (similar to TFTP, per the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request. When it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1. When it turns to 0, the sending ends.
The client program is easy: just send a Request, recvfrom() the server, progressively decode the file, and send a Stop message at the end.
Is this design workable? My main concerns are: (1) Is forking multiple processes efficient, or should I use threads? (2) If I do use multiple processes, how can the flag be seen by the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request retransmission of just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
A typical server design involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread handles communication with that particular client and then ends. You should limit the number of clients (threads) served simultaneously. Do not spawn a new process for each client; that is inefficient and unnecessary, as it gets you nothing you can't achieve with threads.
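A minimal sketch of that listener/worker pattern in C with POSIX threads (the port is an example and the echo loop is a placeholder for the real transfer logic):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Worker: serves one client, then exits.  The echo loop is a
       placeholder for the real per-client logic. */
    static void *serve_client(void *arg) {
        int fd = *(int *)arg;
        free(arg);
        char buf[1024];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            send(fd, buf, (size_t)n, 0);      /* placeholder: echo back */
        close(fd);
        return NULL;
    }

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        if (lfd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7000);          /* example port */
        if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lfd, 16) < 0) {
            perror("bind/listen");
            return 1;
        }

        for (;;) {
            int *cfd = malloc(sizeof *cfd);   /* freed by the worker */
            *cfd = accept(lfd, NULL, NULL);
            if (*cfd < 0) { free(cfd); continue; }
            pthread_t tid;
            pthread_create(&tid, NULL, serve_client, cfd);
            pthread_detach(tid);              /* thread cleans up itself */
        }
    }

A production version should also cap the number of concurrent threads, for example with a counting semaphore, as mentioned above.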
Thread programming requires care, so do not cut corners. Otherwise you will have a hard time finding and diagnosing problems.
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
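A sketch of such a message layout in C (the field widths and block size are example choices):

    #include <stdint.h>

    /* One UDP datagram of the transfer.  The receiver records which
       sequence numbers arrived with a valid checksum, then asks for the
       gaps at the end of the transfer. */
    #define BLOCK_SIZE 1024

    struct file_block {
        uint32_t seq;        /* block index within the file        */
        uint32_t total;      /* total number of blocks in the file */
        uint16_t len;        /* valid payload bytes in data[]      */
        uint32_t checksum;   /* e.g. CRC32 over seq, len and data  */
        uint8_t  data[BLOCK_SIZE];
    };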
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it is missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CAN bus network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.