TCP socket: Blocking or non-blocking? - multithreading

I am writing a PC application that will connect with TCP to multiple micro-controller boards. The micro-controller boards are listening and the PC app connecting to each of them as a client.
There can be up to about 50 boards depending on configuration, and it is not clear to me whether it is best to create all these sockets as blocking sockets, each in its own thread waiting in recv(), or to make the sockets non-blocking and use select() to check for incoming data one socket at a time.
So is up to 50 threads blocking in recv() OK, or is it better to check one socket at a time with select()?
Info: the data comes almost entirely from the micro-controller boards, and the rate can vary from essentially nothing to the maximum the network can carry.
Thanks...
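For reference, a minimal sketch of the select() variant might look like the following, assuming the sockets to the boards are already connected; board_fd[], num_boards and handle_board_data() are illustrative names, not part of the question:

    #include <sys/select.h>
    #include <unistd.h>

    /* Hypothetical: sockets already connected to the boards. */
    extern int board_fd[];          /* one connected TCP socket per board   */
    extern int num_boards;          /* up to ~50                            */
    void handle_board_data(int i);  /* application-specific processing      */

    void poll_boards(void)
    {
        for (;;) {
            fd_set readfds;
            int i, maxfd = -1;

            FD_ZERO(&readfds);
            for (i = 0; i < num_boards; i++) {
                FD_SET(board_fd[i], &readfds);
                if (board_fd[i] > maxfd)
                    maxfd = board_fd[i];
            }

            /* Block until at least one board has data (no timeout). */
            if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
                break;  /* error handling omitted */

            for (i = 0; i < num_boards; i++)
                if (FD_ISSET(board_fd[i], &readfds))
                    handle_board_data(i);   /* recv() will not block here */
        }
    }

With only about 50 sockets either approach is workable: select() keeps everything in one thread, while one blocking recv() per board keeps the per-connection code simpler.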

Related

server.listen(5) vs multithreading in socket programming

I am working on socket programming in Python. I am a bit confused about the concepts of s.listen(5) and multithreading.
As I know, s.listen(5) is used so that the server can listen to up to 5 clients.
And multithreading is also used so that the server can connect to many clients.
Please explain to me in which conditions we use multithreading?
Thanks in advance
You will need to use multithreading to handle multiple clients. When you accept a connection you receive a new socket instance that represents the connection with that new client. Now let's suppose you are writing a chat and you need to receive data from one client and send it to all connected clients. Without multithreading you would have to implement this with a poorly performing single-process loop that walks over the connected clients, reads from each one, and only then sends the data to all of them. You would also run into another problem: the listen/accept path blocks, waiting until a new client tries to connect, unless you use non-blocking sockets. It's all about architecture, performance and good practices.
For a good read about multithreading, follow this link: https://techdifferences.com/difference-between-multiprocessing-and-multithreading.html
As I know, s.listen(5) is used so that the server can listen to up to 5 clients.
No. s.listen(5) declares a backlog of size 5. That means the listening socket will let 5 connection requests sit in a pending state before they are accepted. Each time a connection request is accepted it is removed from the pending backlog, so there is no limit (other than the server's resources) on the number of accepted connections.
A common use of multithreading is to start a new thread after a connection has been accepted, to process that connection. An alternative is to use select() to process all the connections in a single thread. That used to be the rule before multithreading became common, but it can lead to more complex programs.
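The question is about Python, but the pattern described above is the same in any language. Here is a rough C sketch of accept-then-spawn-a-thread (port 5000 and the echo logic are just examples, and error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *client_thread(void *arg)
    {
        int fd = *(int *)arg;
        free(arg);
        char buf[1024];
        ssize_t n;

        /* Echo until the client closes the connection. */
        while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
            send(fd, buf, n, 0);
        close(fd);
        return NULL;
    }

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);          /* example port */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);

        /* Backlog of 5: at most 5 *pending* (not yet accepted) connections. */
        listen(srv, 5);

        for (;;) {
            int *cfd = malloc(sizeof *cfd);
            *cfd = accept(srv, NULL, NULL);   /* removes one entry from the backlog */
            pthread_t tid;
            pthread_create(&tid, NULL, client_thread, cfd);
            pthread_detach(tid);              /* one thread per accepted client */
        }
    }

In Python the equivalent is socket.accept() followed by starting a threading.Thread for the returned connection.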

How to sync the rate of communication between socket server and client in linux

I'm currently working on Linux network programming and I'm new to this. I am developing some stream-socket (TCP) based client-server applications in C for Linux.
Server- will continuously send the data
Client- will continuously receive the data
(both are running in while(1) loop)
Suppose server.c is running on system A and client.c is running on system B. The server is sending some 100 packets/sec, but due to some network issue the client is only able to receive 10 packets/sec, i.e. the producer is producing more than the receiver can consume.
Is there any packet loss, or will all packets be delivered since this is a TCP connection (reliable)?
If there is packet loss, how do I enable retransmission? Are there any flags or options?
Is there any mechanism or procedure to handle this producer-consumer problem?
How do send() and recv() work? Is there any kind of blocking?
Some help is needed!
Please.
Thanking You all
TCP has built-in flow control. You do not have to make any special arrangements at the application level. If the sender consistently transmits more data than the receiver can consume, the TCP stack will shrink the window to reduce the transfer rate. The effect is that the send() calls block for longer.
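As an illustration (not code from the question), here is roughly where that blocking shows up on the sender side, assuming fd is a connected, blocking TCP socket:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* 'fd' is a connected, blocking TCP socket (setup omitted). */
    void send_stream(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            /* If the receiver is slow, its advertised window shrinks, the local
             * socket send buffer fills up, and this send() simply blocks until
             * space is available again.  No data is lost; TCP retransmits any
             * segments dropped by the network on its own. */
            ssize_t n = send(fd, buf, len, 0);
            if (n < 0)
                break;          /* real code would check errno */
            buf += n;
            len -= n;
        }
    }

On a non-blocking socket the same condition shows up as send() failing with EAGAIN/EWOULDBLOCK instead of blocking.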

epoll issue: tunneling and multi-threading

I encounter this problem while trying to do TCP tunnelling between two threads.
Thread 1
    listen at Port
    accept
    add the socket returned by accept to epoll via epoll_ctl
    while (1)
        epoll_wait
        read whatever arrives from Port and forward it to the remote (tunnelling)
Thread 2
    connect to Port
    if connected
        communicate...
What I actually observe is that while Thread 2 is blocked on connect, Thread 1 has no chance to run epoll_wait and send the connect info to the remote, so neither thread can make progress.
One possible solution is to use parent-child processes instead of multithreading. But before I switch to that, could it still be done with multithreading? I think what is needed here is some kind of interrupt mechanism rather than just polling. Right?
Thank you for the insight.
You can also add the server-side (listening) socket descriptor to epoll with epoll_ctl. But I'm curious: if Thread 2 is blocked on connect, what information do you need to send to the server? Thanks for your hint.
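A rough sketch of that suggestion: register the listening socket with epoll as well, so the accept happens inside the epoll loop instead of blocking before it. Here listen_fd and forward_to_remote() are illustrative placeholders and error handling is omitted:

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_EVENTS 16

    /* Assumed helpers / descriptors (not from the question): */
    extern int listen_fd;               /* already bound + listening          */
    void forward_to_remote(int fd);     /* read from fd, write to the tunnel  */

    void tunnel_loop(void)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev, events[MAX_EVENTS];

        /* Register the *listening* socket itself, so epoll_wait wakes up as
         * soon as a connection request arrives - no need to block in accept(). */
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == listen_fd) {
                    /* New connection: accept it and watch it for data. */
                    int conn = accept(listen_fd, NULL, NULL);
                    ev.events = EPOLLIN;
                    ev.data.fd = conn;
                    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
                } else {
                    /* Data from an accepted connection: tunnel it onward. */
                    forward_to_remote(events[i].data.fd);
                }
            }
        }
    }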

poll system call in linux drivers

I am learning Linux internals, and I came across the poll system call. As far as I understand, it is used by drivers to provide notification when some data is ready to be read from the device, and when the device is ready to accept data to write.
If the device does not have any data to read, the process goes to sleep and is woken up when data becomes available, and vice versa for the write case.
Can someone give me a concrete understanding of the poll system call with a real example?
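For context, on the driver side this usually takes the shape of a poll method that registers the caller on the device's wait queues and returns a readiness mask. The sketch below uses the 2.6-era kernel API and entirely hypothetical mydev names:

    #include <linux/fs.h>
    #include <linux/poll.h>
    #include <linux/wait.h>

    /* Hypothetical device structure. */
    struct mydev {
        wait_queue_head_t read_q;   /* readers sleep here           */
        wait_queue_head_t write_q;  /* writers sleep here           */
        int have_data;              /* set by the interrupt/rx path */
        int have_space;             /* set when the tx buffer drains */
    };

    static unsigned int mydev_poll(struct file *filp, poll_table *wait)
    {
        struct mydev *dev = filp->private_data;
        unsigned int mask = 0;

        /* poll_wait() does not sleep; it only registers the calling process
         * on the wait queues.  The driver's read/write/IRQ paths call
         * wake_up() on those queues when the state changes, which wakes the
         * sleeping poll()/select() caller so this method is evaluated again. */
        poll_wait(filp, &dev->read_q, wait);
        poll_wait(filp, &dev->write_q, wait);

        if (dev->have_data)
            mask |= POLLIN | POLLRDNORM;    /* readable without blocking */
        if (dev->have_space)
            mask |= POLLOUT | POLLWRNORM;   /* writable without blocking */

        return mask;
    }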
The poll and select system calls (the latter is very similar to poll, with some differences) are used in the so-called asynchronous, event-driven approach to handling clients' requests.
Basically, in network programming there are two major strategies for a server to handle many connections from network clients:
1) The more traditional threaded or process-oriented approach. Here the network server has a main process which listens on one specific network port (port 80 in the case of web servers) for incoming connections, and when a connection arrives it spawns a new thread/process to handle that new connection. The Apache HTTP server took this approach.
2) The aforementioned asynchronous, event-driven approach, where (in the simplest case) the network server (for example a web server) is an application with only one process; it accepts connections (creating a socket for each new client) and then monitors those sockets with poll/select for incoming data. The Nginx HTTP server took this approach; see the sketch after this list.
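A minimal sketch of strategy 2 with poll(); listen_fd and handle_client() are illustrative placeholders and error handling is omitted:

    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_CLIENTS 64

    extern int listen_fd;        /* already bound and listening          */
    void handle_client(int fd);  /* application-specific request handler */

    void event_loop(void)
    {
        struct pollfd fds[MAX_CLIENTS + 1];
        int nfds = 1;

        fds[0].fd = listen_fd;       /* slot 0: the listening socket */
        fds[0].events = POLLIN;

        for (;;) {
            /* Single process, single thread: sleep until *any* fd is ready. */
            if (poll(fds, nfds, -1) < 0)
                break;

            if ((fds[0].revents & POLLIN) && nfds <= MAX_CLIENTS) {
                /* New client: accept and add its socket to the watched set. */
                fds[nfds].fd = accept(listen_fd, NULL, NULL);
                fds[nfds].events = POLLIN;
                nfds++;
            }

            for (int i = 1; i < nfds; i++)
                if (fds[i].revents & POLLIN)
                    handle_client(fds[i].fd);   /* data ready: serve it */
        }
    }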

Is it possible to open a serial port multiple times?

I'm designing a control system which has to handle a large number of actuators (or sets of actuators) over a serial port. The new design (not implemented yet) is supposed to control the actuators from multiple POSIX threads.
Is it possible to open a serial port multiple times (from multiple threads)?
If yes, I'm going to write a synchronous-write / asynchronous-read mechanism. There will be n threads M[0] to M[n-1] which can write data directly to the serial port. They are not supposed to read from the serial port directly. Instead, a thread R reads data from the port in a while(true) loop and serves the data to the corresponding waiting thread (waking up M[i], which is waiting for a response, when data arrives that belongs to the i-th thread).
It all depends on whether it is possible to write to the serial port from multiple threads or not.
Notes: I can't test the behaviour of the serial port, because I currently have no access to the devices in my university's mechatronics lab.
I'm using kernel 2.6.38-8 patched with the Xenomai real-time subsystem (in case it matters).
I'm porting the code to the traditional Linux way of communicating with a serial port (open /dev/ttyS0, set the baud rate, read(), write(), etc.). Currently a third-party library is used to talk to the serial port.
You can open the same serial port only once; the second attempt fails with an access-denied error. Once the port is opened, you can work with it from different threads, using the port handle. Of course, you need to synchronize port access between those threads.
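Under the design from the question, that might look roughly like the sketch below: the port is opened once, writes from the M[i] threads are serialized with a mutex, and a single reader thread owns all read() calls. The /dev/ttyS0 path is an example, termios setup and error handling are omitted, and dispatch_to_waiter() is a hypothetical demultiplexing routine:

    #include <fcntl.h>
    #include <pthread.h>
    #include <unistd.h>

    static int serial_fd;                     /* the single shared descriptor */
    static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called by the writer threads M[0]..M[n-1]: serialize access so that
     * frames from different threads cannot interleave on the wire. */
    void serial_write(const void *buf, size_t len)
    {
        pthread_mutex_lock(&write_lock);
        write(serial_fd, buf, len);           /* real code checks the result */
        pthread_mutex_unlock(&write_lock);
    }

    /* The single reader thread R: owns all read() calls on the port and
     * dispatches each response to whichever M[i] is waiting for it. */
    void *reader_thread(void *arg)
    {
        unsigned char buf[256];

        for (;;) {
            ssize_t n = read(serial_fd, buf, sizeof buf);
            if (n <= 0)
                break;
            /* dispatch_to_waiter(buf, n);  -- hypothetical demux routine */
        }
        return NULL;
    }

    int main(void)
    {
        /* Open the port once; all threads share this descriptor. */
        serial_fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        /* termios setup (baud rate etc.) omitted */

        pthread_t tid;
        pthread_create(&tid, NULL, reader_thread, NULL);
        /* start the M[0]..M[n-1] writer threads here ... */
        pthread_join(tid, NULL);
        return 0;
    }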
