Advantages of multiple UDP sockets - multithreading

Hi, this question is from a test I had recently:
(code of a server using one thread for read actions and N threads for writes, where N is the number of write actions that need to be done right now)
Will using multiple UDP sockets (one for each client) over a single one (shared by all of them) have any advantages?
The official answer:
No, because the server is using one thread for read/write per client, which won't make it more efficient (students who addressed buffer overflow got full points).
My question is: under any circumstances, will changing a single UDP socket to many have an efficiency impact?
Thanks

Maybe yes, if you are using BSD systems (OS X, FreeBSD, etc.). BSD systems do not behave well when sending a lot of UDP packets on the same socket: if the socket's send queue is full, the packets get dropped, so a write on a UDP socket never blocks on BSD systems. When you use 10,000 UDP sockets sending to the same address, no packets will be dropped on the sending host. So there is definitely an advantage, since some OSs do crazy queue management.
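For illustration, here is a minimal sketch of how a sender can cope with that queue behaviour, assuming a BSD-style stack where a full local send queue makes sendto() fail with ENOBUFS rather than block; the 1 ms back-off is an arbitrary choice for illustration.

/* Sketch: retry a UDP send when the local send queue is full (ENOBUFS),
 * as can happen on BSD-style stacks (macOS, FreeBSD). */
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Send one datagram, backing off briefly if the local send queue is full. */
static ssize_t send_datagram(int fd, const void *buf, size_t len,
                             const struct sockaddr_in *dst)
{
    for (;;) {
        ssize_t n = sendto(fd, buf, len, 0,
                           (const struct sockaddr *)dst, sizeof(*dst));
        if (n >= 0)
            return n;                /* accepted by the kernel */
        if (errno == ENOBUFS) {      /* local queue full: datagram was not sent */
            usleep(1000);            /* crude back-off before retrying */
            continue;
        }
        return -1;                   /* any other error: let the caller handle it */
    }
}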

Related

Linux raw socket (layer2) - best strategy

The context:
I am thinking about the best way to get packets from the NIC to my apps.
I have 4 processes running and receiving packets from an Ethernet NIC.
They use PF_PACKET sockets, so they receive layer-2 packets.
The problem is that they all have to filter the packets they see.
There are no race conditions, since the filtering is done by port: each app is interested in one unique port.
The question:
Is there a way to avoid having each app filter all the packets? Dedicating one core to the filtering and handing each packet over to the right app incurs context-switch costs.
Is it possible for a NIC to put the packets for a given port into a dedicated RX queue? That way my app could be sure those packets are exclusively for it.
What is the best way?
If you do not want to use BPF and libpcap, perhaps you can use Linux socket filters: https://www.kernel.org/doc/Documentation/networking/filter.txt
This will filter the packets in kernel space, before handing them to your packet sockets.
For some syntax examples, see the BSD bpf man page (https://www.freebsd.org/cgi/man.cgi?query=bpf&sektion=4) or google/duckduckgo.
But I also suggest that, if your application is performance-critical, you prototype and measure the different alternatives before discarding any one in particular (like libpcap).
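As a minimal sketch of that approach, assuming IPv4 over Ethernet: a classic BPF program (the same form `tcpdump -dd` emits) attached with SO_ATTACH_FILTER to a PF_PACKET socket, so the kernel only delivers UDP frames whose destination port matches this app. The port number 9000 is a placeholder, and the raw socket needs CAP_NET_RAW (root).

/* Kernel-side filtering of a PF_PACKET socket with a classic BPF program. */
#include <stdio.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

#define MY_PORT 9000   /* hypothetical port this process cares about */

int main(void)
{
    /* Equivalent to the tcpdump expression "ip and udp dst port 9000". */
    struct sock_filter code[] = {
        BPF_STMT(BPF_LD  | BPF_H | BPF_ABS, 12),              /* EtherType        */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 8),  /* IPv4?            */
        BPF_STMT(BPF_LD  | BPF_B | BPF_ABS, 23),              /* IP protocol      */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 17, 0, 6),        /* UDP?             */
        BPF_STMT(BPF_LD  | BPF_H | BPF_ABS, 20),              /* frag offset word */
        BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0x1fff, 4, 0),   /* skip fragments   */
        BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 14),              /* X = IP hdr len   */
        BPF_STMT(BPF_LD  | BPF_H | BPF_IND, 16),              /* UDP dest port    */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, MY_PORT, 0, 1),   /* our port?        */
        BPF_STMT(BPF_RET | BPF_K, 0xffffffff),                /* accept packet    */
        BPF_STMT(BPF_RET | BPF_K, 0),                         /* drop packet      */
    };
    struct sock_fprog prog = { .len = sizeof(code) / sizeof(code[0]),
                               .filter = code };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("SO_ATTACH_FILTER");
        return 1;
    }

    /* From here on, recvfrom() only returns frames that passed the filter. */
    unsigned char frame[2048];
    ssize_t n = recvfrom(fd, frame, sizeof(frame), 0, NULL, NULL);
    if (n > 0)
        printf("got %zd filtered bytes\n", n);

    return 0;
}

You can compare the filter against the output of tcpdump -dd for your own expression, and adjust the jump targets if you add IPv6 handling.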

One or more UDP networks realtime system

I'm developing a real-time system for elevators, and I need some advice on the implementation of the network module. The network module is based on UDP broadcasting, and there are several things it needs to do:
Spam a heartbeat, to know which elevators are alive
Broadcast new order/job
Broadcast if the elevator is doing the order/job
So far I've implemented only one UDP server, which makes the payload of each message quite large. It contains a JSON object with the following attributes:
STATUS
newOrders
takenOrders
orderQueue
My question is: should I spawn multiple threads running UDP on different ports for each elevator? If so, I could use a combination of one to send heartbeats, one to deal out orders, and one to broadcast the order queue. What are the pros and cons of running multiple UDP sockets on each client?
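To make the proposed split concrete, here is a minimal sketch of one piece of it: a dedicated thread with its own broadcast socket that only sends heartbeats, leaving orders and the order queue to other sockets and threads. The port number, message format, and 100 ms interval are made-up values for illustration; compile with -pthread.

/* Sketch: one UDP broadcast socket per message type, heartbeat shown here. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define HEARTBEAT_PORT 30000   /* hypothetical: a dedicated port per message type */

static void *heartbeat_thread(void *arg)
{
    const char *elevator_id = arg;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on)); /* allow broadcast */

    struct sockaddr_in dst = {0};
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(HEARTBEAT_PORT);
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

    char msg[64];
    for (;;) {
        /* Small, fixed-purpose payload instead of the whole JSON state blob. */
        snprintf(msg, sizeof(msg), "{\"heartbeat\":\"%s\"}", elevator_id);
        sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));
        usleep(100 * 1000);   /* 100 ms between heartbeats */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, heartbeat_thread, "elevator-1");
    pthread_join(tid, NULL);   /* in practice: spawn the order/queue senders too */
    return 0;
}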

How to sync the rate of communication between socket server and client in linux

I'm currently working on Linux network programming and I'm new to this. I am developing some stream-socket (TCP) based client-server applications in C.
Server - will continuously send the data
Client - will continuously receive the data
(both are running in a while(1) loop)
If server.c is running on system A and client.c is running on system B, the server may be sending some 100 packets/sec while, due to some network issue, the client is only able to receive 10 packets/sec, i.e. the producer is producing more than the receiver can consume.
Is there any packet loss? Or will all packets be delivered, as it is a TCP connection (reliable)?
If there is packet loss, how do I enable retransmission?
Any flags or options?
Is there any mechanism or procedure to handle this producer-consumer problem?
How do send() and recv() work? (Is there any blocking involved?)
Any help is appreciated. Thank you all.
TCP has built-in flow control; you do not have to make any special arrangements at the application level. If the sender consistently transmits more data than the receiver can consume, the receiver's advertised window shrinks and the transfer rate drops. The effect is that the send() calls block for longer.
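As a minimal sketch of what that looks like on the sender side, assuming an already-connected blocking TCP socket (the 1 kB chunk size and the progress printout are illustrative only):

/* Sketch: a blocking sender loop that is throttled by TCP flow control. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

void sender_loop(int connected_fd)
{
    char chunk[1024];
    memset(chunk, 'x', sizeof(chunk));

    for (unsigned long i = 0; ; i++) {
        /* Once the receiver's advertised window and the local send buffer are
         * full, this call simply blocks until the receiver drains some data,
         * so the loop is automatically slowed to the receiver's pace. */
        ssize_t n = send(connected_fd, chunk, sizeof(chunk), 0);
        if (n < 0) {
            perror("send");
            break;
        }
        if (i % 1000 == 0)
            printf("handed %lu chunks to the kernel so far\n", i);
    }
}

Note that a successful send() only means the data was copied into the kernel's send buffer, not that the peer has received it; reliability and retransmission are handled by the TCP stack itself.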

Epoll for many short-lived UDP requests

I want to build an application which handles many UDP exchanges with different servers simultaneously and without blocking. Each exchange should just send one short message (around 80 bytes including the UDP header) and receive one packet; nothing more than these two operations is intended. So my first goal is to handle as many requests per second as possible. I thought of using epoll() for this, but when I read that epoll is primarily meant for servers with long-lived client-server sessions, I was unsure whether it would make sense in my case. What is the best technique? (I use Linux.)
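epoll is not limited to long-lived sessions; it is just an event-notification mechanism, and it handles a batch of one-shot UDP request/response sockets fine. A minimal sketch, assuming a made-up list of server addresses and a placeholder port, with most error handling omitted:

/* Sketch: one non-blocking UDP socket per server, one request, one reply each. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define NSERVERS 3   /* hypothetical number of target servers */

int main(void)
{
    const char *servers[NSERVERS] = { "192.0.2.1", "192.0.2.2", "192.0.2.3" };
    const char request[72] = "ping";     /* ~80 bytes on the wire with the UDP header */
    int epfd = epoll_create1(0);
    int pending = 0;

    for (int i = 0; i < NSERVERS; i++) {
        int fd = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, 0);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(7000);    /* hypothetical service port */
        inet_pton(AF_INET, servers[i], &dst.sin_addr);

        /* connect() on a UDP socket just fixes the peer, so send()/recv() work
         * and stray datagrams from other hosts are filtered out. */
        connect(fd, (struct sockaddr *)&dst, sizeof(dst));
        send(fd, request, sizeof(request), 0);

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        pending++;
    }

    char reply[1500];
    struct epoll_event events[64];
    while (pending > 0) {
        int n = epoll_wait(epfd, events, 64, 1000 /* ms timeout */);
        if (n <= 0)
            break;                       /* timeout or error: give up */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            ssize_t len = recv(fd, reply, sizeof(reply), 0);
            if (len > 0)
                printf("fd %d: got %zd-byte reply\n", fd, len);
            epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);  /* one reply is enough */
            close(fd);
            pending--;
        }
    }
    close(epfd);
    return 0;
}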

Sending from the same UDP socket in multiple threads

I have multiple threads which need to send UDP packets to different IP addresses (only to send, nothing needs to be received). Can I reuse the same UDP socket in all the threads?
Yes, I think you can.
Since the packets are sent out individually, the order in which they arrive is nondeterministic, but that is already the case with UDP.
So sending from multiple threads on the same socket is fine.
However, if you're doing other things with the socket, such as bind() or close(), you could end up with race conditions, so you might want to be careful.
System calls are supposed to be atomic, so formally it seems fine for UDP. But kernels have bugs too, and you are inviting all sorts of nasty surprises. Why can't you use a socket per thread? It's not like TCP, where you need a connection. As an added bonus you'd get a separate send buffer for each descriptor.
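To illustrate the socket-per-thread alternative from that answer, a minimal sketch where each sender thread opens its own UDP socket, so nothing is shared and each descriptor has its own send buffer. The destination addresses and port are placeholders; compile with -pthread.

/* Sketch: one private UDP socket per sender thread, nothing shared. */
#include <unistd.h>
#include <pthread.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

struct target {
    const char *ip;    /* where this thread sends its datagrams */
    int port;
};

static void *sender(void *arg)
{
    const struct target *t = arg;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* private to this thread */

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(t->port);
    inet_pton(AF_INET, t->ip, &dst.sin_addr);

    const char msg[] = "hello";
    for (int i = 0; i < 10; i++)
        sendto(fd, msg, sizeof(msg), 0, (struct sockaddr *)&dst, sizeof(dst));

    close(fd);
    return NULL;
}

int main(void)
{
    /* Hypothetical destinations; one thread (and one socket) per peer. */
    struct target targets[] = { { "192.0.2.10", 5000 }, { "192.0.2.11", 5000 } };
    pthread_t tid[2];

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, sender, &targets[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}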
