Linux: UNIX domain sockets vs TCP sockets send buffer size

I am comparing how many bytes a single send() call can transfer when the socket is a TCP socket and when it is a UNIX domain socket.
For the UNIX domain socket the number is always 219264, but for TCP it is much higher. Why the difference? Both programs are executed on the same machine.
Note: the sockets are in non-blocking mode.
I checked the buffer sizes; these are the values:
unix domain socket
receive buffer size = 212992
send buffer size = 212992
TCP socket
receive buffer size = 1062000
send buffer size = 2626560
Can someone explain why there is this difference?

The TCP buffer holds packets which have been sent but not yet acknowledged by the other end, as well as packets which arrived out of order and are waiting for the delayed packets before being presented to the application. Of course, data will also stay in the buffer as long as the consuming application doesn't read() it.
Over UNIX domain sockets there are no packets waiting for an ACK and no out-of-order packets to reorder, so the buffer can be smaller.
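For reference, the values quoted in the question can be read back with getsockopt(). A minimal sketch (error handling trimmed; note that Linux reports double the value passed to setsockopt(), and that TCP buffers are auto-tuned once a connection is established, so a freshly created socket only shows the defaults):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Print the kernel's send and receive buffer sizes for a socket. */
    static void print_buffer_sizes(const char *label, int fd)
    {
        int sndbuf = 0, rcvbuf = 0;
        socklen_t len = sizeof(sndbuf);

        getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
        len = sizeof(rcvbuf);
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
        printf("%s\nreceive buffer size = %d\nsend buffer size = %d\n",
               label, rcvbuf, sndbuf);
    }

    int main(void)
    {
        print_buffer_sizes("unix domain socket", socket(AF_UNIX, SOCK_STREAM, 0));
        print_buffer_sizes("TCP socket", socket(AF_INET, SOCK_STREAM, 0));
        return 0;
    }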

Related

TCP buffer doesn't receive more data

I have a program which sends some TCP data to the kernel, and another application is listening on the port. From my program I keep sending data using the send() API after polling for space with the poll() API. There are multiple packets to be sent one by one. The TCP buffer fills up after some point and the poll times out. The data is stuck in the buffer, and the listening application does not receive any packets, although the connection is established successfully.
I have tried to increase the buffer size too, but no luck.
Can someone please help me troubleshoot this issue?
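For reference, the poll()-then-send() pattern the question describes usually looks something like the sketch below (a non-blocking socket is assumed; send_with_poll is an illustrative name). If POLLOUT keeps timing out, the peer application is not calling recv(), so its receive window and then the local send buffer fill up; enlarging SO_SNDBUF only postpones that point.

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>

    /* Send one buffer on a non-blocking socket, waiting for POLLOUT
     * (free space in the send buffer) before each send() attempt.
     * Returns 0 on success, -1 on timeout or error. */
    static int send_with_poll(int fd, const char *buf, size_t len, int timeout_ms)
    {
        size_t off = 0;

        while (off < len) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            int ready = poll(&pfd, 1, timeout_ms);

            if (ready <= 0)
                return -1;                      /* timeout or poll() error */

            ssize_t n = send(fd, buf + off, len - off, 0);
            if (n > 0)
                off += (size_t)n;
            else if (errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
                return -1;                      /* real send() error */
        }
        return 0;
    }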

How does Linux select() work?

Could someone explain to me how select() works? I have the wrong mental model and cannot understand it from the man page.
If I have a client with multiple sockets to different servers and I periodically read some info from each server, how does the kernel know which socket to mark as readable? How does it know which socket read() will not block on? I think that is not predictable if the kernel doesn't actually read data from the server.
The kernel isn't predicting anything. It's telling you the current status of the socket's receive buffer. If the buffer is not empty, the socket is readable. If the buffer is empty, select() waits. When a packet arrives from the server, the kernel matches it to the correct socket using the IP address, protocol, and port numbers. The packet is put into the socket's receive queue and select() is notified that the status has changed.
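A minimal sketch of that model, watching several connected sockets with select() (names and buffer size are illustrative):

    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Block until at least one of the given connected sockets has data
     * in its receive buffer, then read from every socket that is ready. */
    static void read_ready_sockets(const int *fds, int count)
    {
        fd_set rfds;
        int maxfd = -1;
        char buf[4096];

        FD_ZERO(&rfds);
        for (int i = 0; i < count; i++) {
            FD_SET(fds[i], &rfds);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        /* Returns when a receive queue becomes non-empty (or on error). */
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            return;

        for (int i = 0; i < count; i++) {
            if (!FD_ISSET(fds[i], &rfds))
                continue;                     /* this socket's buffer is still empty */
            ssize_t n = recv(fds[i], buf, sizeof(buf), 0);
            if (n > 0)
                printf("fd %d: read %zd bytes\n", fds[i], n);
            else if (n == 0)
                printf("fd %d: peer closed the connection\n", fds[i]);
        }
    }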

Can af_unix socket with SOCK_SEQPACKET be limited to hundreds of bytes? [duplicate]

An example client (http://pastebin.com/hAbpFPia) and server (http://pastebin.com/9pL27hkK) using SOCK_SEQPACKET indicate that the client can queue up ~42k of data.
Using setsockopt(socket, SOL_SOCKET, SO_SNDBUF,...) to set the size on the client side does limit the buffered data significantly. Is there some way I can enforce/set this limit from the server side? I tried setting SO_RCVBUF on the accept()ed socket and on the socket before calling accept(), but neither works for me.
Using AF_UNIX.
The server cannot enforce a SO_SNDBUF size on the client.
SO_SNDBUF limits the buffer that the sender's OS uses. The sending application can queue up to SO_SNDBUF bytes even if the underlying network cannot send the data right now; only when that buffer is full will a send block. The OS takes care of sending this data once the network stack is able to accept a new send.
SO_RCVBUF is the buffer that the receiver's OS uses. The network stack receives (and acknowledges to the sender) up to SO_RCVBUF bytes even if the application does not call recv() to retrieve the data. For UNIX domain sockets SO_RCVBUF does not have any effect, but SO_SNDBUF does.
As you can see, the two buffers add up, and they sit at different ends of the connection. They do not affect each other: the receiver cannot enforce an SO_SNDBUF on the sender's side, just as the sender cannot enforce an SO_RCVBUF on the receiver's side.
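In other words, the cap has to be applied on the sending (client) socket itself, e.g. something like this sketch for an AF_UNIX SOCK_SEQPACKET client (the socket path is illustrative; the kernel enforces a minimum and doubles the requested value):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_SEQPACKET, 0);
        int sndbuf = 4096;                       /* desired cap on queued data */

        /* Must be set on the sender; the server cannot impose it remotely. */
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
            perror("setsockopt(SO_SNDBUF)");

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/example.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("connect");

        /* ... send() calls now block (or fail with EWOULDBLOCK) sooner ... */
        close(fd);
        return 0;
    }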

A UDP-socket-based rateless file transmission

I'm new to socket programming and I need to implement a UDP-based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initialize the transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP) to that client. The client keeps collecting packets and trying to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to be able to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking about for the implementation. I don't have much experience with Unix network programming, though, so I'm wondering if you can help me assess whether it is workable and efficient.
I'm going to implement the server as a concurrent UDP server with two sockets and ports (similar to TFTP, as in the UNP book). One is for control messages, which in my context are the Request and Stop messages. The server maintains a flag (initially 1) for each request. When it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. That process keeps sending packets to the client as long as the flag is 1; when it turns to 0, the sending ends.
The client program is easy to do. Just send a Request, recvfrom() the server, progressively decode the file and send a Stop message in the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) if I have to use multiple processes, how can the flag bit be made known to the child process? Thanks for your comments.
Using UDP for file transfer is not the best idea. There is no way for the server or client to know if a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request retransmission of just the packets that got lost. In the end you would have written code that does what TCP sockets already do, so I suggest starting with TCP.
The typical design of a server involves a listener thread that spawns a worker thread whenever a new client request arrives. That new thread handles communication with that particular client and then exits. You should keep a limit on the number of clients (threads) served simultaneously. Do not spawn a new process for each client: that is inefficient and unnecessary, as it gets you nothing you can't achieve with threads.
Thread programming requires care, so do not cut corners; otherwise you will have a hard time finding and diagnosing problems.
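The listener-plus-worker-thread layout described above might look roughly like the following sketch (TCP, error handling and the per-client protocol omitted; the port number is illustrative; compile with -pthread):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Worker: serves one client on its own connected socket, then exits. */
    static void *handle_client(void *arg)
    {
        int fd = *(int *)arg;
        free(arg);
        /* ... application-specific send()/recv() loop goes here ... */
        close(fd);
        return NULL;
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(9000),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };

        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 16);

        for (;;) {
            int *cfd = malloc(sizeof(*cfd));
            *cfd = accept(lfd, NULL, NULL);      /* listener blocks here */
            if (*cfd < 0) { free(cfd); continue; }

            /* A real server should cap the number of concurrent workers. */
            pthread_t tid;
            pthread_create(&tid, NULL, handle_client, cfd);
            pthread_detach(tid);                 /* worker cleans up after itself */
        }
    }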
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it has missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CANBUS network where there are dozens of microcontrollers that need new images downloaded. Software upgrades take minutes instead of hours.
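The per-message struct suggested above (sequence number plus checksum) could be as simple as this sketch; the field names and block size are illustrative, and the integers should be converted to network byte order before sending:

    #include <stdint.h>

    #define BLOCK_SIZE 1024          /* payload bytes per datagram (illustrative) */

    /* One UDP datagram: fixed header followed by a block of file data.
     * seq lets the receiver detect gaps; checksum guards the data bytes. */
    struct file_block {
        uint32_t seq;                /* block number within the file */
        uint32_t total_blocks;       /* so the receiver knows when it is done */
        uint32_t length;             /* valid bytes in data[] (last block may be short) */
        uint32_t checksum;           /* e.g. a CRC32 of data[0..length-1] */
        uint8_t  data[BLOCK_SIZE];
    };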

How does buffering work in sockets on Linux?

How does buffering work with sockets on Linux?
That is, what happens if the server does not read from the socket and the client keeps sending data?
How big is the socket's buffer? And will the client know, so that it stops sending?
For a UDP socket the client will never know: the server side will just start dropping packets once its receive buffer is full.
TCP, on the other hand, implements flow control. The server's kernel will gradually shrink the advertised window, so the client will be able to send less and less data. At some point the window goes down to zero. The client then fills up its own send buffer, and send(2) either blocks or, on a non-blocking socket, fails with EAGAIN/EWOULDBLOCK.
TCP sockets use buffering in the protocol stack. The stack itself implements flow control so that if the server's buffer is full, it will stop the client stack from sending more data. Your code will see this as a blocked call to send(). The buffer size can vary widely from a few kB to several MB.
I'm assuming that you're using send() and recv() for client and server communication.
send() returns the number of bytes that were actually sent. This isn't necessarily equal to the number of bytes you asked it to send, so it's up to you to notice this and send the rest.
recv() returns the number of bytes read into your buffer. If recv() returns 0, the peer has closed the connection.
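A common way to handle such short writes is to wrap send() in a loop until the whole buffer is out, e.g. this sketch for a blocking socket (send_all is an illustrative name):

    #include <errno.h>
    #include <sys/socket.h>

    /* Send exactly len bytes, retrying after partial writes.
     * Returns 0 on success, -1 on error. */
    static int send_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t left = len;

        while (left > 0) {
            ssize_t n = send(fd, p, left, 0);
            if (n < 0) {
                if (errno == EINTR)
                    continue;        /* interrupted by a signal: just retry */
                return -1;           /* real error (EWOULDBLOCK if non-blocking) */
            }
            p += n;
            left -= (size_t)n;
        }
        return 0;
    }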
