UDP packets arriving on wrong sockets on Linux

I have two UDP sockets bound to the same address and connected to addresses A and B. I have two more UDP sockets bound to A and B and not connected.
This is what my /proc/net/udp looks like (trimmed for readability):
sl local_address rem_address
3937: 0100007F:DD9C 0300007F:9910
3937: 0100007F:DD9C 0200007F:907D
16962: 0200007F:907D 00000000:0000
19157: 0300007F:9910 00000000:0000
According to connect(2): "If the socket sockfd is of type SOCK_DGRAM, then addr is the address to which datagrams are sent by default, and the only address from which datagrams are received."
For some reason, my connected sockets are receiving packets that were destined for each other. For example: the UDP socket connected to A sends a message to A, and A sends a reply back; the UDP socket connected to B sends a message to B, and B sends a reply back. But the reply from A arrives at the socket connected to B, and the reply from B arrives at the socket connected to A.
Why on earth would this be happening? Note that it happens randomly: sometimes the replies arrive at the correct sockets and sometimes they don't. Is there any way to prevent this, or any situation in which connect() is not supposed to work as documented?
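For concreteness, here is a minimal sketch of the setup being described. The addresses and ports roughly correspond to the /proc/net/udp listing above; whether the real code used SO_REUSEADDR or SO_REUSEPORT to get both sockets bound to the same local address is an assumption on my part, since the question doesn't say.

/* Sketch: two UDP sockets bound to the same local address, each
 * connect()ed to a different peer.  Error handling omitted. */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static int make_connected_socket(const char *local_ip, uint16_t local_port,
                                 const char *peer_ip, uint16_t peer_port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int one = 1;
    struct sockaddr_in local = {0}, peer = {0};

    /* Both sockets bind the same local address, so each needs a reuse
     * option set before bind(); the original code may have used
     * SO_REUSEPORT instead, which is what the answer below suspects. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    local.sin_family = AF_INET;
    local.sin_port = htons(local_port);
    inet_pton(AF_INET, local_ip, &local.sin_addr);
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    peer.sin_family = AF_INET;
    peer.sin_port = htons(peer_port);
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));
    return fd;
}

int main(void)
{
    int sock_a = make_connected_socket("127.0.0.1", 56732, "127.0.0.3", 39184);
    int sock_b = make_connected_socket("127.0.0.1", 56732, "127.0.0.2", 36989);

    /* ... send() on sock_a / sock_b and recv() the replies ... */

    close(sock_a);
    close(sock_b);
    return 0;
}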

Ehm, as far as I can see there is no ordering guarantee.
From the man page:
SO_REUSEPORT (since Linux 3.9)
Permits multiple AF_INET or AF_INET6 sockets to be bound to an identical socket address. This option must be set on each socket (including the first socket) prior to calling bind(2) on the socket. To prevent port hijacking, all of the processes binding to the same address must have the same effective UID. This option can be employed with both TCP and UDP sockets.
For TCP sockets, this option allows accept(2) load distribution in a multi-threaded server to be improved by using a distinct listener socket for each thread. This provides improved load distribution as compared to traditional techniques such as using a single accept(2)ing thread that distributes connections, or having multiple threads that compete to accept(2) from the same socket.
For UDP sockets, the use of this option can provide better distribution of incoming datagrams to multiple processes (or threads) as compared to the traditional technique of having multiple processes compete to receive datagrams on the same socket.
So, as a client, you're using something that is mainly intended as an option for servers (or in some cases clients, but ordering can never be guaranteed, especially with UDP).
I suspect your approach is wrong, and needs a rethink =)
PS. Just had a quick glance but IMHO it's a bug in your approach
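For reference, a minimal sketch of the server-side pattern the man page is describing: each worker thread (or process) creates its own socket, sets SO_REUSEPORT before bind(), and the kernel spreads incoming datagrams across the group. The helper name and port handling are my own; error handling is omitted.

#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

int make_worker_socket(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int one = 1;
    struct sockaddr_in addr;

    /* Must be set on every socket in the group, before bind().
     * SO_REUSEPORT requires Linux >= 3.9. */
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    return fd;
}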

Related

Receiving and filtering UDP packets with the same destination port from multiple peers

I have two independent network peers sending video data over UDP to a single host both using the same destination port.
e.g.
protocol src_addr src_port dst_addr dst_port
UDP peer1 ephemeral host1 5555
UDP peer2 ephemeral host1 5555
I can have a single socket receive both streams and dispatch the video data to the appropriate decode engine based on src_addr; but what is the best way to achieve the same result without a single receive socket (which is desirable because I would prefer both decodes to be independent applications)?
It feels like it should be possible because each video data stream is uniquely identified in the table above.
The primary issue is that bind() fails when the two sockets, in the two separate receive applications, attempt to bind to the same port.
Options:
1) SO_REUSEADDR and connect()
Can I use SO_REUSEADDR to allow both sockets to bind to the same port? Will a subsequent call to connect() filter the appropriate datagrams to the appropriate application? Or will all datagrams still be delivered to the first socket?
2) SO_REUSEPORT and connect()
SO_REUSEPORT seems to be used to achieve the above for purposes such as load balancing. How is a receiving socket selected? And will it respect connect()? (A sketch of this setup appears after this list.)
A quick look at the code in the Linux kernel (net/core/sock_reuseport.c) suggests it is selected randomly unless a Berkeley packet filter is set.
3) Berkeley packet filter
Is this a sensible approach?
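As promised above, here is a rough sketch of what options 1/2 would look like: each receiving application opens its own socket, sets the reuse options before bind(), binds the shared port, and then connect()s to "its" peer. The peer port is assumed to be known; with a truly ephemeral source port you would normally learn it from an initial recvfrom() before connecting. Whether the kernel then reliably steers each peer's datagrams to the matching socket depends on how the reuseport group selects a socket, as discussed above, so treat this as a starting point rather than a guaranteed filter.

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

int open_peer_socket(uint16_t local_port, const char *peer_ip, uint16_t peer_port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int one = 1;
    struct sockaddr_in local = {0}, peer = {0};

    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); /* Linux >= 3.9 */

    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(local_port);
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    /* Only datagrams from this peer should now be delivered here. */
    peer.sin_family = AF_INET;
    peer.sin_port = htons(peer_port);
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    return fd; /* error handling omitted */
}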

TCP's socket vs Linux's TCP socket

Linux API and TCP protocol both have concepts called "socket". Are they the same concept, and does Linux's TCP socket implement TCP's socket concept?
Relation between connections and sockets:
I have heard that two connections can't share a Linux TCP socket. Is that true?
Tanenbaum's Computer Networks (5th ed. 2011, Section 6.5.2 The TCP Service Model, p. 553) says:
A socket may be used for multiple connections at the same time. In other words, two or more connections may terminate at the same socket. Connections are identified by the socket identifiers at both ends.
Since the quote says two connections can share a "socket", does the book use a different "socket" concept from Linux's TCP socket? Does the book use TCP's socket concept?
Relation between processes and sockets:
I have also heard that two processes can share a Linux TCP socket. But if two processes can share a socket, can't each process create its own connections on that socket at will, so that there are two connections on the same Linux TCP socket? Isn't that a contradiction to point 1, where two connections can't share a Linux TCP socket?
Can two processes share a TCP's socket?
The book references a more abstract concept of a socket, one that is not tied to a particular OS or even a network/transport protocol. In the book, a socket is simply a uniquely defined connection endpoint. A connection is thus a pair (S1, S2) of sockets, and this pair should be unique in some undefined context. An example specific to TCP using my connection right now would have an abstract socket consisting of an interface IP address and a TCP port number. There are many, many connections between stackoverflow users like myself and the abstract socket [443, 151.101.193.69] but only a single connection from my machine [27165, 192.168.1.231] to [443, 151.101.193.69], which is a fake example using a non-routable IP address so as to protect my privacy.
If we get even more concrete and assume that stackoverflow and my computer are both running Linux, then we can talk about the socket as defined by man 2 socket, and the Linux API that uses it. Here a socket can be created in listening mode, and this is typically called a server. This socket can be shared (shared in the sense of shared memory or state) amongst multiple processes. However, when a peer connects to this listening socket, a new socket is created (as a result of the accept() call). The original listening socket may again be used to accept() another connection. I believe that if there are multiple processes blocked on the accept() system call, then exactly one of them is unblocked and returns with the newly created connected socket.
Let me know if there is something missing here.
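A minimal sketch of that accept() behaviour (the port number is made up and error handling is omitted): the listening socket is only ever passed to accept(), and each descriptor it returns is a new socket for exactly one connection.

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    for (;;) {
        /* accept() returns a brand-new socket for exactly one connection;
         * the listener itself never carries application data. */
        int conn = accept(listener, NULL, NULL);
        /* ... read()/write() on conn ... */
        close(conn);
    }
}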
Speaking the way the docs you're reading do is convenient, but it's not really accurate.
Sockets are a general networking API. Their only relation to TCP is that you can set sockets up to use it. You can also set sockets up to talk over any other networking protocol the OS supports; and you don't necessarily have to use sockets at all, since many OSes still offer other networking APIs, some with substantial niche advantages.
The problem this leaves you with is that the nontechnical language gives you an idea of how things are put together but glosses over implementation details; you can't do any detailed reasoning from characterizations and analogies in layman's terms.
So ignore the concept you've formed of sockets. Read the actual docs, not tutorials. Write code to see if it works as you think it does. You'll learn that what you have now is a layman's understanding of "a socket", glossing over the differences between sockets you create with socket(), the ones you get from accept(), the ones you can find in Unix's filesystem, and so forth.
Even "connection" is somewhat of a simplification, even for TCP.
To give you an idea just how deep the rabbit hole goes, so is "sharing" -- you can send fd's over some kinds of sockets, and sockets are fd's, after fork() the two processes share the fd namespace, and you can dup() fd's...
A fully-set-up TCP network connection is a {host1:port1, host2:port2} pair with some tracked state at both ends and packets getting sent between those ends that update the state according to the TCP protocol i.e. rules. You can bind() a socket to a local TCP address, and connect() through that socket to remote (or local) addresses one after another, so in that sense connections can share a socket—but if you're running a server, accept()ed connections get their own dedicated socket, it's how you identify where the data you read() is coming from.
One of the common conflations is between the host:port pair a socket can be bound to and the socket itself. With a connection-based protocol like TCP you wind up with one OS socket listening for new connections plus one per connection, but they can all use the same host:port; it's easy to gloss over that reality and think of the whole thing as "a socket", and it looks like the book you're reading fell into that usage.
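One way to see this for yourself (a sketch; conn would be a descriptor returned by accept() on an AF_INET socket): every accepted socket reports the same local host:port as the listener, but a different remote endpoint.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

void show_endpoints(int conn)
{
    struct sockaddr_in local, peer;
    socklen_t len = sizeof(local);
    char lbuf[INET_ADDRSTRLEN], pbuf[INET_ADDRSTRLEN];

    /* Same local host:port as the listener ... */
    getsockname(conn, (struct sockaddr *)&local, &len);
    /* ... but a different peer for each connection: the
     * {host1:port1, host2:port2} pair does the identifying. */
    len = sizeof(peer);
    getpeername(conn, (struct sockaddr *)&peer, &len);

    printf("local %s:%u  peer %s:%u\n",
           inet_ntop(AF_INET, &local.sin_addr, lbuf, sizeof(lbuf)),
           (unsigned)ntohs(local.sin_port),
           inet_ntop(AF_INET, &peer.sin_addr, pbuf, sizeof(pbuf)),
           (unsigned)ntohs(peer.sin_port));
}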

What is the purpose of SOCK_DGRAM and SOCK_STREAM in the context AF_UNIX sockets?

I understand that SOCK_DGRAM and SOCK_STREAM correspond to connectionless and connection-oriented network communication done using the INET address family.
Now I am trying to learn AF_UNIX sockets to carry out IPC between processes running on the same host, and there I see we still need to specify the socket type as SOCK_DGRAM or SOCK_STREAM. I am not able to understand, for AF_UNIX sockets, what the purpose of specifying the socket type is.
Can anyone please help me understand the significance of SOCK_DGRAM and SOCK_STREAM in the context of AF_UNIX sockets?
It happens that TCP is both a stream protocol and connection oriented, whereas UDP is a datagram protocol and connectionless. However, it is possible to have a connection-oriented datagram protocol. That is what a block special file (or a Windows Mailslot) is.
(You can't have a connectionless stream protocol though; it doesn't make sense, unless /dev/null counts.)
The flag SOCK_DGRAM does not mean the socket is connectionless, it means that the socket is datagram oriented.
A stream-oriented socket (and a character special file like /dev/random or /dev/null) provides (or consumes, or both) a continuous sequence of bytes, with no inherent structure. Structure is provided by interpreting the contents of the stream. Generally speaking there is only one process on either end of the stream.
A datagram-oriented socket, provides (or consumes or both) short messages which are limited in size and self-contained. Generally speaking, the server can receive datagrams from multiple clients using recvfrom (which provides the caller with an address to send replies to) and replies to them with sendto specifying that address.
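To make the distinction concrete, a minimal sketch of the two types side by side in the AF_UNIX domain (the filesystem path is made up, error handling omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    /* Byte stream, connection oriented: connect()/accept(); message
     * boundaries are not preserved. */
    int stream_fd = socket(AF_UNIX, SOCK_STREAM, 0);

    /* Datagram oriented: each sendto()/recvfrom() is one self-contained
     * message, and recvfrom() tells you which client it came from. */
    int dgram_fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);
    bind(dgram_fd, (struct sockaddr *)&addr, sizeof(addr));

    /* ... */
    (void)stream_fd;
    return 0;
}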
The question also confused me for a while, but as Ben said, whether the socket type is SOCK_STREAM or SOCK_DGRAM, both give you a way to do inter-process communication between client and server; under the AF_UNIX domain it makes not one jot of difference to that basic purpose.

Why does socketpair() allow SOCK_DGRAM type?

I've been learning about Linux socket programming recently, mostly from this site.
The site says that using the domain/type combination PF_LOCAL/SOCK_DGRAM...
Provides datagram services within the local host. Note that this
service is connectionless, but reliable, with the possible exception
that packets might be lost if kernel buffers should become exhausted.
My question, then, is why does socketpair(int domain, int type, int protocol, int sv[2]) allow this combination, when according to its man page...
The socketpair() call creates an unnamed pair of connected sockets in
the specified domain, of the specified type...
Isn't there a contradiction here?
I thought SOCK_DGRAM in the PF_LOCAL and PF_INET domains implied UDP, which is a connectionless protocol, so I can't reconcile the seeming conflict with socketpair()'s claim to create connected sockets.
Datagram sockets have "pseudo-connections". The protocol doesn't really have connections, but you can still call connect(). This associates a remote address and port with the socket, and then it only receives packets that come from that source, rather than all packets whose destination is the address/port that the socket is bound to, and you can use send() rather than sendto() to send back to this remote address.
An example where this might be used is the TFTP protocol. The server initially listens for incoming requests on the well-known port. Once a transfer has started, a different port is used, and the sender and receiver can use connect() to associate a socket with that pair of ports. Then they can simply send and receive on that new socket to participate in the transfer.
Similarly, if you use socketpair() with datagram sockets, it creates a pseudo-connection between the two sockets.
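A minimal, self-contained sketch of such a pair: two unnamed, connected AF_UNIX datagram sockets, where whatever is written on one end comes out of the other as one message.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char buf[64];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) == -1)
        return 1;

    write(sv[0], "ping", 4);            /* delivered to sv[1] only */
    n = read(sv[1], buf, sizeof(buf));
    printf("got %zd bytes: %.*s\n", n, (int)n, buf);

    close(sv[0]);
    close(sv[1]);
    return 0;
}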

Linux UDP Socket: why select()?

I am new to Linux socket programming. Here I have a basic question:
for UDP, why do we need select()?
As UDP is stateless, the UDP server just handles whatever data it receives. No new socket is created when a new client sends data, right?
If so, select() will return/notify as soon as that socket has data. So we don't need to go through all the sockets to check which one is being notified (as there will be only one socket).
Is this true: non-blocking UDP socket + select() == blocking UDP socket?
Thanks!
The main benefit of select() is to be able to wait for input on multiple descriptors at once. So when you have multiple UDP sockets open, you put them all into the fd_set, call select(), and it will return when a packet is received on any of them. And it returns an fd_set that indicates which ones have data available. You can also use it to wait for data from the network while also waiting for input from the user's terminal. Or you can handle both UDP and TCP connections in a single server (e.g. DNS servers can be accessed using either TCP or UDP).
If you don't use select(), you would have to write a loop that continuously performs a non-blocking read on each socket. This is not as efficient, since it will spend lots of time performing unnecessary system calls (imagine a server that only gets one request a day, yet is continually calling recv() all day).
Your question seems to assume that the server can work with just one UDP socket. However, if the server has multiple IP addresses, it may need multiple sockets. UDP clients generally expect the response to come from the same IP they sent the request to. The standard socket API doesn't provide a way to know which IP the request was sent to, or to set the source address of the outgoing reply. So the common way to implement this is to open a separate socket bound to each IP, and use select() or epoll() to wait for a request on all of them concurrently. Then you send the reply through the same socket that the request was received on, and it will use that socket's bound IP as the source.
(Linux has socket extensions that make this unnecessary, see Setting the source IP for a UDP socket.)
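As an illustration, here is a rough echo-style loop over several UDP sockets that have already been created and bound (how they are created is left out; this is a sketch, not a complete server, and error handling is minimal):

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

void serve(int socks[], int nsocks)
{
    for (;;) {
        fd_set rfds;
        int i, maxfd = -1;

        /* Wait on all sockets at once. */
        FD_ZERO(&rfds);
        for (i = 0; i < nsocks; i++) {
            FD_SET(socks[i], &rfds);
            if (socks[i] > maxfd)
                maxfd = socks[i];
        }

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
            continue;   /* real code would check errno here */

        for (i = 0; i < nsocks; i++) {
            if (!FD_ISSET(socks[i], &rfds))
                continue;
            char buf[1500];
            struct sockaddr_storage from;
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(socks[i], buf, sizeof(buf), 0,
                                 (struct sockaddr *)&from, &fromlen);
            /* Reply through the socket the request arrived on, so the
             * reply's source IP matches what the client expects. */
            if (n > 0)
                sendto(socks[i], buf, n, 0,
                       (struct sockaddr *)&from, fromlen);
        }
    }
}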
