TCP's socket vs Linux's TCP socket

Linux API and TCP protocol both have concepts called "socket". Are they the same concept, and does Linux's TCP socket implement TCP's socket concept?
1. Relation between connections and sockets:
I have heard that two connections can't share a Linux TCP socket. Is that true?
Tanenbaum's Computer Networks (5th ed., 2011, Section 6.5.2 "The TCP Service Model", p. 553) says:
A socket may be used for multiple connections at the same time. In other words, two or more connections may terminate at the same socket. Connections are identified by the socket identifiers at both ends.
Since the quote says two connections can share a "socket", does the book use a different "socket" concept from Linux's TCP socket? Does the book use TCP's socket concept?
2. Relation between processes and sockets:
I have also heard that two processes can share a Linux TCP socket. But if two processes can share a socket, can't each process create its own connection on that socket at will, giving two connections on the same Linux TCP socket? Doesn't that contradict point 1, where two connections can't share a Linux TCP socket?
3. Can two processes share a TCP's socket?

The book references a more abstract concept of a socket, one that is not tied to a particular OS or even to a network/transport protocol. In the book, a socket is simply a uniquely defined connection endpoint, and a connection is a pair (S1, S2) of sockets; this pair should be unique in some (undefined) context. An example specific to TCP, using my connection right now: the abstract socket consists of an interface IP address and a TCP port number. There are many, many connections between stackoverflow users like myself and the abstract socket [443, 151.101.193.69], but only a single connection from my machine [27165, 192.168.1.231] to [443, 151.101.193.69] (a fake example using a non-routable IP address, so as to protect my privacy).
If we get even more concrete and assume that stackoverflow and my computer are both running Linux, then we can talk about the socket as defined by man 2 socket, and the Linux API that uses it. Here a socket can be created in listening mode, and this is typically called a server. This socket can be shared (shared in the sense of shared memory or state) amongst multiple processes. However, when a peer connects to this listening socket, a new socket is created (as a result of the accept() call). The original listening socket may again be used to accept() another connection. I believe that if there are multiple processes blocked on the accept() system call, then exactly one of them is unblocked and returns with the newly created connected socket.
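To make that concrete, here is a minimal sketch of the pattern (error handling omitted; port 5000 is arbitrary): the fd returned by socket() only ever listens, and each accept() hands back a brand-new socket for one connection.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);   /* the listening socket */

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                 /* arbitrary port */

        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (;;) {
            /* accept() creates a NEW socket for each incoming connection;
               lfd itself never carries connection data. */
            int cfd = accept(lfd, NULL, NULL);
            /* ... read()/write() on cfd ... */
            close(cfd);
        }
    }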
Let me know if there is something missing here.

Speaking as the docs you're reading do is convenient, but it's not really accurate.
Sockets are a general networking API. Their only relation to TCP is that you can set sockets up to use it. You can also set sockets up to talk over any other networking protocol the OS supports; and you don't necessarily have to use sockets at all: many OSes still offer other networking APIs, some with substantial niche advantages.
The problem this leaves you with is that nontechnical language gives you an idea of how things are put together but glosses over implementation details; you can't do any detailed reasoning from characterizations and analogies in layman's terms.
So ignore the concept you've formed of sockets. Read the actual docs, not tutorials. Write code to see if it works as you think it does. You'll learn that what you have now is a layman's understanding of "a socket", glossing over the differences between sockets you create with socket(), the ones you get from accept(), the ones you can find in Unix's filesystem, and so forth.
Even "connection" is somewhat of a simplification, even for TCP.
To give you an idea just how deep the rabbit hole goes, so is "sharing" -- you can send fd's over some kinds of sockets, sockets are fd's, after fork() the two processes hold copies of the same fd's, and you can dup() fd's...
A fully-set-up TCP network connection is a {host1:port1, host2:port2} pair with some tracked state at both ends and packets getting sent between those ends that update the state according to the TCP protocol, i.e. its rules. You can bind() sockets to the same local TCP address and connect() through them to remote (or local) addresses one after another, so in that sense connections can share a local address -- but if you're running a server, accept()ed connections get their own dedicated socket; that's how you identify where the data you read() is coming from.
One of the common conflations is between the host:port pair a socket can be bound to and the socket itself. With a connection-based protocol like TCP you wind up with one OS socket listening for new connections plus one per established connection, but they can all use the same host:port. It's easy to gloss over that reality and think of the whole thing as "a socket", and it looks like the book you're reading fell into that usage.
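One way to see the distinction for yourself -- a sketch only, with an arbitrary port and no error handling; run it and connect with e.g. nc localhost 5000. Both fds report the same local host:port via getsockname(), yet they are different sockets:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void print_local(const char *label, int fd)
    {
        struct sockaddr_in a;
        socklen_t len = sizeof a;
        getsockname(fd, (struct sockaddr *)&a, &len);
        printf("%s %s:%u\n", label, inet_ntoa(a.sin_addr), ntohs(a.sin_port));
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 1);

        int cfd = accept(lfd, NULL, NULL);        /* a distinct fd... */
        print_local("listening socket:", lfd);
        print_local("accepted socket: ", cfd);    /* ...same local host:port */
        close(cfd);
        close(lfd);
    }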

Related

UDP packets arriving on wrong sockets on Linux

I have two UDP sockets bound to the same address and connected to addresses A and B. I have two more UDP sockets bound to A and B and not connected.
This is what my /proc/net/udp looks like (trimmed for readability):
sl     local_address  rem_address
3937:  0100007F:DD9C  0300007F:9910
3937:  0100007F:DD9C  0200007F:907D
16962: 0200007F:907D  00000000:0000
19157: 0300007F:9910  00000000:0000
According to connect(2): "If the socket sockfd is of type SOCK_DGRAM, then addr is the address to which datagrams are sent by default, and the only address from which datagrams are received."
For some reason, my connected sockets are receiving packets that were destined for each other. E.g.: the UDP socket connected to A sends a message to A, and A sends a reply back; the UDP socket connected to B sends a message to B, and B sends a reply back. But the reply from A arrives at the socket connected to B, and the reply from B arrives at the socket connected to A.
Why on earth would this be happening? Note that it happens randomly - sometimes the replies arrive at the correct sockets and sometimes they don't. Is there any way to prevent this or any situation under which connect is supposed to not work?
Ehm, as far as I can see there is no ordering guarantee.
From the man page:
SO_REUSEPORT (since Linux 3.9)
Permits multiple AF_INET or AF_INET6 sockets to be bound to an identical socket address. This option must be set on each socket (including the first socket) prior to calling bind(2) on the socket. To prevent port hijacking, all of the processes binding to the same address must have the same effective UID. This option can be employed with both TCP and UDP sockets.
For TCP sockets, this option allows accept(2) load distribution in a multi-threaded server to be improved by using a distinct listener socket for each thread. This provides improved load distribution as compared to traditional techniques such as using a single accept(2)ing thread that distributes connections, or having multiple threads that compete to accept(2) from the same socket.
For UDP sockets, the use of this option can provide better distribution of incoming datagrams to multiple processes (or threads) as compared to the traditional technique of having multiple processes compete to receive datagrams on the same socket.
So you're using, as a client, something that is mainly intended as an option for servers (or in some cases clients, but ordering can never be guaranteed, especially with UDP).
I suspect your approach is wrong, and needs a rethink =)
PS. Just had a quick glance but IMHO it's a bug in your approach
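For reference, the call sequence the man page describes looks roughly like this -- a sketch only, with a made-up loopback address and port, showing that the option must be set between socket(2) and bind(2) on every socket sharing the address:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_reuseport_udp(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        int one = 1;   /* must happen before bind(), on every such socket */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof one);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9910);   /* arbitrary port for illustration */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        return fd;
    }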

What is the purpose of SOCK_DGRAM and SOCK_STREAM in the context AF_UNIX sockets?

I understand that SOCK_DGRAM and SOCK_STREAM correspond to connectionless and connection-oriented network communication done using the INET address family.
Now I am trying to learn AF_UNIX sockets to carry out IPC between processes running on the same host, and there I see we still need to specify the socket type as SOCK_DGRAM or SOCK_STREAM. I am not able to understand, for AF_UNIX sockets, what the purpose of specifying the socket type is.
Can anyone please help me understand the significance of SOCK_DGRAM and SOCK_STREAM in the context of AF_UNIX sockets?
It happens that TCP is both a stream protocol and connection-oriented, whereas UDP is a datagram protocol and connectionless. However, it is possible to have a connection-oriented datagram protocol; that is what a block special file (or a Windows Mailslot) is.
(You can't have a connectionless stream protocol though, it doesn't make sense, unless /dev/null counts)
The flag SOCK_DGRAM does not mean the socket is connectionless, it means that the socket is datagram oriented.
A stream-oriented socket (and a character special file like /dev/random or /dev/null) provides (or consumes, or both) a continuous sequence of bytes, with no inherent structure. Structure is provided by interpreting the contents of the stream. Generally speaking there is only one process on either end of the stream.
A datagram-oriented socket provides (or consumes, or both) short messages which are limited in size and self-contained. Generally speaking, the server can receive datagrams from multiple clients using recvfrom() (which provides the caller with an address to send replies to) and reply to them with sendto(), specifying that address.
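A minimal sketch of such a datagram-oriented AF_UNIX server (the path /tmp/example.sock is made up for illustration, and error handling is omitted):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/example.sock", sizeof addr.sun_path - 1);
        unlink(addr.sun_path);               /* remove a stale name, if any */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        char buf[512];
        struct sockaddr_un peer;
        socklen_t plen = sizeof peer;
        /* One recvfrom() returns one whole datagram plus the sender's
           address, so a single socket can serve many clients. */
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                             (struct sockaddr *)&peer, &plen);
        if (n >= 0)   /* echo the datagram back to whoever sent it */
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, plen);
        close(fd);
    }

(Note that for the reply to be deliverable, the client must bind() a path of its own; an unnamed AF_UNIX datagram socket has no address to send back to.)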
The question also confused me for a while, but as Ben said, whether the socket type is SOCK_STREAM or SOCK_DGRAM, both give you a way to do inter-process communication between client and server; for that basic purpose, under the AF_UNIX domain, it makes not one jot of difference.

Why does socketpair() allow SOCK_DGRAM type?

I've been learning about Linux socket programming recently, mostly from this site.
The site says that using the domain/type combination PF_LOCAL/SOCK_DGRAM...
Provides datagram services within the local host. Note that this service is connectionless, but reliable, with the possible exception that packets might be lost if kernel buffers should become exhausted.
My question, then, is why does socketpair(int domain, int type, int protocol, int sv[2]) allow this combination, when according to its man page...
The socketpair() call creates an unnamed pair of connected sockets in the specified domain, of the specified type...
Isn't there a contradiction here?
I thought SOCK_DGRAM in the PF_LOCAL and PF_INET domains implied UDP, which is a connectionless protocol, so I can't reconcile the seeming conflict with socketpair()'s claim to create connected sockets.
Datagram sockets have "pseudo-connections". The protocol doesn't really have connections, but you can still call connect(). This associates a remote address and port with the socket, and then it only receives packets that come from that source, rather than all packets whose destination is the address/port that the socket is bound to, and you can use send() rather than sendto() to send back to this remote address.
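A sketch of such a pseudo-connection on a UDP socket (127.0.0.1:9910 is an arbitrary peer address, and error handling is omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof peer);
        peer.sin_family = AF_INET;
        peer.sin_port = htons(9910);
        inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

        /* No packets are exchanged here -- connect() on a datagram
           socket just records the default peer address. */
        connect(fd, (struct sockaddr *)&peer, sizeof peer);

        send(fd, "ping", 4, 0);        /* goes to 127.0.0.1:9910 */
        char buf[512];
        recv(fd, buf, sizeof buf, 0);  /* delivers only datagrams from that peer */
        close(fd);
    }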
An example where this might be used is the TFTP protocol. The server initially listens for incoming requests on the well-known port. Once a transfer has started, a different port is used, and the sender and receiver can use connect() to associate a socket with that pair of ports. Then they can simply send and receive on that new socket to participate in the transfer.
Similarly, if you use socketpair() with datagram sockets, it creates a pseudo-connection between the two sockets.
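For illustration, a minimal sketch of socketpair() with datagram sockets; the two fds come back already connected to each other, with no name and no handshake (a typical use is parent/child communication after fork()):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) == -1)
            return 1;

        /* The two ends are pre-connected: plain write()/read() work,
           and message boundaries are preserved. */
        write(sv[0], "hello", 5);

        char buf[16];
        ssize_t n = read(sv[1], buf, sizeof buf);
        printf("got %zd bytes: %.*s\n", n, (int)n, buf);

        close(sv[0]);
        close(sv[1]);
    }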

Is there only one Unix Domain Socket in the communication between two processes?

There are two kinds of sockets: network sockets and Unix Domain Sockets.
When two processes communicate using network sockets, each process creates its own socket, and the processes communicate over a connection between their sockets. There are two sockets, each belonging to a different process and serving as that process's connection endpoint.
When two processes communicate using Unix Domain sockets, a Unix Domain socket is addressed by a filename in the filesystem.
Does that imply the two processes communicate by only one Unix Domain socket, instead of two?
Does the Unix Domain socket not belong to any process, i.e. is the Unix domain socket not a connection endpoint of any process, but somehow like a "middle point" between the two processes?
There are two sockets, one on each end of the connection. Each of them, independently, may or may not have a name in the filesystem.
The thing you see when you ls -l that starts with srwx is not really "the socket". It's a name that is bound to a socket (or was bound to a socket in the past - they don't automatically get removed when they're dead).
An analogy: think about TCP sockets. Most of them involve an endpoint with a well-known port number (22 SSH; 25 SMTP; 80 HTTP; etc.) A server creates a socket and binds to the well-known port. A client creates a socket and connects to the well-known port. The client socket also has a port number, which you can see in a packet trace (tcpdump/wireshark), but it's not a fixed number, it's just some number that was automatically chosen by the client's kernel because it wasn't already in use.
In unix domain sockets, the pathname is like the port number. If you want clients to be able to find your server socket, you need to bind it to a well-known name, like /dev/log or /tmp/.X11-unix/X0. But the client doesn't need to have a well-known name, so normally it doesn't do a bind(). Therefore the name /tmp/.X11-unix/X0 is only associated with the server socket. You can confirm this with netstat -x. About half the sockets listed will have pathnames, and the other half won't. Or write your own client/server pair, and call getsockname() on the client. Its name will be empty, while getsockname() on the server gives the pathname.
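A sketch of that experiment (the server path /tmp/example.sock is made up, and error handling is omitted): the client never calls bind(), and getsockname() shows it has no pathname of its own.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        struct sockaddr_un srv;
        memset(&srv, 0, sizeof srv);
        srv.sun_family = AF_UNIX;
        strncpy(srv.sun_path, "/tmp/example.sock", sizeof srv.sun_path - 1);
        connect(fd, (struct sockaddr *)&srv, sizeof srv);  /* no bind() first */

        struct sockaddr_un me;
        socklen_t len = sizeof me;
        getsockname(fd, (struct sockaddr *)&me, &len);
        /* On Linux the returned length is just sizeof(sa_family_t):
           the connected client socket is unnamed. */
        printf("local name length: %u\n", (unsigned)len);
        close(fd);
    }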
The ephemeral port number automatically assigned to a TCP client has no counterpart in unix domain socket addresses. In TCP it's necessary to have a local port number so incoming packets can be matched to the correct socket. Unix domain sockets are linked directly in their kernel data structures, so there's no need. A client can be connected to a server and have no name.
And then there's socketpair() which creates 2 unix domain sockets connected to each other, without giving a name to either of them.
(Not mentioned here, and not really interesting: the "abstract" namespace.)

Binding multiple times to the same port

Is there a way to bind multiple listening TCP sockets on the same {IP, port}? I know I can just open a socket, bind, fork, and then listen in each of the processes. But I'd like to do the same with separate processes that cannot fork after binding. Is there some way to allow this and not get the "Address already in use" error?
The only option I need is automatic load-balancing of the connections.
It looks like it makes sense to introduce a separate process that would listen on the port and act as a load balancing proxy forwarding the traffic to a pool of backend processes, either over the loopback interface or Unix sockets. If you're dealing with HTTP you could use one of the existing HTTP reverse proxies, like pound or nginx.
You could do something similar and pass the socket fd through a Unix domain socket, as suggested here.
Unfortunately, I don't believe this is possible.
Only one process can bind a TCP socket to a given port and IP address (even if it's INADDR_ANY); a second bind would be a completely duplicate binding. The only exception to this is the bind(2)/fork(2) dance, as you already mentioned.
That said, if you have multiple network interfaces on the machine (or set up IP aliases on a single interface), you can bind one socket to each IP address with the same port. Just remember to set the SO_REUSEADDR socket option between the socket(2) and bind(2) calls.
Load balancing could be done in multiple ways:
- do it on a firewall, mapping source IPs to a pool of machines/ports,
- proxy/pre-process in one process, do the real work in a pool of processes,
- use file descriptor passing, as @Hasturkun suggests.
appears "newly possible" for linux with kernel 3.9 http://freeprogrammersblog.vhex.net/post/linux-39-introdued-new-way-of-writing-socket-servers/2
by using SO_REUSEPORT
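As a sketch of what that looks like (port 8080 is arbitrary, error handling omitted), each cooperating process would run something like this independently:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_listener(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        int one = 1;   /* every process must set this before bind() */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof one);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 128);
        return fd;   /* each process accept()s on its own listener */
    }

The kernel then distributes incoming connections across the listening sockets, which gives you the automatic load balancing you asked for.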
For a highly expert variation on your task, on Linux you can map incoming packets into your process using netfilter's queue module. Then you can drop/modify them:
prog1: checks if packet came from IP xxx.xxx.xxx. Match? Catch packet.
prog2: checks if prog3 busy. Match? Catch packet.
prog3: checks packet's origin...
However, this results in ~20 KB of code just to handle packets, and your TCP/IP stack won't survive it long (big traffic == big mistake).
On Windows, you can achieve the same with a Winsock driver.
A less esoteric solution: just create one dispatcher process to fork the others. Look at nginx, or apache2, or cometd, or any of Perl's asynchronous TCP modules to catch the idea.
