AFAIK, there are two ways to do IPC over sockets: UNIX domain sockets and TCP/IP sockets.
UNIX domain sockets know that both endpoints are on the same system, so they can avoid some checks and operations (like routing), which makes them faster and lighter than IP sockets. They also transfer the packets over the file system, meaning disk access is a natural part of the process (AFAIU, that's what using the file system implies).
IP sockets (especially TCP/IP sockets) are a mechanism allowing communication between processes over the network. In some cases, you can use TCP/IP sockets to talk with processes running on the same computer (by using the loopback interface).
My question is: in the latter case, where does the transfer of packets occur exactly? If they are passed through memory, then despite the apparent logical overhead, IP sockets would actually be more performant than UNIX sockets.
Is there something that I am missing? I understand that logically IP sockets introduce overhead; I want to understand what happens to a message in both cases.
UNIX domain sockets ... They also transfer the packets over the file system, meaning disk access is a natural part of the process
This is wrong. While there is a special socket file in the file system, it only serves to name the socket and regulate access to it via its file system permissions. The data transfer itself happens purely in memory.
IP sockets ... where does the transfer of packets occur exactly?
Also in memory.
Unix variants map a lot of things over the filesystem that have absolutely no relation to actual disk drives.
What you're describing happens in memory in both cases; only the amount of layering and overhead differs. Unix sockets bypass the network stack entirely, while IP sockets go through the full network stack, even on loopback.
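You can see this for yourself by binding a UNIX domain socket to a path: the directory entry only names the socket and carries its permissions, while the bytes themselves never touch the disk. A minimal sketch in C (the path /tmp/demo.sock is just an example, error handling omitted):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);

        unlink(addr.sun_path);                       /* remove a stale socket file */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, 1);
        /* "ls -l /tmp/demo.sock" now shows an entry of type 's' (a socket).
         * Its permissions decide who may connect(); the bytes exchanged on an
         * accepted connection are copied between kernel buffers in memory. */

        int client = accept(fd, NULL, NULL);
        char buf[128];
        ssize_t n = read(client, buf, sizeof(buf));
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        close(client);
        close(fd);
        unlink(addr.sun_path);
        return 0;
    }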
I realize it's a bad idea to rely on UDP to provide any sort of ordering guarantees, and that TCP should be used instead. Please refrain from answers suggesting I use TCP.
I am debugging some legacy networking code and was wondering what ordering guarantees the socket interface provides. The setup I have consists of a Linux box running Debian talking to an embedded device over a direct Ethernet cable. The embedded device cannot fit an entire TCP stack, and even if it could, the legacy stack is too old to refactor.
Specifically, if I have a NIC configured with the default single pfifo_fast qdisc, and I am sending packets over a single socket, using a single thread, and they all have the same ToS, am I guaranteed, under these conditions, that all my packets will be sent over the wire in the order I send them?
This is the behavior I observe in practice, but I haven't found any standard, POSIX or otherwise, that guarantees it, and I would like to make sure this is in fact supported behavior under the environment and assumptions listed above.
In contrast, if I send my packets over two separate sockets I observe that they are not sent over the NIC in the same order I sent them from the application code. It's clear that the kernel is reserving the right to re-order packets sent over separate sockets, but refrains from doing so when using a single socket.
As I understand it, calling send() on a socket places the packet into the appropriate queue for the NIC synchronously. The existence of a queue suggests to me a notion of order (otherwise a list would be a more appropriate name for the data structure). Whether or not such ordering guarantees exist, I am interested in some sort of documentation or specification stating so.
There's no guarantee of this behavior because in the general case it would be irrelevant, as user207421 mentioned. POSIX, or even Linux for that matter, wouldn't guarantee that behavior because doing so would necessarily constrain the implementation for an extremely uncommon case. Reordering packets for various reasons is common, and allowing Linux to do that for performance or other reasons (e.g. packet filtering or QoS) improves throughput.
Even if the sender did guarantee this behavior, the receiver could still experience an overfull buffer or temporary network hardware issue that would prevent the packet from being received properly, so any guarantee on the sending side would still be meaningless. You just can't rely on in-order delivery of UDP packets without a higher-level protocol on top, no matter what.
If you need in-order retrieval and retries but can't use TCP, look at QUIC for an example of how to do it. You may (or may not) want to implement the crypto portion, but the protocol layering may be helpful.
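If you only need to detect reordering (or tolerate it) without a full TCP/QUIC stack, the usual minimal building block is an application-level sequence number prefixed to every datagram. A rough sketch of the sending side in C (the address 192.168.1.50 and port 5000 are placeholders):

    /* Prepend an application-level sequence number to each UDP datagram
     * so the receiver can detect reordering or loss. */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(5000) };   /* placeholder port */
        inet_pton(AF_INET, "192.168.1.50", &dst.sin_addr);       /* placeholder address */

        const char *payload = "hello";
        for (uint32_t seq = 0; seq < 10; seq++) {
            unsigned char pkt[4 + 5];
            uint32_t be_seq = htonl(seq);              /* network byte order */
            memcpy(pkt, &be_seq, 4);
            memcpy(pkt + 4, payload, 5);
            sendto(fd, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&dst, sizeof(dst));
            /* The receiver compares the 4-byte prefix against the last
             * sequence number it saw and decides how to handle gaps. */
        }
        close(fd);
        return 0;
    }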
I'm breaking a big application into several processes, and I want the processes to communicate with each other.
For now everything will run on the same server, but later several servers on the same local network will each run processes that need to communicate with one another (i.e. a service on one server talking to a service on another server in the same VPC).
So my raw options are TCP or Unix sockets. I know that Unix sockets are only useful when both processes are on the same server, but we're thinking about writing our own layer in which processes on the same server communicate over Unix sockets, while processes on different servers communicate over TCP.
Is it worth it? Of course TCP sockets are slower than Unix sockets, since Unix sockets don't go through the network stack and don't get wrapped with TCP-related data. The question is: by how much? I couldn't find any published benchmarks comparing TCP and Unix sockets. If TCP adds 3%-5% overhead, that's fine, but can it be more than that? I'd like to learn from the experience of big projects and other people over the years, but I haven't found anything relevant.
Next: our project is a Node.js project.
Some people may say I could use a message broker, so I tried nats.io and compared it with node-ipc (https://www.npmjs.com/package/node-ipc), and I found that node-ipc is 4 times faster, but NATS has the nice publish-subscribe feature... and performance is important.
So I have tons of options and no concrete decision.
Any information regarding the issue would be greatly appreciated.
The question is actually too broad to answer, but here is one answer for TCP vs. Unix domain sockets:
Architect your code so that you can easily move between the two if necessary. The programming model is basically the same for both (a bidirectional stream of data), and the read/write APIs at the OS level, as well as in most frameworks, are the same. In Node, for example, both inherit from the Readable/Writable stream interfaces. That means the only code you need to change when switching between them is the listener on the server side, where you call the TCP accept APIs instead of the Unix domain socket accept APIs or vice versa. You can even have your application accept both types of connections and handle them identically from then on.
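At the OS level that symmetry looks roughly like the following C sketch (the socket path /tmp/app.sock and port 9000 are just examples): two listeners, one accept loop, and a single handler that never cares which transport the connection came from.

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Identical handling regardless of the transport the connection used. */
    static void handle(int conn) {
        char buf[256];
        ssize_t n;
        while ((n = read(conn, buf, sizeof(buf))) > 0)
            write(conn, buf, (size_t)n);             /* trivial echo */
        close(conn);
    }

    static int listen_unix(const char *path) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un a = { .sun_family = AF_UNIX };
        strncpy(a.sun_path, path, sizeof(a.sun_path) - 1);
        unlink(path);
        bind(fd, (struct sockaddr *)&a, sizeof(a));
        listen(fd, 16);
        return fd;
    }

    static int listen_tcp(unsigned short port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_addr.s_addr = htonl(INADDR_ANY),
                                 .sin_port = htons(port) };
        bind(fd, (struct sockaddr *)&a, sizeof(a));
        listen(fd, 16);
        return fd;
    }

    int main(void) {
        int uds = listen_unix("/tmp/app.sock");      /* example path */
        int tcp = listen_tcp(9000);                  /* example port */

        for (;;) {                                   /* trivial accept loop */
            fd_set rd;
            FD_ZERO(&rd);
            FD_SET(uds, &rd);
            FD_SET(tcp, &rd);
            select((uds > tcp ? uds : tcp) + 1, &rd, NULL, NULL, NULL);
            if (FD_ISSET(uds, &rd)) handle(accept(uds, NULL, NULL));
            if (FD_ISSET(tcp, &rd)) handle(accept(tcp, NULL, NULL));
        }
    }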
TCP support is always nice because it gives you some flexibility. In my last measurements the overhead of TCP over loopback was a little higher (I think around 30%), but these are all micro-benchmarks and it won't matter for most applications. Unix domain sockets might have an advantage if you require some of their special features, e.g. the ability to send file descriptors across them.
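For reference, that feature works via SCM_RIGHTS ancillary data on the UNIX domain socket; a rough C sketch of the sending side (send_fd is just an illustrative helper name):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an already-open file descriptor across a connected UNIX domain
     * socket; the receiver gets its own duplicate of the descriptor. */
    static int send_fd(int sock, int fd_to_send) {
        char dummy = 'x';                            /* at least one real byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;                    /* ensures proper alignment */
        } ctrl;
        memset(&ctrl, 0, sizeof(ctrl));

        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;               /* ancillary data holds fds */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return (int)sendmsg(sock, &msg, 0);
    }

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        send_fd(sv[0], 1 /* e.g. stdout */);         /* receiver reads it back with recvmsg() */
        return 0;
    }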
And regarding TCP vs NATS & Co:
If you are not that experienced with network programming and protocol design, it makes sense to use ready-made IPC systems. That could be anything from HTTP to gRPC to Thrift. These are all point-to-point systems. NATS is different, since it's a message broker, not RPC. It also requires an extra component in the middle. Whether that makes sense totally depends on the application.
I am trying to study all the different ways a process on a Linux machine can establish IPC with a second process (not a child) on the same machine. I did find that a socket can be used, given that I know the file system path the second process is listening on.
Is IPC with a second process possible in other ways? I don't want the first process to know the pid/uid of the second process. The scenario is closer to communicating with an untrusted process, written by a different author, on the same machine, while still knowing some information such as where the second process's socket is listening.
Shared memory and sockets can be used for IPC between processes that are not related to each other. Pipes can be used for IPC between parent and child processes.
Shared memory is the fastest form of interprocess communication. In all other methods, the system calls copy the data from the memory area of one process to the other. The drawback of shared memory is that you need to implement synchronization to avoid race conditions.
The sockets interface enables connection-oriented communication between processes across the network as well as locally. UNIX domain sockets provide local IPC using a known file path.
Possible ways:
Shared memory (shmget, shmctl, shmat, shmdt)
FIFO queues aka named pipes (mkfifo)
Message queues (msgget, msgsnd, msgrcv, msgctl)
Semaphores - for synchronization (semget, semctl, semop)
Hint:
Useful commands: ipcs, ipcmk, ipcrm
Also, you can use the POSIX mmap call with the MAP_SHARED flag. Here https://www.cs.purdue.edu/homes/fahmy/cs503/mmap.txt you can find an example of using it.
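For unrelated processes, mmap is usually combined with shm_open so both sides can open the same region by name; a rough C sketch (the name /demo_shm is arbitrary, synchronization and error handling omitted, link with -lrt on older glibc):

    /* Share a memory region between unrelated processes using
     * shm_open() + mmap(MAP_SHARED). Run once with an argument to write,
     * then once without to read. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_NAME "/demo_shm"
    #define SHM_SIZE 4096

    int main(int argc, char **argv) {
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, SHM_SIZE);                       /* size the region */

        char *mem = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);           /* visible to every mapper */

        if (argc > 1) {                                /* writer: ./a.out "some text" */
            strncpy(mem, argv[1], SHM_SIZE - 1);
        } else {                                       /* reader: ./a.out */
            printf("shared memory contains: %s\n", mem);
        }
        /* Real code needs synchronization (e.g. a POSIX semaphore) so the
         * reader doesn't race the writer; shm_unlink(SHM_NAME) removes the
         * name when you're done. */
        munmap(mem, SHM_SIZE);
        close(fd);
        return 0;
    }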
My goal is to monitor sockets and relate them to the applications that created them.
I am aware of netstat, ss, lsof and so on, and that they can list all sockets together with the application that owns them.
I also know that I can parse /proc/net/tcp to get the sockets and relate them to applications via /proc/(PID), which is exactly what these tools do (or they use netlink sockets instead).
My research brought me to an article which explains how to get all sockets from the kernel with netlink via the inet_diag protocol. The user space program sets up a netlink socket of the inet_diag type and sends a request to the kernel. The response consists of several messages which contain the sockets and additional related information.
This is really neat, but unfortunately the kernel sends this information only once per request. So I have to "poll" continuously.
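For reference, the one-shot dump request looks roughly like this in C (assuming IPv4 TCP sockets only and the structures from linux/inet_diag.h; error handling omitted). This whole exchange is what would have to be repeated to poll:

    #include <linux/inet_diag.h>
    #include <linux/netlink.h>
    #include <linux/sock_diag.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SOCK_DIAG);

        struct {
            struct nlmsghdr nlh;
            struct inet_diag_req_v2 req;
        } msg;
        memset(&msg, 0, sizeof(msg));
        msg.nlh.nlmsg_len   = sizeof(msg);
        msg.nlh.nlmsg_type  = SOCK_DIAG_BY_FAMILY;
        msg.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;  /* dump everything once */
        msg.req.sdiag_family   = AF_INET;
        msg.req.sdiag_protocol = IPPROTO_TCP;
        msg.req.idiag_states   = ~0U;                      /* all TCP states */

        send(fd, &msg, sizeof(msg), 0);

        char buf[16384];
        ssize_t len;
        while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
            struct nlmsghdr *h = (struct nlmsghdr *)buf;
            for (; NLMSG_OK(h, len); h = NLMSG_NEXT(h, len)) {
                if (h->nlmsg_type == NLMSG_DONE) { close(fd); return 0; }
                struct inet_diag_msg *d = NLMSG_DATA(h);
                printf("inode %u, uid %u\n", d->idiag_inode, d->idiag_uid);
                /* idiag_inode can then be matched against /proc/<pid>/fd/ links */
            }
        }
        close(fd);
        return 0;
    }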
Further research brought me to another article which continuously monitors IP changes of interfaces with netlink route sockets. The socket is bound to a multicast group and then messages are read from it in an endless loop.
So I investigated whether the same is possible with inet_diag sockets. Unfortunately I am not really able to understand kernel code, but as far as I can tell there are no multicast groups for this socket family.
At this point I am stuck, and I need to know whether this approach is feasible at all or whether somebody has any other hints.
You can try dtrace if none of the tools you mentioned meet your requirements.
You can use a kprobe-based kernel module to hook the connect system call, which lets you monitor sockets and relate them to the applications that created them.
Elkeid works this way: the Elkeid Driver hooks kernel functions via kprobes, providing rich and accurate data collection capabilities, including kernel-level process execve probing, privilege escalation monitoring, network auditing, and much more. The driver treats container-based monitoring as a first-class citizen alongside host-based data collection by supporting Linux namespaces. Compared to user-space agents on the market, Elkeid provides more comprehensive information with a massive performance improvement.
There's a parent process and a lot of child processes (Node.js).
Which method is preferable for real-time communication between processes on a single machine: Linux IPC or TCP/UDP?
What are the limitations of IPC?
Is IPC suitable for transferring large amounts of data with minimal delay?
AFAIK, TCP/IP (even on localhost) is significantly slower than Linux pipes or socketpairs, so you should avoid TCP/IP (and probably even UDP/IP) if speed is a major concern.
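For the same-machine case, a socketpair is about as simple as it gets; a minimal C sketch of the raw primitive (I believe Node's built-in child_process IPC channel is implemented on top of something similar on Unix):

    /* Parent/child communication over a UNIX socketpair: entirely in
     * kernel memory, no network stack involved. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* sv[0] <-> sv[1] */

        if (fork() == 0) {                         /* child */
            close(sv[0]);
            const char *msg = "hello from child";
            write(sv[1], msg, strlen(msg));
            close(sv[1]);
            _exit(0);
        }

        close(sv[1]);                              /* parent */
        char buf[64];
        ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("parent got: %s\n", buf); }
        close(sv[0]);
        wait(NULL);
        return 0;
    }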