What is the underlying transport for D-Bus? - linux

D-Bus allows programs to communicate. How is this IPC implemented? Unix domain sockets, shared memory + semaphores, named pipes, something else? Maybe a combination?

I think it typically uses Unix domain sockets. Under Linux, it may use "abstract namespace" Unix sockets, which behave like ordinary Unix domain sockets except that they have no corresponding file visible in the filesystem.
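As a minimal, hedged sketch, this is roughly how an abstract-namespace AF_UNIX socket is created on Linux (the name "example.bus" is made up for illustration; D-Bus generates its own addresses). A leading NUL byte in sun_path is what selects the abstract namespace:

    #include <stddef.h>     /* offsetof */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;

        /* Leading NUL byte in sun_path => abstract namespace:
         * the socket gets a name, but no file is created on disk. */
        const char *name = "example.bus";      /* made-up name */
        memcpy(addr.sun_path + 1, name, strlen(name));

        socklen_t len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name);
        if (bind(fd, (struct sockaddr *) &addr, len) < 0) {
            perror("bind");
            return 1;
        }

        /* The socket now shows up in /proc/net/unix prefixed with '@',
         * but nowhere in the filesystem. */
        listen(fd, 5);
        close(fd);
        return 0;
    }

On systems where the bus uses this transport, the bus address (e.g. in DBUS_SESSION_BUS_ADDRESS) typically begins with unix:abstract=.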

This is remarkably similar to the question "DBus query". The answer from Googling was sockets - either TCP/IP or Unix domain.

Apparently, local IPC or TCP/IP:
http://www.freedesktop.org/wiki/Software/dbus
Update:
I mean multiple IPC methods on different OSes, plus TCP/IP.
http://dbus.freedesktop.org/doc/dbus-daemon.1.html shows that the reference implementation on Unix uses both Unix domain sockets and TCP/IP.

In the past there have been attempts to use netlink sockets directly from the kernel. More recently (announced during the last LPC), some people have been working on getting rid of the D-Bus user-space daemon and putting D-Bus in the kernel; it will probably also use sockets, but may revive the netlink or other approaches.

Related

Passing parameters between systemd services at runtime

Is it possible to pass parameters between services at runtime? What I've already found is how to start services with variables and how to pass them parameters using an external file at run-time. However, I could not find information about exchanging data between running services.
You're looking for an IPC mechanism. Systemd does not provide one because Linux already has quite a few.
The most common method is for a service to listen on a socket (usually an AF_UNIX socket in /run, but TCP is also an option, e.g. if you're writing in Java) and for other services to connect to it and submit or receive data. You can invent your own protocol, but practically any RPC system (such as gRPC or SunRPC or REST) that's built for network use will also work for local use, both across TCP and across AF_UNIX sockets.
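A minimal sketch of that pattern in C (the path /run/example.sock and the echoed reply are made up for illustration; a real service would pick its own path and protocol):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Minimal AF_UNIX stream server: one service binds a socket under /run,
     * other services connect() to the same path and exchange bytes.
     * Note: /run is usually writable only by root; a real service would
     * typically use a subdirectory it owns. */
    int main(void)
    {
        const char *path = "/run/example.sock";    /* made-up path */

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);

        unlink(path);                              /* remove a stale socket file */
        if (bind(fd, (struct sockaddr *) &addr, sizeof addr) < 0) {
            perror("bind");
            return 1;
        }
        listen(fd, 5);

        for (;;) {
            int client = accept(fd, NULL, NULL);
            if (client < 0) { perror("accept"); break; }

            char buf[256];
            ssize_t n = read(client, buf, sizeof buf);
            if (n > 0)
                write(client, buf, n);             /* echo the request back */
            close(client);
        }
        return 0;
    }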
D-Bus is one specific IPC system that systemd itself uses (which is also built on top of AF_UNIX sockets, but with a central "message bus" daemon), but it is not part of systemd. It will likely be available on any systemd-based distribution, however. D-Bus bindings are available for most programming languages.
Aside from AF_UNIX sockets, Linux also has several forms of "shared memory" and "message queue" systems (POSIX IPC and SysV IPC variants of each).
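A minimal, hedged sketch of the POSIX message queue variant (the queue name /example_queue is made up; older glibc needs -lrt at link time):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int main(void)
    {
        /* Made-up queue name; POSIX requires it to start with '/'. */
        const char *qname = "/example_queue";

        struct mq_attr attr = { 0 };
        attr.mq_maxmsg  = 10;      /* queue depth */
        attr.mq_msgsize = 256;     /* max bytes per message */

        mqd_t mq = mq_open(qname, O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t) -1) { perror("mq_open"); return 1; }

        const char *msg = "hello from one service";
        if (mq_send(mq, msg, strlen(msg) + 1, 0) < 0)
            perror("mq_send");

        char buf[256];                 /* must be >= mq_msgsize */
        ssize_t n = mq_receive(mq, buf, sizeof buf, NULL);
        if (n >= 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink(qname);              /* remove the queue name */
        return 0;
    }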

Is it possible to secure Linux kernel sockets?

I have several Linux applications which use sockets (UDP/TCP over IP).
I want (need, no matter why) to secure the connection with my own secure protocol without changing those applications.
I thought about changing the Linux kernel socket implementation, so I can use my secure protocol sockets without changing those applications.
So, is it possible to change the Linux kernel sockets, so that when an application calls the send or receive socket functions, the underlying implementation will be mine?
And how can I do it? Which kernel module do I need to change?
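One user-space alternative to patching the kernel (not discussed in the thread above) is to interpose the libc socket functions with an LD_PRELOAD library. A rough sketch, where transform_buffer() is a placeholder rather than a real API, and which only catches calls that go through libc's send():

    /* Build:  gcc -shared -fPIC -o libwrap.so wrap.c -ldl
     * Run:    LD_PRELOAD=./libwrap.so ./existing_application
     * This interposes send() in user space; it does not modify the kernel. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Placeholder for the custom "secure protocol" processing. */
    static void transform_buffer(const void *buf, size_t len)
    {
        (void) buf;
        (void) len;
    }

    ssize_t send(int sockfd, const void *buf, size_t len, int flags)
    {
        static ssize_t (*real_send)(int, const void *, size_t, int);
        if (!real_send)
            real_send = (ssize_t (*)(int, const void *, size_t, int))
                        dlsym(RTLD_NEXT, "send");

        transform_buffer(buf, len);    /* hook point for the protocol */
        return real_send(sockfd, buf, len, flags);
    }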

reliable local messaging in Tcl on Unix-like systems

After some research, I have found that on Linux, Tcl doesn't support Unix domain sockets at all.
In the absence of Unix datagram sockets, what is the native(*) alternative for reliable, local, message-based, many-to-one communication in Tcl on Unix-like systems? A limitation to text-only communication would be acceptable.
UDP is unreliable even when used locally. TCP is not message-based, and Tcl doesn't offer the TCP_NODELAY option. Pipes are only one-to-one. FIFOs have poor async semantics and are (like pipes) only one-to-one. SysV message queues lack poll() support and are not supported by Tcl anyway. I went through all the usual alternatives, but failed to find anything that fulfills the role of Unix sockets in Tcl.
(*) I have found ceptcl, an external module, but it is neither part of Tcl nor bundled with any Linux distro. As such it is not an acceptable option.
As stated before in this thread, Unix domain sockets cannot be part of the core package because of portability issues.
An implementation of Unix sockets as an extension is available here: https://github.com/cyanogilvie/unix_sockets
It works on Linux. Compilation tested successfully on Debian 10.

Communicating with processes in the same host using internet sockets?

I am building a message layer for processes running on an embedded Linux system. I am planning to use sockets. This system might be ported to different operating systems down the road, so portability is a concern. Performance is a lower priority than portability.
I have a few questions regarding my way forward.
I am thinking of using internet sockets over TCP/IP for this communication between local processes, for the sake of portability. Is there any reason I should not do that and should use domain sockets instead?
Does using internet sockets instead of domain sockets really improve portability?
If this is indeed the way forward, can you point me in the right direction (how to use ports for each process etc.) with some online resources?

IPC using Linux pipe

I have a doubt about using Linux pipes for IPC. My question is:
Can Linux pipes be used to communicate between processes running on different machines?
Thanks,
No, you can't use a pipe alone to communicate between different machines, because a pipe is defined as a local-machine communication method (the IEEE standard says it creates two file descriptors in the current process; descriptors usually can't be sent to another machine, only inherited from a parent or passed via local sockets).
But you can pipe into an external socket program, like netcat, which will resend all the data over a TCP socket, and a remote netcat will replay it into the program on the other machine.
And if you are developing an application, it may be better to use TCP sockets directly.
PS: IPC - inter-process communication - AFAIK means communication between different processes on one (the same) machine (Linux IPC, from the Linux Programmer's Guide, 1995).
PPS: If sockets are hard to work with directly, you may choose a message-passing library or standard. For example, the MPI standard (OpenMPI, MPICH libraries) is often used to communicate between many machines in tightly coupled computing clusters, and there are popular interfaces like RPC (remote procedure call, several implementations) or ZeroMQ.
A pipe is only used for communication between related processes on the same host (e.g. a parent and child process).
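A minimal sketch of that related-process case with pipe() and fork():

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A pipe connects two file descriptors inside one process; after fork()
     * the child inherits them, which is why pipes only work between related
     * processes on the same machine. */
    int main(void)
    {
        int fds[2];                    /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                /* child: read the message */
            close(fds[1]);
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            return 0;
        }

        close(fds[0]);                 /* parent: write the message */
        const char *msg = "hello over a pipe";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }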
