Remote process control in Linux

I'm currently working on a project requiring a number of processes running under control of a "master" process, which receives remote commands via TCP and tells the child processes what to do (e.g.: what files they should act on, what processing operations they should perform).
I've come up with the following ideas to pass commands/configuration down to the child processes:
Signals (not powerful enough)
A binary protocol over sockets or pipes connecting each process to the master (reinventing the wheel)
RPC (maybe overkill)
CORBA (perhaps overkill)
DDS (totally overkill)
Any ideas/suggestions?

D-Bus

How about a text protocol over pipes?
Text protocols are generally better than binary protocols because they are easier to test, and easier testing means fewer bugs.
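As an illustration, here is a minimal sketch of that idea; the command names (PROCESS, QUIT) are made up for the example and error handling is trimmed for brevity:

    /* Line-oriented text protocol between a master and one worker over a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                       /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {              /* worker: read commands line by line */
            close(fd[1]);
            FILE *in = fdopen(fd[0], "r");
            char line[256];
            while (fgets(line, sizeof line, in)) {
                line[strcspn(line, "\n")] = '\0';
                if (strncmp(line, "PROCESS ", 8) == 0)
                    printf("worker: processing %s\n", line + 8);
                else if (strcmp(line, "QUIT") == 0)
                    break;
            }
            return 0;
        }

        close(fd[0]);                   /* master: send plain-text commands */
        dprintf(fd[1], "PROCESS /tmp/input.dat\n");
        dprintf(fd[1], "QUIT\n");
        close(fd[1]);
        wait(NULL);
        return 0;
    }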

You could also use message queues, or shared memory with semaphores.
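A rough sketch of the shared-memory-plus-semaphore option follows; the segment name and the command field are purely illustrative, and error handling is omitted:

    /* Shared memory guarded by a process-shared semaphore as a command channel.
     * Link with -lrt -pthread on older glibc. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct ctrl {
        sem_t ready;        /* posted by the master when a command is available */
        char  cmd[128];
    };

    int main(void) {
        int fd = shm_open("/ctrl_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct ctrl));
        struct ctrl *c = mmap(NULL, sizeof *c, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
        sem_init(&c->ready, 1, 0);          /* 1 = shared between processes */

        if (fork() == 0) {                  /* child: wait for a command */
            sem_wait(&c->ready);
            printf("child got command: %s\n", c->cmd);
            return 0;
        }

        strcpy(c->cmd, "rotate /var/log/app.log");   /* master: publish + signal */
        sem_post(&c->ready);
        wait(NULL);
        shm_unlink("/ctrl_shm");
        return 0;
    }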
You could also look into an Apache project called ActiveMQ, which allows messages to be dispatched to subscription queues, etc. It's very powerful and flexible, and there are C interfaces. It's ideal if you have many machines/networks to which you need to dispatch messages.
http://activemq.apache.org/

A lightweight message queue like beanstalkd or resque seems like the right level of complexity. Files with inotify could also work; inotify is designed as an event queue. You can try it with incrontab before baking it in. {xml,json}-rpc are (slightly) more complex, but also more standard, as they use http. However, the message queue metaphor is more appropriate than rpc for non-blocking interactions.
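For the inotify route, a minimal sketch follows, assuming work items are dropped as files into a spool directory (the path is illustrative):

    /* Use inotify as a file-drop work queue: the worker wakes on each new file. */
    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void) {
        /* Buffer aligned as recommended by inotify(7) for event parsing. */
        char buf[4096] __attribute__((aligned(8)));
        int fd = inotify_init1(0);
        inotify_add_watch(fd, "/tmp/spool", IN_CLOSE_WRITE | IN_MOVED_TO);

        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);   /* blocks until events */
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *)p;
                if (ev->len)
                    printf("new work item: %s\n", ev->name);
                p += sizeof *ev + ev->len;
            }
        }
    }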

The supervisord tool may be useful. This is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

Related

How to control a process from a different process running on a different core

I have two processes running on different cores. I want to know the fastest way to pause and continue one process from another process.
Well, it really depends on the application and the type of communication you are looking for, so I will assume that what you want is indeed a blocking operation.
For that I would go with Unix domain sockets: they have blocking and non-blocking operations and are very easy to use, with lots of examples. You can start here, in the official documentation for Linux.
The concept is the same for any other operating system; only the implementations may differ.
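As a rough illustration of the blocking idea (not an official recipe), the worker below "pauses" by blocking on a read of a Unix domain socket until the controller sends a one-byte token; socketpair() is used only to keep the example short, and a named AF_UNIX socket works the same way between unrelated processes:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

        if (fork() == 0) {                 /* worker */
            char token;
            printf("worker: paused, waiting for token\n");
            read(sv[1], &token, 1);        /* blocks here = "paused" */
            printf("worker: resumed\n");
            return 0;
        }

        sleep(1);                          /* controller decides when to resume */
        write(sv[0], "g", 1);
        wait(NULL);
        return 0;
    }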

How can my Linux program get a computer-is-about-to-go-to-sleep notification, without modifying any system config files?

I wrote a Linux program that creates some persistent TCP connections, and I would like my program to close() those TCP sockets just before the Linux-computer goes to sleep, so that the remote peer isn't left with a non-responsive "zombie" TCP connection.
According to this answer, one way to do that is to add a script under /etc/pm/sleep.d to run a special notification app, but I'd prefer not to do it that way, since modifying system config files is risky (and in many cases not possible if my program does not have the permissions to do so).
Windows and MacOS/X have C-based notification APIs for this sort of thing; is there anything similar in Linux-land?
You can use systemd-logind, a tiny daemon that manages user logins and provides both a C library interface and a D-Bus interface. You can subscribe to the following signals:
The PrepareForShutdown() resp. PrepareForSleep() signals are sent right before (with the argument True) and after (with the argument False) the system goes down for reboot/poweroff, resp. suspend/hibernate. This may be used by applications for saving data on disk, releasing memory or doing other jobs that shall be done shortly before shutdown/sleep, in conjunction with delay inhibitor locks. After completion of this work they should release their inhibition locks in order not to delay the operation any further.
You can also take a look at this sample program written in Python that uses D-Bus and the PrepareForSleep() signal.
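For a C program, a minimal sketch of the same subscription using libsystemd's sd-bus API might look like the following (link with -lsystemd; the printf calls are placeholders for the application's own socket cleanup, and note that, as the quote above says, a delay inhibitor lock is needed to guarantee the cleanup actually finishes before suspend; that part is not shown):

    #include <stdint.h>
    #include <stdio.h>
    #include <systemd/sd-bus.h>

    static int on_prepare_for_sleep(sd_bus_message *m, void *userdata,
                                    sd_bus_error *ret_error) {
        (void)userdata; (void)ret_error;
        int going_to_sleep = 0;
        sd_bus_message_read(m, "b", &going_to_sleep);   /* True before suspend */
        if (going_to_sleep)
            printf("about to suspend: close the TCP sockets here\n");
        else
            printf("resumed: reconnect here\n");
        return 0;
    }

    int main(void) {
        sd_bus *bus = NULL;
        sd_bus_default_system(&bus);
        sd_bus_add_match(bus, NULL,
            "type='signal',sender='org.freedesktop.login1',"
            "interface='org.freedesktop.login1.Manager',member='PrepareForSleep'",
            on_prepare_for_sleep, NULL);

        for (;;) {                          /* simple sd-bus event loop */
            if (sd_bus_process(bus, NULL) > 0)
                continue;                   /* more queued work */
            sd_bus_wait(bus, UINT64_MAX);
        }
    }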

rpcgen for Linux

We have used rpcgen to create an RPC server on a Linux machine (in C).
When there are many calls to our program, the requests are still handled by a single thread.
I see that this has been a common problem since 2004; is there a new rpcgen (or another generator) that solves it?
Thanks,
Kobi
rpcgen will simply generate the serialization routines. Your server might be coded to have several threads. Learn more about pthreads.
You probably should not have too many threads (e.g. at most a dozen, not thousands). You could design your program to use some thread pool, or simply to have a fixed set of worker threads which are continuously handling RPC requests (with the main thread just in charge of accepting connections, etc).
Read rpc(3). You might consider not using svc_run in your server, but instead doing it your own way with threads. Beware that if you use threads, you'll need to synchronize, perhaps with a mutex.
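A generic sketch of that worker-thread approach follows, with a mutex/condition-variable protected queue; handle_request() is a stand-in for your own RPC handling, and wiring this into rpcgen-generated dispatch code is not shown:

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define QSIZE    64

    static int queue[QSIZE];
    static int head, tail, count;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    static void handle_request(int req) { printf("handled request %d\n", req); }

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&nonempty, &lock);
            int req = queue[head];
            head = (head + 1) % QSIZE;
            count--;
            pthread_mutex_unlock(&lock);
            handle_request(req);          /* runs outside the lock */
        }
        return NULL;
    }

    static void enqueue(int req) {        /* called from the accepting thread */
        pthread_mutex_lock(&lock);        /* (overflow check omitted for brevity) */
        queue[tail] = req;
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    int main(void) {
        pthread_t tid[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < 10; i++)      /* pretend 10 requests arrived */
            enqueue(i);
        pthread_exit(NULL);               /* keep the workers alive */
    }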
You could also consider JSONRPC, or perhaps making your C program some specialized HTTP server (e.g. using libonion) and have your clients do HTTP requests (maybe with libcurl). See also this. And you might consider a message passing architecture, perhaps with Open-MPI.
Beware that the Sun version is being abandoned; look for tirpc instead.

Alternative to POSIX message queues

I am using POSIX message queues on a system where I am not root. I am running into significant issues with unlinking and cleanup: I can't see which message queues are open, so I can't write a routine to clean them up.
I was wondering if one of the two are possible:
Create POSIX mqueue locally, in $PWD or something
Get an alternative message queue library instead of the standard one from Linux.
One thing you can try is to see whether you can get by with Unix domain datagram sockets instead of POSIX message queues, in particular the SOCK_SEQPACKET variety of those:
http://man7.org/linux/man-pages/man7/unix.7.html
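A minimal sketch of that suggestion: record boundaries are preserved like a message queue, but it is an ordinary AF_UNIX socket, so no kernel-persistent object is left behind to clean up. socketpair() keeps the example short; socket()/bind()/connect() with a filesystem or abstract path works for unrelated processes.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv);

        if (fork() == 0) {                       /* consumer */
            char buf[128];
            ssize_t n = recv(sv[1], buf, sizeof buf, 0);  /* one whole record */
            printf("got %zd-byte message: %.*s\n", n, (int)n, buf);
            return 0;
        }

        const char *msg = "resize image 42";     /* producer */
        send(sv[0], msg, strlen(msg), 0);
        wait(NULL);
        return 0;
    }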
If this is not enough, there are plenty of message queue abstraction libraries out there, such as the popular ZeroMQ: http://zeromq.org/

Are message queues obsolete in linux?

I've been playing with message queues (System V, but POSIX should be ok too) in Linux recently and they seem perfect for my application, but after reading The Art of Unix Programming I'm not sure if they are really a good choice.
http://www.faqs.org/docs/artu/ch07s02.html#id2922148
The upper, message-passing layer of System V IPC has largely fallen out of use. The lower layer, which consists of shared memory and semaphores, still has significant applications under circumstances in which one needs to do mutual-exclusion locking and some global data sharing among processes running on the same machine. These System V shared memory facilities evolved into the POSIX shared-memory API, supported under Linux, the BSDs, MacOS X and Windows, but not classic MacOS.
http://www.faqs.org/docs/artu/ch07s03.html#id2923376
The System V IPC facilities are present in Linux and other modern Unixes. However, as they are a legacy feature, they are not exercised very often. The Linux version is still known to have bugs as of mid-2003. Nobody seems to care enough to fix them.
Are the System V message queues still buggy in more recent Linux versions? I'm not sure if the author means that POSIX message queues should be ok?
It seems that sockets are the preferred IPC for almost anything(?), but I cannot see how it would be very simple to implement message queues with sockets or something else. Or am I thinking too complexly?
I don't know if it's relevant that I'm working with embedded Linux?
Personally I am quite fond of message queues and think they are arguably the most under-utilized IPC in the unix world. They are fast and easy to use.
A couple of thoughts:
Some of this is just fashion. Old things become new again. Add a shiny doodad to message queues and they may be next year's newest and hottest thing. Look at Google's Chrome using separate processes instead of threads for its tabs. Suddenly people are thrilled that when one tab locks up it doesn't bring down the entire browser.
Shared memory has something of a He-man halo about it. You're not a "real" programmer if you aren't squeezing that last cycle out of the machine and MQs are marginally less efficient. For many, if not most apps, it is utter nonsense but sometimes it is hard to break a mindset once it takes hold.
MQs really aren't appropriate for applications with unbounded data. Stream oriented mechanisms like pipes or sockets are just easier to use for that.
The System V variants really have fallen out of favor. As a general rule go with POSIX versions of IPC when you can.
Yes, I think that message queues are appropriate for some applications. POSIX message queues provide a nicer interface, in particular, you get to give your queues names rather than IDs, which is very useful for fault diagnosis (makes it easier to see which is which).
Linux allows you to mount the POSIX message queues as a filesystem, see them with "ls" and delete them with "rm", which is quite handy too (System V depends on the clunky "ipcs" and "ipcrm" commands).
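A minimal sketch of such a named queue (the name and message are illustrative; link with -lrt on older glibc): with the mqueue filesystem mounted (mount -t mqueue none /dev/mqueue), the queue appears as /dev/mqueue/demo and can be listed with ls and removed with rm.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t q = mq_open("/demo", O_CREAT | O_RDWR, 0600, &attr);

        const char *msg = "hello";
        mq_send(q, msg, strlen(msg) + 1, 0);          /* priority 0 */

        char buf[128];                                /* must be >= mq_msgsize */
        unsigned prio;
        mq_receive(q, buf, sizeof buf, &prio);
        printf("received: %s (prio %u)\n", buf, prio);

        mq_close(q);
        mq_unlink("/demo");                           /* same effect as rm */
        return 0;
    }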
I haven't actually used POSIX message queues because I always want to leave open the option to distribute my messages across a network. With that in mind, you might look at a more robust message-passing interface like zeromq or something that implements AMQP.
One of the nice things about 0mq is that when used from the same process space in a multithreaded app, it uses a lockless zero-copy mechanism that is quite fast. Still, you can use the same interface to pass messages over a network as well.
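A tiny sketch of that point: the same zmq_send()/zmq_recv() calls work whether the endpoint is inproc, ipc or tcp, so a local design can later be spread across machines just by changing the endpoint string (endpoint name is illustrative; link with -lzmq).

    #include <stdio.h>
    #include <zmq.h>

    int main(void) {
        void *ctx  = zmq_ctx_new();
        void *pull = zmq_socket(ctx, ZMQ_PULL);
        void *push = zmq_socket(ctx, ZMQ_PUSH);

        zmq_bind(pull, "inproc://work");      /* swap for "tcp://*:5555" later */
        zmq_connect(push, "inproc://work");

        zmq_send(push, "task 1", 6, 0);

        char buf[64];
        int n = zmq_recv(pull, buf, sizeof buf, 0);
        if (n > 0)
            printf("got: %.*s\n", n, buf);

        zmq_close(push);
        zmq_close(pull);
        zmq_ctx_term(ctx);
        return 0;
    }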
Biggest disadvantages of POSIX message queue:
POSIX does not require message queues to be compatible with select(). (They work with select() on Linux, but not on QNX.)
It has surprises.
A Unix datagram socket does the same job as a POSIX message queue, and it works at the socket layer, so it can be used with select()/poll() or other I/O-wait methods. Using select()/poll() is an advantage when designing an event-based system: it makes it possible to avoid busy loops.
There are surprises in message queues. Think about mq_notify(): it is used to get a receive event. The name sounds as if we can notify something about the message queue, but it actually registers for a notification rather than notifying anything.
A further surprise about mq_notify() is that it has to be called again after every mq_receive(), which may cause a race condition (when some other process/thread calls mq_send() between the call to mq_receive() and the call to mq_notify()).
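For reference, a sketch of the usual workaround pattern: re-register the notification first, then drain the queue non-blockingly (the queue name is illustrative; compile with -lrt -pthread on older glibc).

    #include <fcntl.h>
    #include <mqueue.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static mqd_t q;

    static void drain(union sigval sv) {
        (void)sv;
        /* Re-arm BEFORE draining, otherwise a message that arrives in between
         * would neither trigger a notification nor be read. */
        struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                                .sigev_notify_function = drain };
        mq_notify(q, &sev);

        char buf[128];
        while (mq_receive(q, buf, sizeof buf, NULL) >= 0)  /* queue is O_NONBLOCK */
            printf("got: %s\n", buf);
    }

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        q = mq_open("/demo", O_CREAT | O_RDWR | O_NONBLOCK, 0600, &attr);

        struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                                .sigev_notify_function = drain };
        mq_notify(q, &sev);                 /* initial registration */

        mq_send(q, "ping", 5, 0);           /* normally sent by another process */
        sleep(1);                           /* give the notification thread time */
        mq_close(q);
        mq_unlink("/demo");
        return 0;
    }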
And it has a whole set of functions, mq_open(), mq_send(), mq_receive() and mq_close(), with their own definitions, which is redundant and in some cases inconsistent with the socket open(), send(), recv() and close() specifications.
I do not think message queue should be used for synchronization. eventfd and signalfd are suitable for that.
But it (the POSIX message queue) has some realtime support: it has priority features.
Messages are placed on the queue in decreasing order of priority, with newer messages of the same priority being placed after older messages with the same priority.
But priority is also available for sockets, as out-of-band data!
Finally, to me, the POSIX message queue is a legacy API. I always prefer a Unix datagram socket over a POSIX message queue as long as the realtime features are not needed.
Message queues are very useful for building local decoupled applications. They are super fast and block-organized (no need for buffering, framing, etc., which is the case for streaming sockets); delivery is basically a few memcpy() operations (user code copies the block to the kernel, and the kernel copies the block to the other process reading from the queue), and that's the whole story. Some industry-known middlewares such as Oracle Tuxedo or Mavimax Enduro/X use these queues to help build load-balanced, high-performance, fault-tolerant, decomposed, distributed applications. These queues allow load balancing: when several executables read from the same queue, the kernel scheduler simply hands each message to whichever process is idling. The nice thing on Linux is that poll() can be done on POSIX queues, which helps to solve certain scenarios (see the sketch at the end of this answer). On IBM AIX it is possible to poll System V queues.
For example, two processes can easily communicate locally over the queues with quite impressive throughput (~70k req+reply/sec).
If networking is needed, Enduro/X for example provides a tpbridge process which basically reads messages from the local queue and sends the blocks to some other machine, where the other end injects the messages back into its local queue.
Also, compared to sockets, you do not get issues such as busy/lingering sockets when, for example, some binary has crashed; a program can start reading the queues and doing the processing immediately at startup.
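A sketch of the Linux-specific poll() detail mentioned above; this relies on mqd_t being implemented as a file descriptor on Linux and is not portable POSIX (queue name is illustrative; link with -lrt on older glibc).

    #include <fcntl.h>
    #include <mqueue.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t q = mq_open("/workq", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, &attr);

        struct pollfd pfd = { .fd = (int)q, .events = POLLIN };  /* Linux only */
        for (;;) {
            if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
                char buf[128];
                ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
                if (n > 0)
                    printf("dequeued %zd bytes\n", n);
            }
        }
    }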

Resources