I am using POSIX message queues on a system where I don't have root access, and I am running into significant issues with unlinking and cleaning them up. I can't see which message queues are open, so I can't write a routine to clean them up.
I was wondering if either of the following is possible:
Create a POSIX mqueue locally, in $PWD or somewhere similar
Use an alternative message queue library instead of the standard one on Linux.
One thing you can try is to see whether you can get by with Unix domain sockets instead of POSIX message queues, in particular the SOCK_SEQPACKET variety of those:
http://man7.org/linux/man-pages/man7/unix.7.html
If this is not enough, there are plenty of message queue abstraction libraries out there, such as the popular ZeroMQ: http://zeromq.org/
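To make the cleanup story concrete, here is a minimal sketch (error handling trimmed; the ./events.sock name is just an illustrative choice) of a SOCK_SEQPACKET server bound to a file in the current directory. Since the endpoint is an ordinary path, you can list it with ls and clean it up with a plain unlink()/rm:

    /* Minimal sketch: a SOCK_SEQPACKET Unix domain socket bound to a file in
     * the current directory, so it can be listed and removed like any file.
     * The path "./events.sock" is just an illustrative name.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_UNIX, SOCK_SEQPACKET, 0);
        if (srv < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "./events.sock", sizeof(addr.sun_path) - 1);

        unlink(addr.sun_path);                       /* remove a stale socket file */
        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        listen(srv, 8);

        int peer = accept(srv, NULL, NULL);          /* one record-oriented peer */
        char buf[256];
        ssize_t n = recv(peer, buf, sizeof(buf), 0); /* one message per recv() */
        if (n > 0)
            printf("got %zd-byte message: %.*s\n", n, (int)n, buf);

        close(peer);
        close(srv);
        unlink(addr.sun_path);                       /* "cleanup" is a plain unlink */
        return 0;
    }

SOCK_SEQPACKET is connection-oriented but preserves message boundaries, so each recv() returns one whole message, much like mq_receive() would.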
Consider my main script as a loop that is constantly broadcasting various events as they happen. For example:
FileChange: File at address XXX has changed.
FileDeleted: File at address XXX has been deleted.
ScreenSaver: Screen saver named YYY got activated.
...
What I intend to do is to have other apps (added now or later on) listen to what the main app is broadcasting, and if a message is relevant to them (say, a script for handling FileChange events), they pick it up and do their own processing.
What are my options for achieving this model of interprocess communication?
If you structure your application using threads, you could use the Python queue module (https://docs.python.org/3/library/queue.html); its Queue class supports multiple producers and consumers. I often use it to propagate events among different listeners when structuring my applications.
If you want to separate the processes, you could choose something from the answers to "Recommended Python publish/subscribe/dispatch module?"
In your case a Unix domain socket could be a simple possibility that avoids heavyweight frameworks (you could even write shell programs to access it). It also seems to work under Windows nowadays (there is an AF_UNIX equivalent for Windows). Your service could publish events on the socket with a simple ASCII protocol like <EVENT>:<PATH>, as in the sketch below.
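For illustration, a rough C sketch of the publishing side of that ASCII protocol (the subscriber socket path /tmp/filechange.sock and the event/path values are made up for the example; the same thing is just as easy from Python or a shell script):

    /* Rough sketch of publishing one "<EVENT>:<PATH>" datagram to a subscriber's
     * Unix domain socket. The socket path "/tmp/filechange.sock" is illustrative;
     * a real broadcaster would loop over a list of subscriber paths.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int publish(const char *subscriber_path, const char *event, const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd < 0) return -1;

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, subscriber_path, sizeof(addr.sun_path) - 1);

        char msg[512];
        snprintf(msg, sizeof(msg), "%s:%s", event, path);   /* the ASCII protocol */

        ssize_t n = sendto(fd, msg, strlen(msg), 0,
                           (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    int main(void)
    {
        if (publish("/tmp/filechange.sock", "FileChange", "/home/user/notes.txt") < 0)
            perror("publish");
        return 0;
    }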
I want to build a small web application in Rust which should be able to read and write files on a user's behalf. The user should authenticate with their UNIX credentials and then be able to read/write only the files they have access to.
My first idea, which would also seem the most secure to me, would be to switch the user-context of an application thread and do all the read/write-stuff there. Is this possible?
If this is possible, what would the performance look like? I would assume spawning an operating system thread every time a request comes in could have a very high overhead. Is there a better way to do this?
I really wouldn't like to run my entire application as root and check the permissions manually.
On GNU/Linux, it is not possible to switch UID and GID just for a single thread of a process. The Linux kernel maintains per-thread credentials, but POSIX requires a single set of credentials per process: POSIX setuid must change the UID of all threads or none. glibc goes to great lengths to emulate the POSIX behavior, although that is quite difficult.
You would have to create a completely new process for each request, not just a new thread. Process creation is quite cheap on Linux, but it could still be a performance problem. You could keep a pool of processes around to avoid the overhead of repeated process creation. On the other hand, many years ago, lots of web sites (including some fairly large ones) used CGI to generate web pages, and you can get relatively far with a simple design.
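A minimal sketch of the fork-per-request approach, assuming the parent runs with enough privilege to switch IDs; handle_request() stands in for hypothetical application logic and error handling is trimmed:

    /* Minimal per-request sketch: fork a worker, drop to the authenticated
     * user's uid/gid in the child, then do the file I/O there.
     * handle_request() is a hypothetical application function; the parent is
     * assumed to have the privilege (CAP_SETUID/CAP_SETGID) to switch IDs.
     */
    #include <grp.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void serve_as_user(uid_t uid, gid_t gid)
    {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return; }

        if (pid == 0) {                      /* child: drop privileges, then work */
            if (setgroups(0, NULL) != 0 ||   /* clear supplementary groups */
                setgid(gid) != 0 ||          /* group first, while still privileged */
                setuid(uid) != 0) {
                perror("drop privileges");
                _exit(1);
            }
            /* handle_request();  -- hypothetical: read/write files as this user */
            _exit(0);
        }

        waitpid(pid, NULL, 0);               /* parent: reap the worker */
    }

    int main(void)
    {
        serve_as_user(1000, 1000);           /* illustrative uid/gid */
        return 0;
    }

A pre-forked pool would keep a set of such workers alive to amortize the fork() cost, as suggested above.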
I think @Florian got this backwards in his original answer. man 2 setuid says:
C library/kernel differences
At the kernel level, user IDs and group IDs are a per-thread attribute. However, POSIX requires that all threads in a process share the same credentials. The NPTL threading implementation handles the POSIX requirements by providing wrapper functions for the various system calls that change process UIDs and GIDs. These wrapper functions (including the one for setuid()) employ a signal-based technique to ensure that when one thread changes credentials, all of the other threads in the process also change their credentials. For details, see nptl(7).
Since libc does the signal dance to apply the change to the whole process, you will have to make direct system calls to bypass that.
Note that this is Linux-specific. Most other Unix variants do seem to follow POSIX at the kernel level instead of emulating it in libc.
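As an illustration only (not a recommendation), a Linux-specific sketch of that direct-syscall approach; the uid 1000 is made up, the process must already be privileged, and bypassing glibc like this means the rest of the process keeps its old credentials:

    /* Linux-specific sketch: calling the setresuid syscall directly affects only
     * the calling thread, bypassing glibc's process-wide signal-based wrapper.
     * Build with -pthread. The uid 1000 is illustrative.
     */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        uid_t uid = *(uid_t *)arg;

        /* Raw syscall: changes credentials of THIS thread only on Linux. */
        if (syscall(SYS_setresuid, uid, uid, uid) != 0) {
            perror("setresuid (raw)");
            return NULL;
        }
        printf("worker thread now running as uid %d\n", (int)geteuid());
        /* ... do per-user file I/O here ... */
        return NULL;
    }

    int main(void)
    {
        uid_t uid = 1000;                       /* illustrative target uid */
        pthread_t t;
        pthread_create(&t, NULL, worker, &uid);
        pthread_join(t, NULL);

        printf("main thread still uid %d\n", (int)geteuid());
        return 0;
    }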
I wrote a Linux program that creates some persistent TCP connections, and I would like my program to close() those TCP sockets just before the Linux-computer goes to sleep, so that the remote peer isn't left with a non-responsive "zombie" TCP connection.
According to this answer, one way to do that is to add a hook script under /etc/pm/sleep.d to run a special notification app, but I'd prefer not to do it that way since modifying system configuration is risky (and in many cases not possible if my program does not have permissions to do so).
Windows and MacOS/X have C-based notification APIs for this sort of thing; is there anything similar in Linux-land?
You can use systemd-logind, a tiny daemon that manages user logins; it provides both a C library interface and a D-Bus interface. You can subscribe to the following signals:
The PrepareForShutdown() resp. PrepareForSleep() signals are sent right before (with the argument True) and after (with the argument False) the system goes down for reboot/poweroff, resp. suspend/hibernate. This may be used by applications for saving data on disk, releasing memory or doing other jobs that shall be done shortly before shutdown/sleep, in conjunction with delay inhibitor locks. After completion of this work they should release their inhibition locks in order not to delay the operation any further.
You can also take a look at this sample program written in Python that uses D-Bus and the PrepareForSleep() signal.
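For the C side, a rough sketch using sd-bus (part of libsystemd, link with -lsystemd); treat the exact calls as a starting point and check sd-bus(3) against your systemd version:

    /* Rough sketch: subscribe to logind's PrepareForSleep signal with sd-bus.
     * Build with: gcc sleepwatch.c -lsystemd
     * The signal argument is a boolean: 1 = about to sleep, 0 = just resumed.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <systemd/sd-bus.h>

    static int on_prepare_for_sleep(sd_bus_message *m, void *userdata,
                                    sd_bus_error *ret_error)
    {
        int going_to_sleep = 0;
        sd_bus_message_read(m, "b", &going_to_sleep);
        if (going_to_sleep) {
            /* close() the persistent TCP sockets here */
            printf("system is about to sleep\n");
        } else {
            printf("system just woke up\n");
        }
        return 0;
    }

    int main(void)
    {
        sd_bus *bus = NULL;
        if (sd_bus_default_system(&bus) < 0)
            return 1;

        sd_bus_match_signal(bus, NULL,
                            "org.freedesktop.login1",
                            "/org/freedesktop/login1",
                            "org.freedesktop.login1.Manager",
                            "PrepareForSleep",
                            on_prepare_for_sleep, NULL);

        for (;;) {                          /* simple event loop */
            int r = sd_bus_process(bus, NULL);
            if (r > 0)
                continue;                   /* more queued work */
            sd_bus_wait(bus, UINT64_MAX);
        }
    }

Note that to reliably finish the cleanup before the system actually suspends, you would also take a delay inhibitor lock (logind's Inhibit() call), as the quoted documentation mentions.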
I'm currently working on a project requiring a number of processes running under control of a "master" process, which receives remote commands via TCP and tells the child processes what to do (e.g.: what files they should act on, what processing operations they should perform).
I've come up with the following ideas to pass commands/configuration down to the child processes:
Signals (not powerful enough)
A binary protocol over sockets or pipes connecting each process to the master (reinvent the wheel).
RPC (maybe overkill)
CORBA (perhaps overkill)
DDS (totally overkill)
Any ideas/suggestions?
D-Bus
How about a text protocol via pipes?
Text protocols are always better than binary protocols because they are easier to test, and easier testing generally means fewer bugs.
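A minimal sketch of that idea, with a newline-delimited command protocol over a pipe (the PROCESS/QUIT commands are invented for the example):

    /* Minimal sketch of a newline-delimited text protocol over a pipe:
     * the master fork()s a child and writes one command per line; the child
     * parses lines with fgets(). Command names are illustrative.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int cmd[2];
        pipe(cmd);                                  /* master writes cmd[1], child reads cmd[0] */

        if (fork() == 0) {                          /* child process */
            close(cmd[1]);
            FILE *in = fdopen(cmd[0], "r");
            char line[256];
            while (fgets(line, sizeof(line), in)) { /* one text command per line */
                line[strcspn(line, "\n")] = '\0';
                if (strncmp(line, "PROCESS ", 8) == 0)
                    printf("child: processing %s\n", line + 8);
                else if (strcmp(line, "QUIT") == 0)
                    break;
            }
            fclose(in);
            _exit(0);
        }

        close(cmd[0]);                              /* master side */
        dprintf(cmd[1], "PROCESS /tmp/input.dat\n");
        dprintf(cmd[1], "QUIT\n");
        close(cmd[1]);
        wait(NULL);
        return 0;
    }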
You could also use message queues, or shared memory with semaphores.
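And a compact sketch of the shared-memory-plus-semaphore variant, here using a fork()ed pair for brevity (unrelated processes would each shm_open() and mmap() the same name; "/cmd_shm" is illustrative):

    /* Sketch: POSIX shared memory with an unnamed process-shared semaphore.
     * The parent writes a command block and posts; the child waits and reads.
     * Link with -lrt -pthread on older glibc.
     */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct shared_block {
        sem_t ready;          /* posted by the writer when data is valid */
        char  command[128];
    };

    int main(void)
    {
        int fd = shm_open("/cmd_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(struct shared_block));

        struct shared_block *blk = mmap(NULL, sizeof(*blk),
                                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        sem_init(&blk->ready, 1 /* shared between processes */, 0);

        if (fork() == 0) {                        /* child: consumer */
            sem_wait(&blk->ready);                /* block until data is published */
            printf("child got command: %s\n", blk->command);
            _exit(0);
        }

        strcpy(blk->command, "reload-config");    /* parent: producer */
        sem_post(&blk->ready);

        wait(NULL);
        shm_unlink("/cmd_shm");
        return 0;
    }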
You could also look into an Apache project called ActiveMQ, which allows messages to be dispatched to subscription queues, etc. It's very powerful and flexible, and there are C interfaces. It's ideal if you have many machines/networks to which you need to dispatch messages.
http://activemq.apache.org/
A lightweight message queue like beanstalkd or resque seems like the right level of complexity. Files with inotify could also work; inotify is designed as an event queue. You can try it with incrontab before baking it in. {xml,json}-rpc are (slightly) more complex, but also more standard, as they use http. However, the message queue metaphor is more appropriate than rpc for non-blocking interactions.
The supervisord tool may be useful. This is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
I've been playing with message queues (System V, but POSIX should be ok too) in Linux recently and they seem perfect for my application, but after reading The Art of Unix Programming I'm not sure if they are really a good choice.
http://www.faqs.org/docs/artu/ch07s02.html#id2922148
The upper, message-passing layer of System V IPC has largely fallen out of use. The lower layer, which consists of shared memory and semaphores, still has significant applications under circumstances in which one needs to do mutual-exclusion locking and some global data sharing among processes running on the same machine. These System V shared memory facilities evolved into the POSIX shared-memory API, supported under Linux, the BSDs, MacOS X and Windows, but not classic MacOS.
http://www.faqs.org/docs/artu/ch07s03.html#id2923376
The System V IPC facilities are present in Linux and other modern Unixes. However, as they are a legacy feature, they are not exercised very often. The Linux version is still known to have bugs as of mid-2003. Nobody seems to care enough to fix them.
Are the System V message queues still buggy in more recent Linux versions? I'm not sure if the author means that POSIX message queues should be ok?
It seems that sockets are the preferred IPC mechanism for almost anything(?), but I cannot see how implementing message-queue semantics on top of sockets (or something else) would be very simple. Or am I overcomplicating this?
I don't know if it's relevant that I'm working with embedded Linux?
Personally I am quite fond of message queues and think they are arguably the most under-utilized IPC mechanism in the Unix world. They are fast and easy to use.
A couple of thoughts:
Some of this is just fashion. Old things become new again. Add a shiny doodad to message queues and they may be next year's newest and hottest thing. Look at Google's Chrome using separate processes instead of threads for its tabs: suddenly people are thrilled that when one tab locks up it doesn't bring down the entire browser.
Shared memory has something of a He-man halo about it. You're not a "real" programmer if you aren't squeezing that last cycle out of the machine and MQs are marginally less efficient. For many, if not most apps, it is utter nonsense but sometimes it is hard to break a mindset once it takes hold.
MQs really aren't appropriate for applications with unbounded data. Stream oriented mechanisms like pipes or sockets are just easier to use for that.
The System V variants really have fallen out of favor. As a general rule go with POSIX versions of IPC when you can.
Yes, I think that message queues are appropriate for some applications. POSIX message queues provide a nicer interface, in particular, you get to give your queues names rather than IDs, which is very useful for fault diagnosis (makes it easier to see which is which).
Linux allows you to mount the POSIX message queues as a filesystem, see them with "ls", and delete them with "rm", which is quite handy too (System V depends on the clunky "ipcs" and "ipcrm" commands).
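A small sketch showing how the named queues behave (the name /demo_q is illustrative): the queue appears as /dev/mqueue/demo_q when that filesystem is mounted, and mq_unlink() has the same effect as rm on that path:

    /* Small sketch of a named POSIX queue with priorities.
     * Link with -lrt on older glibc.
     */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

        mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        mq_send(q, "low priority", 13, 0);      /* priority 0 */
        mq_send(q, "high priority", 14, 5);     /* higher priority jumps the queue */

        char buf[128];
        unsigned int prio;
        mq_receive(q, buf, sizeof(buf), &prio); /* returns "high priority" first */
        printf("got \"%s\" (prio %u)\n", buf, prio);

        mq_close(q);
        mq_unlink("/demo_q");                   /* same effect as rm /dev/mqueue/demo_q */
        return 0;
    }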
I haven't actually used POSIX message queues because I always want to leave open the option to distribute my messages across a network. With that in mind, you might look at a more robust message-passing interface like zeromq or something that implements AMQP.
One of the nice things about 0mq is that when used from the same process space in a multithreaded app, it uses a lockless zero-copy mechanism that is quite fast. Still, you can use the same interface to pass messages over a network as well.
Biggest disadvantages of POSIX message queues:
POSIX does not require message queues to be compatible with select(). (They work with select() on Linux, but not on QNX.)
It has surprises.
A Unix datagram socket does the same job as a POSIX message queue, and it lives in the socket layer, so it can be used with select()/poll() or other I/O-wait mechanisms. That is an advantage when designing an event-based system, because it lets you avoid a busy loop.
There are surprises in message queues, too. Think about mq_notify(): it is used to get a receive event, and the name sounds as if we are notifying something about the message queue, but it actually registers for a notification rather than notifying anything.
A further surprise is that mq_notify() has to be re-registered after every notification, which can cause a race condition (some other process/thread may call mq_send() between the mq_receive() and the re-registration with mq_notify()). The usual workaround is to re-register first and then drain the queue in non-blocking mode, as sketched below.
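A sketch of that re-register-then-drain pattern (loosely following the approach described in the mq_notify(3) man page; the queue name /demo_q is illustrative and assumed to exist already):

    /* Re-register the one-shot notification first, then drain the queue in
     * non-blocking mode, so anything sent in between is still picked up.
     */
    #include <mqueue.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static mqd_t q;
    static struct mq_attr attr;

    static void on_message(union sigval sv);

    static void rearm(void)
    {
        struct sigevent sev;
        memset(&sev, 0, sizeof(sev));
        sev.sigev_notify = SIGEV_THREAD;
        sev.sigev_notify_function = on_message;   /* notification is one-shot */
        mq_notify(q, &sev);
    }

    static void on_message(union sigval sv)
    {
        rearm();                                  /* 1) re-register FIRST */

        char *buf = malloc(attr.mq_msgsize);      /* 2) then drain everything; the  */
        ssize_t n;                                /*    queue was opened O_NONBLOCK */
        while ((n = mq_receive(q, buf, attr.mq_msgsize, NULL)) >= 0)
            printf("received %zd bytes\n", n);
        free(buf);
    }

    int main(void)
    {
        q = mq_open("/demo_q", O_RDONLY | O_NONBLOCK);   /* queue must already exist */
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }
        mq_getattr(q, &attr);

        rearm();                                  /* initial registration */
        for (;;)
            pause();                              /* wait for notifications */
    }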
It also comes with a whole set of calls (mq_open(), mq_send(), mq_receive(), mq_close()) with their own definitions, which is redundant and in some cases inconsistent with the open()/send()/recv()/close() specifications used for sockets.
I do not think message queues should be used for synchronization; eventfd and signalfd are better suited for that.
POSIX message queues do have some real-time support, though: they have priority features.
Messages are placed on the queue in decreasing order of priority, with newer messages of the same priority being placed after older messages with the same priority.
But priority of a sort is also available for sockets, in the form of out-of-band data!
Finally, to me, POSIX message queues are a legacy API. I always prefer Unix datagram sockets over POSIX message queues as long as the real-time features are not needed.
Message queues are very useful for building local, decoupled applications. They are very fast and block-organized (no need for buffering, framing, etc., as with stream sockets); message delivery is basically a couple of memcpy() operations (user code copies the block to the kernel, and the kernel copies the block to the other process reading from the queue), and that's the whole delivery story. Some well-known middleware, such as Oracle Tuxedo or Mavimax Enduro/X, uses these queues to help build load-balanced, high-performance, fault-tolerant, decomposed, distributed applications. The queues allow load balancing: several executables read from the same queue, and the kernel scheduler simply hands each message to whichever process is idle. A nice thing on Linux is that poll() can be done on POSIX queues, which helps in certain scenarios; on IBM AIX it is possible to poll System V queues.
For example, two processes can easily communicate locally over the queues with quite impressive throughput (~70k req+reply/sec):
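(A bare-bones illustration of the idea with plain mq_open()/mq_send()/mq_receive() calls, one queue per direction; this is not the Enduro/X API, and the queue names are made up.)

    /* Bare-bones request/reply sketch with two POSIX queues, one per direction. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t req = mq_open("/demo_req", O_CREAT | O_RDWR, 0600, &attr);
        mqd_t rep = mq_open("/demo_rep", O_CREAT | O_RDWR, 0600, &attr);

        if (fork() == 0) {                               /* server process */
            char buf[64];
            mq_receive(req, buf, sizeof(buf), NULL);     /* wait for a request */
            mq_send(rep, "pong", 5, 0);                  /* send the reply */
            _exit(0);
        }

        mq_send(req, "ping", 5, 0);                      /* client: request ... */
        char buf[64];
        mq_receive(rep, buf, sizeof(buf), NULL);         /* ... and wait for reply */
        printf("client got: %s\n", buf);

        wait(NULL);
        mq_unlink("/demo_req");
        mq_unlink("/demo_rep");
        return 0;
    }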
If networking is needed, then, for example, Enduro/X provides a tpbridge process which basically reads messages from the local queue and sends the blocks to another machine, where the other end injects the messages back into a local queue there.
Also, compared to sockets, you do not get issues such as busy/lingering sockets when, for example, some binary has crashed: a program can start reading its queues and doing the processing immediately at startup.