Unix message queue - Linux

Is there an IPC option to read the last message in a message queue without removing it?
I want this so that many clients can read the same messages from the same server.
Edit:
The server and clients are on the same machine!
Thanks

I don't believe there is any way to do that using either System V or POSIX message queues. Furthermore, AFAIK neither API allows you to send messages to a remote machine, so unless your clients are running on the same host as the server, you will need to use a higher-level technology.
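On a single machine, the usual workaround is to fan out: give every client its own queue and have the server send each message to all of them, so nothing ever needs to be peeked or shared. A minimal sketch, assuming the third-party posix_ipc module and hypothetical queue names:

```python
import posix_ipc  # third-party: pip install posix-ipc

# Hypothetical names: one queue per client instead of one shared queue.
CLIENT_QUEUES = ["/demo_client_a", "/demo_client_b"]

def broadcast(payload: bytes) -> None:
    # The server sends a private copy of every message to each client.
    for name in CLIENT_QUEUES:
        mq = posix_ipc.MessageQueue(name, posix_ipc.O_CREAT)
        mq.send(payload)
        mq.close()

broadcast(b"hello, all clients")
```

Each client then does an ordinary receive() on its own queue and consumes its own copy.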

Related

In Linux, how to send message from one process to another with flow control?

I need to send messages from a Golang process to a Python one. The receiver must process the data, so it reads much more slowly than the sender can send. I want a flow-control mechanism, that is, a way to make the sender stop sending when there are too many unread messages, so that those messages don't eat up too many system resources.
My current solution is a TCP connection, but since the sender and receiver are on the same machine I'm looking for a potentially better alternative. I'm not sure whether something like a UNIX domain socket or a named pipe supports flow control, or whether there is a protocol that is convenient to implement on top of them to add it.
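For what it's worth, a SOCK_STREAM Unix domain socket gives you flow control for free: when the receiver falls behind, the kernel socket buffer fills up and the sender's write simply blocks (or fails with EAGAIN in non-blocking mode) until the receiver drains it. A minimal sketch of both ends in Python, with a hypothetical socket path (the real sender would be the Go process, whose net.Dial("unix", ...) writer blocks the same way):

```python
import os
import socket
import time

PATH = "/tmp/demo_flow.sock"  # hypothetical path

def receiver() -> None:
    # Deliberately slow consumer: backpressure builds up in the kernel buffer.
    if os.path.exists(PATH):
        os.unlink(PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    while chunk := conn.recv(4096):
        time.sleep(0.1)  # pretend each chunk takes a while to process

def sender() -> None:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(PATH)
    while True:
        # Once the receiver's buffer is full, this call blocks: that IS the
        # flow control -- the sender cannot outrun the receiver.
        s.sendall(b"x" * 4096)
```

A named pipe (FIFO) behaves the same way: writes block once the pipe buffer is full.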

How to configure MassTransit in an unreliable network environment?

I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, be translated into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Just assume that the publishing server will continue to consume the real-life events, but now cannot publish its messages.
So, the question is: does MassTransit have some kind of mechanism to store messages locally until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I would have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you can store it in a persistent repository (a local database) and delegate to some other process the job of notifying, through MassTransit, the messages stored earlier. This approach is often referred to as "client high availability". It does not replace standard server-side HA (high availability) like the kind RabbitMQ implements, but it is a good approach in a distributed system like the one you described, because it helps a lot in server-failure scenarios (e.g. an issue on the RabbitMQ server causes the loss of messages that you still have in some client's store, so you can process them again).
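A minimal sketch of that store-and-forward idea, written in Python with sqlite3 and pika only to keep it short (MassTransit itself is .NET, so treat this purely as an illustration of the pattern; the table, exchange, and host names are hypothetical):

```python
import sqlite3
import pika  # third-party RabbitMQ client

# Hypothetical local store: events are committed here first, unconditionally.
db = sqlite3.connect("outbox.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox"
           " (id INTEGER PRIMARY KEY, body BLOB, sent INTEGER DEFAULT 0)")

def record_event(body: bytes) -> None:
    # Succeeds even while the RabbitMQ host is unreachable.
    db.execute("INSERT INTO outbox (body) VALUES (?)", (body,))
    db.commit()

def forward_pending() -> None:
    # Run periodically (or on reconnect): drain whatever accumulated offline.
    conn = pika.BlockingConnection(pika.ConnectionParameters("rabbit-host"))
    ch = conn.channel()
    ch.exchange_declare(exchange="events", exchange_type="topic", durable=True)
    pending = db.execute("SELECT id, body FROM outbox WHERE sent = 0").fetchall()
    for row_id, body in pending:
        ch.basic_publish(exchange="events", routing_key="telephony", body=body)
        db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    db.commit()
    conn.close()
```

The publishing side only ever talks to the local database, so telephony events are never lost to a broker outage; the relay loop is the only part that has to care about connectivity.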

Chat / System Communication App (Nodejs + RabbitMQ)

So I currently have a chat system running on NodeJS that passes messages via RabbitMQ, and each connected user has their own unique queue, subscribed and listening only to messages meant for them. The backend can also use this chat pipeline to communicate other system messages like notifications, friend requests, and other user-event-driven information.
Currently the backend has to loop and publish each message one by one per user, even if the payload is the same for, say, 1000 users. I would like to get away from that and be able to send the same message to multiple users, but not EVERY connected user
(example: notifying certain users that their friend has come online).
I considered implementing a rabbit queue system where all messages are pooled into the same queue; instead of rabbit fanning out to all the user queues, node takes these messages and emits them to the appropriate users via socket connections (to whoever is online).
[Diagram: proposed infrastructure]
This way the backend does not need to loop over 100s and 1000s of users and can send a single payload listing all the users the message should go to. I do plan to cluster the NodeJS servers together.
I was also wondering, since I've never done this in a production environment, will I need to track each socket ID?
Potential pitfalls I've identified so far:
slower, since 1000s of messages can pile up in a single queue.
manually storing socket IDs in order to transmit to users by hand.
offloading routing to NodeJS instead of RabbitMQ.
Has anyone done anything like this before? If so, what are your recommendations? Is it better to scale with unique per-user queues, or to pool grouped messages for many users into a smaller number of larger queues?
as a general rule, queue-per-user is an anti-pattern. there are some valid uses of this, but i've never seen it be a good idea for a chat app (in spite of all the demos that use this example)
RabbitMQ can be a great tool for facilitating the delivery of messages between systems, but it shouldn't be used to push messages to users.
I considered implementing a rabbit queue system where all messages are pooled into the same queue; instead of rabbit fanning out to all the user queues, node takes these messages and emits them to the appropriate users via socket connections (to whoever is online).
this is heading down the right direction, but you have to remember that RabbitMQ is not a database (see previous link, again).
you can't randomly seek specific messages that are sitting in the queue and then leave them there. they are first in, first out.
in a chat app, i would have rabbitmq handling the message delivery between your systems, but not involved in delivery to the user.
your thoughts on using web sockets are going to be the direction you want to head for this. either that, or Server Sent Events.
if you need persistence of messages (history, search, last-viewed location, etc) then use a database for that. keep a timestamp or other marker of where the user left off, and push messages to them starting at that spot.
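a hedged sketch of that history-plus-marker idea, with a hypothetical sqlite schema:

```python
import sqlite3
import time

# hypothetical schema: full history plus a per-user "where they left off" marker
db = sqlite3.connect("chat.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS messages (ts REAL, room TEXT, body TEXT);
CREATE TABLE IF NOT EXISTS last_seen (user TEXT PRIMARY KEY, ts REAL);
""")

def save_message(room: str, body: str) -> None:
    db.execute("INSERT INTO messages VALUES (?, ?, ?)", (time.time(), room, body))
    db.commit()

def replay_for(user: str, room: str) -> list:
    # everything since the user's marker, oldest first; then advance the marker
    row = db.execute("SELECT ts FROM last_seen WHERE user = ?", (user,)).fetchone()
    since = row[0] if row else 0.0
    rows = db.execute(
        "SELECT ts, body FROM messages WHERE room = ? AND ts > ? ORDER BY ts",
        (room, since)).fetchall()
    db.execute("INSERT OR REPLACE INTO last_seen VALUES (?, ?)", (user, time.time()))
    db.commit()
    return rows
```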
your concerns about tracking sockets for the users are definitely something to think about.
if you have multiple instances of your node server running sockets with different users connected, you'll need a way to know which users are connected to which node server.
this may be a good use case for rabbitmq - but not in a queue-per-user manner. rather, in a binding-per-user. you could have each node server create a queue to receive messages from the exchange where messages are published. the node server would then create a binding between the exchange and queue based on the user id that is logged in to that particular node server
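a sketch of that binding-per-user topology, using python's pika here just for brevity (your chat servers are node, so treat the exchange and wiring names as purely illustrative):

```python
import pika  # third-party RabbitMQ client

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare(exchange="chat", exchange_type="direct")
# ONE queue per node server, server-named, gone when the connection drops
server_queue = ch.queue_declare(queue="", exclusive=True).method.queue

def on_user_login(user_id: str) -> None:
    # a binding per user, not a queue per user: messages routed with this
    # user's id now land on THIS server's queue
    ch.queue_bind(queue=server_queue, exchange="chat", routing_key=user_id)

def on_user_logout(user_id: str) -> None:
    ch.queue_unbind(queue=server_queue, exchange="chat", routing_key=user_id)

def deliver(channel, method, properties, body):
    # method.routing_key is the user id; emit over that user's websocket here
    pass

ch.basic_consume(queue=server_queue, on_message_callback=deliver, auto_ack=True)
```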
this could lead to an overwhelming number of bindings in rmq, though.
you may need a more intelligent method of tracking which server has which users connected, or just ignore that entirely and broadcast every message to every node server. in that case, each server would publish an event through the websocket based on who the message should be delivered to.
if you're using a smart enough websocket library, it will only send the message to the people that need it. socket.io did this, i know, and i'm sure other websocket libraries are smart like this, as well.
...
I probably haven't given you a concrete answer to your situation, and I'm sure you have a lot more context to consider. hopefully this will get you down the right path, though.

Enumerating clients of a 389/Fedora Directory server

I have a Fedora Directory server that I need to shut down. In order to do so, I need to find a list of all clients currently authenticating to this server. Not being familiar with Fedora/389 Directory, I was wondering if there's an easy way to do that? My best option at this point seems to be to comb through the log files.
An LDAP-compliant server should send an unsolicited notification to clients about events transpiring between the client and server. The notification contains information that the client can use to take action. Therefore, properly coded clients should not care about the server being shut down. Clients that do not support the unsolicited notification should have that support added.
see also
LDAP: Programming Practices
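If you do end up combing the log files as a fallback, something like the following can pull the set of client addresses out of a 389 DS access log. It assumes the stock access-log line format ("connection from <ip>") and a hypothetical instance path, so adjust both for your install:

```python
import re

clients = set()
# path varies per instance: /var/log/dirsrv/slapd-<instance>/access
with open("/var/log/dirsrv/slapd-example/access") as log:
    for line in log:
        # new-connection lines record the client's source address
        m = re.search(r"connection from (\S+)", line)
        if m:
            clients.add(m.group(1))

for ip in sorted(clients):
    print(ip)
```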

ZeroMQ: Check if someone is listening behind Unix domain socket

Context: Linux (Ubuntu), C, ZeroMQ
I have a server which listens on ipc:// SUB ZeroMQ socket (which physically is a Unix domain socket).
I have a client which should connect to the socket, publish its message and disconnect.
The problem: if the server is killed (or otherwise dies unnaturally), the socket file stays in place. If a client attempts to connect to this stale socket, it blocks in zmq_term().
I need to prevent the client from blocking if the server is not there, but guarantee delivery if the server is alive but busy.
Assume that I can not track server lifetime by some external magic (e.g. by checking a PID file).
Any hints?
A non-portable solution seems to be to read /proc/net/unix and search there for the socket name.
Without seeing your code, all of this is guesswork... that said...
If you have a PUB/SUB pair, the PUB side will hang around to make sure its message gets through. Perhaps you're not using the right type of zmq pair; it sounds more like you want a REP/REQ pair instead.
That way, once you connect from the client (the REQ side), you can do a zmq_poll to determine whether the socket is available for writing. If it is, go ahead with your write; otherwise shut down the client and handle the error condition (if it is an error in your system).
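A sketch of that check with pyzmq, assuming a hypothetical ipc endpoint; setting LINGER to 0 also addresses the zmq_term() hang mentioned in the question:

```python
import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.setsockopt(zmq.LINGER, 0)       # don't block ctx.term() on unsent msgs
req.connect("ipc:///tmp/server.sock")  # hypothetical endpoint

poller = zmq.Poller()
poller.register(req, zmq.POLLOUT)
if poller.poll(1000):               # writable only once a peer is connected
    req.send(b"hello")
    poller.modify(req, zmq.POLLIN)  # now wait for the reply, with a timeout
    if poller.poll(1000):
        print(req.recv())
    else:
        print("server accepted the message but did not reply in time")
else:
    print("no server behind the socket; bailing out")

req.close()
ctx.term()                          # returns promptly thanks to LINGER=0
```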
Maybe you can try to connect to the socket first with your own native socket. If the connection succeeds, there's a good chance your publisher will work fine.
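A minimal sketch of that probe in Python; the socket path is hypothetical:

```python
import socket

def server_alive(path: str = "/tmp/server.sock") -> bool:
    # A stale socket file left by a dead server refuses the connection;
    # a live listener accepts it.
    probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        probe.connect(path)
        return True
    except (ConnectionRefusedError, FileNotFoundError):
        return False
    finally:
        probe.close()
```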
There is another solution. Don't use ipc:// sockets. Instead use something like tcp://127.0.0.101:10001. On most UNIXes that will be almost as fast as IPC because the OS recognizes that it is a local connection and shortcuts the full IP stack processing.
