Problems with boost.interprocess for bidirectional messaging using message_queue - linux

I am trying to implement a messaging system between two processes with boost.interprocess and message_queue.
First problem: One queue can only be used for sending messages from process A to B, not B to A.
Thus, I am using two queues in both processes. Process A listens/receives at Queue-A and sends on Queue-B; Process B listens/receives at Queue-B and sends on Queue-A.
I am unable to get the system to work with both queues. Depending on the order in which the processes call boost::interprocess::message_queue(boost::interprocess::open_or_create,...)
or
boost::interprocess::message_queue(boost::interprocess::open_only,...)
either one queue works, or the other, or neither.
Even if Process A creates both Queue-A and Queue-B and Process B only opens them, in one direction boost::interprocess gets stuck in the receive function and never wakes up.
1) Is it possible to get bidirectional messaging/signalling to work with interprocess::message_queue using two queues in each process?
2) Is there a better way to get bidirectional messaging without using message_queue?

I did not receive any comments on this. The solution was to not use boost::interprocess::message_queue. With the help of boost/interprocess/shared_memory_object I wrote my own new, simple library for unidirectional interprocess messaging: https://github.com/svebert/InterprocessMsg

Related

ActiveMQ CMS: Is there a way to use it without threading?

I took the sample code from Apache here: https://activemq.apache.org/components/cms/example
(The producer section specfically) and tried to rewrite it so it doesn't create any threads for producing. And instead, in my program's main thread, creates a producer object and sets up the connection, session, destination, and so on. Then it sends messages using a message producer. This is all done in a singleton so that my program just has one Producer object and just goes to it whenever it needs to dump any message to one of my queues. This example code seems to create a producer for every thread, set it up everytime, just to send a message, then deletes everything. And it does this for every time you want to want to produce something from your program.
I am crashing right when I call send on a message producer with any given message. I found out after some digging that after the send call it tries to lock a mutex and enter a critical section. I guess this is for threading? I don't use threads at all in my code, so I guess it crashes because of that... Does anyone know a way to bypass this? I don't want to use multiple threads, and I won't need to worry about two threads trying to call send at the same time, or whatever problem the mutexes are trying to solve.
You don't need to create a thread to run the producer in, but internally the library is going to use a couple of threads, as that is necessary to meet the API requirements. Also, just because you don't use multiple threads doesn't mean others won't, so the mutex is an internal requirement.
You are free to modify the example to only create a producer inside the main thread of the application, the example uses two threads because it is acting as both a producer and consumer.
One likely cause of the error you are receiving is that you did not initialize the ActiveMQ-CPP library:
activemq::library::ActiveMQCPP::initializeLibrary();

Single Camera access by two processes at the same time

I want to use one Camera for two processes / threads, e.g.
a) live streaming and
b) image processing at the same time.
Use Case:
An application which can handle multiple requests, based on user requests.
a) User can request – Detect cam-1 and do a Live streaming
b) Later, user can request – Detect Motion / Image processing using the same cam-1, while process (a) is doing the live streaming.
The challenge I see is accessing the same camera from two different processes at the same time. Is there a way to reroute the data / pointers of the camera data to a different process?
Note: OS -Windows
Any help will be appreciated !!
Regards, AK
Well, doable. But ..
Given the above, there are a few things to respect when designing the target software approach. One of them is the fact that the camera is a device, which restricts it to having a single "commander-in-charge", rather than permitting a schizophrenic "duty" under several concurrent bosses.
This said, the solution lies in a smarter design of the acquired data stream, which can be delivered to several concurrent consuming processes.
For more hints on such a design concept, read this Answer to a similarly motivated Question.
Avoid letting two threads access the camera at the same time.
If the driver allows it, you may work with multiple buffers, used in a round-robin fashion to store the live stream. Their content can be continuously sent to the display, but when desired you can leave one on the side and reserve it to allow for longer processing.
If this is not possible, you can copy every desired image to a processing buffer when needed.
If your system must be very responsive and process the images in real-time, there is probably no need for two threads !
In any case, if you are working with two threads, there is no need to "reroute the pointers", you simply let the threads access the buffers.
If they are processes rather than threads, then you can establish the buffers in a shared memory section.
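The round-robin buffer idea above can be sketched as follows. This is a minimal sketch with assumptions: `grab` stands in for whatever camera-driver call fills a frame, the pool size of 4 is arbitrary, and `capture` is assumed to be called from a single capture thread:

```cpp
#include <array>
#include <cstdint>
#include <mutex>
#include <vector>

// One slot per in-flight frame; a slot is touched only while its mutex is held.
struct FrameSlot {
    std::mutex m;                // protects data
    std::vector<uint8_t> data;   // one frame
};

class FrameRing {
    std::array<FrameSlot, 4> slots_;  // round-robin pool
    size_t next_ = 0;                 // advanced only by the capture thread
public:
    // Capture loop: write the newest frame into the next slot and
    // return its index so consumers can refer to it.
    template <class Grab>
    size_t capture(Grab&& grab) {
        size_t i = next_;
        next_ = (next_ + 1) % slots_.size();
        std::lock_guard<std::mutex> lk(slots_[i].m);
        grab(slots_[i].data);  // stand-in for the camera read
        return i;
    }
    // Consumer: copy slot i out for longer processing, so the ring
    // can keep cycling while the copy is analysed.
    std::vector<uint8_t> snapshot(size_t i) {
        std::lock_guard<std::mutex> lk(slots_[i].m);
        return slots_[i].data;
    }
};
```

For the display path you would read the most recent slot under its lock; for motion detection you would take a snapshot and work on the copy.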

Can I use child process or cluster to do custom function calls in node?

I have a node program that does a lot of heavy synchronous work. The work that needs to be done could easily be split into several parts. I would like to utilize all processor cores on my machine for this. Is this possible?
From the docs on child processes and clusters I see no obvious solution. Child processes seem to be focused on running external programs, and clusters only work for incoming http connections (or have I misunderstood that?).
I have a simple function var output = fn(input) and would just like to run it several times, spread all the calls across the cores on my machine and provide the result in a callback. Can that be done?
Yes, child processes and clusters are the way to do that. There are a couple of ways of implementing a solution to your problem.
Your server creates a queue and manages that queue. Whenever you need to call your function, you will drop it into the queue. You will then process the queue N items at a time, where N equals the number of your cores. When you start processing, you will spawn a child process, probably either using spawn or exec, with the argument being another standalone Node.js script, along with any additional parameters (it's just a command line call, basically). Inside that script you will do your work, and emit the result back to the server. The worker is then freed up.
You can create a dedicated server with cluster, where all it will do is run your function. With the cluster module, you can (once again) create N workers and delegate work to them.
Now this may seem like a lot of work, and it is. For that reason you should use an existing library, as this is, for the most part, a solved problem at this point. I really like redis-based queues, so if you're interested in that, see this answer for some queue recommendations.

Is the lock necessary when a host attempts to receive the data from different sockets

I have three machines A, B, and C that are all connected to each other. If A and B try to send data to C simultaneously, can C use two different threads to receive the respective data without using any locks? Here C is connected to A and B through different sockets. Thanks in advance.
Well, yes - no explicit locks anyway. The IP stack will have its own internal locks, but I don't think that's what you are asking.
You already appreciate that multiple processes can communicate simultaneously with different servers, and multiple processes implies different threads. The IP stack is therefore thread-safe.
Given the usual general care with any shared data inside one multithreaded process (as mentioned by rockstar's comment), there is no problem with those threads communicating with IP endpoints on different peers/hosts. This is very common and works fine.
The two threads on C can safely communicate independently with A and B.
Go ahead - try it!
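As a minimal illustration, the sketch below uses `socketpair()` to stand in for C's two accepted connections (the names and payloads are made up). Each thread owns one socket and writes only its own string, so no lock is needed:

```cpp
#include <string>
#include <sys/socket.h>
#include <thread>
#include <unistd.h>
#include <utility>

// C receives from two peers on two sockets, one thread per socket.
// Returns what each thread read.
std::pair<std::string, std::string> receive_from_two_peers() {
    int a[2], b[2];  // a: the A<->C connection, b: the B<->C connection
    socketpair(AF_UNIX, SOCK_STREAM, 0, a);
    socketpair(AF_UNIX, SOCK_STREAM, 0, b);

    std::string fromA, fromB;  // each written by exactly one thread
    std::thread ta([&] {       // C's receive thread for A
        char buf[64];
        ssize_t n = read(a[1], buf, sizeof(buf));
        fromA.assign(buf, n > 0 ? n : 0);
    });
    std::thread tb([&] {       // C's receive thread for B
        char buf[64];
        ssize_t n = read(b[1], buf, sizeof(buf));
        fromB.assign(buf, n > 0 ? n : 0);
    });

    write(a[0], "from-A", 6);  // A sends
    write(b[0], "from-B", 6);  // B sends
    ta.join();
    tb.join();

    close(a[0]); close(a[1]);
    close(b[0]); close(b[1]);
    return {fromA, fromB};
}
```

A lock would only enter the picture if both threads then pushed their data into one shared structure, as the second answer notes.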
[Posting my comment as an answer, as it is not wrong and makes sense :P, even referenced.]
I would say that you can have 2 threads: one thread listening for data from socket 1 and the other thread listening for data from socket 2.
But whether you need a lock depends on what you do with the data. Do you write it to some buffer? Since threads share the data, code, and heap segments, you must be careful when you write this received data, in which case you need to lock.
This is my basic understanding. I shall wait for more knowledgeable answers here.

ServiceStack: How to make InMemoryTransientMessageService run in a background

What needs to be done to make InMemoryTransientMessageService run in a background thread? I publish things inside a service using
base.MessageProducer.Publish(new RequestDto());
and they are executed immediately inside the service-request.
The project is self-hosted.
Here is a quick unit test showing the blocking of the current request instead of deferring it to the background:
https://gist.github.com/lmcnearney/5407097
There is nothing out of the box. You would have to build your own. Take a look at ServiceStack.Redis.Messaging.RedisMqHost - most of what you need is there, and it is probably simpler (one thread does everything) to get you going when compared to ServiceStack.Redis.Messaging.RedisMqServer (one thread for queue listening, one for each worker). I suggest you take that class and adapt it to your needs.
A few pointers:
ServiceStack.Message.InMemoryMessageQueueClient does not implement WaitForNotifyOnAny(), so you will need an alternative way of getting the background thread to wait for incoming messages.
Closely related, the ServiceStack.Redis implementation uses topic subscriptions, which in this class are used to transfer the WorkerStatus.StopCommand; this means you have to find an alternative way of getting the background thread to stop.
Finally, you may want to adapt ServiceStack.Redis.Messaging.RedisMessageProducer, as its Publish() method pushes the requested message to the queue and pushes the channel / queue name to the TopicIn queue. After reading the code you can see how the three points tie together.
Hope this helps...
