I am using Rust to interact with a Stream Deck using the hid (https://crates.io/crates/hid) crate.
I want to:
Read from the hid device to register button presses and releases.
Write to the device to set images on the buttons.
Since reading from the device is blocking, I want to do this on different threads.
For interacting with HID devices, the library creates a Handle (https://github.com/meh/rust-hid/blob/master/src/handle.rs#L8), which cannot be moved between threads because it contains a ptr: *mut hid_device field
(the trait `Send` is not implemented for `*mut c_void`)
Is there some way I can use the handle from multiple threads? On one I would read and on the other I would write.
There are multiple options, but I'd recommend using channels, for example Tokio's mpsc channels.
The idea is to create a dedicated thread for talking with the device, and only use Handle in that thread. The events and commands that you want to receive or send can be posted to the respective channel, and handled in your main "controller" thread.
Check out this tutorial.
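A rough sketch of that pattern using std::sync::mpsc (Tokio's mpsc works the same way). The Command and Event types and the commented-out handle calls are placeholders, not the real hid crate API; the button press is simulated so the example is self-contained:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-ins for the real device events/commands.
enum Command { SetImage(u8), Quit }
enum Event { Pressed(u8) }

// Spawns the dedicated device thread and returns the first pressed key.
fn run_bridge() -> u8 {
    let (cmd_tx, cmd_rx) = mpsc::channel::<Command>();
    let (evt_tx, evt_rx) = mpsc::channel::<Event>();

    // Dedicated device thread: the only place the non-Send Handle would live.
    let device = thread::spawn(move || {
        // let handle = open_streamdeck(); // assumption: opened here, never moved out
        loop {
            // Real code would block on a device read; simulated button press:
            if evt_tx.send(Event::Pressed(3)).is_err() { break; }
            match cmd_rx.recv() {
                Ok(Command::SetImage(_key)) => { /* write the image via the handle */ }
                Ok(Command::Quit) | Err(_) => break,
            }
        }
    });

    // Controller side: receive events, post commands back.
    let Event::Pressed(key) = evt_rx.recv().unwrap();
    cmd_tx.send(Command::SetImage(key)).unwrap();
    cmd_tx.send(Command::Quit).unwrap();
    device.join().unwrap();
    key
}

fn main() {
    assert_eq!(run_bridge(), 3);
}
```

The key point is that the Handle never crosses a thread boundary; only the (Send) command and event values do.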
Related
If I have a type that is not safe to send between threads, I wrap it in Arc<Mutex<T>>. This way I'm guaranteed that when I access it, I have to lock it first. However, Rust still complains when T does not implement Send + Sync.
Shouldn't it work for any type? In my case, T is a struct that accesses a C object through FFI, so I cannot mark it as Sync + Send.
What can I do in this case and why won't Rust accept Arc<Mutex<T>> as safe to share between threads?
Just because you are the only one accessing something (at a time) does not mean it suddenly becomes okay to access it from different threads. A mutex merely prevents one issue: data races. There may be other problems with moving objects across threads.
For example, it's common for low-level windowing APIs to be callable only from the main thread. Many low-level APIs are also only callable from the thread they were initialized in. If you wrap such APIs in Rust objects, you don't want those objects moving across threads, no matter what.
There is a multithreaded Qt 5 application in which threads emit signals to each other. For example, a button-click signal in the GUI goes to three separate threads, and each thread performs its own function. Is there an easy way to connect all signals to some object, let's call it SignalHub? This SignalHub would receive all signals from all threads or objects, and any thread or object could subscribe to the signals it needs. Something similar to D-Bus, but only for several threads within a single Qt application? Each thread or object should not receive its own signal. The purpose of this is to reduce the amount of code (there are several threads in the application, and each has multiple signals and slots). Ideally, the necessary signals would be connected to (and from) the SignalHub only in a new object's or thread's constructor.
Signals are thread-safe, and so are connect() and disconnect(). That means you can declare any object you want to be your "signal hub" and connect slots across threads as you please; which threads the involved objects belong to doesn't matter. No problems.
To avoid an object receiving signals it sent itself, you can simply do something along these lines:
void MyObject::someSlot() {
    if (sender() == this)
        return;  // ignore signals this object emitted itself
    // ...handle the signal...
}
I'm trying to share frames (images) that I receive from a USB camera (Logitech C270) between two processes so that I can avoid a memcpy. I'm using the memory-mapping streaming I/O method described here, and I can successfully get frames from the camera after using v4l2_mmap. However, I have another process (for image processing) that has to use the image buffers after the dequeue and then signal the first process to queue the buffer again.
Searching online, I found that opening a video device multiple times is allowed, but when I try to map (I tried both v4l2_mmap and plain mmap) in the second process after a successful v4l2_open, I get an EINVAL error.
I found this PDF, which talks about implementing multi-map in v4l2 (not official), and was wondering whether this is implemented. I have also tried the user-pointer streaming I/O method, whose documentation explicitly states that shared memory can be used with it, but I get EINVAL when I request buffers (according to the documentation on linuxtv.org, this means the camera doesn't support user-pointer streaming I/O).
Note: I want to keep the code modular, hence two processes. If this is not possible, doing all the work in a single process (multiple threads and a global frame buffer) is still an option.
Using standard shared-memory calls is not possible, as the two processes have to map the video device file (/dev/video0), and I cannot place it under /dev/shm.
The main problem with multi-consumer mmap is that this needs to be implemented on the device driver side. That is: even if some devices might support multi-map, others might not.
So unless you can control the camera that is being used with your application, you will eventually come across one that does not, in which case your application would not work.
So in any case, your application should provide means to handle non multi-map devices.
By the way, you do not need multiple processes to keep your code modular.
Multiple processes have their merits (e.g. privilege separation, crash resilience, ...), but they can also encourage code duplication...
This may not be relevant now, but you don't need the full multi-consumer mechanism to do this. I have used Python to hand off the processing of the mmap buffers to multiple processes (Python multithreading only allows one thread to execute at a time because of the GIL).
If you're running multithreaded, worker threads can pick up a buffer and process it independently when triggered by the master thread.
Since the code relies heavily on Python's multiprocessing support, I won't post it here, as it wouldn't make sense in other languages.
I have a multithreaded application where I want to allow all but one of the threads to run concurrently. However, when one specific thread wakes up, I need the rest of the threads to block.
My current implementation is:
void ManyBackgroundThreadsDoingWork()
{
    AcquireMutex(mutex);
    DoTheBackgroundWork();
    ReleaseMutex(mutex);
}

void MainThread()
{
    AcquireMutex(mutex);
    DoTheMainThreadWork();
    ReleaseMutex(mutex);
}
This works, in that it keeps the background threads out of the critical section while the main thread is doing its work. However, there is a lot of contention for the mutex among the background threads even when they don't need it: the main thread runs only intermittently, and the background threads can safely run concurrently with each other, just not with the main thread.
What I've effectively done is reduce a multithreaded architecture to a single-threaded one using locks... which is silly. What I really want is an architecture that is multithreaded most of the time, but waits while a small operation completes and then goes back to being multithreaded.
Edit: An explanation of the problem.
What I have is an application that displays multiple video feeds coming from pcie capture cards. The pcie capture card driver issues callbacks on threads it manages into what is effectively the ManyBackgroundThreadsDoingWork function. In this function I copy the captured video frames into buffers for rendering. The main thread is the render thread that runs intermittently. The copy threads need to block during the render to prevent tearing of the video.
My initial approach was to simply do double buffering but that is not really an option as the capture card driver won't allow me to buffer frames without pushing the frames through system memory. The technique being used is called "DirectGMA" from AMD that allows the capture card to push video frames directly into the GPU memory. The only method for synchronization is to put a glFence and mutex around the actual rendering as the capture card will be continuously streaming data to the GPU Memory. The driver offers no indication of when a frame transfer completes. The callback supplies enough information for me to know that a frame is ready to be transferred at which point I trigger the transfer. However, I need to block transfers during the scene render to prevent tearing and artifacts in the video. The technique described above is the suggested technique from the pcie card manufacturer. The technique, however, breaks down when you want more than one video playing at a time. Thus, the question.
You need a lock that supports both shared and exclusive locking modes, sometimes called a readers/writer lock. This permits multiple threads to get read (shared) locks until one thread requests an exclusive (write) lock.
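As a rough sketch of the pattern (in Rust, using std::sync::RwLock; the frame copies and the render are simulated with a counter and an empty critical section), the copy threads take the shared lock and the render thread takes the exclusive one:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use std::thread;

// Copy threads run concurrently under shared (read) locks;
// the render step excludes all of them with the exclusive (write) lock.
fn run(copy_threads: u32) -> u32 {
    let gate = Arc::new(RwLock::new(())); // guards the render critical section
    let copied = Arc::new(AtomicU32::new(0));

    let handles: Vec<_> = (0..copy_threads)
        .map(|_| {
            let gate = Arc::clone(&gate);
            let copied = Arc::clone(&copied);
            thread::spawn(move || {
                let _shared = gate.read().unwrap(); // many copiers at once
                copied.fetch_add(1, Ordering::SeqCst); // stand-in for the frame copy
            })
        })
        .collect();

    {
        let _exclusive = gate.write().unwrap(); // blocks all copiers
        // render the scene here
    }

    for h in handles {
        h.join().unwrap();
    }
    copied.load(Ordering::SeqCst)
}

fn main() {
    assert_eq!(run(4), 4);
}
```

In C or C++ the equivalent primitives would be pthread_rwlock_t or std::shared_mutex; the structure is the same: shared lock in the copy callbacks, exclusive lock around the render.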
Can multiple processes communicate through message queues, or are they only for communication between threads? I want to let two different processes communicate. I don't want to use shared memory for various reasons; I want to use message queues instead. Is this doable?
Yes, this is possible. Call the PostMessage function to add a message to the queue for a window, or PostThreadMessage to add a message to the queue for a thread. (Obviously, the thread must be running a message loop.)
The WM_COPYDATA message is explicitly designed for this purpose. It does the marshaling for you. Of course, it is a pretty basic form of marshaling: all it knows how to do is marshal a blob of bytes. It's your responsibility to interpret that blob of bytes into something useful.
There is a complete example of copying data between processes here on MSDN.
It is also worth pointing out that you don't even need WM_COPYDATA if the amount of information that you want to pass is so small that it will fit inside of wParam or lParam.
Message queuing is a construct for inter-process communication (IPC).
You can also build a data structure in the memory of a single process that implements a queue; this can be used for fast processing, e.g. of Windows messages. Both must be distinguished from MSMQ.