I have a set of old MFC DLLs that act as a frame-buffer emulation. One of them processes drawing commands and blits the results to an in-memory bitmap; the other is the "main" DLL, which controls windowing and events by running a CWnd in its own Afx thread and displays the in-memory bitmap.
The applications that link against these basically have no idea they are there: they simply call "init" and "update" while running, expecting to see pixel data appear in a Windows window instead of on actual hardware.
Now I need to port this to Linux, and I am looking at something like GTK. During my investigation it appears that GTK expects to be in control of the main loop and is event-driven, which is expected of a GUI toolkit; however, many other toolkits also allow the user to pump the main loop manually instead of handing off control and communicating only through messages.
Ideally, I'd want to just kick off GTK in its own thread and let it handle windowing and messaging alone, blitting when "update" is called, while the user's main app runs as the critical main thread.
If we can't just drop gtk_main() into a separate thread, I see that gtk_main_iteration() may be used instead. There are many questions close to this one where users are told GTK shouldn't be used this way, but the same could be said of our CWnd MFC implementation. At this point there is no changing how these DLLs work: the user's app must be the "main", and the processing/windowing must be transparent.
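Concretely, the pattern I have in mind looks like this rough sketch (my own sketch, not tested: fb_init/fb_update are hypothetical names for our exported entry points, every GTK call stays on the dedicated GTK thread, "update" marshals work over with g_main_context_invoke, and startup races are ignored for brevity):

    // Rough sketch: fb_init/fb_update stand in for our exported entry points.
    // All GTK calls happen on the one GTK thread; other threads only queue
    // work onto its main loop.
    #include <gtk/gtk.h>
    #include <thread>

    static GtkWidget *g_window = nullptr;   // owned by the GTK thread

    static void gtk_thread_main()
    {
        gtk_init(nullptr, nullptr);
        g_window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_widget_show_all(g_window);
        gtk_main();                         // this thread owns the main loop
    }

    extern "C" void fb_init()               // called once by the user's app
    {
        std::thread(gtk_thread_main).detach();
    }

    static gboolean do_blit(gpointer)
    {
        // Runs on the GTK thread: safe to touch widgets here.
        if (g_window)
            gtk_widget_queue_draw(g_window);
        return G_SOURCE_REMOVE;
    }

    extern "C" void fb_update()             // called by the user's app
    {
        // Never call GTK from the caller's thread; marshal instead.
        g_main_context_invoke(nullptr, do_blit, nullptr);
    }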
The other option is to just use raw X11, but I'd really like to have other widgets easily available (toolbars, menus), the XShm extension used transparently, resource management, and so on.
Can this be done in GTK, or are there better options?
I am trying to build 2 applications running in separate processes. One application will display live video from a camera (the server) and the other will overlay a UI (the client) on top of that video. The solution requires low latency, so I would like to render both without going through the OS compositor.
The solution I am trying to implement involves creating a shared OpenGL context or texture so that the UI can render its part to some off-screen buffer/texture.
After every live image frame is rendered, the server can take the information from the off-screen buffer/texture and render it on top.
This way no latency is added by synchronization of the processes. The server will take the latest image from the UI if one is ready; if it is not ready, the server shouldn't wait for it, but use a previous image instead.
How can I pass a texture or context between processes?
The CreateContext function can take a pointer to another context and make it shared, but as far as I understand, that address will not be valid outside the process's address space.
These days the "cleanest" way to share GPU resources between processes is to create those resources using Vulkan, export them into file descriptors (POSIX) or HANDLEs (Win32), and import those into OpenGL contexts created at either side. The file descriptors you can pass by the usual methods (sendmsg with SCM_RIGHTS, or pidfd_getfd, or open("/proc/${PID}/fd/${FD}")).
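For the fd-passing step, the classic SCM_RIGHTS dance looks roughly like this (a minimal sketch; send_fd is a hypothetical helper name, and error handling is elided):

    // Pass one file descriptor over a connected AF_UNIX socket.
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int sock, int fd)
    {
        char dummy = 'x';                   // must send at least one byte
        struct iovec iov;
        iov.iov_base = &dummy;
        iov.iov_len  = 1;

        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov        = &iov;
        msg.msg_iovlen     = 1;
        msg.msg_control    = ctrl;
        msg.msg_controllen = sizeof(ctrl);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));  // the fd rides in the cmsg

        return sendmsg(sock, &msg, 0);      // the peer recvmsg()s its own copy
    }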
Exporting from Vulkan:
https://www.khronos.org/registry/vulkan/specs/1.2-khr-extensions/html/chap46.html#VK_KHR_external_memory_fd (ff.)
Importing into OpenGL:
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects.txt
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects_fd.txt
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects_win32.txt
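To give a feel for the import side, here is a minimal sketch of the OpenGL half (assumptions: a loader such as GLEW exposes GL_EXT_memory_object / GL_EXT_memory_object_fd, a GL context is current, and fd/size/width/height describe the Vulkan allocation you received):

    #include <GL/glew.h>

    GLuint import_texture(int fd, GLuint64 size, GLsizei width, GLsizei height)
    {
        GLuint mem = 0, tex = 0;
        glCreateMemoryObjectsEXT(1, &mem);
        // GL takes ownership of fd on success; do not close it yourself.
        glImportMemoryFdEXT(mem, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // Back an immutable texture with the imported memory at offset 0.
        glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, mem, 0);
        return tex;
    }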
Doing this with just "pure" OpenGL requires a lot of hacks. One way would be to force indirect contexts (at the cost of modern capabilities, due to the lack of GLX support for them) and share the X11 IDs. Another method is to use ptrace to access the mapped buffers in the other process. Either is quite daunting and a lot more work to implement properly (BT;DT.) than setting up a Vulkan instance, creating all the textures therein, and then importing them into OpenGL.
Disclaimer: I would like general guidance on an approach for a project I am working on, so the question is very broad.
I am currently trying to build a GUI that handles serial communication with an Arduino, a USB camera (the camera has its own Python library for controls), and real-time data in .dat format that gets updated while the GUI is running.
Right now I am using Python threading to do all of these simultaneously, and I only interact with the script through Python's input function. Once the threads start running, I cannot really interact with the script.
I have 3 separate threads running:
1. a thread that saves images from the camera
2. a thread that sends signals to the Arduino at given random intervals
3. a thread that waits for an input to terminate the main thread
Everything works as I desire, but I wish to add a GUI to make things more straightforward for others to use the program.
I realized that Qt actually offers all the capabilities I wish to implement as part of this program, yet I cannot fully grasp the scope of the Qt library functions I will need to implement everything.
My understanding is that I could use a combination of QWidgets, QTimer, and QThread to try something, but I would like some guidance on a more conventional approach to designing such a multitasking GUI. I would also like to display real-time data on graphs, including images from the camera and voltage data recorded from the data files that get updated by another program (the data gets written to another folder). The program requires tracking of time from start to finish, and I know that threading can be very confusing when it comes to tracking these times. Any reference will be greatly appreciated.
Thank you all.
Hi, I'm currently working on a video-coaching program for recording and replaying video, as well as showing delayed real-time video and tracking placement via color.
The software runs on Linux, on a 4-core ODROID. I initially made it multithreaded, with the threads implemented as part of each new class and each thread taking care of its own GUI elements.
I've since found out that I need to show all GUI elements/video in the main/GUI thread. I've previously used OpenCV and Boost, but it seems that using Qt might be a better idea, since some of the code already depends on the Qt library. I am currently a novice at programming and not very familiar with OpenCV, Qt, or threading.
My question is:
Is this relatively sound as a structure for the program, or is there something inherently wrong with how I am planning to do it now?
Main/GUI thread
- shows all visual & video content
- starts a thread for the ButtonControl object

ButtonControl
- handles all button input, controlling what happens in the program
- depending on which buttons are pressed, starts and ends threads such as:
  - StoreToFile object (starts storing video to a file, while sending a video stream to the GUI thread to show what is being stored in real time)
  - ReadFromFile object (reads the currently stored file and sends the data to the GUI thread for display)
  - DelayedVideoStream object (stores video to a buffer and shows a continuous delayed view of what happened 5 seconds in the past)
  - ColorTracking object (tracks where a color is placed in the image)
Kind regards, and thank you for taking the time to look at my question.
TL;DR: is a structure where threads are implemented as classes and the image data is sent back to the GUI/main thread a decent way to build a multithreaded program?
Performance-wise, the best approach is not to deal with threads directly at all, but to use QtConcurrent::run. It is safe to paint QImages that are simply passed via signals to a GUI object for display. I wrote a complete example demonstrating that approach; it leads to some very concise and easy-to-understand code, thanks to related code being adjacent.
If you do want to use explicit threads, it is much easier not to derive from QThread, but to simply move various worker objects into their own threads and have them communicate via signals and slots. I have a complete example for that approach as well.
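To illustrate the second pattern, here is a minimal self-contained sketch (my illustration, not the linked example; Worker and Viewer are made-up names, and the camera work is faked with a solid-color QImage):

    // Worker object moved to a QThread, talking to the GUI via queued
    // signals. Build as main.cpp with qmake/CMake (moc required).
    #include <QtWidgets>

    class Worker : public QObject {
        Q_OBJECT
    public slots:
        void process() {
            QImage frame(640, 480, QImage::Format_RGB32);
            frame.fill(Qt::darkBlue);        // stand-in for real camera work
            emit frameReady(frame);
        }
    signals:
        void frameReady(const QImage &frame);
    };

    class Viewer : public QLabel {
        Q_OBJECT
    public slots:
        void showFrame(const QImage &frame) {  // runs in the GUI thread
            setPixmap(QPixmap::fromImage(frame));
        }
    };

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);

        QThread thread;
        Worker worker;
        worker.moveToThread(&thread);        // worker's slots now run in `thread`
        thread.start();

        Viewer viewer;
        viewer.show();

        // Cross-thread connection: the QImage is copied over safely.
        QObject::connect(&worker, &Worker::frameReady,
                         &viewer, &Viewer::showFrame);
        QMetaObject::invokeMethod(&worker, "process");  // queued into `thread`

        int rc = app.exec();
        thread.quit();
        thread.wait();
        return rc;
    }
    #include "main.moc"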
Currently, I am able to hook into a Direct3D application and draw custom stuff onto its surface. However, I would like to suspend that application and then draw something else.
Is this even remotely possible? Something like creating my own Direct3D window on top of that application?
I'm targeting only Windows 7, but the application I want to draw on uses only DirectX 9.
The problem is that I have very little experience with DirectX in general.
Sort of.
You're working with two different elements here. One is quite large but not particularly complex: hooking D3D. The other ("suspending" the app) is simple within that, but you don't quite want what you think you want.
To hook D3D by the simplest method, you need to intercept the call to Direct3DCreate9 and return your own IDirect3D9, which later creates and returns your own IDirect3DDevice9. This will give you full control over the app's render process.
In order to "suspend" it, you need to wait for the desired trigger, then, in your IDirect3DDevice9::Present, run your own event loop. This will, for all intents and purposes, suspend execution of the original app's code, but not the process itself (allowing your code and event loop to keep processing). There will be some limitations, and you may not be able to consume window/Windows messages (simply), but it will give you full control and effectively pause the original app.
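As an illustration (a hypothetical sketch, not the exact mechanism; *suspended would be cleared by whatever trigger you choose):

    // Called from inside the hooked Present, so the app's render thread
    // spins here, pumping messages, until something clears *suspended.
    #include <windows.h>

    void RunSuspendLoop(volatile bool *suspended)
    {
        MSG msg;
        while (*suspended) {
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            Sleep(10);                      // idle politely while "paused"
        }
    }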
Note, however, that you must intercept and reroute execution in every thread you want to "suspend": the trick only affects a single thread, and you don't want physics or AI crunching on while rendering and the UI are paused.
You need to perform your overlay drawing, whatever that may be, during your loop or your IDirect3DDevice9::Present hook, then call the real device's Present method as needed. If you want to run multiple frames of your overlay, call the real Present repeatedly before returning from your Present hook; tweak as necessary. Rendering here is done pretty much normally (check out general D3D tutorials for that), but there is one major catch: the device's state is unknown, may be incompatible, and must be "untouched" on return. This is handled simply by caching an IDirect3DStateBlock9 created from the device immediately after the device is created. In your Present hook, create another state block capturing the state on entrance, restore the clean state block, run your code, then restore the entrance state block. That lets you work with any states, off a fresh slate, without damaging the device's state (I use this in practice; it works great).
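In code, the state-block dance looks something like this sketch (DeviceHook and DrawOverlay are placeholder names; m_cleanState was captured right after device creation, before the app changed any state):

    #include <d3d9.h>

    struct DeviceHook {
        IDirect3DDevice9     *m_device;      // the real device
        IDirect3DStateBlock9 *m_cleanState;  // cached right after CreateDevice

        void DrawOverlay();                  // ordinary D3D9 rendering

        HRESULT Present(const RECT *src, const RECT *dst,
                        HWND wnd, const RGNDATA *dirty)
        {
            IDirect3DStateBlock9 *appState = nullptr;
            m_device->CreateStateBlock(D3DSBT_ALL, &appState); // snapshot app state
            m_cleanState->Apply();           // start from a fresh slate

            DrawOverlay();

            appState->Apply();               // hand the app's state back untouched
            appState->Release();
            return m_device->Present(src, dst, wnd, dirty);
        }
    };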
If you want some rather extensive examples of how this works, I'd suggest checking out the Voodoo Shader project, which has full D3D8 and 9 hooks, including everything needed for overlays [/shameless own-project promotion]. Feel free to reuse any of the concepts, or comment with further questions; this certainly isn't all the details that may be useful to you.
This is a very complex thing to accomplish, as it is very much a hack. The only people you see doing such things are Steam, TeamSpeak, Xfire, Fraps, and a few hard-core devs.
There are kits out on the internet that show how to inject a DLL into the address space of the target application to achieve such a feat, as well as methods such as proxy DLLs.
Proxy DLL:
http://www.codeguru.com/cpp/g-m/directx/directx8/article.php/c11453
Injection:
http://www.progamercity.net/d3d/372-c-directx9-0-hooking-via-detours.html
Good luck, this will take you a while.
My UI uses a QTreeView with a QFileSystemModel so the user can select folders and files. The documentation for QFileSystemModel says that the file structure is updated on a separate thread, which would mean the UI is not blocked. However, this is not the case for me, and I can't figure out the discrepancy or how other people avoid running into this issue. After debugging, I noticed that the QFileSystemModel _q_fileSystemChanged slot, which takes most of the time, is still executed on the main UI thread, which makes sense. The question is: how can the documentation claim that it will not block the UI? Is there a solution? Am I misunderstanding something?
To reproduce:
- Create a QTreeView with a QFileSystemModel
- Set the root path to "" or "/"
- Set a breakpoint in the QFileSystemModel _q_fileSystemChanged slot
- Expand one of the drives after the app loads
Problem:
- The slot is called on the UI thread, blocking the app until it finishes.
There are ways to make the file parsing faster, but I really need it to execute on a separate thread and notify when the data is populated and ready for the QTreeView.
Thanks,
Innokenty
I think the reason for this could be the icons. Within the _q_fileSystemChanged() slot, fileInfoGatherer.getInfo() gets called, which, among other things, resolves the icons for the paths. In its current design, QFileIconProvider uses QIcon to represent icons, and QIcon can only be used in the UI thread. QImage seems to be the only class allowed in other threads, but it may be too expensive to create a QImage in the background thread and convert it to a QIcon in the UI thread.
So it is possible that the platform implementation of QFileIconProvider is slow on network paths under some circumstances and therefore slows down the main UI thread.
I don't know if this is the source of your problem, but at least it should explain why _q_fileSystemChanged() is called on the UI thread.
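If the icons are indeed the bottleneck, one mitigation to try is installing a trivial icon provider so the gatherer never performs expensive per-path platform lookups. A sketch (FlatIconProvider is a made-up name, and the theme icon names are assumptions; substitute your own QIcon resources, especially on Windows where QIcon::fromTheme may come back empty):

    #include <QtWidgets>

    // Trivial provider: two fixed icons, no per-path platform lookups.
    class FlatIconProvider : public QFileIconProvider {
    public:
        QIcon icon(IconType type) const override {
            return (type == Folder) ? m_dir : m_file;
        }
        QIcon icon(const QFileInfo &info) const override {
            return info.isDir() ? m_dir : m_file;
        }
    private:
        QIcon m_dir  = QIcon::fromTheme("folder");
        QIcon m_file = QIcon::fromTheme("text-x-generic");
    };

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        QFileSystemModel model;
        FlatIconProvider provider;
        model.setIconProvider(&provider);   // the model does not take ownership
        model.setRootPath("/");
        QTreeView view;
        view.setModel(&model);
        view.setRootIndex(model.index("/"));
        view.show();
        return app.exec();
    }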