Temporarily stop Qt5/QML from updating framebuffer (/dev/fb0) - linux

On an embedded system, due to very specific hardware/software limitations, we need another program to be able to display info via the framebuffer (/dev/fb0) while keeping our Qt5/QML program running in the background. We display a custom QQuickItem-derived black rectangle (with nothing but a 'return' in its update()) in QML while the second program runs, but we still see flickering on our LCD display. We surmise that Qt is still painting the scene graph (possibly of other items layered beneath the rectangle) to /dev/fb0, so the flickering is caused by both programs writing to /dev/fb0 at the same time. We cannot use a second-framebuffer approach (/dev/fb1) because the compositing increases processor load so dramatically that the system becomes unusable.

One thought is to iterate through the scene graph tree, setting each node's 'ItemHasContents' flag to false so the scene graph renderer will not write to the framebuffer, then re-enable the flags when the secondary program finishes its task. Another thought is to turn off rendering via the top-level QWindow, but nothing in the documentation says this is even possible... Is this possible via Qt, or even through a shell script?

/dev/fb0 sounds like you'd be working on a Linux-based system.
You're not saying whether you need the Qt application to really keep running, just without screen updates, or whether simply "freezing" it while your other app uses the frame buffer would suffice.
If you are fine with the latter, the easiest way to stop the Qt app from rendering is simply to send it a SIGSTOP signal; it will freeze and cease to update the frame buffer. Once you're done with the fb, send a SIGCONT signal. Sometimes the simplest approaches are the best...
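For example, a hypothetical supervisor process could pause and resume the Qt app around the second program's run; the PID lookup and program name below are assumptions, and this is only a sketch of the SIGSTOP/SIGCONT idea:

#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Freeze the Qt app, run the other framebuffer program, then resume the Qt app.
// 'qt_app_pid' must be obtained by whatever means the system provides (e.g. a pid file).
void run_with_framebuffer(pid_t qt_app_pid, const char *other_program)
{
    kill(qt_app_pid, SIGSTOP);          // Qt app stops touching /dev/fb0

    pid_t child = fork();
    if (child == 0) {
        execlp(other_program, other_program, (char *)nullptr);
        _exit(127);                     // exec failed
    }
    waitpid(child, nullptr, 0);         // wait for the second program to finish

    kill(qt_app_pid, SIGCONT);          // Qt app resumes rendering
}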

Related

Flickering frames in Immediate mode

I am using Vulkan (ash) to render a scene.
The rendering algorithm first does a raytracing pass (using the traditional pipeline, not the NV extensions) and then renders a few frames normally.
On Nvidia it renders fine; on AMD I am experiencing a flicker.
The draw structure is:
Raytrace
Render 6 meshes with 6 draw calls
If I render that in immediate mode I get flickering; if I do so in FIFO mode I don't. In immediate mode, if I put the thread to sleep between the raytracing call and the mesh calls, I get flickering no matter how long the sleep is; even with a 1-second sleep the flickering occurs. The pattern of the flicker is: 4 calls rendered normally, then 2 calls where the RT image is the only thing rendered, as if it had been issued after the 6 regular mesh calls, even though it's issued before them.
All of this suggests a synchronization bug. The issue is that I have no idea what I forgot to do.
Each draw call is drawing to a swapchain image, each draw call has this structure:
Set up uniforms, descriptor sets, pipelines...
Create a fence
Submit draw command to queue signaling fence
Wait for fence
Wait for device
Delete fence
I am currently doing that at the end of every single one of the 7 calls. What did I forget to sync? What barrier am I missing?
I am using the passless rendering extension, so I don't have any render passes. As a consequence I might need an image barrier on the swapchain, but I don't know whether I do, or what it would need to be.
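For reference, the per-call structure described above corresponds roughly to the following sketch in the C Vulkan API (the device, queue and recorded command buffer are assumed to exist; this only restates the pattern in the question, it is not a fix):

#include <vulkan/vulkan.h>
#include <cstdint>

// Sketch of the per-draw submission pattern described in the question:
// create a fence, submit signaling it, wait on the fence, wait for the device, delete the fence.
void submit_and_wait(VkDevice device, VkQueue queue, VkCommandBuffer cmd)
{
    VkFenceCreateInfo fence_info = {};
    fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;

    VkFence fence;
    vkCreateFence(device, &fence_info, nullptr, &fence);

    VkSubmitInfo submit = {};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;              // the recorded raytrace or mesh draw commands

    vkQueueSubmit(queue, 1, &submit, fence);    // fence is signaled when the GPU finishes
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkDeviceWaitIdle(device);                   // "wait for device"
    vkDestroyFence(device, fence, nullptr);
}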

Running GTK in its own background thread

I have a set of old MFC DLLs that act as a frame buffer emulation. One of them contains processing for drawing commands and then blits to an in-memory bitmap; the other DLL is the "main" DLL that controls windowing and events by running a CWnd in its own Afx thread, and displays the in-memory bitmap.
The application that links against these basically has no idea they are there; it simply calls "init" and "update" while running, expecting to see pixel data output in the Windows window instead of on actual hardware.
Now I need to port this to Linux and am looking at something like GTK, but during investigation it looks like GTK expects to be in control of the main loop and is event driven, which is expected of a GUI toolkit; many others, though, also allow the user to manually pump the main loop instead of handing off control and communicating only through messages.
Ideally, I'd want to just kick off GTK in its own thread and let it handle windowing and messaging alone, then blitting when "update" is called, while the user's main app is running as the critical main thread.
If we can't just plop GTK's main loop into a separate thread, I see that gtk_main_iteration may be used instead, but in a lot of questions close to this one users say that GTK shouldn't be used in this way; then again, the same could be said of our CWnd MFC implementation. At this point there is no changing the mechanisms of how these DLLs work: the user's app must be the "main" and the processing/windowing must be transparent.
The other option is to just use X11, but I'd really like to have other widgets easily usable (toolbars, menus), XShm extensions transparently used, resource management, etc.
Can this be done in GTK, or are there better options?
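For illustration, the manual-pumping idea mentioned above could look roughly like this (GTK 3 C API called from C++; the init/update names mirror the DLL interface described in the question and are assumptions):

#include <gtk/gtk.h>

// Called once by the user's app (the equivalent of the DLLs' "init").
void fb_emu_init(int *argc, char ***argv)
{
    gtk_init(argc, argv);
    // ... create the window and drawing area here ...
}

// Called by the user's app whenever it wants the window refreshed (the equivalent
// of "update"): pump whatever events are pending, but never block, so the caller
// keeps control of the main thread.
void fb_emu_update(void)
{
    // ... blit the in-memory bitmap into the drawing area here ...
    while (gtk_events_pending())
        gtk_main_iteration_do(FALSE);   // FALSE = do not block
}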

Multiple OpenGL contexts, multiple windows, multithreading, and vsync

I am creating a graphical user interface application using OpenGL in which there can be any number of windows - "multi-document interface" style.
If there were one window, the main loop could look like this:
handle events
draw()
swap buffers (vsync causes this to block until vertical monitor refresh)
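In GLFW terms (used here purely for illustration; draw_scene() is a placeholder), that single-window loop might be:

#include <GLFW/glfw3.h>

void draw_scene();                        // hypothetical: the application's draw()

// Sketch of the single-window main loop, with the window's context already current.
void run_single_window(GLFWwindow *window)
{
    glfwSwapInterval(1);                  // vsync on
    while (!glfwWindowShouldClose(window)) {
        glfwPollEvents();                 // handle events
        draw_scene();                     // draw()
        glfwSwapBuffers(window);          // with vsync, this is where the frame waits for refresh
    }
}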
However consider the main loop when there are 3 windows:
each window handle events
each window draw()
window 1 swap buffers (block until vsync)
(some time later) window 2 swap buffers (block until vsync)
(some time later) window 3 swap buffers (block until vsync)
Oops... now rendering one frame of the application is happening at 1/3 of the proper framerate.
Workaround: Utility Window
One workaround is to have only one of the windows with vsync turned on, and the rest of them with vsync turned off. Call swapBuffers() on the vsync window first and draw that one, then draw the rest of the windows and swapBuffers() on each one.
This workaround will probably look fine most of the time, but it's not without issues:
it is inelegant to have one window be special
a race condition could still cause screen tearing
some platforms ignore the vsync setting and force it to be on
I read that switching which OpenGL context is bound is an expensive operation and should be avoided.
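With GLFW, for example, the differing swap intervals for this workaround could be set up roughly like this (sketch only; the window lists are assumptions):

#include <GLFW/glfw3.h>
#include <vector>

// Sketch: vsync only on the designated "utility" window, off everywhere else.
void configure_swap_intervals(GLFWwindow *vsync_window,
                              const std::vector<GLFWwindow *> &other_windows)
{
    glfwMakeContextCurrent(vsync_window);
    glfwSwapInterval(1);                  // this window waits for vertical refresh

    for (GLFWwindow *w : other_windows) {
        glfwMakeContextCurrent(w);
        glfwSwapInterval(0);              // the rest swap immediately
    }
}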
Workaround: One Thread Per Window
Since there can be one OpenGL context bound per thread, perhaps the answer is to have one thread per window.
I still want the GUI to be single threaded, however, so the main loop for a 3-window situation would look like this:
(for each window)
lock global mutex
handle events
draw()
unlock global mutex
swapBuffers()
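A minimal sketch of that per-window thread loop (GLFW, std::mutex and a hypothetical handle_events_and_draw() assumed):

#include <GLFW/glfw3.h>
#include <mutex>

std::mutex gui_mutex;                         // serializes the single-threaded GUI work
void handle_events_and_draw(GLFWwindow *w);   // hypothetical: events + draw() for one window

// One such thread runs per window.
void window_thread(GLFWwindow *window)
{
    glfwMakeContextCurrent(window);       // one context bound per thread
    glfwSwapInterval(1);
    while (!glfwWindowShouldClose(window)) {
        {
            std::lock_guard<std::mutex> lock(gui_mutex);
            handle_events_and_draw(window);
        }
        glfwSwapBuffers(window);          // outside the lock, may block on vsync
    }
}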
Will this work? This other question indicates that it will not:
It turns out that the windows are 'fighting' each other: it looks like the SwapBuffers calls are synchronized and wait for each other, even though they are in separate threads. I'm measuring the frame-to-frame time of each window and with two windows, this drops to 30 fps, with three to 20 fps, etc.
To investigate this claim I created a simple test program. This program creates N windows and N threads, binds one window per thread, requests each window to have vsync on, and then reports the frame rate. So far the results are as follows:
Linux, X11, 4.4.0 NVIDIA 346.47 (2015-04-13)
frame rate is 60fps no matter how many windows are open.
OSX 10.9.5 (2015-04-13)
frame rate is not capped; swap buffers is not blocking.
Workaround: Only One Context, One Big Framebuffer
Another idea I thought of: have only one OpenGL context, and one big framebuffer, the size of all the windows put together.
Each frame, each window calls glViewport to set their respective rectangle of the framebuffer before drawing.
After all drawing is complete, swapBuffers() on the only OpenGL context.
I'm about to investigate whether this workaround will work or not. Some questions I have are:
Will it be OK to have such a big framebuffer?
Is it OK to call glViewport multiple times every frame?
Will the windowing library API that I am using even allow me to create OpenGL contexts independent of windows?
Wasted space in the framebuffer if the windows are all different sizes?
Camilla Berglund, maintainer of GLFW says:
That's not how glViewport works. It's not how buffer swapping works either. Each window will have a framebuffer. You can't make them share one. Buffer swapping is per window framebuffer and a context can only be bound to a single window at a time. That is at OS level and not a limitation of GLFW.
Workaround: Only One Context
This question indicates that this algorithm might work:
Activate OpenGL context on window 1
Draw scene in to window 1
Activate OpenGL context on window 2
Draw scene in to window 2
Activate OpenGL context on window 3
Draw scene in to window 3
For all Windows
SwapBuffers
According to the question asker,
With V-Sync enabled, SwapBuffers will sync to the slowest monitor and windows on faster monitors will get slowed down.
It looks like they only tested this on Microsoft Windows and it's not clear that this solution will work everywhere.
Also once again many sources tell me that makeContextCurrent() is too slow to have in the draw() routine.
It also looks like this is not spec-conformant with EGL. In order to allow another thread to call eglSwapBuffers(), you have to eglMakeCurrent(NULL), which means your eglSwapBuffers call is now supposed to return EGL_BAD_CONTEXT.
The Question
So, my question is: what's the best way to solve the problem of having a multi-windowed application with vsync on? This seems like a common problem but I have not yet read a satisfying solution for it.
Similar Questions
Similar to this question: Synchronizing multiple OpenGL windows to vsync but I want a platform-agnostic solution - or at least a solution for each platform.
And this question: Using SwapBuffers() with multiple OpenGL canvases and vertical sync? but really this problem has nothing to do with Python.
swap buffers (vsync causes this to block until vertical monitor refresh)
No, it doesn't block. The buffer swap call may return immediately and not block. What it does, however, is insert a synchronization point so that execution of commands altering the back buffer is delayed until the buffer swap has happened. The OpenGL command queue is of limited length, so once the command queue is full, further OpenGL calls will block the program until more commands can be pushed into the queue.
Also, the buffer swap is not an OpenGL operation. It's a graphics/windowing-system-level operation and happens independently of the OpenGL context. Just look at the buffer swap functions: the only parameter they take is a handle to the drawable (= window). In fact, even if you have multiple OpenGL contexts operating on a single drawable, you swap the buffers only once; and you can do it without any OpenGL context being current on the drawable at all.
So the usual approach is:
' first do all the drawing operations
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        do_opengl_stuff()
        glFlush()

' with all the drawing commands issued
' loop over all the windows and issue
' the buffer swaps.
foreach w in windows:
    w.swap_buffers()
Since the buffer swap does not block, you can issue all the buffer swaps for all the windows without getting delayed by V-Sync. However, the next OpenGL drawing command that addresses a back buffer that has been issued for swapping will likely stall.
A workaround for that is to use an FBO into which the actual drawing happens, combined with a loop that blits the FBOs to the back buffers before the swap-buffer loop:
' first do all the drawing operations
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, ctx.master_fbo)
        do_opengl_stuff()
        glFlush()

' blit the FBOs' renderbuffers to the main back buffer
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
        blit_renderbuffer_to_backbuffer(ctx.master_renderbuffer)
        glFlush()

' with all the drawing commands issued
' loop over all the windows and issue
' the buffer swaps.
foreach w in windows:
    w.swap_buffers()
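As a concrete illustration of the first pattern, a draw-everything-then-swap-everything loop might look like this in C++ (GLFW is used purely as an example windowing API; do_opengl_stuff() is a placeholder):

#include <GLFW/glfw3.h>
#include <vector>

void do_opengl_stuff();                   // placeholder for the per-window drawing

// Issue all drawing first, then swap every window in a second loop,
// so no drawing has to wait behind another window's buffer swap.
void render_all(const std::vector<GLFWwindow *> &windows)
{
    for (GLFWwindow *w : windows) {
        glfwMakeContextCurrent(w);
        do_opengl_stuff();
        glFlush();
    }
    for (GLFWwindow *w : windows) {
        glfwSwapBuffers(w);               // the swaps themselves are issued back to back
    }
}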
Thanks #andrewrk for all this research. Personally, I do it like this:
Create the first window and its OpenGL context with a double buffer.
Activate vsync on this window (swap interval 1).
Create the other windows and attach the first context, with double buffering.
Disable vsync on these other windows (swap interval 0).
For each frame:
For each window in reverse order (the one with vsync enabled last):
wglMakeCurrent(hdc, commonContext);
draw.
SwapBuffers.
In that manner I achieve vsync, and all windows are driven by this same vsync.
But I encountered a problem without Aero: tearing...
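A rough sketch of that per-frame loop on Windows (WGL; the window list and draw function below are assumptions, and the swap intervals are assumed to have been set up beforehand as described):

#include <windows.h>
#include <vector>

void draw();                              // hypothetical per-window drawing

// One shared context; hdcs[0] is the vsync window, so iterating in reverse
// draws and swaps it last.
void render_frame(const std::vector<HDC> &hdcs, HGLRC commonContext)
{
    for (auto it = hdcs.rbegin(); it != hdcs.rend(); ++it) {
        wglMakeCurrent(*it, commonContext);
        draw();
        SwapBuffers(*it);                 // only the last (vsync-enabled) swap waits for refresh
    }
}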

How to make linux terminal refresh framebuffer

I have written a small app that draws directly on /dev/fb0. I run it in a console-only environment without an X server. The problem I have is that once the app exits, the framebuffer stays the way it left it. Is it possible to make the terminal redraw all its contents to refresh the screen after the app exits?
What I would do (if it does not conflict with design and conceptual requirements) is to execute clear, e.g. via execve().
Perhaps even executing reset would help, but I cannot vouch for the latter; the first will surely do the job.
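For instance, a small helper along these lines could run clear after the drawing code finishes (std::system() is used here instead of a raw execve() so the calling program keeps running; that substitution is an assumption, not part of the original answer):

#include <cstdlib>

// After the framebuffer drawing is done, ask the terminal to repaint itself
// by running "clear" (or "reset" for a heavier refresh of the terminal state).
void refresh_terminal()
{
    std::system("clear");
    // std::system("reset");              // stronger option mentioned above, if needed
}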

glXSwapBuffers blocks until new X-events are fired

We have a really strange bug in our software. A call to glXSwapBuffers will block every now and then until some X events are sent (mouse hovering over the window / keyboard events). It seems that the bug is identical to Qt QGLWidget OpenGL rendering from thread blocks on swapBuffers(), which was never properly solved. We have the same kind of situation.
In our application we create multiple windows because our application needs to work with multiple screens. Each of our windows is basically a QWidget which has a class derived from QGLWidget as its only child. Each window has its own rendering thread attached, which executes OpenGL commands.
In this setting, the application just halts every now and then. It continues normally if we feed X events to it (moving the mouse over the windows / pressing keyboard buttons). Based on the debugger info, glXSwapBuffers() blocks somewhere inside the closed driver code.
We haven't confirmed this behaviour on NVIDIA cards, only with AMD cards, and it is more likely to appear when using multiple AMD cards. This suggests that the bug may come from the GPU drivers.
I would like to know whether anyone else has bumped into this and whether somebody has even managed to solve it.
