Linux/Qt: change monitor refresh frequency

I am using OpenGL to improve performance on the target.
I would like to know whether there is a way to change the monitor refresh frequency (from Qt or Linux) to match the application.

Qt does not wrap that functionality. You would need to access X directly, via XCB (or Xlib, but XCB is preferred) and the XRandR extension.
However, these days the refresh rate is typically limited by the output device, as LCD screens often only operate in the 60-75 Hz range.

Related

What are the syscalls for drawing graphics on the screen in Linux?

I was searching for a syscall that would draw a pixel at a given coordinate on the screen, or something similar. But I couldn't find any such syscalls on this site.
I came to know that the OS interacts with monitors through graphics drivers. But these drivers may differ between machines. So is there a common native API provided by Linux for handling this?
Much like there are syscalls for opening, closing, reading, and writing files: even though the underlying file systems may be different, these syscalls provide an abstract API that simplifies things for user programs. I was searching for something similar for drawing onto the screen.
Typically a user is running a display server and window system which organizes the screen into windows which applications draw to individually using the API provided by that system. The details will depend on the architecture of this system.
The traditional window system on Linux is the X window system and the more modern Wayland display server/protocol is also in common use. For example X has commands to instruct the X server to draw primitives to the screen.
If no such system is in use, you can draw to a display directly, either via a framebuffer device or using the DRM API. Neither is accessed through special syscalls; instead you use the normal file syscalls like open, read, and write, but also ioctl, on special device files in /dev, e.g. /dev/dri/card0 for DRM access to the first graphics card or /dev/fb0 for the first framebuffer device. DRM is also what applications use to render directly to the screen or to a buffer when running under a display server or window system as above.
In any case, DRM is usually not used directly to draw, e.g., pixels to the screen, since it is still specific to the graphics card. Typically a library like Mesa 3D translates the device-specific details into a common API such as OpenGL or Vulkan for applications to use.

Changing Resolution of Linux Framebuffer

I am writing a high-performance video application in Linux and C++ (though the language shouldn't matter for this question).
I currently have my application displaying images to the framebuffer. When my computer boots, the resolution of the connected display seems to be fixed. I would like to be able to change the output resolution dynamically. I have tried fbset, but it did not work. I am not using X11 because I assumed there would be a performance penalty.
Is writing directly to the framebuffer the best way to do my rendering, performance-wise?
If I use X11, I see that I can get commands to change the resolution. Should this be something I investigate?
Is there another way to change the resolution?

DirectX9: delay between present() and actual screen update

My question is about the delay between calling the present method in DirectX9 and the update appearing on the screen.
On a Windows system, I have a window opened using DirectX9 and update it in a simple way (change the color of the entire window, then call the IDirect3DSwapChain9's present method). I call the swapchain's present method with the flag D3DPRESENT_DONOTWAIT during a vertical blank interval. There is only one buffer associated with the swapchain.
I also obtain an external measurement of when the CRT screen I use actually changes color through a photodiode connected to the center of the screen. I obtain this measurement with sub-millisecond accuracy and delay.
What I found was that the change appears exactly on the third refresh after the call to present(). Thus, when I call present() at the end of the vertical blank, just before the screen refreshes, the change appears on the screen exactly 2*refresh_duration + 0.5*refresh_duration after the call to present().
My question is a general one:
- How far can I rely on this delay (changes appearing on the third refresh) being the same on different systems?
- Or does it vary with monitors (leaving aside the response times of LCD and LED panels)?
- Or with graphics cards?
- Are there other factors influencing this delay?
An additional question:
- Does anybody know a way of determining, within DirectX9, when a change actually appeared on the screen (without external measurements)?
There are a lot of variables at play here, especially since DirectX 9 itself is legacy and is effectively emulated on modern versions of Windows.
You might want to read Accurately Profiling Direct3D API Calls (Direct3D 9), although that article doesn't directly address presentation.
On Windows Vista or later, once you call Present to flip the front and back buffers, the frame is handed off to the Desktop Window Manager for composition and eventual display. There are a lot of factors at play here, including GPU vendor, driver version, OS version, Windows settings, third-party driver 'value add' features, full-screen vs. windowed mode, etc.
In short: Your Mileage May Vary (YMMV) so don't expect your timings to generalize beyond your immediate setup.
If your application requires knowing exactly when present happens instead of just "best effort" as is more common, I recommend moving to DirectX9Ex, DirectX 11, or DirectX 12 and taking advantage of the DXGI frame statistics.
In case somebody stumbles upon this with a similar question: I found out why my screen update appears exactly on the third refresh after calling present(). As it turns out, Windows by default queues exactly 3 frames before presenting them, so changes appear on the third refresh. This can only be "fixed" by the application starting with DirectX 10 (and DirectX 9Ex); for DirectX 9 and earlier, one has to reduce this queueing either through the graphics-card driver or the Windows registry.

Get screenshot of EGL DRM/KMS application

How can I get a screenshot of a graphical application programmatically? The application draws its window using the EGL API via DRM/KMS.
I use Ubuntu Server 16.04.3, and the graphical application is written with Qt 5.9.2 using the EGLFS QPA backend. It is started from the first virtual terminal (if that matters), then it switches the display to a full-HD graphical mode.
When I use utilities (e.g. fb2png) that operate on /dev/fb?, only the text-mode contents of the first virtual terminal (Ctrl+Alt+F1) are saved in the screenshot.
It is unlikely that there is an EGL API to get the contents of a buffer from another process's context (that would be insecure), but maybe there is some mechanism (and library) to get access to the final output of the GPU?
One way would be to take the screenshot from within your application, reading the contents of the back buffer with glReadPixels(). Or use QQuickWindow::grabWindow(), which internally uses glReadPixels() in the correct way. This seems not to be an option for you, though, as you need to take the screenshot while the Qt app is frozen.
The other way would be to use the DRM API to map the framebuffer and then memcpy the mapped pixels. This is implemented in Chromium OS with Python and can be translated to C easily, see https://chromium-review.googlesource.com/c/chromiumos/platform/factory/+/367611. The DRM API can also be used by another process than the Qt UI process that does the rendering.
This is a very interesting question, and I have fought this problem from several angles.
The problem is quite complex and platform-dependent. You seem to be running on EGL, which suggests an embedded system, and there you have few options unless your platform offers them.
The options you have are:
glGetTexImage
glGetTexImage can copy several kinds of buffers from OpenGL textures to CPU memory. Unfortunately it is not supported in GLES 2/3, but your embedded vendor might expose it via an extension. This is nice because you can either render to an FBO or get the pixels from the specific texture you need. It also requires minimal code intervention.
glReadPixels
glReadPixels is the most common way to download all or part of the already-rendered GPU pixels. Albeit slow, it works on both GLES and desktop. On desktop with a decent GPU it is bearable up to interactive frame rates, but beware: on embedded it can be really slow, since it stalls your render thread while fetching the data (horrible frame drops guaranteed). You save effort here, as it can be made to work with minimal code modifications.
Pixel Buffer Objects (PBOs)
Once you start doing real research, PBOs appear here and there because they can be made to work asynchronously. They are generally not supported on embedded, but can work really well on desktop, even on mediocre GPUs. They are a bit tricky to set up, though, and require specific render modifications.
Framebuffer
On embedded you sometimes already render to the framebuffer, so you can go there and fetch the pixels. This also works on desktop. You can even mmap() the buffer to a file and easily get partial contents. But beware: on many embedded systems EGL does not render to the framebuffer but to a different 'overlay', so you might be snapshotting only the background behind it. Note also that some multimedia applications run the UI on EGL and the media player on the framebuffer; if you only need to capture the video player, this might work for you. In other cases EGL targets a texture which is then copied to the framebuffer, and that will also work just fine.
As far as I know, render-to-texture streamed to the framebuffer is how they made the sweet Qt UI you see on the Ableton Push 2.
More exotic: DispmanX/OpenWF
On some embedded systems (notably the Raspberry Pi and most Broadcom VideoCores) you have DispmanX, which is really interesting:
This is fun:
The lowest level of accessing the GPU seems to be by an API called Dispmanx[...]
It continues...
Just to give you total lack of encouragement from using Dispmanx there are hardly any examples and no serious documentation.
Basically, DispmanX is very close to bare metal, so it sits even deeper than the framebuffer or EGL. Really interesting stuff, because you can use vc_dispmanx_snapshot() and really get a snapshot of everything, really fast. And by fast I mean I got 30 FPS RGBA32 screen capture with no noticeable stutter on screen and only about 4-6% extra CPU overhead on a Raspberry Pi. Night and day, given that glReadPixels was producing very noticeable frame drops even for a 1x1 pixel capture.
That's pretty much what I've found.

EGLFS and rotation of QT5 application under Linux

On behalf of my colleague I'd like to ask whether it is possible to rotate the whole Qt 5 (Qt 5.6.1-1) application window. We are using EGLFS as a backend on a Sitara TI AM335x platform running the Linux framebuffer.
The current situation is this: we have an application which, from the end user's point of view, should normally be rotated 90 degrees. As a temporary solution my colleague (the developer of the application) rotates every element in the window to achieve the proper visual effect. Unfortunately this rotation costs a lot of CPU time.
My question is: is it possible to turn the whole window clockwise? That is, can it be done at the EGLFS or Qt 5 level without rotating every single element in the window?
I tried to swap the x-y dimensions (800x480) of the screen, but without success. I have also looked into the Linux kernel driver sources and see no possibility to rotate the whole screen. I was thinking about creating another buffer in memory from which I could copy the data, with rotation, into the target memory, but I'm not sure that is a good idea.
Any ideas?
Set the QT_QPA_EGLFS_ROTATION environment variable to 90 or -90. See the documentation.
Rotation on the EGLFS platform was affected by bug QTBUG-39959 until version 5.7.x, so the rotation variable was ignored.
The bug is fixed as of version 5.8.