Why doesn't Qt::AA_DisableHighDpiScaling disable high DPI scaling, and why does Qt::AA_EnableHighDpiScaling disable it? - linux

I'm working on a Qt application (deploying to Qt 5.11, but I'm testing on Qt 5.14) that needs to run on a variety of projectors. At least one of these projectors reports a physical size of over one metre, which results in only 32.5 dpi reported to the Linux OS (compared to the default of 96 dpi). The effect of this setting on our Qt app is that all text becomes unreadably small.
It can be reproduced on any system by running
xrandr --dpi 32.5
before starting the application.
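To see what the application is actually picking up from the window system, a small, hypothetical check like the following can print the per-screen DPI values Qt reports (output formatting is illustrative):
#include <QGuiApplication>
#include <QScreen>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    // Print the DPI values Qt derived from the window system for each screen.
    for (const QScreen *screen : QGuiApplication::screens()) {
        qDebug() << screen->name()
                 << "logical dpi:" << screen->logicalDotsPerInch()
                 << "physical dpi:" << screen->physicalDotsPerInch()
                 << "device pixel ratio:" << screen->devicePixelRatio();
    }
    return 0;
}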
We could configure the system dpi differently, but there are reasons not to: this dpi is actually in the right ballpark (if anything, it's too high), we may want to use it in other applications, and customers may use their own projectors, which might break our manual configuration.
The safe approach for this particular use case is to pretend we're still living in the stone age: ignore the system dpi setting and just use a 1:1 mapping between device-independent pixels and device pixels. The High DPI displays documentation says:
The Qt::AA_DisableHighDpiScaling application attribute, introduced in Qt 5.6, turns off all scaling. This is intended for applications that require actual window system coordinates, regardless of environment variables. This attribute takes priority over Qt::AA_EnableHighDpiScaling.
So I added this as the first line in main (before the QApplication is created):
QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
However, it seems to have no effect; text is still unreadably small. I also tried this in various combinations with:
QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling, false);
QCoreApplication::setAttribute(Qt::AA_Use96Dpi);
Nothing has any visible effect.
What does work is setting QT_AUTO_SCREEN_SCALE_FACTOR=1 in the environment. If I understand correctly, this would enable scaling rather than disable it, but setting it to 0 does not work!
Similarly, if I enable Qt::AA_EnableHighDpiScaling in code like this, everything becomes readable:
QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
What also works to some extent is hardcoding the font size (found here):
QFont font = qApp->font();
font.setPixelSize(11);
qApp->setFont(font);
However, margins in the layout still seem to be scaled, so this results in a very cramped (albeit usable) layout.
What also works is setting QT_FONT_DPI=96 in the environment (this variable seems to be undocumented, but it works in Qt 5.11 and 5.14 at least).
Either there are bugs in Qt, or more likely, I'm misunderstanding something. How is it that enabling the scaling seems to disable it, and vice versa?
Edit: I just tested on Qt 5.11 too, although on another system. There, neither QT_AUTO_SCREEN_SCALE_FACTOR=1 nor QT_AUTO_SCREEN_SCALE_FACTOR=0 works, so it seems we are dealing with Qt bugs to some extent after all. Maybe related:
High DPI scaling not working correctly - CLOSED Out of scope
HighDPi: Update scale factor setting for devicePixelRatio scaling (AA_EnableHighDpiScaling) - CLOSED Done in 5.14
Support of DPI Scaling Level for Displays in Windows 10 - REPORTED Unresolved
Qt uses wrong source for logical DPI on X - REPORTED Unresolved - This may be the root cause of the issue I'm seeing.
Uselessness of setAttribute(Qt::AA_EnableHighDpiScaling) - REPORTED Unresolved
So how can I make it work reliably in all cases?

Here's what I did in the end to forcibly disable any scaling on Qt 5.11:
QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
if (qgetenv("QT_FONT_DPI").isEmpty()) {
    qputenv("QT_FONT_DPI", "84");
}
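For context, a minimal sketch of how this sits in main(); the ordering matters, since both the attribute and the environment variable must be in place before the QApplication object is constructed (the DPI value here just mirrors the snippet above; adjust as needed):
#include <QApplication>

int main(int argc, char *argv[])
{
    // Must be done before the QApplication object exists.
    QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);

    // Only override the font DPI if the user hasn't already set one.
    if (qgetenv("QT_FONT_DPI").isEmpty()) {
        qputenv("QT_FONT_DPI", "84");
    }

    QApplication app(argc, argv);
    // ... create and show widgets as usual ...
    return app.exec();
}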

Related

Why does VK_PRESENT_MODE_FIFO_KHR cause catastrophic performance issues in Ubuntu MATE?

I am implementing a simple Vulkan renderer according to a popular Vulkan tutorial (https://vulkan-tutorial.com/Introduction), and I've run into an interesting issue with the presentation mode and the desktop environment performance.
I wrote the triangle demo on Windows, and it performed well; however, when I ported it to my Ubuntu installation (running MATE 1.20.1), I discovered a curious problem: while the demo is running, certain swapchain presentation modes seem to wreak utter havoc with the performance of the entire desktop environment.
When the swapchain is set up with presentMode set to VK_PRESENT_MODE_FIFO_KHR and the application is running, dragging literally any window on the desktop causes the entire desktop environment to grind to a halt, appearing to run at roughly 4-5 fps. However, when I replace the presentMode with VK_PRESENT_MODE_IMMEDIATE_KHR, the desktop environment is immune to this issue and does not suffer the performance problems when dragging windows.
When I researched this before asking here, I saw that several people discovered that they experienced this behavior when their application was delivering frames as fast as possible (not vsync'd), and that properly synchronizing with vsync resolved this stuttering. However, in my case, it's the opposite; when I use VK_PRESENT_MODE_IMMEDIATE_KHR, i.e., not waiting for vsync, the dragging performance is smooth, and when I synchronize with vsync with VK_PRESENT_MODE_FIFO_KHR, it stutters.
VK_PRESENT_MODE_FIFO_RELAXED_KHR produces the same catastrophic results as the standard FIFO mode.
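For reference, the present mode is chosen at swapchain creation time; a rough sketch of the relevant fragment, assuming physicalDevice, surface, device, and the other create-info fields are set up as in the tutorial (and that <vulkan/vulkan.h> and <vector> are included):
// Query the present modes the surface supports; FIFO is guaranteed to be available.
uint32_t modeCount = 0;
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &modeCount, nullptr);
std::vector<VkPresentModeKHR> modes(modeCount);
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &modeCount, modes.data());

VkSwapchainCreateInfoKHR createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
createInfo.surface = surface;
// ... imageFormat, imageExtent, imageUsage, etc. as in the tutorial ...
createInfo.presentMode = VK_PRESENT_MODE_FIFO_KHR;          // vsync'd; triggers the slowdown here
// createInfo.presentMode = VK_PRESENT_MODE_IMMEDIATE_KHR;  // not vsync'd; desktop stays smooth

VkSwapchainKHR swapchain = VK_NULL_HANDLE;
vkCreateSwapchainKHR(device, &createInfo, nullptr, &swapchain);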
I tried using the Compton GPU compositor instead of Compiz; the effect was still there (regardless of what window was being dragged, the desktop still became extremely slow) but was slightly less pronounced than when using Compiz.
I have fully implemented the VkSemaphore-based frame/image/swapchain synchronization scheme as defined in the tutorial, and I verified that while using VK_PRESENT_MODE_FIFO_KHR the application is only rendering frames at the target 60 frames per second. (When using IMMEDIATE, it runs at 7,700 fps.)
Most interestingly, when I measured the frametimes (using glfwGetTime()), during the periods when the window is being dragged, the frametime is extremely short. The screenshot shows this; you can see the extremely short/abnormal frame time when a window is being dragged, and then the "typical" frametime (locked to 60fps) while the window is still.
In addition, only while using VK_PRESENT_MODE_FIFO_KHR, while this extreme performance degradation is being observed, Xorg pegs the CPU to 100% on one core, while the running Vulkan application uses a significant amount of CPU time as well (73%) as shown in the screenshot below. This spike is only observed while dragging windows in the desktop environment, and is not observed at all if VK_PRESENT_MODE_IMMEDIATE_KHR is used.
I am curious if anyone else has experienced this and if there is a known fix for this window behavior.
System info: Ubuntu 18.04, Mate 1.20.1 w/ Compiz, Nvidia proprietary drivers.
Edit: This Reddit thread seems to describe a similar issue: VK_PRESENT_MODE_FIFO_KHR causing extreme desktop performance problems under the Nvidia proprietary drivers.
Edit 2: This bug can be easily reproduced using vkcube from vulkan-tools. Compare the desktop performance of vkcube using --present-mode 0 vs --present-mode 2.

DirectX9: delay between present() and actual screen update

My question is about the delay between calling the present method in DirectX9 and the update appearing on the screen.
On a Windows system, I have a window opened using DirectX9 and update it in a simple way (change the color of the entire window, then call the IDirect3DSwapChain9's present method). I call the swapchain's present method with the flag D3DPRESENT_DONOTWAIT during a vertical blank interval. There is only one buffer associated with the swapchain.
I also obtain an external measurement of when the CRT screen I use actually changes color through a photodiode connected to the center of the screen. I obtain this measurement with sub-millisecond accuracy and delay.
What I found was that the changes appear exactly on the third refresh after the call to present(). Thus, when I call present() at the end of the vertical blank, just before the screen refresh, the change will appear on the screen exactly 2*screen_duration + 0.5*refresh_duration after the call to present().
My question is a general one:
to what extent can I rely on this delay (changes appearing on the third refresh) being the same on different systems ...
... or does it vary with monitors (leaving aside the response times of LCD and LED monitors)
... or with graphics-cards
are there other factors influencing this delay
An additional question:
does anybody know a way of determining, within DirectX9, when a change appeared on the screen (without external measurements)
There are a lot of variables at play here, especially since DirectX 9 itself is legacy and is effectively emulated on modern versions of Windows.
You might want to read Accurately Profiling Direct3D API Calls (Direct3D 9), although that article doesn't directly address presentation.
On Windows Vista or later, once you call Present to flip the front and back buffers, the frame is handed off to the Desktop Window Manager (DWM) for composition and eventual display. There are a lot of factors at play here, including GPU vendor, driver version, OS version, Windows settings, 3rd party driver 'value add' features, full-screen vs. windowed mode, etc.
In short: Your Mileage May Vary (YMMV) so don't expect your timings to generalize beyond your immediate setup.
If your application requires knowing exactly when present happens instead of just "best effort" as is more common, I recommend moving to DirectX9Ex, DirectX 11, or DirectX 12 and taking advantage of the DXGI frame statistics.
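For example, with a DXGI swap chain (as used by Direct3D 10/11/12), the statistics can be queried roughly like this; error handling is omitted and swapChain is assumed to be an already-created IDXGISwapChain:
#include <dxgi.h>

// After presenting, ask DXGI which refresh the frame was actually synchronized to.
DXGI_FRAME_STATISTICS stats = {};
if (SUCCEEDED(swapChain->GetFrameStatistics(&stats)))
{
    // stats.PresentCount      - how many presents have completed so far
    // stats.SyncRefreshCount  - the refresh the last present was synchronized to
    // stats.SyncQPCTime       - QueryPerformanceCounter timestamp of that refresh,
    //                           which lets you correlate a Present() call with the
    //                           refresh on which it reached the screen
}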
In case somebody stumbles upon this with a similar question: I found out why my screen update appears exactly on the third refresh after calling present(). As it turns out, Windows by default allows up to 3 frames to be queued before presentation, and so changes appear on the third refresh. This can only be fixed by the application starting with DirectX 10 (and DirectX 9Ex); for DirectX 9 and earlier, one has to use either the graphics card driver settings or the Windows registry to reduce this queueing.
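On Direct3D 9Ex specifically, the queue depth can also be reduced from the application side; a hedged sketch, assuming the device was created via Direct3DCreate9Ex/CreateDeviceEx:
#include <d3d9.h>

// The default maximum frame latency is 3, which matches the observed
// "third refresh" behaviour; lowering it shortens the present queue at the
// cost of some throughput.
void reducePresentQueue(IDirect3DDevice9Ex *device)
{
    device->SetMaximumFrameLatency(1);
}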

EGLFS and rotation of QT5 application under Linux

On behalf of my colleague I'd like to ask whether it is possible to rotate the whole Qt 5 (Qt 5.6.1-1) application window. We are using EGLFS as a backend on a Sitara TI AM335X platform running a Linux framebuffer.
The current situation is this: we have an application which is normally rotated 90 degrees from the end user's point of view. As a temporary solution my colleague (the developer of this application) is rotating every element in this window to achieve the proper visual effect. Unfortunately this rotation consumes a lot of CPU time.
My question is - is it possible to turn the whole window clockwise? That is, can it be done at the EGLFS or Qt 5 level without rotating every single element in the window?
I tried to exchange the x-y dimensions (800x480) of the screen but without success. I have also taken a look at the Linux kernel driver sources and see no possibility to rotate the whole screen. I was thinking about creating another buffer in memory from which I could copy data with rotation to the target memory, but I'm not sure whether that is a good idea.
Any ideas?
Set the QT_QPA_EGLFS_ROTATION environment variable to 90 or -90. See the documentation.
Rotation on the EGLFS platform was affected by bug QTBUG-39959 until version 5.7.x, so the rotation variable was ignored.
The bug is fixed as of version 5.8.
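A sketch of setting it programmatically instead of in the launch environment (this must happen before the QGuiApplication object is constructed so the eglfs plugin sees it; requires Qt 5.8+ as noted above):
#include <QGuiApplication>

int main(int argc, char *argv[])
{
    // Equivalent to launching with QT_QPA_EGLFS_ROTATION=90 in the environment.
    qputenv("QT_QPA_EGLFS_ROTATION", "90");

    QGuiApplication app(argc, argv);
    // ... rest of the application ...
    return app.exec();
}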

ncurses good practices: clear screen with windows

An ncurses app checks whether the terminal has been resized. If the size is less than 80x25, it blanks the screen and shows an error message.
If the app has N windows, should all of them be removed with delwin(), or would calling clear() be enough? And can the existing windows be reused after clear() - refreshed and redisplayed once the terminal size becomes satisfactory again - or should they be recreated?
Clearing the windows sounds like the application's behavior rather than ncurses as such. The ncurses library (see resizeterm) will clear areas if the windows increase in size.
The best policy when resizing really depends on what you have inside the windows. ncurses makes reasonably safe changes, but since it has no information about why you placed some things close together on the screen and others apart, all it can do is attempt to resize windows so that their contents are preserved. The application can still clear them and start over again, as well as move windows around on the screen.
It's your decision whether it is simpler to recreate the windows or reuse them.
As long as all of the rebuilding is done before the next repainting of the screen (e.g., with wrefresh), ncurses will make the updates with as little activity as it can.
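As a concrete illustration of the reuse approach (one valid policy among several), a rough sketch: keep the windows, blank the screen while the terminal is too small, and repaint the existing windows once it is large enough again. draw_contents is a hypothetical repaint routine supplied by the application:
#include <ncurses.h>

#define MIN_COLS 80
#define MIN_ROWS 25

// Hypothetical application routine that repaints a window's contents.
void draw_contents(WINDOW *win);

void handle_resize(WINDOW *wins[], int nwins)
{
    int rows, cols;
    getmaxyx(stdscr, rows, cols);

    if (rows < MIN_ROWS || cols < MIN_COLS) {
        // Too small: blank the screen and show only the error message.
        clear();
        mvprintw(0, 0, "Terminal too small: need at least %dx%d", MIN_COLS, MIN_ROWS);
        refresh();
        return;
    }

    // Large enough again: the existing windows can simply be reused.
    clear();
    refresh();
    for (int i = 0; i < nwins; ++i) {
        draw_contents(wins[i]);   // repaint the window's data
        touchwin(wins[i]);        // mark every line as changed so it is fully redrawn
        wrefresh(wins[i]);
    }
}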

Some tearing on QML animations

I am noticing some tearing in some QML 2 animations with Qt 5.4.2 on my Tegra 3 based embedded Linux board. I doubt this is an outright vsync failure, because most of the animations are smooth, but some animations that involve a lot of parallel motion and clipping tear consistently. These animations come out torn rather than simply stuttering, so I don't think it is purely a performance issue either, though it might be caused by the system not being able to put out the FPS necessary to sync properly. The exact same application has no such trouble on my Haswell i7 PC.
I have enabled QT_QPA_EGLFS_FORCEVSYNC to no effect and have not yet managed to find anything else that I can try. I should mention though that I am running EGLFS with an X11 backend (http://code.qt.io/cgit/qt/qtbase.git/tree/src/plugins/platforms/eglfs/qeglfshooks_x11.cpp?h=5.4) as a result of the Nvidia drivers dictating the use of X11. I would assume that this means that I can't really use the FB related settings normally available with EGLFS. Is there anything else that I can try to fix this?
PS. By setting QT_QPA_EGLFS_SWAPINTERVAL to 0 I can get the tearing to become a whole lot worse. This again suggests that I most likely do not have a whole system vsync issue.
PPS. I am getting a "QSGContext::initialize: stencil buffer support missing, expect rendering errors" warning at the start of my application.
On a Freescale/NXP imx6 with Vivante GC2000 I see a similar problem even when not using x11.
Setting "export QT_QPA_EGLFS_SWAPINTERVAL=2" seems to reduce the tearing on 3.14.38 kernel.
On 3.14.52 kernel that did not work but "export FB_MULTI_BUFFER=3" does help on both Qt 5.5.1 and 5.6 with imx6.
