I am having some trouble with multi-monitor configurations and Xlib fullscreen windows. Is it possible to detect which screen of the virtual desktop contains a window, and what size it gets, when it goes fullscreen (i.e. when sending _NET_WM_STATE_FULLSCREEN)?
I have two 1024x768 monitors combined into one big 2048x768 desktop. Xinerama correctly reports two 1024x768 screens, but the WM can put a fullscreen window on either one screen or both screens (this can be enabled/disabled in the WM settings). Any ideas?
OK, I found the solution. It is quite simple: catch the ConfigureNotify event while processing the display's pending events.
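For illustration, here is a minimal Xlib sketch of that approach (a hypothetical helper; it assumes dpy is an already-open display and win an already-mapped window):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <cstring>
    #include <cstdio>

    // Hypothetical helper: request fullscreen, then read back the resulting geometry.
    void fullscreen_and_report(Display *dpy, Window win) {
        // StructureNotify events are needed to receive ConfigureNotify for win.
        XSelectInput(dpy, win, StructureNotifyMask);

        // Ask the WM to add _NET_WM_STATE_FULLSCREEN (_NET_WM_STATE_ADD = 1).
        Atom wmState      = XInternAtom(dpy, "_NET_WM_STATE", False);
        Atom wmFullscreen = XInternAtom(dpy, "_NET_WM_STATE_FULLSCREEN", False);

        XEvent ev;
        std::memset(&ev, 0, sizeof(ev));
        ev.xclient.type         = ClientMessage;
        ev.xclient.window       = win;
        ev.xclient.message_type = wmState;
        ev.xclient.format       = 32;
        ev.xclient.data.l[0]    = 1;            // _NET_WM_STATE_ADD
        ev.xclient.data.l[1]    = wmFullscreen;
        XSendEvent(dpy, DefaultRootWindow(dpy), False,
                   SubstructureRedirectMask | SubstructureNotifyMask, &ev);
        XFlush(dpy);

        // The WM answers with a ConfigureNotify carrying the final geometry,
        // i.e. which monitor the window ended up on and its fullscreen size.
        // In a real application this would live in the normal event loop;
        // here we only drain whatever is already pending.
        XSync(dpy, False);
        while (XPending(dpy)) {
            XEvent e;
            XNextEvent(dpy, &e);
            if (e.type == ConfigureNotify && e.xconfigure.window == win) {
                std::printf("fullscreen geometry: %dx%d at %d,%d\n",
                            e.xconfigure.width, e.xconfigure.height,
                            e.xconfigure.x, e.xconfigure.y);
            }
        }
    }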
On my ARM embedded device with a touchscreen, I have a 3rd-party program (program A) that creates a window which handles keyboard presses. Because of that, this window always has to have focus. It is closed source, and I have no way to modify it.
I need to create a window in Linux that never grabs focus. It just shows an image, sometimes full screen. However, I have the option of not making it full screen (1 pixel less, so the window below stays visible).
Right now I am using only the X server, but I can install (almost) any window manager.
Is there a way to create a window in X that never gets focus? If I understand X correctly, the window below the mouse will get focus.
Is there a window manager which supports such a feature?
Is this possible to do with xcb or Wayland?
On Wayland, it's up to the compositor to tell the client whether it has focus or not, and which surface(s) to send key events to. So whether this is possible depends on the compositor or compositor toolkit you're using.
KWin has an option that sounds like it does what you want. Right click the window title bar and choose more actions -> special window settings -> accept focus
Of the compositor toolkits, I only know the Qt Wayland Compositor API, and with that it should be possible (assuming your application can run as a Wayland client). The easiest thing would be to just show the image in the compositor using the QML APIs, or you could set enabled: false on the WaylandQuickItem or ShellSurfaceItem that you don't want to grab input focus.
When I open the Chrome debug view and set the device to "iPad Mini" to simulate its screen size (and touch events), interacting with a text input causes the Windows on-screen keyboard to open.
This computer is not a tablet and has never had a touch screen. In the Windows Ease of Access -> Keyboard settings, "Turn on the On-Screen Keyboard" is off.
I can only assume that Chrome "simulating" an iPad Mini is causing Windows to think there's a touchscreen. I've been using this feature for a few months now, and the keyboard opening only started happening recently. I may have simply flipped a switch in the settings (of Chrome or Windows) by accident. If that's the case, I'd like to know how to flip it back!
This is frustrating because I have to close the keyboard each time as it covers up a large portion of the web-app.
The same page without the "iPad Mini" simulation does not open the keyboard.
(This keyboard also opens when choosing any device that has a touch screen, not just iPad Mini.)
Chrome doesn't emulate the keyboards of the device profiles you pick. An image of a keyboard will show for certain ones, like the iPhone 5X, but it is non-functional and is just present to allow you to see how the various elements on the page respond to the keyboard. You can see my answer here for more details on viewing that. However, this is not the same keyboard you are seeing.
It looks as though something in Windows is triggering the on-screen keyboard. I'm not sure why it would still appear if you have it disabled, but you could try a couple of things, based on what I've found online:
Make sure the 'Touch Keyboard and Handwriting Panel Service' is set to Disabled in Services (services.msc)
Set HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\ShowTabletKeyboard from 1 to 0 (regedit.exe); a programmatic sketch follows after this list
Check that there is no other 3rd-party software running which may affect your keyboard behaviour.
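If you would rather script the registry change from the second point than use regedit.exe by hand, here is a rough C++ sketch; the key path and value name are taken verbatim from the list above and not otherwise verified, and it needs to run elevated:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY key;
        LONG rc = RegOpenKeyExW(
            HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Authentication\\LogonUI",
            0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            std::fprintf(stderr, "RegOpenKeyExW failed: %ld\n", rc);
            return 1;
        }

        DWORD value = 0;  // 0 = do not show the tablet keyboard
        rc = RegSetValueExW(key, L"ShowTabletKeyboard", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&value), sizeof(value));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS ? 0 : 1;
    }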
The Wayland TODO text file specifically says that Wayland doesn't have active grabs for the pointer yet. But if I run GNOME on Wayland, click open a menu and then click outside it, the outside click is swallowed as if the pointer were grabbed by the menu window. How does GNOME manage that?
What you are talking about can be done very easily by creating a transparent overlay over the entire screen. In that case, the click events on the transparent region will not propagate to the underlying elements. You can see this in Telegram's image viewer, where it creates a full-screen gray overlay under the image.
But on the compositor side this effect can be achieved in a different way: by disabling all input events outside the popup rectangle.
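As a rough illustration of the client-side overlay idea, here is a minimal sketch using Qt purely as an example toolkit (actual translucency needs a compositing WM):

    #include <QApplication>
    #include <QGuiApplication>
    #include <QMouseEvent>
    #include <QPaintEvent>
    #include <QPainter>
    #include <QScreen>
    #include <QWidget>

    // A full-screen, semi-transparent widget: clicks on it are consumed here
    // instead of reaching the windows underneath, which gives the "grab-like"
    // behaviour described above.
    class Overlay : public QWidget {
    protected:
        void paintEvent(QPaintEvent *) override {
            QPainter p(this);
            p.fillRect(rect(), QColor(0, 0, 0, 128));  // translucent grey
        }
        void mousePressEvent(QMouseEvent *) override {
            close();  // a click outside the "popup" simply dismisses the overlay
        }
    };

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        Overlay overlay;
        overlay.setWindowFlags(Qt::FramelessWindowHint | Qt::WindowStaysOnTopHint);
        overlay.setAttribute(Qt::WA_TranslucentBackground);
        overlay.setGeometry(QGuiApplication::primaryScreen()->geometry());
        overlay.show();
        return app.exec();
    }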
I have a Linux system with 2 monitor outputs (1920x1080 each). I arranged them to form a desktop of size 1920x2160.
Now I want to run a Qt application which starts in full-screen mode covering the whole 1920x2160 desktop.
I tried:
QWidget::showFullScreen() -> The QWidget is maximized across 1 monitor
QWidget::setGeometry(0,0,1920,2160) -> The QWidget is also maximized across 1 monitor
Even if I do:
QWidget::move(0,0) & QWidget::resize(1920,2160) -> The QWidget does not exceed the size of one monitor.
But if I move and resize the QWidget manually with the mouse, I can resize it to 1920x2160.
I was not able to do that programmatically.
Maybe someone has a hint for me on what I am doing wrong.
Thanks in advance.
The cause of the problem is the window manager. If I start the X server without any window manager, it is possible to call
QWidget::setGeometry(...)
and the window sized itself across all connected displays.
So I had mistakenly assumed that Qt was the problem.
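For reference, a minimal Qt sketch that spans the whole virtual desktop; the Qt::X11BypassWindowManagerHint flag is an assumption on my part for keeping a running window manager from clamping the geometry (it is not needed when no WM is running):

    #include <QApplication>
    #include <QGuiApplication>
    #include <QScreen>
    #include <QWidget>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);

        QWidget w;
        // Bypassing the window manager prevents it from clamping the window
        // to a single monitor (the alternative to running without a WM at all).
        w.setWindowFlags(Qt::FramelessWindowHint | Qt::X11BypassWindowManagerHint);

        // virtualGeometry() is the bounding rectangle of all screens,
        // e.g. 1920x2160 for the two stacked 1920x1080 outputs described above.
        w.setGeometry(QGuiApplication::primaryScreen()->virtualGeometry());
        w.show();

        return app.exec();
    }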
I want to write applications (or use existing ones, which would be even more convenient) that behave like a hardware screen's OSD (on-screen display), only without input.
That is: a graphical output (e.g. from a GUI toolkit like Qt or Gtk) is placed on a layer that sits above even fullscreen windows like Firefox in F11 mode or a video player in fullscreen mode. That includes being "above" the mouse cursor as well, so technically and graphically the mouse cursor would move below this widget.
I don't know about real fullscreen applications with SDL or OpenGL, but that is not a requirement. If you know about those as well, please include it in your answer.
Real-world applications are read-only overlays like a little webcam window, a TV-station-style logo or premade annotations. So all in all, this is meant for live presentations, streaming and recording of screencasts and tutorials with minimal post-processing.
My own hacked, unsuccessful experiments showed at least that removing this window from WM control (I did this by choosing a GTK popup dialog instead of a real main window) lets you position it in absolute coordinates, and it will ignore things like virtual desktops and workspaces, which is good: you can switch between those and the overlay/HUD will stay in place.
Of course this cannot be done in software with the same Z-order (window stacking) guarantee a hardware OSD has. So technically I am talking about being above all other windows but below the screensaver or lock-screen layer.
+1 internet for linking to docs and giving the right keywords.
+2 internet for a working code example; language, GUI toolkit etc. don't matter.
You probably need the Composite Overlay Window from the Composite extension; see section 3.2 "Composite Overlay Window" in the extension docs. (Note that the cursor is above this window.)
Version 0.3 of the protocol adds the Composite Overlay Window, which
provides compositing managers with a surface on which to draw without
interference. This window is always above normal windows and is always
below the screen saver window. It is an InputOutput window whose width
and height are the screen dimensions. Its visual is the root visual
and its border width is zero. Attempts to redirect it using the
composite extension are ignored. This window does not appear in the
reply of the QueryTree request. It is also an override redirect
window. These last two features make it invisible to window managers
and other X11 clients. The only way to access the XID of this window
is via the CompositeGetOverlayWindow request. Initially, the Composite
Overlay Window is unmapped.
Example using node-x11:
var x11 = require('x11');
x11.createClient(function(err, display) {
    var X = display.client;
    var root = display.screen[0].root;
    X.require('composite', function(err, Composite) {
        Composite.GetOverlayWindow(root, function(err, overlay) {
            // already automatically mapped here:
            //
            // CompositeGetOverlayWindow returns the XID of the Composite Overlay
            // Window. If the window has not yet been mapped, it is mapped by this
            // request. When all clients who have called this request have terminated
            // their X11 connections the window is unmapped.
        });
    });
});
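For completeness, if you work from C or C++ (e.g. from a Qt or Gtk program) rather than node, the same request is exposed by libXcomposite; a rough sketch (link with -lX11 -lXcomposite):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>
    #include <cstdio>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;

        int eventBase, errorBase;
        if (!XCompositeQueryExtension(dpy, &eventBase, &errorBase)) {
            std::fprintf(stderr, "Composite extension not available\n");
            return 1;
        }

        // Maps the overlay if it is not mapped yet; it stays above normal windows,
        // below the screen saver, and is invisible to window managers.
        Window root = DefaultRootWindow(dpy);
        Window overlay = XCompositeGetOverlayWindow(dpy, root);
        std::printf("composite overlay window: 0x%lx\n", overlay);

        // Draw onto `overlay` here (e.g. reparent your toolkit window into it).
        // XCompositeReleaseOverlayWindow(dpy, root) unmaps it again when done;
        // it is also unmapped automatically when this connection closes.

        XFlush(dpy);
        return 0;
    }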