DXGI: trying to get correct display mode from output (monitor) - Direct3D

I'm currently stuck on a pesky little issue. I developed an application that zeroes out a DXGI_MODE_DESC structure and calls FindClosestMatchingMode() to, as the documentation advertises, "gravitate towards the desktop resolution".
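For reference, the approach in question looks roughly like this (a minimal sketch; it assumes you already have an IDXGIOutput* output and a D3D device, which is what permits leaving Format as DXGI_FORMAT_UNKNOWN in the zeroed descriptor):

    // Minimal sketch of the zeroed-descriptor approach described above.
    DXGI_MODE_DESC modeToMatch = {};   // all zeroes: no preference for any member
    DXGI_MODE_DESC closestMatch = {};

    // Passing the device lets Format stay DXGI_FORMAT_UNKNOWN; DXGI is then
    // supposed to gravitate towards the desktop mode.
    HRESULT hr = output->FindClosestMatchingMode(&modeToMatch, &closestMatch, device);
    // On the laptop described here, closestMatch comes back as 800x480
    // for the external output instead of its native 1080p.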
This works fine as long as the laptop runs on its own display alone -- as soon as I plug in another monitor it goes berserk. If I extend my desktop it will still correctly get the laptop monitor's resolution, yet the attached one (running 1080p) will yield a preference for 800*480 :) (sure, poor man's 16:10, but...)
Doing the same thing with the monitors cloned/combined (which results in a single output device), even when their resolutions are equal, gives the same 800*480 crap.
What gives? Has anyone found a way to properly get a display's current mode through DXGI, or a pointer toward a wholly different but functional approach to this problem?
Life was easier back in the D3D9 days =)
-- Update
As it turns out, any FindClosestMatchingMode() call made on the IDXGIOutput instance belonging to the external monitor behaves differently (and in most cases just plain wrong) compared to the internal display, even though their native resolutions are identical. To top it all off, other systems don't have this issue, yet I can't get around supporting this particular laptop, including its drivers.
Time for a good old setup dialog.

Not the best solution, but as I was constrained to these exact machines I settled for getting the monitor's current resolution through GetSystemMetrics() (SM_CXSCREEN/SM_CYSCREEN), which admittedly only works for the primary monitor (there are other ways for the rest), and feeding this resolution into the ModeToMatch structure passed to FindClosestMatchingMode().
It then settles on the correct (desktop) resolution.
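For illustration, the workaround amounts to something like this (a sketch only; it assumes the same IDXGIOutput*/device pair as before, and DXGI_FORMAT_R8G8B8A8_UNORM is just a stand-in for whatever back-buffer format you use):

    DXGI_MODE_DESC modeToMatch = {};
    modeToMatch.Width  = GetSystemMetrics(SM_CXSCREEN);  // primary monitor only;
    modeToMatch.Height = GetSystemMetrics(SM_CYSCREEN);  // GetMonitorInfo() works for others
    modeToMatch.Format = DXGI_FORMAT_R8G8B8A8_UNORM;     // stand-in back-buffer format

    DXGI_MODE_DESC closestMatch = {};
    HRESULT hr = output->FindClosestMatchingMode(&modeToMatch, &closestMatch, device);
    // With a seeded width/height, the call settles on the desktop resolution.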
Better answers are very welcome of course ;)

Related

Mirror windows on X (Linux)

I am required to write a very efficient application that will mirror the contents of an arbitrary external application multiple times (a lot of times) onto an area of my window, for Linux. On Windows, the way I used to do it was with the help of DwmRegisterThumbnail, which tells the compositor (Desktop Window Manager) that I want it to draw the thumbnail of that foreign window, which it generates anyway, onto a rectangle in my own window when it composes the desktop image to be displayed on the monitor. This is, I think, one of the lowest-overhead ways to achieve my goal on Windows. The goal is to have very minimal impact on the CPU, as the app will run on a pretty constrained machine. I never tested it against GDI or DirectX methods of copying the screen data, but I do not believe they are faster. Or maybe I am wrong; do correct me if so, please. Is there any other method faster on Windows? The limitations of this method include not being able to touch the actual image data, so no drawing on top of it, for example, which is fine for my goal.
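For context, the Windows approach described above boils down to something like this (a rough sketch; hwndDest and hwndSrc are assumed to be valid handles to your window and the foreign window):

    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    HTHUMBNAIL thumb = nullptr;
    if (SUCCEEDED(DwmRegisterThumbnail(hwndDest, hwndSrc, &thumb)))
    {
        DWM_THUMBNAIL_PROPERTIES props = {};
        props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBILITY;
        props.rcDestination = { 0, 0, 320, 240 };  // where the compositor draws it
        props.fVisible = TRUE;
        DwmUpdateThumbnailProperties(thumb, &props);
    }
    // ... later, when done: DwmUnregisterThumbnail(thumb);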
Now, my question is: what would be the best approach to achieve this on Linux? I have full liberty in choosing an appropriate X server and display manager if needed, and can write whatever software it takes to make it as low-overhead as it is on Windows. Is there a similar API to the Windows one for some Linux compositor, like Mutter or KWin, that works well? Or should I hook into X and copy image data from it? Would that eat a lot of CPU?
What's your experience and opinion? How should I take on this?
Thank you very much.

In Vulkan, how can you associate each individual video card with the monitors it's directly connected to?

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle, but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine or Nvidia GPU. It seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid member of the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of your window, and from that the monitor it sits on (e.g. via MonitorFromWindow). This lets you associate a Vulkan surface with a DirectX monitor.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
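Putting the above together, a hedged sketch of the LUID matching might look like this (it assumes a VkInstance created with VK_KHR_get_physical_device_properties2 and VK_KHR_external_memory_capabilities enabled, an already-created DXGI factory, and that vkGetPhysicalDeviceProperties2KHR has been loaded via vkGetInstanceProcAddr):

    #include <dxgi.h>
    #include <vulkan/vulkan.h>
    #include <cstring>
    #include <vector>

    // Find the VkPhysicalDevice driving the adapter that owns 'targetMonitor'
    // (e.g. MonitorFromWindow(glfwGetWin32Window(window), MONITOR_DEFAULTTONEAREST)).
    VkPhysicalDevice FindDeviceForMonitor(VkInstance instance,
                                          IDXGIFactory* factory,
                                          HMONITOR targetMonitor)
    {
        // 1) Walk DXGI adapters/outputs until we hit the monitor; keep the LUID.
        LUID targetLuid = {};
        bool found = false;
        IDXGIAdapter* adapter = nullptr;
        for (UINT a = 0; !found && factory->EnumAdapters(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
        {
            IDXGIOutput* output = nullptr;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
            {
                DXGI_OUTPUT_DESC outDesc;
                output->GetDesc(&outDesc);
                if (outDesc.Monitor == targetMonitor)
                {
                    DXGI_ADAPTER_DESC adDesc;
                    adapter->GetDesc(&adDesc);
                    targetLuid = adDesc.AdapterLuid;
                    found = true;
                }
                output->Release();
            }
            adapter->Release();
        }

        // 2) Find the VkPhysicalDevice whose deviceLUID matches that adapter.
        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> devices(count);
        vkEnumeratePhysicalDevices(instance, &count, devices.data());
        for (VkPhysicalDevice dev : devices)
        {
            VkPhysicalDeviceIDPropertiesKHR idProps = {};
            idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR;
            VkPhysicalDeviceProperties2KHR props2 = {};
            props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR;
            props2.pNext = &idProps;
            vkGetPhysicalDeviceProperties2KHR(dev, &props2);  // extension entry point
            if (idProps.deviceLUIDValid &&
                std::memcmp(idProps.deviceLUID, &targetLuid, VK_LUID_SIZE_KHR) == 0)
                return dev;
        }
        return VK_NULL_HANDLE;
    }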
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross-platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, and I'm afraid you need to use OS-specific functions (you need to rely on the WinAPI in this situation).
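Where VK_KHR_display is supported, listing the displays attached to a physical device is a short enumeration; a sketch, again assuming the extension entry points have been loaded:

    uint32_t displayCount = 0;
    vkGetPhysicalDeviceDisplayPropertiesKHR(physicalDevice, &displayCount, nullptr);
    std::vector<VkDisplayPropertiesKHR> displays(displayCount);
    vkGetPhysicalDeviceDisplayPropertiesKHR(physicalDevice, &displayCount, displays.data());
    for (const VkDisplayPropertiesKHR& d : displays)
        printf("display: %s\n", d.displayName ? d.displayName : "(unnamed)");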
[EDIT]
Did you see this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system, i.e. the Windows API, to pick the display you want to use. It can be straightforward.
BUT if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions to handle windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
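As a quick taste of the GLFW route, a self-contained sketch that only lists monitors and their current video modes:

    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit()) return 1;
        int count = 0;
        GLFWmonitor** monitors = glfwGetMonitors(&count);
        for (int i = 0; i < count; ++i)
        {
            const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
            if (!mode) continue;
            std::printf("%s: %dx%d @ %d Hz\n", glfwGetMonitorName(monitors[i]),
                        mode->width, mode->height, mode->refreshRate);
        }
        glfwTerminate();
        return 0;
    }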

Determining and restoring window state in Linux

I use Xlib. I want to remember the window position and restore the window to that position the next time it starts. This will help the user, who won't need to move/resize the window to the desired position at every start.
It works more or less OK, except in one case. When the window is maximized, I cannot find a way to determine its true (non-maximized) size and position. Perhaps someone knows how to do this?
There isn't, as far as I know, a standard way to do this. If you read the source to Metacity, for example, you can see it keeps the unmaximized size in the MetaWindow object, but I don't think it gets stored in a property, and I don't see a property for this in the EWMH or ICCCM specs.
It's possible that some specific window managers might store it in a nonstandard property.
When a window is maximized you could get a property change event with the maximization flag (libwnck is an old library to track this sort of thing; maybe there's a newer one I don't know about). However, I doubt it's defined whether the resizing happens before or after setting the flag. You could perhaps heuristically assume that a resize covering most of the screen within one second of setting the maximize flag was a maximize, or some similar hack.
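For what it's worth, checking the EWMH maximization flags when such a property change arrives is straightforward (a sketch; dpy and win are assumed to be your Display* and Window):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    bool IsMaximized(Display* dpy, Window win)
    {
        Atom netWmState = XInternAtom(dpy, "_NET_WM_STATE", False);
        Atom maxH = XInternAtom(dpy, "_NET_WM_STATE_MAXIMIZED_HORZ", False);
        Atom maxV = XInternAtom(dpy, "_NET_WM_STATE_MAXIMIZED_VERT", False);

        Atom actualType;
        int actualFormat;
        unsigned long nItems, bytesAfter;
        unsigned char* data = nullptr;
        bool horz = false, vert = false;

        if (XGetWindowProperty(dpy, win, netWmState, 0, 1024, False, XA_ATOM,
                               &actualType, &actualFormat, &nItems, &bytesAfter,
                               &data) == Success && data)
        {
            Atom* atoms = reinterpret_cast<Atom*>(data);
            for (unsigned long i = 0; i < nItems; ++i)
            {
                if (atoms[i] == maxH) horz = true;
                if (atoms[i] == maxV) vert = true;
            }
            XFree(data);
        }
        return horz && vert;  // note: the pre-maximize geometry is still unknown
    }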
All that said, I suspect the easiest way to get this feature is to implement it as a window manager extension or plugin, which many WMs support in some way.
There is an old spec for this (the X session management protocol); unfortunately it is ridiculously broken, unclear, and essentially impossible to implement. A core issue that was never addressed - and that probably needs app cooperation to address - is how to recognize "the same" window across restarts in order to restore its size.
There are only flawed heuristics for that.
(I wrote Metacity and worked on a couple of session managers long ago, so at one point I could have told you a lot more, but I've forgotten many details.)

Gesture Recognition for my Grandma (Kinect) Linux

I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by using the remote. So I've been looking into basic gesture recognition. The aim will be to, say, turn the volume of the TV up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux-based tutorial which shows how to do something as a result of a gesture. One other thing to note is that I don't need any GUI apart from the debug window, as a GUI would slow my program down a fair bit.
Does anybody know of something that would allow me to constantly check for a hand gesture in a loop and, when one is detected, control something, without the need for any GUI at all, and on Linux? :/
I'm happy to go with any language, but my experience revolves around Python and C.
Any help will be accepted with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look at horizontal and vertical swipes of the right hand exceeding an (averaged) velocity threshold; see the sketch below. Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
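To make the velocity-threshold idea concrete, here's a toy sketch (all units, names, and thresholds are made up; the hand positions would come from whatever skeleton tracker you use):

    #include <deque>

    struct HandSample { float x, y; double t; };  // hypothetical units: metres, seconds

    class SwipeDetector {
        std::deque<HandSample> window_;  // short history of recent samples
    public:
        // Returns -1 (left swipe), +1 (right swipe) or 0 (nothing) per new sample.
        int Update(const HandSample& s, double windowSec = 0.25, float thresholdMps = 1.0f)
        {
            window_.push_back(s);
            while (!window_.empty() && s.t - window_.front().t > windowSec)
                window_.pop_front();
            if (window_.size() < 2) return 0;

            double dt = window_.back().t - window_.front().t;
            if (dt <= 0.0) return 0;
            // Velocity averaged across the whole window, as suggested above.
            float vx = (window_.back().x - window_.front().x) / (float)dt;
            if (vx > thresholdMps)  { window_.clear(); return +1; }  // clear to avoid retrigger
            if (vx < -thresholdMps) { window_.clear(); return -1; }
            return 0;
        }
    };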

Touch Screen Running Windows CE

I'm starting my first project that runs on a 7 inch touch screen running Windows CE 6.0 (and NETCF 3.5).
The touch screen doesn't respond very well when I use my finger. The only way for me to navigate around is by using a stylus (or similar).
Since I've never worked with Windows CE or a resistive touch screen, I'm not sure whether I should expect to be able to use my finger or whether the stylus is, essentially, the only way to effectively navigate around - or maybe I have a touch screen that simply isn't that good.
If you have experience with WinCE running on a touch screen, do you find that a stylus is the only way to go?
A resistive touchscreen can certainly provide feedback for a finger - I've even configured them for gloved hands. It sounds like the touchscreen driver is tossing out the data samples it's getting from the panel, and the key for you is going to be figuring out why.
In my experience there are really two primary reasons for your samples to be ignored:
1. The driver has been configured with too tight a tolerance.
Sensitivity is often a configurable item - maybe through a recompile of the OS, maybe through the registry; it depends on how your OEM implemented it. Check with the OEM and see if you can adjust it.
2. The panel has too much noise, causing your samples to get tossed.
This one is easy to check. Drag a selection rectangle on the desktop with a stylus and hold the end point down (don't lift the stylus). Is it steady, or does it "wiggle" a lot at the final point? If it wiggles, you have noise. Grounding the panel usually helps, but it could be a hardware issue. I've done rolling-average work in touch-panel drivers to help smooth this out (see the sketch below), but you then have to fight hysteresis.
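The rolling average mentioned above is only a few lines in the driver's sample path; a sketch with hypothetical types (the window size is the smoothness/lag trade-off):

    struct TouchSample { int x, y; };  // hypothetical raw panel sample

    class RollingAverage {
        static const int N = 8;        // window size: larger = smoother but laggier
        TouchSample buf_[N];
        int count_ = 0, head_ = 0;
    public:
        TouchSample Filter(TouchSample s)
        {
            buf_[head_] = s;               // overwrite oldest slot in the ring
            head_ = (head_ + 1) % N;
            if (count_ < N) ++count_;
            long sx = 0, sy = 0;
            for (int i = 0; i < count_; ++i) { sx += buf_[i].x; sy += buf_[i].y; }
            return { (int)(sx / count_), (int)(sy / count_) };
        }
    };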
Be aware that larger touchscreens have different resistivity properties than smaller ones, so if you just swapped panels from small to large, it's quite possible that the output range difference of the panels is not workable with the current driver settings. Again, some OEMs provide the ability to adjust these settings.
So can it work with a finger? Well, there's nothing in the physics that would prevent using a finger; in fact, if you can't use a finger, something is wrong. Will it work in reality? Check with your OEM.
