In Vulkan, how can you associate each individual video card with the monitors it's directly connected to?

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle between them, but is there any hint about which surface is optimal to associate with a device?
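For reference, this is roughly the query I mean; it reports support for every device/surface combination on my machine (physicalDevices, surfaces and queueFamilyIndex are placeholders for things set up elsewhere):

    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    // Sketch of the query described above. 'physicalDevices', 'surfaces' and
    // 'queueFamilyIndex' stand in for values created elsewhere.
    void printSurfaceSupport(const std::vector<VkPhysicalDevice>& physicalDevices,
                             const std::vector<VkSurfaceKHR>& surfaces,
                             uint32_t queueFamilyIndex)
    {
        for (size_t d = 0; d < physicalDevices.size(); ++d) {
            for (size_t s = 0; s < surfaces.size(); ++s) {
                VkBool32 supported = VK_FALSE;
                vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevices[d], queueFamilyIndex,
                                                     surfaces[s], &supported);
                std::printf("device %zu / surface %zu: %s\n", d, s,
                            supported ? "presentation supported" : "not supported");
            }
        }
    }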
From what I can tell, the VK_KHR_display extension is one way of doing this, but it's not available on my Windows 10 machine with an Nvidia GPU; it seems to be intended mainly for embedded platforms. However, it does let you list the displays attached to each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.

Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of each window, and MonitorFromWindow to find the HMONITOR it's on, which matches the Monitor member of the corresponding DXGI_OUTPUT_DESC. This lets you associate a Vulkan surface with a DirectX output (i.e. a monitor).
You now have all the information you need to associate Vulkan surfaces with the devices their monitors are actually connected to.
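Putting it together, the LUID-matching part looks roughly like this (a sketch; error handling omitted, and the returned adapter's outputs can then be enumerated with EnumOutputs):

    #include <dxgi.h>
    #include <vulkan/vulkan.h>
    #include <cstring>

    // Rough sketch: find the DXGI adapter (and therefore its outputs/monitors)
    // that corresponds to a given Vulkan physical device by comparing LUIDs.
    // Assumes the instance was created with VK_KHR_get_physical_device_properties2
    // and the device reports VK_KHR_external_memory_capabilities.
    IDXGIAdapter* FindAdapterForDevice(VkInstance instance, IDXGIFactory* factory,
                                       VkPhysicalDevice physicalDevice)
    {
        VkPhysicalDeviceIDPropertiesKHR idProps = {};
        idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR;

        VkPhysicalDeviceProperties2KHR props2 = {};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR;
        props2.pNext = &idProps;

        // Extension entry point, so it has to be fetched through the loader.
        auto pfnGetProps2 = (PFN_vkGetPhysicalDeviceProperties2KHR)
            vkGetInstanceProcAddr(instance, "vkGetPhysicalDeviceProperties2KHR");
        pfnGetProps2(physicalDevice, &props2);

        IDXGIAdapter* adapter = nullptr;
        for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC desc = {};
            adapter->GetDesc(&desc);
            if (idProps.deviceLUIDValid &&
                std::memcmp(&desc.AdapterLuid, idProps.deviceLUID, VK_LUID_SIZE_KHR) == 0) {
                return adapter;  // caller can now call adapter->EnumOutputs(...) to list its monitors
            }
            adapter->Release();
        }
        return nullptr;
    }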
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross-platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.

There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second one called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to use OS-specific functions (in this situation, the WinAPI).
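If you want to check what your drivers actually expose, a minimal sketch (note that VK_KHR_display is an instance-level extension, while VK_KHR_display_swapchain is a device-level one):

    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Sketch: report whether the display extensions are available at all.
    void checkDisplayExtensions(VkPhysicalDevice physicalDevice)
    {
        uint32_t count = 0;
        vkEnumerateInstanceExtensionProperties(nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> instExts(count);
        vkEnumerateInstanceExtensionProperties(nullptr, &count, instExts.data());
        for (const VkExtensionProperties& e : instExts)
            if (std::strcmp(e.extensionName, VK_KHR_DISPLAY_EXTENSION_NAME) == 0)
                std::printf("VK_KHR_display is available\n");

        vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> devExts(count);
        vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, devExts.data());
        for (const VkExtensionProperties& e : devExts)
            if (std::strcmp(e.extensionName, VK_KHR_DISPLAY_SWAPCHAIN_EXTENSION_NAME) == 0)
                std::printf("VK_KHR_display_swapchain is available\n");
    }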
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you get started with your research.

As you already discovered, on Win32 you need to use the OS windowing system, via the Win32 API, to pick the display you want to use. It can be straightforward.
But if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSes.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
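For example, enumerating monitors and creating one full-screen window plus Vulkan surface per monitor might look roughly like this (a sketch; error handling omitted, and the VkInstance is assumed to already exist):

    #define GLFW_INCLUDE_VULKAN
    #include <GLFW/glfw3.h>

    // Sketch: one full-screen window (and Vulkan surface) per monitor.
    void createSurfacesPerMonitor(VkInstance instance)
    {
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);   // no OpenGL context; Vulkan is used instead

        int monitorCount = 0;
        GLFWmonitor** monitors = glfwGetMonitors(&monitorCount);
        for (int i = 0; i < monitorCount; ++i) {
            const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
            GLFWwindow* window = glfwCreateWindow(mode->width, mode->height,
                                                  glfwGetMonitorName(monitors[i]),
                                                  monitors[i], nullptr);   // full screen on this monitor
            VkSurfaceKHR surface = VK_NULL_HANDLE;
            glfwCreateWindowSurface(instance, window, nullptr, &surface);
            // ... match 'surface' to the right VkPhysicalDevice (e.g. via the LUID trick above)
        }
    }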

Related

Mirror windows on X (Linux)

I am required to write a very efficient application that will mirror the contents of an arbitrary external application multiple times (a lot of times) onto an area of my own window, on Linux. On Windows, the way I used to do this was with the help of DwmRegisterThumbnail, which tells the compositor (the Desktop Window Manager) that I want it to draw the thumbnail of that foreign window, which it generates anyway, onto a rectangle in my own window when it composes the desktop image displayed to the user on the monitor. This is, I think, one of the lowest-overhead ways to achieve my goal on Windows. The goal is to have very minimal impact on the CPU, as the app will run on a pretty constrained machine. I never tested it against GDI or DirectX methods of copying the screen data, but I do not believe they are faster. Or maybe I am wrong; do correct me if so, please. Is there any other method that is faster on Windows? The limitations of this method include not being able to touch the actual image data, so no drawing on top of it, for example, which is fine for my goal.
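For context, the Windows approach I'm describing looks roughly like this (a sketch; the two HWNDs are placeholders):

    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    // Rough sketch of the DWM thumbnail approach described above.
    // 'myWindow' and 'foreignWindow' are placeholder window handles.
    void mirrorWindow(HWND myWindow, HWND foreignWindow)
    {
        HTHUMBNAIL thumb = nullptr;
        if (SUCCEEDED(DwmRegisterThumbnail(myWindow, foreignWindow, &thumb))) {
            DWM_THUMBNAIL_PROPERTIES props = {};
            props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
            props.rcDestination = { 0, 0, 320, 240 };   // where the compositor should draw it in my window
            props.fVisible = TRUE;
            DwmUpdateThumbnailProperties(thumb, &props); // the compositor does the actual drawing
        }
        // later: DwmUnregisterThumbnail(thumb);
    }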
Now, my question is: what would be the best approach to achieve this on Linux? I have full liberty in choosing an appropriate X server and display manager if needed, and I can write whatever software is necessary to make it as low-overhead as it is on Windows. Is there a similar API to the one on Windows for some Linux compositor, like Mutter or KWin, that works well? Or should I hook into X and copy image data from it? Would that eat a lot of CPU?
What's your experience and opinion? How should I take on this?
Thank you very much.

AS/400: other ways to display graphics?

I'm aware of the existence of DDS files, which allow programming of display graphics on the AS/400, but is there another way?
Specifically, what I want to do is manipulate the terminal buffer directly to be able to display anything else than just text.
For example, the terminal displays a plain text menu. Let's say that, in memory, there were a two-dimensional char array, text[20][80], for the text menu and, below that, a pixel buffer array of size [200][800].
Is there a way to access either of those arrays directly?
I would like to be able to create a displayable menu entirely in C, without the need for a display file, and also display other kinds of graphics (images) directly in the pixel buffer.
Is there a way to access either of those arrays directly?
That's easy enough, though a "display file" that has no formatted fields will still be needed. The 'file' will be the connection between the program and the physical device (or the emulator). You can define a single large area that contains whatever "text" you want your program to put into it. This can even include display field attributes that delimit input areas.
For the most control, the DDS USRDFN keyword is appropriate. But for simple stuff like lists of menu items, almost any large text field can be output to.
Outputting simple text is easy. For detailed stuff like USRDFN formatting, detailed understanding of the 5250 protocol is needed.
One kind of alternative would be to use the User Interface Manager (UIM) APIs to update a PANEL's "text area" (:TEXT) via its USREXIT= application program. The UIM handles everything as far as any "display file" definition and actual I/O goes. The UIM can be thought of as an HTML-like interface for 5250 and uses a very similar markup language to define PANELs.
Another alternative is the Dynamic Screen Manager (DSM) APIs. These give much finer control than the UIM or DDS methods (though DDS USRDFN gets very close). But as with USRDFN, actual device control will require 5250 protocol knowledge.
...and also display other kinds of graphics (images) directly in the pixel buffer.
There is no "pixel buffer" for 5250 nor even 'pixels'. It's a character-based protocol, like telnet. If you're going for images or 'pixels', you're into browser interfaces, or perhaps Java and NAWT, or X-windows, etc.
Now, granted that with TCP/IP and sockets, you can do essentially anything that you're able to program. Whatever you can figure out how to do, including downloading/installing 3rd-party code libraries, you can do -- within the network restrictions surrounding your server. But it is in fact a server, so GUI kinds of apps generally shouldn't run on it. That's the same as for almost all types of servers. Code the GUI on the client system rather than the server. But you can do it if you really want to.
I'm not sure why you'd want to do this...
Nowadays, it'd be much easier to simply generate your output as HTML and serve it up via the integrated Apache web server.
But if you really want to do graphics via 5250, it can be done...theoretically at least. In 20+ years on the platform, I've never seen it.
But way back when (1994?), IBM added support for Graphical Data Display Manager (GDDM) and Presentation Graphics APIs into OS/400. "GDDM is a means of displaying, printing, or plotting pictures. Presentation Graphics routines are a means of displaying, printing, or plotting business charts."
The support is still in the OS. However, client side support is NOT available in IBM i Access for Windows or the most recently released client, IBM Access Client Solutions (ACS). It appears that the standalone IBM Personal Communications product may support GDDM.
For complete control of the character buffer, take a look at the Dynamic Screen Manager (DSM) APIs. The DSM APIs are "a set of screen I/O interfaces that provide a dynamic way to create and manage screens for the Integrated Language Environment® (ILE) high-level languages. Because the DSM interfaces are bindable, they are accessible to ILE programs only."
There is a way to do it in ILE C/C++. This was fun to investigate, since I hadn't tried it myself.
The only documentation on it (page 183+) I could find is from 5.1, but you can cross-reference the functions it uses against this 7.3 manual (possibly page vii/7) to see if they're still used the same way.
Hope this helped!

Tegra 3 OpenGL ES directly to framebuffer

I am developing an embedded Linux system using the Apalis T30 Tegra 3 SOM from Toradex. I only need a very simple multi-touch user interface for it. I am trying to push the performance and efficiency of the UI as far as I can because the device will have to render complex 3D models while allowing live interaction, and I know for a fact that my users will have models that will make it bog down no matter what. I am therefore trying to push that point as far away as I can. Memory will also be a constraint, and some models might use up all the RAM if there is not enough available.
What I would like to do to solve this is to have an OpenGL ES 2 GUI with SVG UI elements combined with GLES 3D views, rendered directly to the frame-buffer. In other words, I want to completely ditch any form of a window/desktop manager because I won't need it. I only need a single full-screen GLES drawing surface. I don't even need pointer events etc. as I will be talking to the touch panel directly from my application.
I have looked around quite a bit but I could not really find any conclusive information. I constantly read reports of HW acceleration not working when the framebuffer is used directly, but I guess one could render the GLES output into an image and then just push it to the FB? I have also read that the graphics driver might be locked to X11, but I am struggling to find details about the Tegra graphics driver. I have also read reports about Nvidia open-sourcing their driver; is this true?
Any assistance or explanations will be greatly appreciated.
PS. Please don't preach to me about how bad an idea this is and how I should rather use Qt or something like that; I want to find out how to do what I am planning here.
PPS. What I basically want to be able to do is what I understand embedded Qt 5 does in its "EGLFS" rendering mode.
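For reference, the kind of bring-up I have in mind is the usual EGL/GLES2 sequence below (a sketch; the native window type expected by the Tegra's non-X EGL implementation is exactly the part I haven't figured out):

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    // Rough sketch of a windowing-system-free EGL/GLES2 bring-up (what Qt's eglfs does).
    // The native display/window types are driver-specific; 0 works on some
    // fbdev-style EGL implementations but not all.
    int main()
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);

        const EGLint configAttribs[] = {
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig config;
        EGLint numConfigs = 0;
        eglChooseConfig(dpy, configAttribs, &config, 1, &numConfigs);

        EGLNativeWindowType nativeWindow = 0;   // vendor-specific placeholder on a bare framebuffer
        EGLSurface surface = eglCreateWindowSurface(dpy, config, nativeWindow, nullptr);

        const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, contextAttribs);
        eglMakeCurrent(dpy, surface, surface, ctx);

        // ... render with GLES2, then:
        eglSwapBuffers(dpy, surface);
        return 0;
    }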

Command-Line linux OpenGL processing

I need to build a command-line tool that will take a 3D model as an argument and will output rendered images of it, which may or may not be processed by this application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or accept any other input apart from the command line arguments.
I was wondering how someone would approach this. I am currently able to display the 3D model on screen with the help of GLFW, which also drives my event handlers for peripheral input as well as my main loop. However, I don't know if using GLFW will help me if I want to make a command-line program whose input and output are files.
Does anyone have any indications as to how to approach this?
Create an invisible/hidden window,
use its GL context to render to an FBO, and
use glReadPixels to save that to a file (see the sketch below).
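A minimal sketch of those three steps using GLFW and GLEW (any GL function loader will do; the output is written as a raw RGBA dump to keep the example short):

    #include <GL/glew.h>
    #include <GLFW/glfw3.h>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int W = 1024, H = 768;

        glfwInit();
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            // 1) hidden window, nothing appears on screen
        GLFWwindow* win = glfwCreateWindow(W, H, "offscreen", nullptr, nullptr);
        glfwMakeContextCurrent(win);
        glewInit();

        // 2) render into an FBO instead of the (hidden) default framebuffer
        GLuint fbo, color, depth;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glGenRenderbuffers(1, &color);
        glBindRenderbuffer(GL_RENDERBUFFER, color);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, W, H);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);
        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);

        glViewport(0, 0, W, H);
        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the loaded 3D model here ...

        // 3) read the pixels back and dump them to a file (encode PNG etc. in a real tool)
        std::vector<unsigned char> pixels(size_t(W) * H * 4);
        glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        std::FILE* f = std::fopen("out.rgba", "wb");
        std::fwrite(pixels.data(), 1, pixels.size(), f);
        std::fclose(f);

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }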
For OpenGL to work you need an OpenGL context, which used to require some kind of windowing system to be active that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that are created without a windowing system; Mesa calls this "off-screen Mesa" (OSMesa). With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
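If you go the off-screen Mesa route, the setup via the OSMesa API is very small (a sketch; requires a Mesa build with OSMesa enabled, and it is usually software-rasterized):

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>

    // Sketch of "off-screen Mesa": no window system at all, rendering lands in a
    // plain memory buffer owned by the application.
    int main()
    {
        const int W = 1024, H = 768;
        OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 8, 0, nullptr);
        std::vector<unsigned char> buffer(size_t(W) * H * 4);
        OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, W, H);

        // ... regular OpenGL calls here; the result ends up in 'buffer' ...
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        OSMesaDestroyContext(ctx);
        return 0;
    }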
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it. OpenGL is mostly meant for creating interactive graphics; but of course, if your goal is the visualization of complex data, then a GPU may well be better suited for the job.
With NVidia hardware you'll need to use an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open source drivers and Mesa may give you off-screen capabilities without X (but I never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4 and very slow), so don't bother with it.

How to use X window to create a GUI for Linux OS interface?

Can you provide me with some surface-level knowledge about this?
How can I use the latest Linux kernel and the X Window System GUI to create my own embedded OS interface?
If you want to learn to make your own distribution, look at linux from scratch. A pre-existing embedded distribution may be more what you are looking for. Some are uclinux-dist, openembedded, poky, ltib, buildroot.
When you say "small" what do you mean by small? Small means reduced functionality.
The smallest is writing your own code that writes directly to the framebuffer (see the sketch after this list). Your GUI may look like Space Invaders.
Bigger would be to use a direct-to-framebuffer toolkit like Nano-X.
Bigger again is DirectFB.
Bigger again is a high-level toolkit (GTK or Qt) on top of DirectFB.
And the biggest is X with a window manager and a high-level toolkit.
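For a taste of the smallest option, writing straight to /dev/fb0 looks roughly like this (a sketch assuming a 32 bpp mode; real code must check return values and handle other pixel formats):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdint>

    // Sketch of the "write straight to the framebuffer" option: map /dev/fb0
    // and fill it with a solid color.
    int main()
    {
        int fd = open("/dev/fb0", O_RDWR);
        fb_var_screeninfo var;
        fb_fix_screeninfo fix;
        ioctl(fd, FBIOGET_VSCREENINFO, &var);
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        uint8_t* fb = static_cast<uint8_t*>(
            mmap(nullptr, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        for (uint32_t y = 0; y < var.yres; ++y) {
            uint32_t* row = reinterpret_cast<uint32_t*>(fb + y * fix.line_length);
            for (uint32_t x = 0; x < var.xres; ++x)
                row[x] = 0x00336699;                    // XRGB pixel, assuming 32 bpp
        }

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }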
Having "learned" already, I would use whatever comes with the platform you are developing on.
End Dump.
First suggestion: code HTML and use a browser. All of the heavy lifting will be done for you. More to the point, most embedded OSen do not live on systems with keyboards, video, and mice. Exporting everything to a remote web client through a web server is the standard way of doing things.
Second suggestion: use a high-level toolkit like Qt, KDE, or GNOME. Coding in low-level X is painful.
