I'm learning OpenGL under X11 with xcb, and I'm having a hard time figuring out the difference between visuals and fbconfigs (the ones listed by glxinfo).
As far as I can tell, a visual is a set of properties related to the depth buffer, stencil buffer, framebuffer, and so on. What's the difference with fbconfigs, and why would one be preferable to the other?
In the X Window System, a Visual encapsulates the color mapping (color type, color depth) of a Display; the same Display can be configured with different Visuals.
When OpenGL was born, about a decade after X, GLX extended this notion with additional attributes such as ancillary buffers, double buffering, and stereo, all described through the XVisualInfo structure; that XVisualInfo was what you used to create the GL context.
In 1998 the GLX 1.3 specification (available on the Khronos site) added more features, notably GLXPbuffer for off-screen rendering, which is easier to use than GLXPixmap. Also added were transparency, multisampling, and sample buffers. The configuration of a GLXDrawable (a Window or GLXPixmap, and now also a GLXWindow or GLXPbuffer) had diverged too far from what a Visual could express, and so GLXFBConfig was introduced.
The current GLX 1.4 specification still allows, for backwards-compatibility reasons and provided you use no GLX > 1.2 features, the use of XVisualInfo. But the preferred way of creating a context is through a GLXFBConfig.
Notice that rendering to a GLXPbuffer does not use an X Visual at all. Notice also that Framebuffer Objects, available since OpenGL 3.0, have made GLXPbuffer largely obsolete.
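For illustration, here is a minimal sketch of the FBConfig-based path; the attribute list is just an example and error handling is omitted:

    /* Sketch: pick a GLXFBConfig and create a context from it (GLX >= 1.3).
     * Attribute values are examples; real code must check for failure. */
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    GLXContext create_context(Display *dpy)
    {
        static const int attribs[] = {
            GLX_RENDER_TYPE,   GLX_RGBA_BIT,
            GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
            GLX_DOUBLEBUFFER,  True,
            GLX_DEPTH_SIZE,    24,
            None
        };
        int count = 0;
        GLXFBConfig *configs =
            glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &count);
        /* configs[0] is the best match according to the GLX sorting rules */
        GLXContext ctx =
            glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
        XFree(configs);
        return ctx;
    }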
The visual is a concept of X11 itself. It describes the color encoding properties. A particular X11 server may support a set of different visuals, and an X11 client (a graphical application) may choose the one best suited for its use case. Every X11 window is created with respect to one visual. See the documentation about X11 visual types for details.
On an X11 server with the GLX extension, there are a couple of such visuals which provide hardware-accelerated rendering via OpenGL. Before you can create an X11 window which you're going to use for GL rendering, you need to query a suitable visual. In traditional GLX, you would use for example glXChooseVisual to do that.
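A rough sketch of what that looks like (the attributes are examples only):

    /* Sketch: traditional GLX visual selection. The returned XVisualInfo
     * (its visual and depth members) is what you create the X11 window with. */
    #include <GL/glx.h>

    XVisualInfo *pick_visual(Display *dpy)
    {
        int attribs[] = {
            GLX_RGBA,           /* boolean attributes take no value here */
            GLX_DOUBLEBUFFER,
            GLX_DEPTH_SIZE, 24,
            None
        };
        /* NULL means no visual on this screen matches the request */
        return glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    }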
A GLXFBConfig, on the other hand, is an entity that is only relevant for GLX itself; the classical X server does not know anything about it. GLXFBConfigs can be used to create off-screen rendering buffers called PBuffers (which are kind of obsolete nowadays, though).
One could classify FBConfigs into two groups:
GLXFBConfigs which you can use to create an X11 window. In this case, the FBConfig refers to some X11 visual ID, and you can use glXGetVisualFromFBConfig to query it.
GLXFBConfigs which can solely be used for off-screen rendering. There is no associated visual ID, so you cannot use these to create X11 windows.
FBConfigs provide a newer and more flexible interface via glXChooseFBConfig, so it is always preferable to use the FBConfig API, even when you ultimately render to an ordinary window.
What a typical GL implementation will do is provide an FBConfig for each visual it supports, so you should find those twice in the glxinfo output: once as the actual visuals, and once as more or less identical fbconfigs. Additionally, it will offer some more fbconfigs with formats that would be atypical for X11 windows (like more than 32-bit color depth).
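A sketch of how the first group is used in practice (my own example; error handling omitted). glXGetVisualFromFBConfig is the bridge from the FBConfig world back to plain X11:

    /* Sketch: distinguish the two groups and create a window for a
     * "group 1" FBConfig. Group 2 configs simply return no visual. */
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    Window window_for_config(Display *dpy, GLXFBConfig cfg, Window root)
    {
        XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, cfg);
        if (vi == NULL)
            return None;  /* off-screen-only config, no X11 visual */

        XSetWindowAttributes swa = {0};
        swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
        Window win = XCreateWindow(dpy, root, 0, 0, 640, 480, 0,
                                   vi->depth, InputOutput, vi->visual,
                                   CWColormap, &swa);
        XFree(vi);
        return win;
    }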
The More General Question
I am wondering if there is a standard way that operating systems / desktop managers use to expose the user's preference regarding the placement of the window frame controls (Close, Maximize/Miniaturize, Minimize).
For platforms like Windows and macOS, it's "pretty" safe to assume that the user wants their window controls on the right and left respectively, to match the rest of the windows in the GUI. But the key word here is "assume". I hate to assume things when I code.
Furthermore, what about all the different Linux distributions and flavors?
I think this information could be useful to application developers in the same way that it's useful to know the user's preferences regarding dark or light themes.
My More Specific Question
Now, what I'm building currently is an Electron application that could really benefit from a custom title bar (a.k.a. a frameless window). I do understand that my problem is caused by the fact that I want to bypass the window frame abstraction normally offered by the operating system, but I'd really like to be able to position my custom controls in my title bar without having to guess.
Since I use Electron, I do have access to native features through Node.js. But I'd also be curious to know whether browsers have, or are planning to implement, a way for the CSS or JavaScript running in the browser to determine the intended placement of the window controls, again, similarly to prefers-color-scheme.
I am sorry if my question is a bit vague; I am trying to understand where to look for my problem. I have a regression test suite that captures and compares screenshots, and it seems that whenever we do some kind of library upgrade, the regression tests fail. Our font settings are the same. The trigger might be a graphics card (driver) upgrade, a window manager upgrade, or just a third-party library upgrade (for example, the Qt library). To the human eye the fonts look almost identical, but a pixel-to-pixel comparison shows that the snapshots differ. Does anyone have insight into how fonts are rendered?
Graphics rendering on Linux is a proper mess. While Linux is about as old as Windows, Linux adopted the even older X11 window system. X11 was one of the oldest GUI systems in the world, and it shows: the API is beyond horrible. As a result, lots and lots of libraries were stacked on top of X11 to make it workable, with varying degrees of compatibility.
To make things worse, X11 was never just a single implementation; there were competing X11 implementations. Linux chiefly used XFree86, which later became Xorg. And because that's not confusing enough, recent developments added alternatives to X11 (most notably Wayland) which provide backwards-compatibility interfaces to X11. Some of the GUI libraries on top of X11 are aware of these newer systems and may now use the new interfaces.
So you basically have a pretty fragile system, and any library with a decent programming model is standing on shaky foundations. It's no wonder that changing any part may suddenly change the rendered output, possibly even selecting entirely new rendering paths.
Windows is a bit better, but it too is old and has competing GUI libraries. The reason it's better is probably threefold: a single party (Microsoft) controls all the interfaces, that party was aware of the bad X11 design from the start (and avoided those beginner mistakes), and Microsoft has far more resources to spend.
Still, both Linux and Windows have had to evolve to support Unicode and the much larger fonts it brought, 24-bit color, high-DPI screens, LCD screens with subpixel resolution, GPU acceleration, and so on. And it has been hard for both to drop old interfaces.
I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle. But is there any hint about which surface is optimal to associate with a device?
From what I can tell, the VK_KHR_display extension is one way of doing this, but it's not available on my Windows 10 machine or NVIDIA GPU; it seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
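For reference, where the extension is supported, the enumeration looks roughly like this (a sketch; on typical desktop Windows drivers these entry points simply aren't there):

    /* Sketch: list displays attached to a physical device via
     * VK_KHR_display. If your loader does not export these functions,
     * fetch them with vkGetInstanceProcAddr instead. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <vulkan/vulkan.h>

    void list_displays(VkPhysicalDevice gpu)
    {
        uint32_t count = 0;
        vkGetPhysicalDeviceDisplayPropertiesKHR(gpu, &count, NULL);

        VkDisplayPropertiesKHR *props = calloc(count, sizeof *props);
        vkGetPhysicalDeviceDisplayPropertiesKHR(gpu, &count, props);

        for (uint32_t i = 0; i < count; ++i)
            printf("display %u: %s\n", i, props[i].displayName);
        free(props);
    }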
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a workaround to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid member of the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of your window; together with MonitorFromWindow and the HMONITOR in DXGI_OUTPUT_DESC, this lets you associate a Vulkan surface with a DirectX output (i.e. a monitor).
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
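To make the LUID matching concrete, here is a sketch. It assumes you already copied DXGI_ADAPTER_DESC::AdapterLuid into an 8-byte buffer; also, in real code the vkGetPhysicalDeviceProperties2KHR pointer should be fetched via vkGetInstanceProcAddr (or use the core vkGetPhysicalDeviceProperties2 on Vulkan 1.1+):

    /* Sketch: find the VkPhysicalDevice matching a DXGI adapter LUID.
     * Requires the VK_KHR_get_physical_device_properties2 instance
     * extension; dxgi_luid holds the bytes of DXGI_ADAPTER_DESC::AdapterLuid. */
    #include <string.h>
    #include <vulkan/vulkan.h>

    VkPhysicalDevice device_for_luid(VkPhysicalDevice *gpus, uint32_t count,
                                     const uint8_t dxgi_luid[VK_LUID_SIZE])
    {
        for (uint32_t i = 0; i < count; ++i) {
            VkPhysicalDeviceIDPropertiesKHR id = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR
            };
            VkPhysicalDeviceProperties2KHR props = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR,
                .pNext = &id
            };
            vkGetPhysicalDeviceProperties2KHR(gpus[i], &props);

            if (id.deviceLUIDValid &&
                memcmp(id.deviceLUID, dxgi_luid, VK_LUID_SIZE) == 0)
                return gpus[i];
        }
        return VK_NULL_HANDLE;
    }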
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross-platform) if Windows just supported the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second one called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to use OS-specific functions (you have to rely on the WinAPI in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, using the Windows API. It can be straightforward.
BUT if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
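For example, picking a monitor and creating a fullscreen Vulkan window on it is a few calls (a sketch; error handling omitted):

    /* Sketch: pick a monitor with GLFW and create a fullscreen window
     * plus a Vulkan surface on it. */
    #define GLFW_INCLUDE_VULKAN
    #include <GLFW/glfw3.h>

    GLFWwindow *fullscreen_on(int index, VkInstance inst, VkSurfaceKHR *out)
    {
        int count = 0;
        GLFWmonitor **monitors = glfwGetMonitors(&count);
        if (index >= count)
            return NULL;

        const GLFWvidmode *mode = glfwGetVideoMode(monitors[index]);
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);  /* Vulkan, not GL */
        GLFWwindow *win = glfwCreateWindow(mode->width, mode->height,
                                           "view", monitors[index], NULL);
        glfwCreateWindowSurface(inst, win, NULL, out);
        return win;
    }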
I need to build a command-line tool that will take a 3D model as an argument and output photos of it, which may or may not be processed by this application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or accept any other input apart from the command line arguments.
I was wondering how someone would approach this. I am currently able to display the 3D model on-screen with the help of GLFW, which drives my event handlers for peripheral input as well as my main loop. However, I don't know whether GLFW will help me if I want to make a command-line program with files as input and output.
Does anyone have any indications as to how to approach this?
create an invisible/hidden window,
use its GL context to render to an FBO, and
use glReadPixels to save the result to a file (see the sketch below).
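A sketch of those three steps with GLFW. The extension loader (glad here) is an assumption; any GL loader works, and error handling is omitted:

    /* Sketch: hidden window for the context, FBO as render target,
     * glReadPixels to fetch the image. */
    #include <stdlib.h>
    #include <glad/glad.h>
    #include <GLFW/glfw3.h>

    unsigned char *render_to_memory(int w, int h)
    {
        glfwInit();
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);    /* never shown */
        GLFWwindow *win = glfwCreateWindow(w, h, "offscreen", NULL, NULL);
        glfwMakeContextCurrent(win);
        gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

        /* Render into an FBO: unlike the window's own framebuffer, its
         * contents don't depend on window visibility. */
        GLuint fbo, rbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo);

        glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);                /* draw the scene here */

        unsigned char *pixels = malloc((size_t)w * h * 4);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return pixels;                               /* encode to file */
    }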
For OpenGL to work you need an OpenGL context, which used to require some kind of active windowing system that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that are created without a windowing system; Mesa calls this "Off-Screen Mesa" (OSMesa). With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
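For reference, the OSMesa path looks roughly like this (a sketch; it requires a Mesa build with OSMesa enabled):

    /* Sketch: Mesa's off-screen ("OSMesa") context renders straight
     * into a user-supplied memory buffer, no window system needed. */
    #include <stdlib.h>
    #include <GL/osmesa.h>
    #include <GL/gl.h>

    unsigned char *render_osmesa(int w, int h)
    {
        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        unsigned char *buf = malloc((size_t)w * h * 4);
        OSMesaMakeCurrent(ctx, buf, GL_UNSIGNED_BYTE, w, h);

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);   /* draw the scene here */
        glFinish();                     /* buf now holds the image */

        OSMesaDestroyContext(ctx);
        return buf;
    }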
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it. OpenGL is mostly meant for creating interactive graphics; but of course, if your goal is visualization of complex data, a GPU may still be the better tool.
With NVIDIA hardware you'll need an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open-source drivers and Mesa may give you off-screen capabilities without X (but I have never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4, and very slow), so don't bother with it.
I am writing an OpenGL game which will hopefully run on both Linux and iPhone OS. Basically, I want to be able to build against the OpenGL ES 1.5 headers and run the result on my Linux desktop. Can I do this? That is, I want to use only the subset of API calls common to OpenGL and OpenGL ES.
Doing the above and linking with the normal libGL.a from my system gets me my screen, but I seem to be able to do nothing except change the scene's background colour.
I've done exactly that, and it worked well for me.
There are a bunch of OpenGL ES extensions that aren't available in standard OpenGL but are very nice to have on a low-spec platform; OES_draw_texture (the glDrawTex*OES calls) is such an extension. Emulating these extensions with a handful of desktop OpenGL calls is not a big deal, though.
OpenGL ES also supports the fixed-point data format for most entry points; take glClearColorx for example. These aren't available in desktop OpenGL, so you have to write wrappers if you want to use them. It's a bit more work if you also store your vertex data in this format.
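Such a wrapper is tiny, since GLfixed is 16.16 fixed point; a sketch:

    /* Sketch: emulate the GLES fixed-point entry point on the desktop.
     * GLfixed is 16.16 fixed point, so divide by 2^16 to get a float. */
    #include <GL/gl.h>

    typedef int GLfixed;                      /* as in the GLES headers */
    #define X2F(x) ((GLfloat)(x) / 65536.0f)

    void glClearColorx(GLfixed r, GLfixed g, GLfixed b, GLfixed a)
    {
        glClearColor(X2F(r), X2F(g), X2F(b), X2F(a));
    }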
Oh, and note that OpenGL ES does not come with the GLU library. You can use GLU on the desktop, but if you do, you'll have to reimplement those functions later (see the hundred questions about gluLookAt and gluUnProject).
There is no such thing as OpenGL ES 1.5. Did you mean 1.1?
Also, how do you get a window? This is platform-specific.
In any case, you should still compile against the header that corresponds to the library you will link against. You don't know for sure what the header sets up (e.g. on Windows, which you don't care about, but still: calling conventions are specified in there).
There are also some calls that don't map well between the two; e.g. APIs that take only doubles in GL take floats in GLES (from the ES spec):
The double-precision only commands DepthRange, Frustum, and Ortho are replaced with single-precision or fixed-point variants.
So in short, there is a bit more work than just using the same code, although the work in question is still minimal if you stick to the GL ES subset.