I have an OpenGL window opened using GLFW under Linux, and I am trying to use OpenGL from two threads.
I understood from reading around the web that I am supposed to open a second rendering context for the same window. Only one of the threads will actually be used for rendering, while the other will be used to exchange data through PBOs.
The question is: how can I get the parameters needed to call glXCreateContext() for the window I opened using GLFW?
EDIT: I ended up creating two rendering contexts with shared lists, which is probably a better idea.
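For reference, with modern GLFW (3.x) the shared-context setup can be done without touching GLX at all, by creating a second, hidden window whose context shares objects with the main one. A rough sketch (error checking omitted, names are illustrative):

/* Sketch: two GLFW contexts sharing objects, assuming GLFW 3. */
#include <GLFW/glfw3.h>

int main(void)
{
    glfwInit();

    /* Visible window + main rendering context. */
    GLFWwindow *render = glfwCreateWindow(800, 600, "render", NULL, NULL);

    /* Hidden window whose context shares objects (textures, buffers, PBOs)
       with the render context: pass the render window as the `share` argument. */
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow *upload = glfwCreateWindow(1, 1, "upload", NULL, render);

    /* Upload thread: glfwMakeContextCurrent(upload);
       Render thread: glfwMakeContextCurrent(render); */
    glfwMakeContextCurrent(render);

    /* ... render loop ... */

    glfwDestroyWindow(upload);
    glfwDestroyWindow(render);
    glfwTerminate();
    return 0;
}

Since PBOs are buffer objects, anything the upload context creates is visible to the render context.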
Related
I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle, but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the VK_KHR_display extension is one way of doing this, but it's not available on my Windows 10 machine or NVIDIA GPU. It seems to be intended mainly for embedded platforms. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a workaround to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map that information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DirectX DXGI_ADAPTER_DESC structure (see the sketch below).
You can also use glfwGetWin32Window to get the HWND of each window, which lets you work out which monitor, and therefore which DirectX output, a Vulkan surface is on.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
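Roughly, the LUID-matching step looks like this. This is only a sketch (error handling omitted, the helper name is made up); it assumes the two extensions above are enabled, or a Vulkan 1.1 instance where the ID properties are core:

// Match a VkPhysicalDevice to a DXGI adapter by comparing LUIDs.
#include <vulkan/vulkan.h>
#include <dxgi.h>
#include <cstring>

bool SameAdapter(VkPhysicalDevice gpu, IDXGIAdapter *adapter)
{
    VkPhysicalDeviceIDProperties idProps = {};      // VkPhysicalDeviceIDPropertiesKHR on the extension path
    idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;

    VkPhysicalDeviceProperties2 props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &idProps;
    vkGetPhysicalDeviceProperties2(gpu, &props2);   // or vkGetPhysicalDeviceProperties2KHR

    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);

    // deviceLUID is only meaningful when deviceLUIDValid is set.
    return idProps.deviceLUIDValid &&
           std::memcmp(idProps.deviceLUID, &desc.AdapterLuid, VK_LUID_SIZE) == 0;
}

You can then walk IDXGIFactory1::EnumAdapters1 and IDXGIAdapter::EnumOutputs to find which adapter owns the HMONITOR your fullscreen window is on (MonitorFromWindow on the HWND from glfwGetWin32Window), and pick the VkPhysicalDevice whose LUID matches that adapter.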
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be much simpler (and cross-platform) if Windows just supported the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second one called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to rely on OS-specific functions (WinAPI functions in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, via the Windows API. It can be straightforward.
But if you want simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows and monitors on all major OSs.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
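A sketch of what this looks like with GLFW's monitor API, one fullscreen window (and Vulkan surface) per monitor (illustrative only, no error checking; the function name is made up):

// One fullscreen window and Vulkan surface per attached monitor.
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>

void CreateSurfaces(VkInstance instance)
{
    int count = 0;
    GLFWmonitor **monitors = glfwGetMonitors(&count);

    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);   // Vulkan only, no OpenGL context

    for (int i = 0; i < count; ++i) {
        const GLFWvidmode *mode = glfwGetVideoMode(monitors[i]);
        GLFWwindow *window = glfwCreateWindow(mode->width, mode->height,
                                              glfwGetMonitorName(monitors[i]),
                                              monitors[i], nullptr);
        VkSurfaceKHR surface = VK_NULL_HANDLE;
        glfwCreateWindowSurface(instance, window, nullptr, &surface);
        // On Win32, glfwGetWin32Window(window) (from glfw3native.h) gives the
        // HWND needed for the LUID matching described above.
    }
}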
Currently I am using OSMesa for off-screen rendering, running on a Linux (RHEL) command-line interface. It works really well, but rendering consumes a lot of time. Basically, I run an OpenGL animation off-screen, capture frames on the fly, and create a video using ffmpeg. So my question is whether it is possible to use the GPU for off-screen rendering in order to make the rendering process faster.
I know I can use FBOs, but I think they require window support, which I don't have since I'm on a Linux CLI.
So in short, is there any way to use FBOs in my case, or what is the best way to speed up the rendering process?
So my question is whether it is possible to use the GPU for off-screen rendering in order to make the rendering process faster.
In principle yes, but so far no standard API for doing it has been settled on. If you're using NVIDIA GPUs you can use headless EGL with the NVIDIA proprietary drivers: https://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/
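The gist of that article is to pick a GPU through the EGL device extensions and create a context with no window system involved at all. Roughly (a sketch only, error checking omitted, the function name is just for illustration):

// Headless OpenGL context via the EGL device platform (no X server needed).
#include <EGL/egl.h>
#include <EGL/eglext.h>

void CreateHeadlessContext()
{
    // The EXT entry points have to be fetched at run time.
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);

    EGLDisplay display = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                                  devices[0], NULL);
    eglInitialize(display, NULL, NULL);

    const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);

    // A tiny pbuffer just to have something current; the real rendering then
    // goes into FBOs, and frames are read back with glReadPixels for ffmpeg.
    const EGLint pbufferAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface surface = eglCreatePbufferSurface(display, config, pbufferAttribs);
    eglMakeCurrent(display, surface, surface, context);
}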
Using kernel DRM and the Mesa OpenGL drivers it is possible to configure and operate the GPU from a single process without a display server. There's a demo called "kmscube"; I forked it on GitHub and made a few small modifications to it: https://github.com/datenwolf/kmscube In its current state kmscube draws to the screen, but it should be possible to change how the connector is selected so that you get fully off-screen rendering.
Also, the whole Wayland infrastructure is centered around giving clients arbitrary framebuffers to render to, which compositors then combine, so the way Wayland compositors allocate off-screen framebuffers for their clients is also worth looking at.
I have two x11 windows which need to maintain a certain stacking order between each other, namely one window needs to stay above the other. I don't care about other windows outside the application. Normally, I would use a parent/child for this, but since X11 clips the child window to the parent, I have to fake it. I've tried various methods to keep and/or adjust the window stack to maintain the proper order. However, the WM is ignoring pretty much everything except for XRaiseWindow() which is too brute force and causes problems for other windows.
So the question is: how do I set the stacking order between two windows, or is there a way to set up a parent/child relationship that doesn't result in the parent clipping the child?
Yes, you can use the WM_TRANSIENT_FOR window property to make the (transient) parent appear behind the child without clipping it. Qt uses it internally; you can grep its sources for example usage.
See also this answer by cap.
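In plain Xlib this boils down to a single call; a minimal sketch, with placeholder window handles:

// Mark `child` as transient for `parent`, so the window manager keeps it
// stacked above `parent` without reparenting (and therefore without clipping).
#include <X11/Xlib.h>

void KeepAbove(Display *dpy, Window parent, Window child)
{
    XSetTransientForHint(dpy, child, parent);   // sets the WM_TRANSIENT_FOR property
    XMapWindow(dpy, child);
}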
On Windows I do
HGLRC glContext = wglGetCurrentContext();    // current OpenGL rendering context
HDC deviceContext = wglGetCurrentDC();       // device context it is bound to
wglMakeCurrent(deviceContext, glContext);    // note: wglMakeCurrent takes the HDC first
On Linux there are analogous functions for getting the current GL context and the current display connection: glXGetCurrentContext and glXGetCurrentDisplay, respectively. But I am stuck with
Bool glXMakeCurrent(Display *dpy, GLXDrawable drawable, GLXContext ctx);
I don't know how to deal with the second parameter. I use Qt for the GUI, but I still need several Windows API functions, among which are the three mentioned above.
How can I invoke glXMakeCurrent in the same fashion as described at the beginning of the post? The problem is that I don't know how to get the GLXDrawable.
I need to get a GLXContext, then create another one that shares display lists with it, make it current in another thread, and add it to the OpenCL context attributes. The point is that I need to be able to make it current.
That 'GLXDrawable' is the X11 window for which you have got the context.
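So the direct GLX analogue of the Windows snippet in the question is something along these lines (a sketch; glXGetCurrentDrawable returns the drawable bound to the current context):

// GLX equivalent of the wglGetCurrent*/wglMakeCurrent snippet above.
#include <GL/glx.h>

void RebindCurrentContext()
{
    Display     *dpy      = glXGetCurrentDisplay();
    GLXDrawable  drawable = glXGetCurrentDrawable();   // the window (or pbuffer/pixmap)
    GLXContext   ctx      = glXGetCurrentContext();

    glXMakeCurrent(dpy, drawable, ctx);
}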
If you are using Qt, I would have assumed it provides a 'myWindow.makeCurrent()' function, or something to that effect.
You can make a window using XCreateWindow (there is also XCreateSimpleWindow, which makes a basic window with fewer options). Before this, you will need to have opened a connection to the display using XOpenDisplay.
I have been very short on the details here, as there are a lot of steps to getting an OpenGL context in an X11 window, and whilst not hard, it does involve a lot of error checking. I suggest you make use of a library that handles this for you.
In contrast to Windows, in X11 you are dealing with a client-server model. The "display" represents the connection to the X11 server. In X11 there are Drawables, which can be used interchangeably; one kind of Drawable is a Window.
You might want to have a look at
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/x11argb_opengl
for an example of how to create an OpenGL window with a transparent background using plain X11/GLX that can be used in compositing.
--
Update
I need to get a GLXContext, then create another one that shares display lists with it, make it current in another thread, and add it to the OpenCL context attributes. The point is that I need to be able to make it current.
Familiar problem. My solution is to treat a QGLWidget as if it were a context. In your other thread, create another QGLWidget that will never be shown, and pass the visible QGLWidget instance into the sharing parameter of the constructor. Then you can use that QGLWidget as if it were a context. It's dirty and not really to the point, but Qt's internal OpenGL system is that way.
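In rough outline (a sketch only, with made-up names, using the Qt 4 QGLWidget API):

// Hidden QGLWidget used as a "context" for a worker thread, sharing its
// GL objects (textures, display lists, buffers) with the visible widget.
#include <QGLWidget>

QGLWidget *makeWorkerContext(QGLWidget *visibleWidget)
{
    // Never show()n; the second constructor argument requests sharing.
    return new QGLWidget(nullptr, visibleWidget);
}

// In the worker thread:
//   worker->makeCurrent();
//   ... upload textures, build display lists, fill PBOs ...
//   worker->doneCurrent();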
I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to that of the monitor, and I need to perform the drawing in a separate thread so that it is never blocked by intense UI actions.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink, and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following the LayerBackedOpenGLView Apple example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding it as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get about the same FPS as the initial implementation.
Since I consider this solution rather a dirty hack, I'm looking for an alternative, cleaner way of achieving what I need.
A few days ago I came across NSOpenGLLayer and thought it could be a viable solution to my problem.
So finally, after all this preamble, here comes my question:
is it possible to draw to an NSOpenGLLayer from a separate thread using a CVDisplayLink callback?
So far I've tried to implement this, but I'm not able to draw from the CVDisplayLink callback. I can only call -setNeedsDisplay:YES on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, aren't I?
While googling I even found this post, in which the user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called and, if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This method has been suggested on the Apple mailing list, though it is not completely problem-free (the display methods of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and render the OpenGL content on the main thread. If you cannot achieve a good framerate that way because your main thread is busy elsewhere, it is better to move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes), since CALayer composition seems to be a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
E.g. if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separated from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that only displays controls) or draw all controls yourself (which is pretty common for games).
Last but not least, the two-window approach is not as exotic as you may think; that's how VLC (the video player) draws its controls over the video image (which is also rendered with OpenGL on the Mac).