Make current OpenGL context on Linux

On Windows I do
HGLRC glContext = wglGetCurrentContext();
HDC deviceContext = wglGetCurrentDC();
wglMakeCurrent(deviceContext, glContext);   // wglMakeCurrent takes the HDC first, then the HGLRC
On Linux there are analogous functions for getting the current GL context and the current display connection: glXGetCurrentContext and glXGetCurrentDisplay, respectively. But I am stuck with
Bool glXMakeCurrent( Display *dpy,
GLXDrawable drawable,
GLXContext ctx )
I don't know how to deal with the second parameter. I use Qt for the GUI, but I still need the equivalents of several Windows API functions, among them the three mentioned above.
How do I invoke glXMakeCurrent in the same fashion as described at the beginning of the post? The problem is that I don't know how to get a GLXDrawable.
I need to get a GLXContext, then create another one that shares display lists with it, make it current in another thread, and add it to the OpenCL context attributes. The point is that I need to be able to make it current.

That GLXDrawable is the X11 window for which you obtained the context.
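For reference, a minimal sketch of the GLX calls that mirror the wgl snippet in the question; it assumes a context is already current on the calling thread (for example one that Qt made current), so the drawable can simply be queried back:
#include <GL/glx.h>

// Query what is current on this thread (only valid if something is already current):
Display    *dpy      = glXGetCurrentDisplay();
GLXDrawable drawable = glXGetCurrentDrawable();   // the window (or pbuffer/pixmap)
GLXContext  ctx      = glXGetCurrentContext();

// ... later, e.g. after creating a shared context, bind it to the same drawable:
if (!glXMakeCurrent(dpy, drawable, ctx)) {
    // handle the error
}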
If you are using Qt, I would have assumed it provides a myWindow.makeCurrent() function, or something to that effect.
You can create a window using XCreateWindow (there is also XCreateSimpleWindow for making a basic window with fewer options). Before that, you will need a connection to the display, obtained with XOpenDisplay.
I have been very short on the details here, as there are a lot of steps involved in getting an OpenGL context into an X11 window, and while none of them is hard, they do involve a lot of error checking. I suggest you make use of a library that handles this for you.
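Very roughly, the sequence looks like this (a condensed sketch with the error checking left out; the visual attributes and window size are just placeholders):
#include <X11/Xlib.h>
#include <GL/glx.h>

// Minimal legacy-GLX setup: display -> visual -> window -> context -> make current.
Display *dpy = XOpenDisplay(NULL);                        // connect to the X server
int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

XSetWindowAttributes swa = {};
swa.colormap   = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
swa.event_mask = ExposureMask | KeyPressMask;

Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                           0, 0, 640, 480, 0, vi->depth, InputOutput,
                           vi->visual, CWColormap | CWEventMask, &swa);
XMapWindow(dpy, win);

GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);   // NULL = no shared context
glXMakeCurrent(dpy, win, ctx);                            // the Window is the GLXDrawable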

Contrary to Windows, in X11 you are dealing with a client-server model. The "display" represents the connection to the X11 server. In X11 there are Drawables, which can be used interchangeably in most calls; one kind of Drawable is a Window.
You might want to have a look at
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/x11argb_opengl
for an example of how to create an OpenGL window with a transparent background using plain X11/GLX that can be used in compositing.
--
Update
I need to get a GLXContext, then create another one that shares display lists with it, make it current in another thread, and add it to the OpenCL context attributes. The point is that I need to be able to make it current.
Familiar problem. My solution is to treat a QGLWidget as if it were a context. In your other thread create another QGLWidget that will never be shown, and pass the visible QGLWidget instance as the sharing parameter of its constructor. Then you can use that QGLWidget as if it were a context. It's dirty and not really to the point, but that's just how Qt's internal OpenGL system works.
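Roughly, that approach looks like this (a sketch assuming Qt 4's QGLWidget; the widget names are made up for illustration, and the context must not be current in any other thread while the worker uses it):
#include <QGLWidget>

// GUI thread: the visible widget owns the "main" context.
QGLWidget *visibleWidget = new QGLWidget(parentWidget);

// Hidden widget whose context shares display lists/textures with the visible one;
// passing the visible widget as the shareWidget argument requests sharing.
QGLWidget *workerWidget = new QGLWidget(0 /*parent*/, visibleWidget /*shareWidget*/);
// Never call workerWidget->show(); check workerWidget->isSharing() if in doubt.

// Worker thread (after handing workerWidget over):
workerWidget->makeCurrent();     // binds the shared context in this thread
// ... upload textures, build display lists, fill PBOs ...
workerWidget->doneCurrent();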

Related

In Vulkan how can you associate each individual video card with the monitors it's directly connected to

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using a GPU/Monitor set. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into, otherwise I suspect I'll suffer performance issues as the frame buffers need to be copied back and forth between cards.
I'm using fullscreen surfaces, and I was thinking this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR handles appear to be valid targets for each VkPhysicalDevice, so I guess this is something the OS and GPU driver can handle, but is there any hint about which surface is optimal to associate with a device?
From what I can tell the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine with an Nvidia GPU. It seems to be intended for embedded platforms only. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a workaround to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map that information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid member of the DXGI_ADAPTER_DESC structure in DirectX.
You can also use glfwGetWin32Window to get the HWND of each window, which lets you work out which monitor (DXGI output) a window is on and therefore associate its Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
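A condensed sketch of the LUID matching step, assuming the instance was created with VK_KHR_get_physical_device_properties2 enabled and the driver exposes VK_KHR_external_memory_capabilities (error handling omitted; SameAdapter is just an illustrative helper name):
#include <vulkan/vulkan.h>
#include <dxgi.h>
#include <cstring>

// Returns true if the Vulkan physical device and the DXGI adapter are the same GPU.
bool SameAdapter(VkInstance instance, VkPhysicalDevice gpu, IDXGIAdapter *adapter)
{
    // Vulkan side: query the LUID via VkPhysicalDeviceIDPropertiesKHR.
    VkPhysicalDeviceIDPropertiesKHR idProps = {};
    idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES_KHR;

    VkPhysicalDeviceProperties2KHR props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2_KHR;
    props2.pNext = &idProps;

    auto getProps2 = (PFN_vkGetPhysicalDeviceProperties2KHR)
        vkGetInstanceProcAddr(instance, "vkGetPhysicalDeviceProperties2KHR");
    getProps2(gpu, &props2);
    if (!idProps.deviceLUIDValid)
        return false;

    // DXGI side: the adapter description carries the same 8-byte LUID.
    DXGI_ADAPTER_DESC desc = {};
    adapter->GetDesc(&desc);
    return std::memcmp(idProps.deviceLUID, &desc.AdapterLuid, VK_LUID_SIZE_KHR) == 0;
}
On the matching adapter, IDXGIAdapter::EnumOutputs then yields a DXGI_OUTPUT_DESC per connected monitor, whose Monitor (HMONITOR) member can be compared against MonitorFromWindow called on the HWND returned by glfwGetWin32Window.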
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions that are useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to use OS-specific functions (you need to rely on WinAPI functions in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you get started with your research.
As you already discovered, on Win32 you need to use the OS windowing system to pick the display you want, using the Windows API. It can be fairly straightforward.
BUT if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows on all major OSs.
Check:
GLFW monitor guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
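For illustration, a sketch of what the GLFW route looks like (assuming GLFW 3 built with Vulkan support; instance stands for an already-created VkInstance whose extensions came from glfwGetRequiredInstanceExtensions, and the monitor choice here is arbitrary):
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>

// Enumerate monitors and open a fullscreen window (plus Vulkan surface) on one of them.
glfwInit();
glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);   // Vulkan, not OpenGL

int monitorCount = 0;
GLFWmonitor **monitors = glfwGetMonitors(&monitorCount);
GLFWmonitor *target = monitors[0];              // pick whichever monitor you want
const GLFWvidmode *mode = glfwGetVideoMode(target);

GLFWwindow *window = glfwCreateWindow(mode->width, mode->height,
                                      glfwGetMonitorName(target), target, NULL);

VkSurfaceKHR surface;
glfwCreateWindowSurface(instance, window, NULL, &surface);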

Creating a second rendering context for a window

I have an OpenGL window opened using glfw under Linux and I am trying to use OpenGL from two threads.
I understood from reading around the web that I am supposed to create a second rendering context for the same window. Only one of the threads will actually be used for rendering, while the other will be used to exchange data through PBOs.
The question is: how can I get the parameters needed to call glXCreateContext() for the window I opened using glfw?
EDIT: I ended up creating two rendering contexts with shared lists, which is probably a better idea.
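For reference, one way to get that second shared context without calling glXCreateContext yourself is to let GLFW create a second, never-shown window that shares with the first. A sketch assuming GLFW 3 (with GLFW 2.x you would instead query glXGetCurrentDisplay/Drawable/Context and pass the current context as the share argument of glXCreateContext):
#include <GLFW/glfw3.h>

// Main thread, after glfwInit(): the visible window and its context.
GLFWwindow *mainWindow = glfwCreateWindow(640, 480, "render", NULL, NULL);

// Second, never-shown window whose context shares objects (textures, buffers,
// display lists) with the main one; mainWindow is passed as the share parameter.
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
GLFWwindow *pboWindow = glfwCreateWindow(1, 1, "upload", NULL, mainWindow);

// Worker thread:
glfwMakeContextCurrent(pboWindow);
// ... map and fill PBOs here ...

// Main thread:
glfwMakeContextCurrent(mainWindow);
// ... render, consuming the data uploaded through the shared PBOs ...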

Draw from a separate thread with NSOpenGLLayer

I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to the refresh rate of the monitor, and I need to perform the drawing in a separate thread so that drawing is never blocked by intense UI activity.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following the LayerBackedOpenGLView Apple example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding this window as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get roughly the same FPS as with the initial implementation.
Since I consider this solution rather a dirty hack, I'm looking for a cleaner alternative way of achieving what I need.
A few days ago I came across NSOpenGLLayer and I thought it could be a viable solution for my problem.
So finally, after all this preamble, here comes my question:
is it possible to draw to an NSOpenGLLayer from a separate thread using the CVDisplayLink callback?
So far I have tried to implement this but I'm not able to draw from the CVDisplayLink callback. I can only call -setNeedsDisplay:TRUE on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, aren't I?
After googling for this I've even found this post, in which a user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called, and if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This method has been suggested on the Apple mailing list, though it is not completely problem-free (the display method of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and render the OpenGL content on the main thread. If you cannot achieve a good framerate that way because your main thread is busy elsewhere, it is recommended to instead move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes), but that comes from the fact that CALayer composition is a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
E.g. if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separated from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that only displays controls) or draw all controls yourself (which is pretty common for games).
Last but not least, the two-window approach is not as exotic as you may think; that's how VLC (the video player) draws its controls over the video image (which is also rendered with OpenGL on the Mac).

How do I write a panel task bar in FLTK for use on Linux systems

I need to write a small application in C/C++ to implement a panel/task-bar-like thing that displays information along the top of the desktop (specifically an Xorg desktop on a Linux system). I need to avoid bloat and steep learning curves for the GUI programming.
My research is pointing me at GTK+/GTKmm or FLTK. It looks like FLTK is probably the simpler of the two to get to grips with and the most likely to provide a small, clean package with minimal dependencies, so I've based my research on FLTK so far.
I've been doing some reading and am struggling to find out how to write a basic program that will create a narrow undecorated window that covers the width of a monitor in such a way that maximising other applications would not obscure it. The FLTK tutorials I have found so far (including the FLTK documentation) only implement standard windows with borders that can be moved around the screen.
I'd like to start by writing a simple program in FLTK (or GTK+/GTKmm) that creates a 20-pixel-deep bar across the width of the screen containing a "hello world" message. The bar's area would be reserved outside the area that other programs can use, so that maximising another application would not hide the "hello world" message. I think this has something to do with the _NET_WM_STRUT_PARTIAL property, but I can't find information about it in FLTK.
Doing this is partially to understand how to write a simple GUI program and partially to solve a specific need that I have.
I'm looking for any help/guidance to put me in the right direction to get started. Many thanks.
starfry, I believe this is not a trivial task. The problem is that your desktop (say GNOME 2/Metacity) has reserved that space and paints its panel in the area where you want your bar.
If you really want your tray-bar applet to be based on FLTK, you would have to "embed" it in a (GNOME) applet. It was long ago that I did a similar thing with an SDL application, and I'm afraid I've forgotten how to do it. The first thing that comes to mind is to somehow get the XID from the GNOME applet, pass it to the FLTK part, and then let FLTK do the rest...
Sure, you may use another desktop, like KDE, i3 or IceWM, but they ALL have their own ways of dealing with the tray bar (there is no standard for it!), so, pardon my "French", it is going to be a PITA to support all environments...
If I were on GNOME, I would write the applet entirely using GNOME/GTK and forget FLTK in that case. That is my recommendation. If you target KDE, then do it using the KDE/Qt libraries (a Plasma widget would be what to look for).
However, if you still want to use FLTK, start with the fltk::draw_into() function (it is probably called fl_draw_into() in FLTK 1.x), fltk::xid() and related functions.
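For what it's worth, here is a rough sketch of the _NET_WM_STRUT_PARTIAL approach mentioned in the question, assuming FLTK 1.3 on Xlib and an EWMH-compliant window manager (the 12-value layout follows the EWMH specification; some window managers only honour these hints if they are set before or re-applied after mapping):
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Box.H>
#include <FL/x.H>          // fl_xid(), fl_display
#include <X11/Xlib.h>
#include <X11/Xatom.h>

int main() {
    const int screen_w = Fl::w();      // width of the screen as FLTK reports it
    const int bar_h    = 20;

    Fl_Window win(0, 0, screen_w, bar_h, "panel");
    win.border(0);                     // undecorated
    new Fl_Box(0, 0, screen_w, bar_h, "hello world");
    win.end();
    win.show();                        // after show() the X11 window exists

    ::Window xid = fl_xid(&win);
    Display *dpy = fl_display;

    // Mark the window as a dock so the WM treats it like a panel.
    Atom type      = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE", False);
    Atom type_dock = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE_DOCK", False);
    XChangeProperty(dpy, xid, type, XA_ATOM, 32, PropModeReplace,
                    reinterpret_cast<unsigned char*>(&type_dock), 1);

    // Reserve bar_h pixels along the top edge: the 12 values are
    // left, right, top, bottom, then start/end coordinates for each edge.
    long strut[12] = {0, 0, bar_h, 0,  0, 0, 0, 0,  0, screen_w - 1, 0, 0};
    Atom strut_partial = XInternAtom(dpy, "_NET_WM_STRUT_PARTIAL", False);
    XChangeProperty(dpy, xid, strut_partial, XA_CARDINAL, 32, PropModeReplace,
                    reinterpret_cast<unsigned char*>(strut), 12);

    return Fl::run();
}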

Coding a GTK+ application without a window manager?

I want to code something that basically works like TiVo: switch it on and you only see the menu or an output, so no underlying OS or anything else is directly visible to the user.
So I want to use Linux as the base. Can you suggest a good base distribution?
Can I code a frontend without having a window manager up and running?
If yes, is that possible with java-gnome, or what language/GUI-framework combination would you suggest?
If no, what's the minimal window manager that can handle fancy menus, etc.?
What does it take to create video overlays over an HD stream? Are there some libraries I should take a look at?
Thanks
Yes. If you only have one window you don't need a window manager. Using X you can start an application and set its position and size from the command line (making it fullscreen). You might want to take a look at xinit if this is what you want; it is likely the easiest way to get something working. Another option is to skip X and use DirectFB. If you want to display several windows, on the other hand, you need some sort of window manager to manage them.
As long as you run X there is no problem using java-gnome as the framework if that's what you are comfortable with. I guess you didn't mean to run the stock GNOME applications, but rather to code everything visible to the user yourself.
This very much depends on what you mean by fancy menus. If you mean transparency and such, you need a composite manager (unless you just render everything yourself inside your application window). I'm not sure about this, but I think you can run a composite manager independently of a window manager if you find that suitable. Again, this is if you run X; with DirectFB, transparency and such are done in a simpler way.
If you intend to write your own media player you should take a look at GStreamer. It can stream, decode and display video and also add video overlays (among other things), and it is extremely easy to use.
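As a taste of how little code that takes, a minimal sketch using GStreamer 1.0's gst_parse_launch with a textoverlay element (the pipeline string is just an example; error handling is omitted):
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // Test source with a text overlay; replace with e.g. "playbin uri=..." for real media.
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! textoverlay text=\"hello menu\" ! autovideosink", &error);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until an error occurs or the stream ends.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}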
Minimalistic tiling window managers like Awesome, Ratpoison or XMonad may be useful as a base; otherwise you'll have to manage focus and window sizing yourself. It is normally fairly easy to make these invisible to the user.
Absolutely.
I wouldn't count on Gnome itself working without a window manager. Other than that... language doesn't matter.
Window managers only do window management. Menus and the like are the job of the widget toolkit. Anyways, Metacity.
... This one I have no clue about.
