Off-screen rendering with GPU support but without windowing support - Linux

Currently I am using OSMesa for off-screen rendering, running on a Linux (RHEL) command-line interface. It works really well, but rendering consumes a lot of time. Basically, I run an OpenGL animation off-screen, capture frames on the fly, and create a video using ffmpeg. So my question is whether it is possible to use the GPU for off-screen rendering in order to make the rendering process faster.
I know I can use FBOs, but I think they require window support, which I don't have on the Linux CLI.
So, in short, is there any way to use FBOs in my case, or what is the best solution to speed up the rendering process?

So my question is whether it is possible to use the GPU for off-screen rendering in order to make the rendering process faster.
In principle yes, but so far no standard API for doing this has been settled on. If you're using NVIDIA GPUs you can use headless EGL with the NVIDIA proprietary drivers: https://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/
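To illustrate that headless EGL route, here is a rough, untested sketch of context creation via EGL_EXT_platform_device, along the lines of the linked NVIDIA article. It assumes the extension is available and the driver supports it; build with -lEGL -lGL.

```cpp
// Hedged sketch: headless EGL context on a GPU, no X server involved.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GL/gl.h>
#include <cassert>
#include <cstdio>

int main() {
    // The device-platform entry points are extensions and must be loaded at runtime.
    auto eglQueryDevicesEXT = (PFNEGLQUERYDEVICESEXTPROC)
        eglGetProcAddress("eglQueryDevicesEXT");
    auto eglGetPlatformDisplayEXT = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");
    assert(eglQueryDevicesEXT && eglGetPlatformDisplayEXT);

    // Enumerate EGL devices (GPUs) and take the first one.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);
    assert(numDevices > 0);

    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], nullptr);
    eglInitialize(dpy, nullptr, nullptr);

    // Choose a config suitable for an off-screen pbuffer surface.
    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numCfgs = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);

    const EGLint pbAttribs[] = { EGL_WIDTH, 1280, EGL_HEIGHT, 720, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // A GPU-accelerated context is now current without any X server;
    // render as usual and read frames back with glReadPixels.
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    eglTerminate(dpy);
    return 0;
}
```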
Using kernel DRM and the Mesa OpenGL drivers it is possible to configure and operate the GPU in a single process without a display server. There's a demo called "kmscube"; I forked it on my GitHub and made a few small modifications to it: https://github.com/datenwolf/kmscube In its current state kmscube draws to the screen, but it should be possible to change the connector selection so that you get fully off-screen rendering.
Also, the whole Wayland infrastructure is centered around giving clients arbitrary framebuffers to render to, which compositors then combine, so the way Wayland compositors allocate off-screen framebuffers for their clients is also worth a look.
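For the DRM/Mesa route, a hedged sketch of fully off-screen, GPU-accelerated rendering might look like the following. The render-node path /dev/dri/renderD128 is an assumption (it varies per system), and it relies on GBM plus the EGL_KHR_surfaceless_context extension, so all drawing goes into FBOs you create yourself; build with -lgbm -lEGL -lGLESv2.

```cpp
// Hedged sketch: off-screen GPU rendering via a DRM render node and GBM,
// in the spirit of the kmscube approach mentioned above.
#include <fcntl.h>
#include <unistd.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cassert>

int main() {
    // Render nodes need no modesetting rights and no display server.
    int fd = open("/dev/dri/renderD128", O_RDWR);
    assert(fd >= 0);

    struct gbm_device* gbm = gbm_create_device(fd);
    assert(gbm);

    auto getPlatformDisplay = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, nullptr);
    eglInitialize(dpy, nullptr, nullptr);

    eglBindAPI(EGL_OPENGL_ES_API);
    const EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);

    // EGL_KHR_surfaceless_context lets us skip the EGLSurface entirely;
    // all rendering then targets framebuffer objects we create ourselves.
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    // ... create an FBO + renderbuffer/texture, draw, glReadPixels, encode ...

    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}
```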

Related

In Vulkan, how can you associate each individual video card with the monitors it's directly connected to?

I have two monitors, each connected to a different GPU. Both GPUs are in a single machine, and I want to run a single application. I have two independent views, and I would like to render each one using its own GPU/monitor pair. I can create multiple surfaces and devices, but I want to ensure I associate each surface with the GPU its monitor is plugged into; otherwise I suspect I'll suffer performance issues as the framebuffers are copied back and forth between cards.
I'm using fullscreen surfaces, and I thought this was something vkGetPhysicalDeviceSurfaceSupportKHR would tell me. However, both VkSurfaceKHR objects appear to be valid targets for each VkPhysicalDevice, so I guess the OS and GPU driver can handle it either way, but is there any hint about which surface is optimal to associate with a device?
From what I can tell, the extension VK_KHR_display is one way of doing this, but it's not available on my Windows 10 machine or NVIDIA GPU; it seems to be intended mainly for embedded platforms. However, it lets you list the attached displays for each device, which is pretty much what I'm looking for: https://vulkan.lunarg.com/doc/view/1.0.30.0/linux/vkspec.chunked/ch29s03.html
This quote from the docs makes me believe this may not be supported on Windows:
Issues
1) Does Win32 need a way to query for compatibility between a particular physical device and a specific screen? Compatibility between a physical device and a window generally only depends on what screen the window is on. However, there is not an obvious way to identify a screen without already having a window on the screen.
RESOLVED: No. While it may be useful, there is not a clear way to do this on Win32. However, a method was added to query support for presenting to the windows desktop as a whole.
However, I'm still interested in hearing if there's a work around to achieve a similar effect.
Finally figured out a workaround for this:
DirectX actually supports this through the IDXGIAdapter::EnumOutputs function, which lets you list the monitors connected to each GPU. Then, using these two extensions, you can map this information back to Vulkan:
VK_KHR_external_memory_capabilities
VK_KHR_get_physical_device_properties2
You can use these to get the deviceLUID from VkPhysicalDeviceIDPropertiesKHR.
This can then be compared with the AdapterLuid field of the DirectX DXGI_ADAPTER_DESC structure.
You can also use glfwGetWin32Window to get the HWND of the window, and from that the monitor it sits on (e.g. via MonitorFromWindow), which lets you associate a Vulkan surface with a DirectX output.
You now have all the information you need to associate Vulkan surfaces with the devices they're actually connected to.
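A sketch of that matching step might look as follows. It uses the Vulkan 1.1 core equivalents of the extensions above (VkPhysicalDeviceIDProperties and vkGetPhysicalDeviceProperties2) and assumes a VkInstance has already been created, so treat it as an outline rather than a drop-in implementation.

```cpp
// Hedged sketch: map DXGI adapters (and the monitors they drive) to Vulkan
// physical devices by comparing adapter LUIDs with deviceLUID.
#include <windows.h>
#include <dxgi.h>
#include <vulkan/vulkan.h>
#include <vector>
#include <cstring>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

// Returns the physical device whose LUID matches the given DXGI adapter LUID,
// or VK_NULL_HANDLE if none does.
VkPhysicalDevice FindDeviceByLuid(VkInstance instance, const LUID& adapterLuid) {
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceIDProperties idProps{};
        idProps.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
        VkPhysicalDeviceProperties2 props2{};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &idProps;
        vkGetPhysicalDeviceProperties2(dev, &props2);

        if (idProps.deviceLUIDValid &&
            std::memcmp(idProps.deviceLUID, &adapterLuid, sizeof(LUID)) == 0)
            return dev;
    }
    return VK_NULL_HANDLE;
}

void MapMonitorsToDevices(VkInstance instance) {
    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        VkPhysicalDevice dev = FindDeviceByLuid(instance, desc.AdapterLuid);

        // List the monitors physically connected to this adapter.
        IDXGIOutput* output = nullptr;
        for (UINT j = 0; adapter->EnumOutputs(j, &output) != DXGI_ERROR_NOT_FOUND; ++j) {
            DXGI_OUTPUT_DESC outDesc;
            output->GetDesc(&outDesc);
            wprintf(L"Adapter %u (%ls) -> monitor %ls, VkPhysicalDevice %p\n",
                    i, desc.Description, outDesc.DeviceName, (void*)dev);
            output->Release();
        }
        adapter->Release();
    }
    factory->Release();
}
```

Once you know which VkPhysicalDevice drives which monitor, you can create the surface for each view on a window placed on that monitor and build the logical device from the matching physical device.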
At least in my application, setting this up correctly results in a significant difference in performance.
This would all be way simpler (and cross platform) if Windows would just support the VK_KHR_display and VK_KHR_display_swapchain extensions as Linux does.
There are two extensions useful for such things: the one you mentioned, VK_KHR_display, and a second called VK_KHR_display_swapchain, which allows you to create a swapchain directly on a device's display without any underlying window system.
But these extensions are rarely supported on Windows. In the core Vulkan API there is no way to achieve what you want, so I'm afraid you need to use OS-specific functions (you need to rely on the WinAPI in this situation).
[EDIT]
Have you seen this question: How can you get the display adapter used for a particular monitor in Windows? If not, maybe it will help you start your research.
As you already discovered, on Win32 you need to use the OS windowing system, via the Windows API, to pick the display you want to use. It can be straightforward.
But if you intend to write simple, OS-agnostic code, check out the GLFW project. It has high-level functions for handling windows and monitors on all major OSes.
Check:
GLFW monitor Guide
GLFW Vulkan integration
GLFW in its own words:
GLFW is a free, Open Source, multi-platform library for OpenGL, OpenGL ES and Vulkan application development. It provides a simple, platform-independent API for creating windows, contexts and surfaces, reading input, handling events, etc.
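As a rough illustration of that route, the sketch below (my code, not part of GLFW's docs) enumerates monitors with GLFW, opens a full-screen window on one of them, and creates a Vulkan surface for it. It assumes glfwInit has already been called and that the VkInstance was created with the extensions reported by glfwGetRequiredInstanceExtensions.

```cpp
// Hedged sketch: pick a monitor with GLFW and create a Vulkan surface on it.
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#include <cstdio>

VkSurfaceKHR CreateSurfaceOnMonitor(VkInstance instance, int monitorIndex) {
    int count = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&count);
    if (monitorIndex >= count) return VK_NULL_HANDLE;
    GLFWmonitor* monitor = monitors[monitorIndex];

    // Full-screen window on the chosen monitor, with no OpenGL context (Vulkan only).
    const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
    GLFWwindow* window = glfwCreateWindow(mode->width, mode->height,
                                          "view", monitor, nullptr);

    VkSurfaceKHR surface = VK_NULL_HANDLE;
    if (glfwCreateWindowSurface(instance, window, nullptr, &surface) != VK_SUCCESS)
        return VK_NULL_HANDLE;

    printf("Monitor %d: %s\n", monitorIndex, glfwGetMonitorName(monitor));
    return surface;
}
```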

Force existing OpenGL application to render offscreen on a headless machine

I want to create a framework for automated rendering tests for video games.
I want to test an application that normally renders to a window with OpenGL. Instead, I want it to render into image files for further evaluation. I want to do this on a Linux server with no GPU.
How can I do this with minimal impact on the evaluated application?
Some remarks for clarity:
The OpenGL version is 2.1, so software rendering with Mesa should be possible.
Preferably, I don't want to change any of the application code. If there is a solution that allows me to emulate an X server or something like that, I would prefer it.
I don't want to change any of the rendering code. If it is really necessary, I can change the way I initialize OpenGL, but after that, I want to execute arbitrary OpenGL code.
Ideally, your answer would explain how to set up an environment on a headless Linux server that allows me to start arbitrary OpenGL binaries and render their output into images. If that's not possible, I am open to any suggestions.
Use Xvfb for your X server. The Mesa installation deployed on any modern Linux distribution should automatically fall back to software rasterization if no supported GPU is found. You can take screenshots with any X11 screen-grabber program; heck, even ffmpeg with x11grab (ffmpeg -f x11grab -i :99, if Xvfb runs on display :99) will work.
fbdev/miniglx might be something that you are looking for: http://www.mesa3d.org/fbdev-dri.html I haven't used it, so I have no idea whether it works for your purpose.
An alternative is to just start an X server without any desktop environment using xinit. That setup uses well-tested code paths, making it better suited for running your tests. miniglx might have bugs which nobody has noticed because it isn't used every day.
Capturing the rendered output to images can be done with an LD_PRELOAD trick that wraps glXSwapBuffers. The basic idea is to put your own swapbuffers function between the application and the GL library, where you use glReadPixels to download the rendered frame and then your favorite image library to write that data to image/video files. After glReadPixels has completed, you call the library's glXSwapBuffers so the swap happens just as it would on a real desktop.
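A hedged sketch of such a wrapper is below. The file name scheme and PPM output are just placeholders, and the interception only works when the application resolves glXSwapBuffers through the dynamic linker rather than through glXGetProcAddress. Build it as a shared object (e.g. g++ -shared -fPIC swapgrab.cpp -o libswapgrab.so -ldl -lGL) and run the target with LD_PRELOAD pointing at it.

```cpp
// Hedged sketch: intercept glXSwapBuffers, dump the back buffer, then forward
// the call to the real libGL implementation.
#include <GL/gl.h>
#include <GL/glx.h>
#include <dlfcn.h>
#include <cstdio>
#include <vector>

extern "C" void glXSwapBuffers(Display* dpy, GLXDrawable drawable) {
    // Look up the real implementation once.
    using SwapFn = void (*)(Display*, GLXDrawable);
    static SwapFn realSwap = (SwapFn)dlsym(RTLD_NEXT, "glXSwapBuffers");

    // Query the drawable size and read back the frame about to be presented.
    unsigned int width = 0, height = 0;
    glXQueryDrawable(dpy, drawable, GLX_WIDTH, &width);
    glXQueryDrawable(dpy, drawable, GLX_HEIGHT, &height);

    std::vector<unsigned char> pixels(width * height * 3);
    glReadBuffer(GL_BACK);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    // Write a bottom-up PPM; a real harness would flip the rows or pipe to ffmpeg.
    static int frame = 0;
    char name[64];
    std::snprintf(name, sizeof(name), "frame_%05d.ppm", frame++);
    if (FILE* f = std::fopen(name, "wb")) {
        std::fprintf(f, "P6\n%u %u\n255\n", width, height);
        std::fwrite(pixels.data(), 1, pixels.size(), f);
        std::fclose(f);
    }

    realSwap(dpy, drawable);  // let the swap happen as it normally would
}
```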
The prog subdirectory has been removed from the main git repository; you can find it in git://anongit.freedesktop.org/git/mesa/demos instead.

Tegra 3 OpenGL ES directly to framebuffer

I am developing an embedded Linux system using the Apalis T30 Tegra 3 SOM from Toradex. I only need a very simple multi-touch user interface for it. I am trying to push the performance and efficiency of the UI as far as I can because the device will have to render complex 3D models while allowing live interaction, and I know for a fact that my users will have models that make it bog down no matter what; I am therefore trying to push that point as far away as I can. Memory will also be a constraint, and some models might use up all the available RAM.
What I would like to do to solve this is to have an OpenGL ES 2 GUI with SVG UI elements combined with GLES 3D views, rendered directly to the framebuffer. In other words, I want to completely ditch any form of window/desktop manager because I won't need it; I only need a single full-screen GLES drawing surface. I don't even need pointer events etc., as I will be talking to the touch panel directly from my application.
I have looked around quite a bit but could not find any conclusive information. I keep reading reports of HW acceleration not working when the framebuffer is used directly, but I guess one could render the GLES output into an image and then just push it to the FB? I also read that the graphics driver might be locked to X11, but I am struggling to find details about the Tegra graphics driver; I have seen reports about NVIDIA open-sourcing their driver, is this true?
Any assistance or explanations will be greatly appreciated.
PS. Please don't lecture me on how bad an idea this is and how I should rather use Qt or something like that; I want to find out how to do what I am planning here.
PPS. What I basically want to be able to do is what I understand embedded Qt 5 does in its "EGLFS" rendering mode.

Is there any way to show a camera stream and draw something on top of it (in Linux)?

I am using the Freescale GPU SDK, OpenGL ES APIs for drawing, and GStreamer APIs for camera streaming on an ARM architecture. I can do them separately, but I want to know whether there is any way to show the camera stream and draw something on top of it.
Thanks in Advance.
Some of Freescale's processors (such as the i.MX6) have multiple framebuffer overlays (/dev/fb0, /dev/fb1, /dev/fb2, ...).
You can then stream the camera content to fb1 and draw on fb0, for example.
Note that not all of those framebuffers are activated by default.
It depends on your concrete root file system, but if you are using the one generated with the Freescale Yocto BSP for i.MX6, the default configuration is at /usr/share/vssconfig.
In that file you can specify which framebuffer GStreamer uses. By default /dev/fb0 is the BACKGROUND framebuffer and /dev/fb1 is the FOREGROUND framebuffer.
You can make GStreamer draw to /dev/fb0 while you draw with Cairo on /dev/fb1 (mmap /dev/fb1 and use cairo_image_surface_create_for_data), controlling the transparency level with ioctls() on /dev/fb1.
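A rough sketch of that overlay drawing, untested on the i.MX6 itself and assuming a 32 bpp mode on /dev/fb1, could look like this:

```cpp
// Hedged sketch: mmap the foreground framebuffer and wrap it in a Cairo surface
// so you can draw directly over the video that GStreamer pushes to /dev/fb0.
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <cairo/cairo.h>

int main() {
    int fd = open("/dev/fb1", O_RDWR);
    if (fd < 0) return 1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

    // Map the framebuffer memory; assumes a 32 bpp mode (check vinfo first).
    size_t size = (size_t)finfo.line_length * vinfo.yres;
    unsigned char* fbmem = (unsigned char*)mmap(NULL, size,
            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    cairo_surface_t* surf = cairo_image_surface_create_for_data(
            fbmem, CAIRO_FORMAT_ARGB32,
            vinfo.xres, vinfo.yres, finfo.line_length);
    cairo_t* cr = cairo_create(surf);

    // Draw over the video; the alpha channel controls what shows through.
    cairo_set_source_rgba(cr, 1.0, 1.0, 1.0, 1.0);
    cairo_set_font_size(cr, 32);
    cairo_move_to(cr, 20, 40);
    cairo_show_text(cr, "overlay text");
    cairo_surface_flush(surf);

    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    munmap(fbmem, size);
    close(fd);
    return 0;
}
```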
In fact, I don't really know the behavior of X11; that's why I suggest you disable X11 and do direct rendering with OpenGL via the DRI (Direct Rendering Infrastructure) driver and DRM (Direct Rendering Manager) on one of the two framebuffers, and stream your camera to the other fb. (Maybe I am wrong, and I hope someone else will correct me if that is the case.)
There is also documentation (in French) on how DRM and DRI work.
I have already faced this problem in the past.
I had to stream video with GStreamer and draw text over it with Pango. The first thing I did was generate a minimal image (with GStreamer enabled, of course) but without any X11 libraries. For me (it may be different on your module), GStreamer used the /dev/fb1 node by default, and I then used /dev/fb0 for the Pango rendering.
It was quite easy to do after several tests, so I also suggest you experiment: try different things and different approaches, and I hope it will work the way you want.

Command-line Linux OpenGL processing

I need to build a command-line tool that takes a 3D model as an argument and outputs photos of it, which may or may not be processed by the application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or accept any other input apart from the command line arguments.
I was wondering how one would approach this. I am currently able to display the 3D model on-screen with the help of GLFW, which drives my event handlers for peripheral input and also my main loop. However, I don't know whether GLFW will help me if I want to make a command-line program whose input and output are files.
Does anyone have any indications as to how to approach this?
Create an invisible/hidden window,
use its GL context to render to an FBO, and
use glReadPixels to save the result to a file (a sketch follows below).
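A minimal sketch of those three steps, using GLFW for the hidden window, might look like this. It assumes a libGL that exports the FBO entry points directly (GL_GLEXT_PROTOTYPES, which Mesa and the NVIDIA driver do on Linux); otherwise use a loader such as glad or GLEW.

```cpp
// Hedged sketch: hidden GLFW window -> FBO rendering -> glReadPixels to a file.
#define GL_GLEXT_PROTOTYPES
#include <GLFW/glfw3.h>
#include <GL/glext.h>
#include <cstdio>
#include <vector>

int main() {
    glfwInit();
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // step 1: hidden window
    GLFWwindow* win = glfwCreateWindow(640, 480, "offscreen", nullptr, nullptr);
    glfwMakeContextCurrent(win);

    // Step 2: render into an FBO backed by a color renderbuffer.
    const int w = 640, h = 480;
    GLuint fbo, rbo;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo);

    glViewport(0, 0, w, h);
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the model here ...

    // Step 3: read the pixels back and write them to a file (bottom-up PPM).
    std::vector<unsigned char> pixels(w * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    if (FILE* f = std::fopen("out.ppm", "wb")) {
        std::fprintf(f, "P6\n%d %d\n255\n", w, h);
        std::fwrite(pixels.data(), 1, pixels.size(), f);
        std::fclose(f);
    }

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```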
For OpenGL to work you need an OpenGL context, which used to require some kind of windowing system that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that were created without a windowing system; Mesa calls this "off-screen Mesa" (OSMesa). With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
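For reference, a minimal OSMesa sketch looks like this (link with -lOSMesa); everything renders in software into a plain memory buffer, with no windowing system involved:

```cpp
// Hedged sketch: OSMesa software context rendering into a memory buffer.
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main() {
    const int w = 640, h = 480;
    std::vector<unsigned char> buffer(w * h * 4);   // RGBA destination buffer

    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, w, h))
        return 1;

    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();   // buffer now holds the rendered RGBA image, ready to encode

    printf("first pixel: %u %u %u %u\n",
           buffer[0], buffer[1], buffer[2], buffer[3]);

    OSMesaDestroyContext(ctx);
    return 0;
}
```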
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it if you can avoid it; OpenGL is mostly meant for creating interactive graphics. But of course, if your goal is visualizing complex data, then a GPU (and therefore OpenGL) would be better suited.
With NVIDIA hardware you'll need to use an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open-source drivers and Mesa may give you off-screen capabilities without X (but I never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4 and very slow), so don't bother with it.
