I'm trying to use a piece of software (ParaView) in client/server mode, opening the client on my desktop machine (Linux, Debian 10) and doing the heavy computation on a remote server (Linux, CentOS 8). The software requires an OpenGL 3.2 or later implementation, which should be fine with what is installed on my machine, as you can see from the output of the glxinfo command shown below:
myaccount@desktopmachine:$ glxinfo | grep OpenGL
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti BOOST/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 440.82
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.6.0 NVIDIA 440.82
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: **OpenGL ES 3.2** NVIDIA 440.82
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
The problem is that, when connecting to the remote server through SSH, the OpenGL version reported by the same command is:
myaccount@server:$ glxinfo | grep "OpenGL"
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 650 Ti BOOST/PCIe/SSE2
OpenGL version string: **1.4** (2.1.2 NVIDIA 440.82)
OpenGL extensions:
So it looks like the OpenGL version is not correctly passed through.
What should I do to fix this issue, which prevents me from running the software?
SSH tunnels X11. GLX is the X11 protocol extension for OpenGL: it encapsulates OpenGL calls in the X11 protocol, and that X11 stream is what gets tunneled over SSH.
Now here's the thing:
GLX only goes up to OpenGL-1.4 (https://www.khronos.org/registry/OpenGL/specs/gl/glx1.4.pdf, page 49). Everything beyond that is supported only by direct rendering contexts: GLX is used merely to set up the context, but from there on, everything related to OpenGL-3.x and beyond bypasses GLX and goes directly to the driver.
Sure, in theory GLX could be updated to support OpenGL-3 and later. But nobody bothers.
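You can see that this is a protocol limit rather than an SSH problem by forcing an indirect context on the local machine. A quick check (this assumes your X server still permits indirect GLX contexts at all; recent servers often have to be started with +iglx for this to work):

```
# Force libGL onto the indirect (GLX protocol) path instead of direct rendering;
# the reported version drops to what the GLX wire protocol can carry.
LIBGL_ALWAYS_INDIRECT=1 glxinfo | grep "OpenGL version"
# typically: OpenGL version string: 1.4 (...)
```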
Your option now is to run everything on the remote end and only transmit the rendering result. Ideally this would be done by the application creating an X-less, headless OpenGL context and then copying the rendering result into an X11 SHM pixmap (though over a typical network the performance of that will be abysmal).
My preferred solution is Xpra, using the GPU on the remote end.
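A rough sketch of what that looks like (the display number, host names and application are placeholders, and the option names assume a reasonably recent xpra):

```
# On the server: start an xpra session on a virtual display and launch the
# application inside it (older xpra versions use --start-child instead).
xpra start :100 --start=paraview

# On the desktop: attach to that session over SSH; only compressed pixels
# travel over the network, not GLX commands.
xpra attach ssh://myaccount@server/100

# Note: xpra's virtual display is normally software-rendered. Getting the
# OpenGL work onto the server's NVIDIA GPU usually needs VirtualGL on top,
# e.g. --start="vglrun paraview", which requires extra setup.
```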
In the context of ParaView and the pvserver, you need to use your local display, not the "X11 forwarding" mechanism.
Do not use -X or -Y when connecting; then, on the server, run DISPLAY=:0 glxinfo and DISPLAY=:0 pvserver.
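A minimal sketch of that workflow (host names, the :0 display and the port are placeholders; it assumes an X server driven by the NVIDIA card is running on the server):

```
# On the server (connect without -X/-Y so no X11 forwarding is set up):
ssh myaccount@server
DISPLAY=:0 glxinfo | grep "OpenGL version"   # should now report 4.6, not 1.4
DISPLAY=:0 pvserver --server-port=11111      # 11111 is the default port

# On the desktop: start the ParaView GUI and use File -> Connect to reach
# server:11111, so only geometry/images cross the network, not GLX calls.
```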
I have a Lenovo G580 computer with an Intel CPU and an Nvidia 610M GPU, running Linux Lite OS (Ubuntu based).
I would like to use Nvidia PRIME to run programs with the GPU.
I installed some Nvidia driver packages, version 390, according to this page.
With the Nvidia X Server Settings I can switch to on-demand mode. In the UI there is only one setting for PRIME, with no mention of GPU settings.
My problem is that when on-demand mode is enabled, many programs (games and GLX debugging programs) throw this error, even without being asked to use the GPU:
Error: couldn't find RGB GLX visual or fbconfig
I know there are other posts like mine on the internet, but I can't understand the problem or identify a missing package on my computer. Have you already installed PRIME on this GPU? I can send logs or system info if needed.
I have a Linux system with Mesa3D and llvmpipe software driver.
glxinfo reports
Extended renderer info (GLX_MESA_query_renderer):
Vendor: VMware, Inc. (0xffffffff)
Device: llvmpipe (LLVM 9.0, 128 bits) (0xffffffff)
Version: 19.2.8
Accelerated: no
Video memory: 65482MB
Unified memory: no
Preferred profile: core (0x1)
Max core profile version: 3.3
Max compat profile version: 3.1
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.0
If I read this info correctly, OpenGL 3.3 should be supported in the core profile. Unfortunately, any OpenGL program starts only in OpenGL 3.1 mode (compatibility profile). For example,
~$ xvfb-run glxgears -info
GL_RENDERER = llvmpipe (LLVM 9.0, 128 bits)
GL_VERSION = 3.1 Mesa 19.2.8
GL_VENDOR = VMware, Inc.
...
Is there a way to start OpenGL programs with core 3.3 profile?
Unfortunately, any OpenGL program starts only in OpenGL 3.1 mode.
Not any GL program will do that. Only old programs which use legacy context creation (and therefore don't even know or care about the existence of different profiles) will get this version.
(compatibility profile).
Actually, OpenGL 3.1 compatibility profile does not even exist. Profiles were introduced in OpenGL 3.2, even if mesa's interpretation of that slightly differs. Technically, mesa llvmpipe simply does not support compatibility profiles.
Is there a way to start OpenGL programs with core 3.3 profile?
Not in a useful way. If the program uses legacy context creation, it either does not know about the newer GL functions (and consequently cannot make use of them), or it is simply broken and assumes it will get some newer GL version, which, per the spec, is nothing a program can rely on.
In any case, if the program is not written for the OpenGL core profile, it will most likely not function correctly, because a lot of deprecated legacy functionality is simply not available in core profiles.
Your example glxgears will simply generate a lot of GL errors and only show a black screen when run in a core profile, because it uses display lists, immediate mode rendering commands and the fixed-function pipeline, none of which are available in core profile OpenGL.
Although it is most likely quite useless, you can make a program that doesn't request it use core profile OpenGL by either modifying its source code or somehow interfering with its context creation calls.
In an ironic turn of events, I myself added, just a few days ago, some functionality to my glx_hook hack which allows modifying the context attributes an application requests without having to modify the source code.
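For experimenting, Mesa also exposes environment overrides that change the version a legacy context reports. This does not make an old program core-profile-clean, for the reasons above, but it can be useful for quick tests; a sketch:

```
# Tell Mesa to report GL 3.3 / GLSL 3.30 even on a legacy context.
# The program still only gets what llvmpipe actually implements, so anything
# that relies on missing compatibility features will misbehave.
MESA_GL_VERSION_OVERRIDE=3.3 MESA_GLSL_VERSION_OVERRIDE=330 glxinfo | grep "OpenGL version"
```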
When I install the Nvidia proprietary driver, the Nvidia OpenGL implementation is used (I don't need Mesa). Which OpenGL implementation is used with the open source Nvidia driver, Nouveau? Does Nouveau provide its own OpenGL implementation, or does it have to use Mesa's OpenGL implementation? Can I use the Nvidia drivers with Mesa's OpenGL implementation? What are the possibilities?
First things first: The open source graphics drivers, all of them, use Mesa for the front side OpenGL interface and state tracking.
Let's break this down: theoretically, an OpenGL implementation can talk directly to the hardware. This is what the NVidia and AMD proprietary drivers actually do.
But in the open source world code reuse is highly favoured. So a typical open source graphics driver looks like this:
User API frontend (OpenGL + state tracker) → abstraction layer (Gallium3D or device specific internal layer) → kernel backend.
The Mesa project actually encompasses the whole chain. The OpenGL part of Mesa (the frontend) can attach to different abstraction layers (for example, also to a software rasterizer such as softpipe or llvmpipe). But the Mesa project is also an umbrella for the other parts: the userland graphics drivers (nouveau, radeon, intel and so on), the infrastructure that allows userspace processes to talk directly to the graphics driver, bypassing display servers (DRI), the kernel interface (DRM), as well as the kernel modules.
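To see which of those pieces is actually active on a given machine, the renderer info reported by glxinfo is usually enough (the -B option just suppresses the long extension lists):

```
# The Vendor/Device/renderer lines reveal which userland driver Mesa picked
# (llvmpipe, nouveau, radeonsi, ...), or whether a vendor libGL took over.
glxinfo -B | grep -E "Vendor|Device|OpenGL renderer"
```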
A few weeks ago AMD released a new kernel module (amdgpu) that uses the DRM API, is open source and will be merged into the Linux kernel. That new kernel module is meant to be used by both the proprietary AMD OpenGL drivers and the open source Mesa drivers. AMD has been pushing for open source for some time now, and the logical next step would be for AMD to ditch its own OpenGL frontend in favor of Mesa, providing its proprietary driver as a middle layer that plugs into Mesa.
Can I use the Nvidia drivers with Mesa's OpenGL implementation?
That depends. If you're doing indirect OpenGL over X11, then you can indeed use the Mesa libGL.so for your program, talking through the X11 server to the nvidia backend driver. However, used that way, libGL.so merely acts as a GLX conduit. (It works the other way round as well, BTW.)
However, since this lacks direct GL context capabilities, you will not be able to use OpenGL features for which no indirect opcodes have been defined; sadly, that is anything OpenGL-3 or later. Also, if your data is highly dynamic, there is a significant bottleneck due to serializing the command stream (although theoretically, using syscalls like vmsplice, most of that overhead could be alleviated).
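Two quick checks that are handy when sorting out which path you are actually on (standard tools, nothing nvidia-specific):

```
# Which libGL does an application resolve to, Mesa's or the vendor's?
ldd $(which glxinfo) | grep libGL

# Is the context direct (talking straight to the driver) or indirect
# (going through the X server and the GLX protocol)?
glxinfo | grep "direct rendering"
```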
I want to learn OpenGL development and I am running Linux Mint. Khronos.org says the following:
The OpenGL 4.4 and OpenGL Shading Language 4.40 Specifications were released on July 22, 2013.
As far as I understand, Mesa is the OpenGL implementation for Linux, but it only supports version 3.1, I believe. My question is: can I develop OpenGL 4.4 apps in the Linux environment, or do I have to stick with Mesa's 3.1 version?
You can develop OpenGL 4.4 software by #including the glext.h file from http://www.opengl.org/registry/
You can run OpenGL 4.4 software by using hardware drivers that implement the OpenGL 4.4 specification and a GPU that supports the necessary hardware features. In practical terms, this means you need an AMD or Nvidia GPU that supports Direct3D 11 and very recent proprietary (closed-source) drivers from the GPU vendor.
Mesa3d is the open-source driver framework, with partial support for OpenGL 4.x. The proprietary drivers from AMD and Nvidia replace Mesa3d with their own OpenGL implementation.
Note that you can develop OpenGL 4.4 software even if your system cannot run said software.
AMD and NVIDIA proprietary drivers are probably the only drivers on Linux supporting OpenGL 4.4.
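Whatever driver you end up with, it is worth checking what it actually advertises before writing 4.4-only code; for example:

```
# Shows both the legacy/compatibility version and the core profile version
# exposed by the currently installed driver.
glxinfo | grep -E "OpenGL (core profile )?version string"
```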
Using VMware 10 and Ubuntu 13.10 as the guest OS.
Installing the guest additions can provide hardware rendering for OpenGL 2.1.
For academic purposes, there is a need to develop and run OpenGL 3+ code, preferably in the virtual machine.
I assume that it is not possible to use the host GPU, so I am trying to force software rendering, using an OpenGL 3+ renderer.
Mesa3D + llvmpipe seems promising, but I am unable to find information on whether the software renderer supports OpenGL 3+.
Is there a way to develop OpenGL 3+ under vmware?
EDIT: (For someone who replied and then deleted their post :p)
Yes, I am also seeing OpenGL 2.1 using glxinfo. I removed hardware acceleration in my VM, and am only interested in software rasterization, even if it is really slow. The question is, is there a version of llvmpipe that implements a software rasterizer for OpenGL versions higher than 2.1? I know that mesa3d supports it, albeit only for hardware.
The Mesa software renderers (both the "old" pre-Gallium swrast and the "new" Gallium softpipe/llvmpipe) do support most of GL 3.2. The only major thing missing is support for multisampling, hence they do not advertise full 3.0 support.
Update 2017
Current versions of Mesa's various software rasterizers now do claim to support up to GL 3.3 in a core profile (progress can be tracked at https://mesamatrix.net/). However, there is a caveat, as documented in Mesa's features.txt:
freedreno, llvmpipe, softpipe, and swr have fake Multisample anti-aliasing support
which means they still do not fulfill the requirements of the GL 3.0 spec. However, in most cases this will not matter in practice. But one should still be aware of that limitation.
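If you want to try this inside the VM, Mesa's standard environment switches let you force the software rasterizer regardless of what the virtual GPU offers; a sketch:

```
# Ignore any hardware driver and use Mesa's software rendering instead,
# then check which renderer and core profile version you get.
LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep -E "OpenGL renderer|core profile version"

# On a Gallium-based Mesa you can also pick the rasterizer explicitly:
LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=llvmpipe glxinfo | grep "OpenGL renderer"
```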
In case anyone is still interested, VMware Workstation (both Workstation Pro and Workstation Player) added OpenGL 3.3 support in version 12.
However, at the time of writing, the Linux guest driver side of the equation is not yet available; it is planned for Linux 4.3.
So: use VMware Workstation Player (or Pro, if you have it) version 12 or up, and Linux 4.3 or up.
Update: using VirtualBox without any kind of acceleration and Mesa LLVMpipe, I also get OpenGL 3.3 support (Mesa version is 17.1.1)