Error: couldn't find RGB GLX visual or fbconfig | Prime on demand Nvidia card - linux

I have a Lenovo G580 with an Intel CPU and an Nvidia 610M GPU, running Linux Lite (Ubuntu-based).
I would like to use Nvidia PRIME to run programs on the GPU.
I installed the Nvidia driver packages, version 390, according to this page.
With Nvidia X Server Settings I can switch to on-demand mode. The UI shows only a single PRIME setting and makes no mention of any GPU settings.
My problem is that when on-demand mode is enabled, many programs (games and GLX debug tools) throw this error, even without being asked to use the GPU:
Error: couldn't find RGB GLX visual or fbconfig
I know there are other posts like mine on the internet, but I can't understand the problem or identify a missing package on my machine. Have you already set up PRIME on this GPU? I can send logs or system info if needed.
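For example, here is what I can gather (a sketch, assuming the Ubuntu nvidia-prime and mesa-utils packages are installed; the last command only applies to render-offload-capable drivers, i.e. 435 or newer, so it may not work with 390):
# Which PRIME profile is currently selected (intel, nvidia, or on-demand)
prime-select query
# Does GLX come up at all, and which vendor/renderer provides it
glxinfo | grep -E "OpenGL vendor|OpenGL renderer"
# Explicitly request the Nvidia GPU via render offload (driver 435+ only)
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"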

Related

Android Studio error running emulator (screenshot provided)

I keep getting this error and have tried pretty much everything I have found on this website, with no luck.
It means that hardware acceleration is either disabled or not available for your CPU.
Most Intel and AMD chips have this feature, and all AMD Ryzen CPUs have it. Check your CPU vendor's page to see whether it is available for your model.
If it is disabled, you can enable it in the BIOS.
If you are on an AMD CPU, it won't work on Windows: Windows supports hardware acceleration only on Intel CPUs (via HAXM) or, more recently, on Ryzen CPUs with the latest Windows 10 April update and Hyper-V enabled.
If you are using an AMD CPU from before the Ryzen series, you can switch to any Linux distro and you will be fine, as Linux has KVM for hardware acceleration.
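On Linux, a quick way to confirm both points (a sketch, assuming Ubuntu/Debian package names) is:
# Count logical CPUs advertising Intel VT-x (vmx) or AMD-V (svm); 0 means no hardware virtualization
grep -E -c '(vmx|svm)' /proc/cpuinfo
# kvm-ok from the cpu-checker package reports whether KVM acceleration can actually be used
sudo apt-get install cpu-checker
sudo kvm-ok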

Is it possible to get OpenCL on the Windows Subsystem for Linux?

I've been trying for the past day to get TensorFlow built with OpenCL on the Windows Subsystem for Linux.
I followed this guide, but when I run clinfo it says:
Number of platforms 0
Then typing /usr/local/computecpp/bin/computecpp_info gives me
OpenCL error -1001: Unable to retrieve number of platforms. Device Info:
Cannot find any devices on the system. Please refer to your OpenCL vendor documentation.
Note that OPENCL_VENDOR_PATH is not defined. Some vendors may require this environment variable to be set.
Am I doing anything wrong? Is it even possible to install OpenCL on the Windows Subsystem for Linux?
Note:
I'm using an AMD R9 390X from MSI, on 64-bit Windows Home Edition.
With the launch of WSL2, CUDA programs are now supported in WSL (more information here); however, there is still no support for OpenCL as of this writing: https://github.com/microsoft/WSL/issues/6951.
According to a Microsoft representative in this forum post, the Windows Subsystem for Linux does not support OpenCL or CUDA GPU programs, and such support is not currently planned. To experiment with TensorFlow/OpenCL, it would probably be easiest to install Linux in a dual-boot configuration.
You could use the Intel OpenCL SDK for the CPU: https://software.intel.com/en-us/articles/opencl-drivers.
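If a CPU-only platform is enough for experimenting, a lighter-weight option than the Intel SDK (a sketch, assuming Ubuntu package names inside WSL) is the POCL runtime:
# CPU OpenCL runtime, the ICD loader, and the clinfo diagnostic
sudo apt-get install pocl-opencl-icd ocl-icd-libopencl1 clinfo
# Should now report at least one platform instead of "Number of platforms 0"
clinfo | grep "Number of platforms"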

Directions for installing OpenCL on Linux Mint (Dell 9550)

I'm about to enter the next phase of a project where I am moving computation to the GPU. Unfortunately, I have had very poor success setting up OpenCL in my environment. I hoped I could get some specific direction about which implementation of OpenCL to use and how to avoid certain pitfalls during installation.
My machine:
Linux Mint 17.3
Dell XPS 15 9550 with an Nvidia GTX 960M graphics chip
Some specifics:
I have been unable to find any graphics drivers that work with this hardware other than the Nvidia-352 version found in this PPA:
https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
Every other one I try bricks the machine. I've reinstalled Mint more times than I can count while tracking down this one working driver. Keep in mind that I must use this configuration for my machine to work.
I attempted to install Nvidia's CUDA toolkit from their site (https://developer.nvidia.com/cuda-downloads) and for some reason the installation overwrote my Nvidia-352 driver and bricked the machine again.
At this point I'm not certain which implementation is correct anyway. I do not want to try another one and have the same thing happen.
Some specific questions:
Does every implementation of OpenCL assert itself over the currently installed drivers?
If it does then how can I direct my machine to use the correct one?
Which implementation would be right for my machine?
Can you think of any resources or links that I might be interested in to keep me moving forward? Specifically some installation instructions?
Thanks,
Chronic
Disclaimer: all this is based on my experience with Ubuntu 15.10, but hopefully Mint isn't too different.
Does every OpenCL installation overwrite the others?
If you're installing two different vendors' OpenCL implementations then no, they shouldn't overwrite each other. For example, I have the Nvidia, Intel CPU, POCL and Beignet (Intel GPU) platforms installed and working. The only caveat is that the Intel CPU runtime overwrote the libOpenCL.so* files, resulting in a crash in clinfo because it required libOpenCL.so.1, which the Intel CPU runtime had deleted. Re-installing the ocl-icd-opencl-dev package fixed this; you can also make libOpenCL.so.1 a symlink to the actual .so file left by the Intel CPU runtime.
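A sketch of that repair (the path where the Intel runtime leaves its library is an example; adjust it to your install):
# Restore the ICD loader files that the Intel CPU runtime removed
sudo apt-get install --reinstall ocl-icd-opencl-dev
# Or point libOpenCL.so.1 at the library the Intel runtime left behind (example path)
sudo ln -sf /opt/intel/opencl/libOpenCL.so /usr/lib/x86_64-linux-gnu/libOpenCL.so.1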
If you try installing two versions for the same platform, as you did, then yes, the last one you install will overwrite the previous one. In your case, remember that the CUDA toolkit also includes the GPU drivers. I haven't played with the CUDA toolkit in a while; perhaps there is an option to install only the toolkit and not the drivers, but since each toolkit requires a certain minimum driver version, you'd have to pick a toolkit version that works with the driver version you can actually get installed.
On Ubuntu, there is an nvidia-cuda-toolkit package you can sudo apt-get install. It doesn't ask to change my drivers, so hopefully it will work for you too. I don't know which version of the toolkit this package installs.
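A sketch of that route (Ubuntu package name; I haven't checked what Mint 17.3 ships):
# Distro-packaged toolkit; on my machine it did not ask to replace the display driver
sudo apt-get install nvidia-cuda-toolkit
# Check which toolkit version the package provides
nvcc --version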
Which implementation is right
If you only want to do OpenCL development, then install the nvidia-352 package that worked for you, plus ocl-icd-opencl-dev. That package pulls in the ocl-icd-libopencl1 and opencl-headers packages, giving you the header files and libOpenCL.so (the ICD loader). You also need to sudo apt-get install nvidia-opencl-icd-352, as that provides the OpenCL runtime for Nvidia GPUs. If you also want to do CUDA development, then you need the toolkit.
As a side note, install one of the CPU runtimes (e.g. POCL) in addition to the Nvidia runtime; see the commands sketched below. I found this useful for catching a bug in my kernel: it worked most of the time on my Nvidia GPU but failed consistently on POCL. It turned out to be a race condition.
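As commands, the whole setup above looks roughly like this (a sketch; the version-suffixed package names match the nvidia-352 driver, so substitute whatever driver version works on your machine):
# Driver, the Nvidia OpenCL runtime for that driver, and the ICD loader + headers
sudo apt-get install nvidia-352 nvidia-opencl-icd-352 ocl-icd-opencl-dev
# Optional CPU runtime, handy for cross-checking kernels
sudo apt-get install pocl-opencl-icd
# Both platforms should now be listed
clinfo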
Useful links
Sorry, no up-to-date installation instructions. However, the instructions each vendor ships with its OpenCL runtime (except Nvidia) have been good enough for me.
Here's some older instructions:
https://wiki.tiker.net/OpenCLHowTo
https://streamcomputing.eu/blog/2011-06-24/install-opencl-on-debianubuntu-orderly/ - The rest of the StreamComputing blog is also interesting.

Using OpenGL with a headless Tesla K20X for offscreen rendering on Linux

For Windows, I have to use nvidia-smi to switch to WDDM mode, which hurts compute performance but allows me to use OpenGL. Unfortunately, this does not work on Linux: running the command there gives an error saying it is unsupported on this platform. So far I cannot find any method to enable this on Linux.

CUDA performance penalty when running in Windows

I've noticed a big performance hit when I run my CUDA application in Windows 7 (versus Linux). I think I may know where the slowdown occurs: For whatever reason, the Windows Nvidia driver (version 331.65) does not immediately dispatch a CUDA kernel when invoked via the runtime API.
To illustrate the problem I profiled the mergeSort application (from the examples that ship with CUDA 5.5).
[Profiler screenshots: kernel launch time when running in Linux vs. when running in Windows]
This post suggests the problem might have something to do with the Windows driver batching the kernel launches. Is there any way I can disable this batching?
I am running with a GTX 690 GPU, Windows 7, and version 331.65 of the Nvidia driver.
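For anyone who wants to reproduce the measurement, the launch timeline can be captured with the command-line profiler bundled with CUDA 5.5 (a sketch; the binary name is the sample's default and may differ on your system):
# Trace each kernel launch with its start timestamp and duration
nvprof --print-gpu-trace ./mergeSort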
There is a fair amount of overhead in sending GPU hardware commands through the WDDM stack.
As you've discovered, this means that under WDDM (only) GPU commands can get "batched" to amortize this overhead. The batching process may (probably will) introduce some latency, which can be variable, depending on what else is going on.
The best solution under Windows is to switch the operating mode of the GPU from WDDM to TCC. This can be done via the nvidia-smi command, but it is only supported on Tesla GPUs and certain members of the Quadro family of GPUs -- i.e. not GeForce. (It also has the side effect of preventing the device from being used as a Windows accelerated display adapter, which might be relevant for a Quadro device or a few specific older Fermi Tesla GPUs.)
AFAIK there is no officially documented method to circumvent or affect the WDDM batching process in the driver. Unofficially, according to Greg@NV in this link, the command to issue after the CUDA kernel call is cudaEventQuery(0);, which may/should cause the WDDM batch queue to "flush" to the GPU.
As Greg points out, extensive use of this mechanism will wipe out the amortization benefit, and may do more harm than good.
EDIT: Moving forward to 2016, a newer recommendation for a "low-impact" flush of the WDDM command queue is cudaStreamQuery(stream);.
EDIT2: Using recent drivers on Windows, you should be able to place Titan-family GPUs in TCC mode, assuming you have some other GPU set up for the primary display. The nvidia-smi tool will allow you to switch modes (run nvidia-smi --help for more info).
Additional info about the TCC driver model can be found in the Windows install guide, including the fact that it may reduce the latency of kernel launches.
The statement about TCC support is a general one. Not all Quadro GPUs are supported. The final determinant of support for TCC (or not) on a particular GPU is the nvidia-smi tool. Nothing here should be construed as a guarantee of support for TCC on your particular GPU.
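A sketch of the nvidia-smi invocations involved (from memory, -dm takes 0 for WDDM and 1 for TCC; the switch needs administrator rights and a reboot, and nvidia-smi --help has the authoritative syntax):
# Show full per-GPU details, including the current and pending driver model
nvidia-smi -q
# Switch GPU 0 to TCC (1) or back to WDDM (0)
nvidia-smi -i 0 -dm 1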
Even though it's been almost 3 years since this issue was active, I still consider it necessary to share my findings.
I've been in the same situation: the same CUDA program took 5 ms on Ubuntu with CUDA 8.0 but over 30 ms on Windows 10 with CUDA 10.1, both on a GTX 1080 Ti.
However, on Windows, when I switched from building with Visual Studio to compiling with nvcc from the command line, the program suddenly ran at the same speed as the Linux one.
This suggests that maybe the problem comes from Visual Studio.
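For reference, the kind of command-line build I mean is simply this (a sketch; the file name and architecture flag are placeholders for my actual project):
# Plain nvcc release build from a command prompt, instead of the Visual Studio project
nvcc -O3 -arch=sm_61 my_program.cu -o my_program.exe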
