I'm trying to run Pytorch on a laptop that I have. It's an older model but it does have an Nvidia graphics card. I realize it is probably not going to be sufficient for real machine learning but I am trying to do it so I can learn the process of getting CUDA installed.
I have followed the steps on the installation guide for Ubuntu 18.04 (my specific distribution is Xubuntu).
My graphics card is a GeForce 845M, verified by lspci | grep -i nvidia:
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce 845M] (rev a2)
01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)
I also have gcc 7.5 installed, verified by gcc --version
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
And I have the correct headers installed, verified by trying to install them with sudo apt-get install linux-headers-$(uname -r):
Reading package lists... Done
Building dependency tree
Reading state information... Done
linux-headers-4.15.0-106-generic is already the newest version (4.15.0-106.107).
I then followed the installation instructions using a local .deb for version 10.1.
Now, when I run nvidia-smi, I get:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 845M On | 00000000:01:00.0 Off | N/A |
| N/A 40C P0 N/A / N/A | 88MiB / 2004MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 982 G /usr/lib/xorg/Xorg 87MiB |
+-----------------------------------------------------------------------------+
and when I run nvcc -V I get:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
I then performed the post-installation instructions from section 6.1, so as a result, echo $PATH looks like this:
/home/isaek/anaconda3/envs/stylegan2_pytorch/bin:/home/isaek/anaconda3/bin:/home/isaek/anaconda3/condabin:/usr/local/cuda-10.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
echo $LD_LIBRARY_PATH looks like this:
/usr/local/cuda-10.1/lib64
and my /etc/udev/rules.d/40-vm-hotadd.rules file looks like this:
# On Hyper-V and Xen Virtual Machines we want to add memory and cpus as soon as they appear
ATTR{[dmi/id]sys_vendor}=="Microsoft Corporation", ATTR{[dmi/id]product_name}=="Virtual Machine", GOTO="vm_hotadd_apply"
ATTR{[dmi/id]sys_vendor}=="Xen", GOTO="vm_hotadd_apply"
GOTO="vm_hotadd_end"
LABEL="vm_hotadd_apply"
# Memory hotadd request
# CPU hotadd request
SUBSYSTEM=="cpu", ACTION=="add", DEVPATH=="/devices/system/cpu/cpu[0-9]*", TEST=="online", ATTR{online}="1"
LABEL="vm_hotadd_end"
After all of this, I even compiled and ran the samples. ./deviceQuery returns:
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce 845M"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 2004 MBytes (2101870592 bytes)
( 4) Multiprocessors, (128) CUDA Cores/MP: 512 CUDA Cores
GPU Max Clock rate: 863 MHz (0.86 GHz)
Memory Clock rate: 1001 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 1048576 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS
and ./bandwidthTest returns:
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce 845M
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 11.7
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 11.8
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 14.5
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
But after all of this, this Python snippet (in a conda environment with all dependencies installed):
import torch
torch.cuda.is_available()
returns False
Does anybody have any idea about how to resolve this? I've tried to add /usr/local/cuda-10.1/bin to /etc/environment like this:
PATH=$PATH:/usr/local/cuda-10.1/bin
And restarting the terminal, but that didn't fix it. I really don't know what else to try.
EDIT - Results of collect_env for @kHarshit
Collecting environment information...
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce 845M
Nvidia driver version: 418.87.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.5
[pip] pytorch-ranger==0.1.1
[pip] stylegan2-pytorch==0.12.0
[pip] torch==1.5.0
[pip] torch-optimizer==0.0.1a12
[pip] torchvision==0.6.0
[pip] vector-quantize-pytorch==0.0.2
[conda] numpy 1.18.5 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] stylegan2-pytorch 0.12.0 pypi_0 pypi
[conda] torch 1.5.0 pypi_0 pypi
[conda] torch-optimizer 0.0.1a12 pypi_0 pypi
[conda] torchvision 0.6.0 pypi_0 pypi
[conda] vector-quantize-pytorch 0.0.2 pypi_0 pypi
PyTorch doesn't use the system's CUDA library. When you install PyTorch with the precompiled binaries via either pip or conda, it ships with a copy of the specified version of the CUDA library, which is installed locally. In fact, you don't even need to install CUDA on your system to use PyTorch with CUDA support.
There are two scenarios which could have caused your issue; the snippet after them shows how to tell which one applies.
You installed the CPU-only version of PyTorch. In that case PyTorch wasn't compiled with CUDA support, so it cannot use CUDA at all.
You installed the CUDA 10.2 version of PyTorch. In that case the problem is that your graphics card currently uses the 418.87 driver, which only supports up to CUDA 10.1. The two potential fixes here are either to install an updated driver (version >= 440.33 according to Table 2) or to install a version of PyTorch compiled against CUDA 10.1.
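A quick way to tell which scenario applies, as a minimal sketch run inside the same conda environment:
import torch

# CUDA toolkit version this PyTorch binary was built against; None means a CPU-only build (scenario 1)
print("Built with CUDA:", torch.version.cuda)
# False here despite a version string above means the driver cannot serve that CUDA version (scenario 2)
print("CUDA available: ", torch.cuda.is_available())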
To determine the appropriate command to use when installing PyTorch you can use the handy widget in the "Install PyTorch" section at pytorch.org. Just select the appropriate operating system, package manager, and CUDA version then run the recommended command.
In your case one solution was to use
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
which explicitly specifies to conda that you want to install the version of PyTorch compiled against CUDA 10.1.
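After the reinstall, a quick check, assuming the CUDA 10.1 build was picked up, would be something like:
import torch

print(torch.version.cuda)             # expected: 10.1
print(torch.cuda.is_available())      # expected: True
print(torch.cuda.get_device_name(0))  # expected: GeForce 845M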
For more information about PyTorch CUDA compatibility with respect to drivers and hardware, see this answer.
Edit: After you added the output of collect_env we can see that the problem was that you had the CUDA 10.2 version of PyTorch installed. Based on that, an alternative solution would have been to update the graphics driver as elaborated in item 2 and the linked answer.
TL;DR
Install the NVIDIA CUDA Toolkit, either the one provided by Canonical or the one from the NVIDIA third-party PPA.
Reboot your workstation.
Create a clean Python virtual environment (or reinstall all CUDA-dependent packages).
Description
First, install the NVIDIA CUDA Toolkit provided by Canonical:
sudo apt install -y nvidia-cuda-toolkit
or follow the NVIDIA developer instructions:
# ENVARS ADDED **ONLY FOR READABILITY**
NVIDIA_CUDA_PPA=https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/
NVIDIA_CUDA_PREFERENCES=https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
NVIDIA_CUDA_PUBKEY=https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
# Add NVIDIA Developers 3rd-Party PPA
sudo wget ${NVIDIA_CUDA_PREFERENCES} -O /etc/apt/preferences.d/nvidia-cuda
sudo apt-key adv --fetch-keys ${NVIDIA_CUDA_PUBKEY}
echo "deb ${NVIDIA_CUDA_PPA} /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
# Install development tools
sudo apt update
sudo apt install -y cuda
then reboot the OS to load the kernel with the NVIDIA drivers.
Create an environment using your favorite manager (conda, venv, etc.):
conda create -n stack-overflow pytorch torchvision
conda activate stack-overflow
or reinstall pytorch and torchvision into the existing one:
conda activate stack-overflow
conda install --force-reinstall pytorch torchvision
otherwise the NVIDIA CUDA C/C++ bindings may not be correctly detected.
Finally, ensure CUDA is correctly detected:
(stack-overflow)$ python3 -c 'import torch; print(torch.cuda.is_available())'
True
Versions
NVIDIA CUDA Toolkit v11.6
Ubuntu LTS 20.04.x
Ubuntu LTS 22.04 (prior to official release)
In my case, just restarting my machine made the GPU active again. The initial message I got was that the GPU was currently in use by another application, but nvidia-smi showed nothing using it. So, with no changes to dependencies, it just started working again.
Another possible scenario is that the environment variable CUDA_VISIBLE_DEVICES is not set correctly before PyTorch is run.
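As a sanity check, the variable has to be visible before the first CUDA call; a minimal sketch of its effect (an empty value hides every GPU):
import os
# Must be set before PyTorch initializes CUDA
os.environ['CUDA_VISIBLE_DEVICES'] = ''  # hides all GPUs
import torch
print(torch.cuda.is_available())         # False even on a working setup; '0' (or leaving the variable unset) exposes device 0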
In my case it worked to do the following:
Remove the NVIDIA drivers:
sudo apt-get remove --purge nvidia*
Then get the exact installation commands for the drivers, based on your distro and system, from: https://developer.nvidia.com/cuda-downloads?target_os=Linux
In my case it was Debian on x86_64, so I did:
wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda
And now nvidia-smi works as intended!
I hope that helps
If your CUDA version does not match what PyTorch expects, you will see this issue.
On Arch / Manjaro:
Get PyTorch from here: https://pytorch.org/get-started/locally/
Note what CUDA version you are getting PyTorch for
Get the same CUDA version from here: https://archive.archlinux.org/packages/c/cuda/
Install CUDA using (e.g.) sudo pacman -U --noconfirm cuda-11.6.2-1-x86_64.pkg.tar.zst
Do not update to a newer version of CUDA than PyTorch expects. If PyTorch wants 11.6 and you have updated to 11.7, you will get the error message.
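A quick way to compare the two versions, as a sketch assuming the system nvcc is on PATH:
import subprocess
import torch

# CUDA version the installed PyTorch binary expects
print("PyTorch built with CUDA:", torch.version.cuda)
# CUDA version provided by the system cuda package
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)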
Make sure that os.environ['CUDA_VISIBLE_DEVICES'] = '0' is set after if __name__ == "__main__":. So your code should look like this:
import torch
import os

if __name__ == "__main__":
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    print(torch.cuda.is_available())  # True
    ...
On Manjaro Linux.
Running Android Studio 3.6.3. Everything works great, but when I try to launch any emulator I am getting this error:
Emulator: createOrGetGlobalVkEmulation: Warning: Vulkan 1.1 APIs missing from device
I'm not having any driver issues outside of trying to use the Android Studio Emulator.
Studio SDKs installed:
Vulkan drivers installed:
I have Radeon's Vulkan Mesa driver installed in Manjaro. I also have virtualization enabled for the processor in the BIOS.
Manjaro Information:
System: Host: command Kernel: 5.4.40-1-MANJARO x86_64 bits: 64 compiler: gcc v: 9.3.0
CPU: Topology: 8-Core model: AMD Ryzen 7 2700X bits: 64 type: MT MCP arch: Zen+ rev: 2 L2 cache: 4096 KiB
Graphics: Device-1: Advanced Micro Devices [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
vendor: Micro-Star MSI driver: amdgpu v: kernel bus ID: 0a:00.0
Display: x11 server: X.org 1.20.8 driver: amdgpu resolution: <xdpyinfo missing>
OpenGL: renderer: Radeon RX 580 Series (POLARIS10 DRM 3.35.0 5.4.40-1-MANJARO LLVM 10.0.0) v: 4.6 Mesa 20.0.6
direct render: Yes
You can work around it as indicated here: https://stackoverflow.com/a/59715169/1796802
Create the file ~/.android/advancedFeatures.ini (for Windows users the path should be C:\Users\<username>\.android\advancedFeatures.ini) with the following content:
# Here's how to disable Vulkan apps to talk to the emulator.
# Add the following lines to ~/.android/advancedFeatures.ini (create this file if it doesn't exist already):
Vulkan = off
GLDirectMem = on
I am trying to run a Qt application which has some 3D graphics. The application works on two Linux PCs, but on a third one (a new PC) this error is shown:
"QXcbIntegration: Cannot create platform OpenGL context, neither GLX nor EGL are enabled"
I tried to install
sudo apt-get install mesa-common-dev
sudo apt-get install freeglut3
sudo apt-get install freeglut3-dev
and added a soft link:
sudo ln -s /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 libGL.so
Machine Configuration
description: VGA compatible controller
product: 4th Generation Core Processor Family Integrated Graphics Controller
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 06
width: 64 bits
clock: 33MHz
capabilities: msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:30 memory:f7800000-f7bfffff memory:e0000000-efffffff ioport:f000(size=64) memory:c0000-dffff
Please guide; any input would be a great help.
I am getting the below error while running vainfo:
libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva error: /opt/intel/mediasdk/lib64/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
vaInitialize failed with error code 1 (operation failed),exit
This is happening after installing Intel Media Server Studio 2017 (see); before that, vainfo was working fine, as I had installed the packages and drivers below, suggested from here.
sudo apt-get install i965-va-driver libva-intel-vaapi-driver vainfo
Machine configuration: Ubuntu 14.04 LTS
Processor : Intel i7(i7-3720QM)
Graphics : Intel Corporation 3rd Gen Core processor Graphics Controller
Is this just because the Intel SDK does not support 3rd generation processors?
The problem is with the version of Media SDK used: Media SDK 2017 supports only 4th/5th generation processors.
For a 3rd generation machine, Media SDK 2015 R1 is the compatible version, which can be downloaded from here. Using the correct version for the corresponding hardware will solve this problem.
Details
What drivers/packages do I have to install in order to enable OpenCL over multiple platforms: CPU (Intel), Integrated GPU (Intel), Dedicated GPU (NVIDIA)?
It would be nice to have all platforms running OpenCL 1.2 or above
I know it is probably a simple fix, maybe just the right selection of libraries/SDKs, but I am having some trouble getting more than one platform to work.
Running Ubuntu 14.04: I have an Intel Core i5 with integrated Intel graphics and a dedicated NVIDIA GeForce 710M board.
Resources I have used
https://wiki.tiker.net/OpenCLHowTo
Here (under Debian) it tells me that I only need:
Packages of ICD loaders: (you just need one of these)
Packages of ICDs
Package for headers
What I have already tried
Installed CUDA 7.5 (yes for all)
Had a black screen due to driver conflicts
Resolved by uninstalling all NVIDIA drivers and installing 352
Still left the CUDA SDK installed
From: How to make OpenCL work on 14.10 + Nvidia 331.89 drivers?
sudo apt-get install nvidia-331 nvidia-331-uvm nvidia-opencl-dev nvidia-modprobe
Those packages downgraded my drivers to 331 and 340
Also from: How to make OpenCL work on 14.10 + Nvidia 331.89 drivers?
Linked libraries with:
sudo ln -s /usr/include/nvidia-352/GL /usr/local/include
sudo ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 /usr/local/lib/libOpenCL.so
OpenCL 1.1 worked for NVIDIA GPU
Could not get OpenCL 1.2, thus decided to uninstall 331 and 340 and install 352 again
Installed 352 (again)
OpenCL 1.1 stopped working for NVIDIA GPU (and still does not work)
Installed Intel opencl_runtime_14.2_x64_4.5.0.8.tgz
Created a symbolic link to the Intel ICD with:
sudo ln -s /opt/intel/opencl-1.2-4.5.0.8/etc/intel64.icd
OpenCL 1.2 worked for CPU (and still works)
Installed clinfo: sudo apt-get install clinfo
Only the Intel CPU platform is detected
Tried to install several different NVIDIA packages to get the NVIDIA GPU working again, but had no luck with that
Installed packages and some information:
ICD in Vendors?
ls -l /etc/OpenCL/vendors/
total 4
-rw-r--r-- 1 root root 15 Out 22 2015 Altera.icd
lrwxrwxrwx 1 root root 45 Abr 28 13:48 intel64.icd -> /opt/intel/opencl-1.2-4.5.0.8/etc/intel64.icd
Note the missing nvidia.icd
CL and GL - GL had a valid link... now it is IN RED
ls -l /usr/local/include
total 4
lrwxrwxrwx 1 root root 31 Abr 28 12:48 CL -> /usr/local/cuda-7.5/include/CL/
lrwxrwxrwx 1 root root 26 Abr 27 11:44 GL -> /usr/include/nvidia-352/GL (IN RED COLOR - folder doesn't exist anymore)
.so Files
ls -l /usr/local/lib/ | grep CL
lrwxrwxrwx 1 root root 40 Abr 27 11:45 libOpenCL.so -> /usr/lib/x86_64-linux-gnu/libOpenCL.so.1
Installed packages
dpkg --get-selections | grep nvidia
nvidia-340 deinstall
nvidia-352 install
nvidia-libopencl1-340 deinstall
nvidia-libopencl1-340-updates deinstall
nvidia-libopencl1-352 deinstall
nvidia-libopencl1-352-updates install
nvidia-modprobe install
nvidia-opencl-icd-340 deinstall
nvidia-opencl-icd-352 deinstall
nvidia-prime install
nvidia-settings install
dpkg --get-selections | grep opencl
nvidia-libopencl1-340 deinstall
nvidia-libopencl1-340-updates deinstall
nvidia-libopencl1-352 deinstall
nvidia-libopencl1-352-updates install
nvidia-opencl-icd-340 deinstall
nvidia-opencl-icd-352 deinstall
ocl-icd-libopencl1:amd64 deinstall
ocl-icd-libopencl1:i386 deinstall
opencl-headers install
unity-scope-openclipart install
clinfo
clinfo
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.2 LINUX
Platform Name: Intel(R) OpenCL
Platform Vendor: Intel(R) Corporation
Platform Extensions: cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_intel_exec_by_local_thread cl_khr_depth_images cl_khr_3d_image_writes cl_khr_fp64
Platform Name: Intel(R) OpenCL
Number of devices: 1
Device Type: CL_DEVICE_TYPE_CPU
Device ID: 32902
Max compute units: 4
Max work items dimensions: 3
Max work items[0]: 8192
Max work items[1]: 8192
Max work items[2]: 8192
Max work group size: 8192
Preferred vector width char: 1
Preferred vector width short: 1
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 16
Native vector width short: 8
Native vector width int: 4
Native vector width long: 2
Native vector width float: 8
Native vector width double: 4
Max clock frequency: 1800Mhz
Address bits: 64
Max memory allocation: 2040185856
Image support: Yes
Max number of images read arguments: 480
Max number of images write arguments: 480
Max image 2D width: 16384
Max image 2D height: 16384
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 480
Max size of kernel argument: 3840
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: No
Round to +ve and infinity: No
IEEE754-2008 fused multiply-add: No
Cache type: Read/Write
Cache line size: 64
Cache size: 262144
Global memory size: 8160743424
Constant buffer size: 131072
Max number of constant args: 480
Local memory type: Global
Local memory size: 32768
Error correction support: 0
Unified memory for Host and Device: 1
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: Yes
Queue properties:
Out-of-Order: Yes
Profiling : Yes
Platform ID: 0x1659390
Name: Intel(R) Core(TM) i5-3337U CPU @ 1.80GHz
Vendor: Intel(R) Corporation
Device OpenCL C version: OpenCL C 1.2
Driver version: 1.2.0.8
Profile: FULL_PROFILE
Version: OpenCL 1.2 (Build 8)
Extensions: cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_intel_exec_by_local_thread cl_khr_depth_images cl_khr_3d_image_writes cl_khr_fp64
So...
How can I get the NVIDIA GPU to also show up as an OpenCL 1.2 (or higher) platform? How about the integrated Intel graphics?
Would AMD libraries work with my hardware?
Why are most of the nvidia packages marked as deinstall?
As mentioned before, three things are necessary:
From https://wiki.tiker.net/OpenCLHowTo
Packages of ICD loaders: (you just need one of these)
Packages of ICDs
Package for headers
Thus, for an Intel CPU and an NVIDIA GPU:
Packages of ICD loaders:
ocl-icd-libopencl1
Packages of ICDs
Installed Intel OpenCL runtime
nvidia-opencl-icd-352
Package for headers
opencl-headers
However, to get it to work, it is necessary to make sure the packages are marked as install (check with dpkg --get-selections | grep opencl):
sudo apt-get install --reinstall nvidia-opencl-icd-352 opencl-headers ocl-icd-libopencl1
On top of that, you must make sure that intel64.icd and nvidia.icd are in /etc/OpenCL/vendors (ls -l /etc/OpenCL/vendors).
That said, I had to link intel64.icd with:
cd /etc/OpenCL/vendors/
sudo ln -s /opt/intel/opencl-1.2-X.X.X.X/etc/intel64.icd
And, since nvidia.icd was not in the folder (even after installing the right package), I had to extract it from the .deb package and move it manually:
dpkg -x /var/cache/apt/archives/nvidia-opencl-icd-352_352.63-0ubuntu0.14.04.1_amd64.deb ~/tempfolder
sudo mv ~/tempfolder/etc/OpenCL/vendors/nvidia.icd /etc/OpenCL/vendors/nvidia.icd
rm -r ~/tempfolder
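To confirm the registration afterwards, note that each .icd file is just a one-line pointer to the vendor's OpenCL library; a small sketch that lists them:
import os

vendors = '/etc/OpenCL/vendors'
# Each .icd file holds a single line naming the vendor's OpenCL library
for entry in sorted(os.listdir(vendors)):
    if entry.endswith('.icd'):
        with open(os.path.join(vendors, entry)) as f:
            print(entry, '->', f.read().strip())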
Finally, make sure the NVIDIA GPU is the active one:
sudo prime-select nvidia
sudo reboot
Install and execute clinfo, and both platforms should show up.
clinfo
Number of platforms: 2
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.2 CUDA 7.5.23
Platform Name: NVIDIA CUDA
Platform Vendor: NVIDIA Corporation
Platform Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.2 LINUX
Platform Name: Intel(R) OpenCL
Platform Vendor: Intel(R) Corporation
Platform Extensions: cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_intel_exec_by_local_thread cl_khr_depth_images cl_khr_3d_image_writes cl_khr_fp64
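If you prefer to check from code instead of clinfo, a minimal sketch assuming the pyopencl package is installed:
import pyopencl as cl

# Enumerate every registered OpenCL platform and its devices
for platform in cl.get_platforms():
    print(platform.name, '-', platform.version)
    for device in platform.get_devices():
        print('   ', device.name)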