How to build TensorFlow for my AMD Linux system?

I recently installed TensorFlow on my system. Upon fitting the model, I am getting this message in the Jupyter Notebook terminal:
This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
I read that I am supposed to build TensorFlow for my system, but I do not know how. What settings do I need to set? What compiler flags do I need? Here is my setup:
GPU: AMD RX 5700XT
CPU: AMD Ryzen 9 3900X
RAM: 64GB DDR4
OS: Ubuntu 20.04
Can someone give me advice on how to build TensorFlow for my system?

Most deep learning and machine learning frameworks and libraries use NVIDIA CUDA for GPU processing, so you would generally need to select an NVIDIA graphics card.
Note: While AMD has some excellent graphics card models, their compatibility and support for ML tasks are still experimental, so sticking with NVIDIA is the safer choice.
That said, AMD provides a ROCm-enabled TensorFlow library for AMD GPUs, built on the ROCm software stack. To learn more about this library, see MIOpen - ROCm (a quick sanity check for that route is sketched after the list below).
Based on these factors, the following graphics card families can be recommended:
GeForce 10 series
GeForce 16 series
GeForce 20 series
GeForce 30 series
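
If you do want to try the ROCm route on the RX 5700 XT anyway, here is a minimal sanity check, as a sketch only: it assumes the ROCm stack and the tensorflow-rocm pip package are already installed, and official ROCm support for individual consumer cards varies.

import tensorflow as tf  # the ROCm build is typically installed via: pip install tensorflow-rocm

# On a working ROCm install the AMD card should appear here as a GPU device;
# an empty list means TensorFlow only sees the CPU.
print(tf.config.list_physical_devices("GPU"))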

Related

NVIDIA Geforce GT 540M not appearing in physical device list

I have an ASUS laptop with a built-in NVIDIA GeForce GT 540M. I have installed the CUDA toolkit, the cuDNN package, and other utilities like Visual C++. But when I run this code, tf.config.experimental.list_physical_devices(), only the CPU is listed and no GPU appears. My question is: can I use this GPU for machine learning or not?
https://developer.nvidia.com/cuda-gpus
Under "CUDA-Enabled GeForce and TITAN Products" you can see that your GPU is CUDA enabled. It also has a compute capability of 2.1.
Incidentally, all CUDA versions have a minimum compute capability they require.
https://tech.amikelive.com/node-930/cuda-compatibility-of-nvidia-display-gpu-drivers/
As you can see, most recent CUDA versions require compute capability 3.0 or higher.
If you'd like to try installing an older version of CUDA, make sure to check out the following link to see a list of tested OS / TensorFlow / CUDA / cuDNN combinations:
Which TensorFlow and CUDA version combinations are compatible?
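
If you are unsure which CUDA and cuDNN versions your installed TensorFlow wheel expects, a quick check is sketched below; it assumes a recent TensorFlow 2.x release, where tf.sysconfig.get_build_info() is available.

import tensorflow as tf

# Prints the CUDA and cuDNN versions this TensorFlow build was compiled against,
# so you can compare them with the compatibility tables linked above.
print(tf.sysconfig.get_build_info())

# Lists the GPUs TensorFlow can actually use; an empty list means none were detected.
print(tf.config.list_physical_devices("GPU"))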

OpenCL support for both Intel CPU and Nvidia GPU

When querying for supported OpenCL platforms/devices, my Nvidia GPU gets returned, thanks to the OpenCL runtime provided by the CUDA SDK.
However, the Intel Xeon CPU does not.
I see that Intel provides an OpenCL Runtime for its CPUs. However, it seems to require replacing libOpenCL.so.
With some linker hackery I was able to get the Intel runtime loaded into my process, but then only the Intel CPU, not the Nvidia device, is enumerated.
Is there a way to get them to co-exist so I can distribute tasks across both the GPU and CPU?
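
For reference, a minimal enumeration of everything the system's ICD loader exposes might look like the sketch below; it uses the pyopencl package, which is an assumption and not part of the original question. Whether both vendors show up depends on the loader finding an .icd file for each runtime (typically under /etc/OpenCL/vendors/).

import pyopencl as cl

# Walk every registered platform and list the devices it exposes.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name, "-", cl.device_type.to_string(device.type))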

Which PyTorch version is CUDA 3.0 compatible?

I have an Nvidia GeForce GTX 770, which has CUDA compute capability 3.0, but upon running PyTorch training on the GPU I get the warning
Found GPU0 GeForce GTX 770 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
and subsequently the error RuntimeError: CUDA error: no kernel image is available for execution on the device.
Was there an old PyTorch version that supported graphics cards like mine with CUDA capability 3.0? If so, which version, and where can I find this information? Is there a table somewhere listing the supported CUDA versions and compute capabilities?
If it is relevant, I have CUDA 10.1 installed.
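
To see exactly what PyTorch detects on your machine, a small check is sketched below, assuming a CUDA-enabled PyTorch build is installed.

import torch

# The CUDA version the installed PyTorch wheel was built against.
print("built with CUDA:", torch.version.cuda)

# The name and compute capability of the detected GPU, for comparison with
# the minimum capability mentioned in the warning.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
else:
    print("no usable CUDA device detected")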

Can I run CUDA or OpenCL on Intel processor graphics (i7, 3rd or 4th generation)?

I'm developing on an SBC (which has an Intel i7, 3rd or 4th generation, and no external GPU).
I'm using Linux.
I want to take advantage of the Intel processor graphics.
I thought about learning to develop with CUDA or OpenCL.
I read some old posts (from several years ago), and I'm not sure whether there is a better technology now:
Can i run CUDA on Intel
GPU Programming, CUDA or OpenCL
Can I run CUDA code on Intel processor graphics?
Can I run OpenCL code on Intel processor graphics?
If I can run both CUDA and OpenCL code on Intel processor graphics, which is better?
As @Robert Crovella said, you cannot run CUDA on an Intel GPU/CPU. When it comes to OpenCL, you have a few choices:
Intel OpenCL Driver for Intel GPU and CPU
Open Source Beignet for Intel GPU
AMD APP SDK which can be run on Intel CPU
I cannot say which one will be best for an Intel GPU on Linux. I think Beignet was the first to support Intel GPUs, and then the official Intel drivers appeared. For an Intel CPU on Linux, I use the AMD APP SDK.
You can now :-) using coriander, which compiles NVIDIA® CUDA™ code for OpenCL 1.2 GPUs (full disclosure: I'm the author).

Can I run CUDA on Intel's integrated graphics processor?

I have a very simple Toshiba Laptop with i3 processor. Also, I do not have any expensive graphics card. In the display settings, I see Intel(HD) Graphics as display adapter. I am planning to learn some cuda programming. But, I am not sure, if I can do that on my laptop as it does not have any nvidia's cuda enabled GPU.
In fact, I doubt, if I even have a GPU o_o
So, I would appreciate it if someone could tell me whether I can do CUDA programming with my current configuration and, if possible, also explain what Intel(HD) Graphics means.
At the present time, Intel graphics chips do not support CUDA. It is possible that, in the nearest future, these chips will support OpenCL (which is a standard that is very similar to CUDA), but this is not guaranteed and their current drivers do not support OpenCL either. (There is an Intel OpenCL SDK available, but, at the present time, it does not give you access to the GPU.)
Newest Intel processors (Sandy Bridge) have a GPU integrated into the CPU core. Your processor may be a previous-generation version, in which case "Intel(HD) graphics" is an independent chip.
The Portland Group has a commercial product called CUDA x86. It is a hybrid compiler that compiles CUDA C/C++ code to run either on a GPU or using SIMD on a CPU, fully automatically and without any intervention from the developer. Hope this helps.
Link: http://www.pgroup.com/products/pgiworkstation.htm
If you're interested in learning a language that supports massive parallelism, it's better to go for OpenCL since you don't have an NVIDIA GPU. You can run OpenCL on Intel CPUs, but at best you can learn to program SIMD units.
Optimization on the CPU and on the GPU are different. I really don't think you can use an Intel card for GPGPU.
Intel HD Graphics is usually the on-CPU graphics chip in newer Core i3/i5/i7 processors.
As far as I know it doesn't support CUDA (which is a proprietary NVidia technology), but OpenCL is supported by NVidia, ATi and Intel.
In 2020, ZLUDA was created, which provides a CUDA API on top of Intel GPUs. It is not production-ready yet, though.
