Choose CUDA version for ffmpeg encoding - Linux

I have multiple CUDA versions installed and I need all of them, so I can't uninstall any of them. The problem is that when I try to encode with the h264_nvenc encoder, it doesn't work: it says that there are multiple versions of CUDA.
I'm trying to choose the CUDA version, but I can't find any parameter in the ffmpeg documentation to do so.
Does anyone know how to choose the CUDA version?
I'm working on Linux 22 with the latest ffmpeg version and the NVIDIA libraries.
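ffmpeg has no built-in switch for picking a CUDA toolkit version, but on Linux you can usually control which install gets loaded by putting the desired toolkit first on LD_LIBRARY_PATH before launching ffmpeg. A minimal sketch (the /usr/local/cuda-11.8 path is an assumption; point it at whichever of your installs you want):

import os
import subprocess

# Assumed install location; adjust to the CUDA version ffmpeg should use.
CUDA_HOME = "/usr/local/cuda-11.8"

env = os.environ.copy()
# Put the chosen toolkit's libraries first so the dynamic linker resolves
# CUDA symbols against this version only.
env["LD_LIBRARY_PATH"] = f"{CUDA_HOME}/lib64:" + env.get("LD_LIBRARY_PATH", "")
env["PATH"] = f"{CUDA_HOME}/bin:" + env.get("PATH", "")

subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c:v", "h264_nvenc", "output.mp4"],
    env=env,
    check=True,
)

The same effect can be had directly from a shell by exporting LD_LIBRARY_PATH before the ffmpeg invocation.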

Related

TensorFlow GPU: Error says .dll file not found, but it does exist

I've been trying to get CUDA to work with TensorFlow for a while now because the neural nets I've been building are now taking hours to train on my CPU, and it'd be great to get that big speed boost. However, whenever I try to use it with TensorFlow (it works with PyTorch, but I want to learn multiple APIs), it tells me that one of the .dll files needed to run CUDA doesn't exist, when it actually does.
I've downloaded and replaced that .dll with other versions from dll-files.com. I've tried uninstalling and reinstalling TensorFlow, CUDA, and cuDNN. I've tried different versions of CUDA, but that only caused all the .dll files to not be found (and yes, I did change the CUDA_PATH value). I've tried switching the PATH between C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0 and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin to see if that changed anything.
If anyone could help with this, that would be much appreciated.
[Screenshot: the errors raised when I run tf.test.is_gpu_available()]
[Screenshot: the .dll file existing on disk]
Try installing a different, older version of the CUDA toolkit on top of the version you have installed already. That fixed it for me, although I also had to copy all the previous DLLs from the latest cuDNN toolkit into the new legacy CUDA toolkit installs as well.
Have you checked if your TF version was compatible with your CUDA version?
Check the compatibility matrix here: https://www.tensorflow.org/install/source#tested_build_configurations
Unless you compile TF from source, CUDA 11 is not supported yet.
In any case, I would avoid downloading DLLs from the website you mentioned.
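To see which CUDA and cuDNN versions your TensorFlow build actually expects, and whether it can see the GPU at all, a quick check from Python (TF 2.x; tf.sysconfig.get_build_info is available in recent releases):

import tensorflow as tf

# The versions this TensorFlow binary was compiled against; match these
# against the tested-build-configurations table linked above.
print(tf.sysconfig.get_build_info())

# The GPUs TensorFlow can actually see (an empty list means the CUDA
# libraries failed to load).
print(tf.config.list_physical_devices("GPU"))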

How to use AMD GPU for fastai/pytorch?

I'm using a laptop which has an Intel Corporation HD Graphics 5500 (rev 09) and an AMD Radeon R5 M255 graphics card.
Does anyone know how to set it up for deep learning, specifically fastai/PyTorch?
Update 3:
Since late 2020, the torch-mlir project has come a long way and now supports all major operating systems. Using torch-mlir you can now use your AMD, NVIDIA, or Intel GPU with the latest version of PyTorch.
You can download the binaries for your OS from here.
Update 2:
Since October 21, 2021, you can use the DirectML version of PyTorch.
DirectML is a high-performance, hardware-accelerated, DirectX 12-based library that provides GPU acceleration for ML tasks. It supports all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
Update:
For the latest version of PyTorch with DirectML, see torch-directml.
You can install the latest version using pip:
pip install torch-directml
For detailed explanation on how to setup everything see Enable PyTorch with DirectML on Windows.
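Once installed, usage looks roughly like this (a minimal sketch based on the torch-directml package, which exposes DirectML GPUs as ordinary torch devices):

import torch
import torch_directml

# torch_directml exposes DirectML-capable GPUs (AMD, Intel, NVIDIA, ...)
# as regular torch devices.
dml = torch_directml.device()

x = torch.randn(3, 3).to(dml)
y = torch.randn(3, 3).to(dml)
print((x @ y).cpu())  # compute on the DirectML device, then copy back to CPU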
A side note concerning pytorch-directml:
Microsoft has changed the way it releases pytorch-directml. It deprecated the old 1.8 version and now offers the new torch-directml (as opposed to the previously named pytorch-directml).
It is now installed as a plugin for the current version of PyTorch and works alongside it.
Old version:
The initial release of pytorch-directml (Oct 21, 2021):
Microsoft released Pytorch_DML a few hours ago.
You can now install it (on Windows or in WSL) using the PyPI package:
pytorch-directml 1.8.0a0.dev211021
pip install pytorch-directml
So if you are on Windows or using WSL, you can hop in and give this a try!
Update:
As of PyTorch 1.8 (March 4, 2021), AMD ROCm builds are made available from PyTorch's official website. You can now easily install them on Linux, the same way you used to install the CUDA/CPU versions.
Currently, only the pip packages are provided, and the Mac and Windows platforms are still not supported (I haven't tested with WSL2, though!).
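To confirm you actually got a ROCm build, something like this should work (a sketch; torch.version.hip is None on non-ROCm builds, and the torch.cuda API transparently targets HIP on ROCm):

import torch

print(torch.version.hip)            # set on ROCm wheels, None otherwise
print(torch.cuda.is_available())    # works on ROCm too, since it maps to HIP
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. your Radeon card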
Old answer:
You need to install the ROCm version. The official AMD instructions on building PyTorch are here.
There was previously a wheel package for ROCm, but it seems AMD doesn't distribute it anymore; instead, you need to build PyTorch from source, as the guide I linked to explains.
However, you may consult this page to build the latest PyTorch version: the unofficial page of ROCm/PyTorch.
Update: In March 2021, PyTorch added support for AMD GPUs; you can just install it and configure it like any other CUDA-based GPU. Here is the link.
I don't know about PyTorch, but even though Keras is now integrated with TF, you can use Keras on an AMD GPU via PlaidML (link!), a library made by Intel. It's pretty cool and easy to set up, and it's handy to be able to switch Keras backends for different projects.
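With the multi-backend Keras that PlaidML targets, selecting the backend is just an environment variable set before Keras is imported. A minimal sketch (run plaidml-setup once beforehand to pick your device):

import os

# PlaidML ships its own Keras backend; it must be selected before
# keras is imported.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras
print(keras.backend.backend())  # -> "plaidml.keras.backend"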

PyTorch C++ - how to know the recommended version of cuDNN?

I've previously run inference on TensorFlow graphs from C++. Now I'm embarking on working out how to run inference on PyTorch graphs via C++.
My first question is: how can I know the recommended version of cuDNN to use with LibTorch, or if I'm doing my own PyTorch compile?
Determining the recommended CUDA version is easy. Upon going to https://pytorch.org/ and choosing the options under Quick Start Locally (PyTorch Build, Your OS, etc.), the site makes it pretty clear that CUDA 10.1 is recommended, but there is no mention of the cuDNN version, and upon Googling I'm unable to find a definitive answer.
From what I understand about PyTorch on Ubuntu, if you use the Python version you have to install the CUDA driver (e.g. so nvidia-smi works; version 440 currently), but the CUDA and cuDNN installs are not actually required beyond the driver because they are included in the pip3 package. Is this correct? If so, is there a command I can run in a Python script that shows the version of CUDA (expected to be 10.1) and cuDNN that the pre-compiled pip .whl uses? I suspect there is such a command, but I'm not familiar enough with PyTorch yet to know what it may be or how to look it up.
I've run into compile and inferencing errors using C++ with TensorFlow when I was not using the specific recommended version of cuDNN for a certain version of TensorFlow and CUDA, so I'm aware these versions can be sensitive and I have to make the right choices from the get-go. If anybody can assist in determining the recommended version of cuDNN for a certain version of PyTorch, that would be great.
CUDA is supported via the graphics card driver; AFAIK there's no separate "CUDA driver". The system graphics card driver pretty much just needs to be new enough to support the CUDA/cuDNN versions required by the selected PyTorch version. To the best of my knowledge, backward compatibility is included in most drivers: for example, a driver that supports CUDA 10.1 (reported via nvidia-smi) will likely also support CUDA 8, 9, and 10.0.
If you installed with pip or conda, then a version of CUDA and cuDNN is included with the install. You can query the actual versions being used in Python with torch.version.cuda and torch.backends.cudnn.version().
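For example (a quick check, run in the same environment as the install):

import torch

print(torch.version.cuda)              # e.g. "10.1": the CUDA bundled with this build
print(torch.backends.cudnn.version())  # e.g. 7605, meaning cuDNN 7.6.5
print(torch.cuda.is_available())       # True if the driver is new enough to use them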

What is the difference between lcuda and lcudart in CUDA SDK 7.5?

I have the Nvidia SDK 7.5. I want to compile OpenMP for CUDA. On the step "Clone, build & install OpenMP target backends" I get: [screenshot of the error output]
Yet I see no libcuda.lib at all in my SDK: [screenshot of the SDK library listing]
So I wonder: what is the difference between lcuda and lcudart, and where, if anywhere, can one find lcuda in the CUDA 7.5 SDK?
The l you are asking about in "lcuda" and "lcudart" is actually part of the linker switch (-l). The libraries themselves are cuda (libcuda) and cudart (libcudart).
On Linux, these libraries don't end in .lib either; they end in .so. That's pretty evident in your printout: take a look.
Anyway, libcuda.so is installed by the GPU driver installer, not the toolkit installer.
Except possibly for stubs (which you should not use), libcuda.so will not be found in the toolkit location but somewhere else, wherever the driver puts it. The exact location varies depending on the specific Linux distro.
So I suggest using find or a similar Linux command to locate it on your machine. It may be in /usr/lib or /usr/lib64 or someplace like that.
If you can't find it at all, then the likely explanation is that you have not (properly) installed the GPU driver on your machine.
In a nutshell, libcuda.so provides access to the CUDA driver API, whereas libcudart.so provides access to the CUDA runtime API.
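You can see the split for yourself from Python with ctypes (a rough sketch; the exact sonames and paths vary by install):

import ctypes

# libcuda.so ships with the GPU driver and exposes the driver API.
driver = ctypes.CDLL("libcuda.so.1")
# libcudart.so ships with the toolkit and exposes the runtime API
# (you may need its full path if it is not on the loader's search path).
runtime = ctypes.CDLL("libcudart.so")

version = ctypes.c_int()
driver.cuDriverGetVersion(ctypes.byref(version))
print("driver API version:", version.value)    # e.g. 7050 for CUDA 7.5

runtime.cudaRuntimeGetVersion(ctypes.byref(version))
print("runtime API version:", version.value)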
By the way, in the future, please post text, not pictures, for actual text output from your console.

Installing MonoDevelop on Suse Enterprise 11.0

I tried to install MonoDevelop on Suse 11.0 Enterprise, using the 1-click install on the MonoDevelop download page, but quickly wound up in a tangle of missing dependencies. I then tried using the Suse software repositories to get MonoDevelop, and waded through the dependencies for a while trying to get the necessary packages, but some of the packages in the Suse repositories actually appear to be missing the needed RPM files. Are these repositories no longer being actively maintained?
I am aware that there is a CD on the Mono site (called the Mono LiveCD) that appears to contain a complete installation of the development environment, as well as a DVD for OpenSuse 11.2 (on the OpenSuse site) that might actually have all of the Mono software already installed. But the target environment for the utility I am writing is Suse 11.0 Enterprise Server. Does that matter?
What is the shortest distance between two points here?
"...that might actually have all of the Mono software already installed."
It has all of the Mono software installed (of course, it depends on what you mean by "all"). The last time I tried it, the installation included Mono itself, MonoDevelop, MoMA, and some software written in Mono (gnome-do, for example). I did not perform any additional steps to migrate my solution from a Windows machine. One thing worth mentioning: MonoDevelop crashed several times, but I think it was memory-related (the image had access to only 512 MB of RAM).
"But the target environment for the utility I am writing is Suse 11.0 Enterprise Server. Does that matter?"
In theory, it does not. You will have to check Mono versions, of course, but I don't think you will face any problems during the migration process. As far as I know, Suse 11.0 comes with Mono 2.0 pre-installed, while with the latest LiveCD you would be developing against Mono 2.6. There were some major changes (partial LINQ to SQL support, for example), so you will have to compare the changelogs, of course.
If you're trying to install Mono on SUSE Linux Enterprise Server, you need to download the Mono Extension Pack (available here). In SLES 11 it was split out into a separate product in order to decouple its support life cycle from that of the general distribution. This allows them to release more often.
The add-on is available for x86, x86_64, and IBM System z.
