How do I get my CUDA specs on a Linux machine?

I'm accessing a remote machine that has a good NVIDIA card for CUDA computing, but I can't find a way to know which card it has and what its CUDA specs are (version, etc.). I used the "lspci" command in the terminal, but there is no sign of an NVIDIA card. I'm pretty sure it has one, and nvcc seems to be installed.
But I really want to figure out the card and the CUDA specs. Any ideas?
Thanks!

If you can find where the CUDA SDK directory has been installed, then you can just run the deviceQuery example, which will tell you all you need to know and more. The executable should be at $(SDK)/C/bin/linux/release/deviceQuery - if it's not there, you may need to build the samples first:
$ cd $(SDK)
$ make
$ ./C/bin/linux/release/deviceQuery
The CUDA SDK directory is typically named NVIDIA_GPU_Computing_SDK (more recent CUDA versions) or just NVIDIA_CUDA_SDK (older CUDA versions).
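Independently of the SDK samples, nvcc itself reports the toolkit version, and find can help locate the deviceQuery binary if you don't know where the SDK lives; for example (the search paths here are just a guess):
$ nvcc --version
$ find ~ /usr/local -name deviceQuery -type f 2>/dev/null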

If you have all of CUDA installed, then just run:
nvidia-smi
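On reasonably recent drivers, nvidia-smi can also report just the fields you care about (see nvidia-smi --help-query-gpu for the full list); for example:
$ nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv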

Related

clinfo error for opencl amd

I installed AMD APP SDK 3.0 on my laptop, which has a 3rd-generation Intel i5. I have no GPU other than the graphics built into the Intel processor.
I installed the SDK like this:
./AMD-APP-SDK-v3.0.130.136-GA-linux64.sh
My .bashrc file has:
export LD_LIBRARY_PATH=/home/roadeo/AMDAPPSDK-3.0/lib/x86_64/
export AMDAPPSDKROOT="/home/roadeo/AMDAPPSDK-3.0"
export OPENCL_VENDOR_PATH="/home/roadeo/AMDAPPSDK-3.0/etc/OpenCL/vendors/"
When I run clinfo to check whether OpenCL is installed properly, I get this error:
terminate called after throwing an instance of 'cl::Error'
what(): clGetPlatformIDs
Aborted (core dumped)
After some frustrated googling I installed fglrx with sudo apt-get. Now when I run clinfo I get a lot of details about OpenCL versions, the vendor, etc. I don't know whether fglrx is actually required.
What am I doing wrong? Any suggestions would be appreciated.
I'm not familiar with AMD drivers on Linux, but it seems to me that installing the SDK only installed a bunch of examples, header files, etc. but did not actually install any OpenCL runtimes. Installing fglrx probably installed the CPU runtime, in which case the only device you'll see listed is your CPU. If you want to write OpenCL code for your GPU, you'll need to look at Beignet: https://freedesktop.org/wiki/Software/Beignet/
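The abort you see comes from clinfo's C++ wrapper throwing cl::Error when clGetPlatformIDs fails, which is what happens when no OpenCL runtime/ICD is registered. A minimal check that prints the error code instead of aborting might look like this (just a sketch; the file name and build line, e.g. g++ check_cl.cpp -lOpenCL, are assumptions):
#include <cstdio>
#include <CL/cl.h>

int main() {
    cl_uint num_platforms = 0;
    // Ask the ICD loader how many platforms are registered.
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
    if (err != CL_SUCCESS) {
        // -1001 (CL_PLATFORM_NOT_FOUND_KHR) typically means no ICD/runtime is installed.
        printf("clGetPlatformIDs failed with error %d\n", err);
        return 1;
    }
    printf("Found %u OpenCL platform(s)\n", num_platforms);
    return 0;
}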

Linux Arch OpenCL ICD Loader - Nvidia GPU, Intel CPU

I am trying to run my OpenCL application on my Intel CPU and my Nvidia GPU at the same time, for load-balancing purposes. But I have the problem that only the Nvidia platform is detected.
I use this cl.hpp call:
#include <CL/cl.hpp>
std::vector<cl::Platform> m_platforms;
cl::Platform::get(&m_platforms);
My system setup:
Linux Arch 3.14.6-1-ARCH
Intel Haswell 4770K
Nvidia 250GTS
installed packages:
nvidia-libgl 337.25-1
opencl-nvidia 337.25-1
nvidia 337.25-1
nvidia-utils 337.25-1
mesa 10.2.1-1
opencl-headers12
intel-opencl-sdk 2014R1-2
opencl-icd 1.2.11.0-4
Both platforms (Intel, Nvidia) have their .icd files in /etc/OpenCL/vendors. So I don't know what else I can try to finally get the Intel CPU OpenCL platform recognized. Maybe you have some ideas?
For now, it would be best to uninstall both of these packages (intel-opencl-sdk and intel-opencl-runtime) and install beignet from the community repository.
sudo pacman -S --needed beignet
The package provides the same functionality and allows you to use the Intel GPU cores also.
I can confirm that it coexists well with other OpenCL platforms such as that provided by NVIDIA, as tested on an Optimus-capable ASUS G750JM. Currently switching via bbswitch and offloading GL-renders via PRIME and primus.
Some pipelines (pyrit) can use OpenCL simultaneously on both platforms, boosting the performance noticeably.
I found an additional package called "intel-opencl-runtime". The intel-opencl-sdk contains the .icd file, but that alone does not seem to be enough to run OpenCL apps on the Intel platform. To run OpenCL on the Intel platform it was necessary to also install the above-mentioned runtime package.
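Once the runtime/ICD situation is sorted out, a small listing program using the same cl.hpp API as in the question can confirm which platforms and devices the ICD loader actually sees; a sketch (compile with something like g++ list_cl.cpp -lOpenCL, the file name is made up):
#include <iostream>
#include <vector>
#include <CL/cl.hpp>

int main() {
    // Enumerate every platform registered with the ICD loader.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::cout << "Found " << platforms.size() << " platform(s)" << std::endl;
    for (size_t i = 0; i < platforms.size(); ++i) {
        std::cout << "Platform " << i << ": "
                  << platforms[i].getInfo<CL_PLATFORM_NAME>() << std::endl;
        // List every device (CPU, GPU, accelerator) on this platform.
        std::vector<cl::Device> devices;
        platforms[i].getDevices(CL_DEVICE_TYPE_ALL, &devices);
        for (size_t j = 0; j < devices.size(); ++j)
            std::cout << "  Device: "
                      << devices[j].getInfo<CL_DEVICE_NAME>() << std::endl;
    }
    return 0;
}
With both the NVIDIA and the Intel (or beignet) ICD working, both platforms should show up in the output.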

How to compile FFProbe as stand-alone application? (Mac/Linux/Win)

I'm trying to compile FFMPEG (I'm actually only looking for FFProbe, which is included in the FFMPEG tools) as a standalone application for MacOS X, Linux and possibly Windows.
By standalone I mean that the libraries (x264, OpenJPEG, etc.) are "embedded" into the executable, so that I have to distribute only one executable for FFProbe - I apologize that I don't know the proper lingo for this (please let me know); I'm a newbie when it comes to this.
I have started by trying to do this under MacOS X, without any luck, but I plan to do this for Linux and Windows as well.
I did see a few pre-compiled binaries that do this and do not depend on extra libraries, but the versions I have found so far are either not the current version (1.x) or do not include the libraries in the executable (evermeet).
I followed several guides (for example: FFMpeg MacOSX Compilation Guide, reneVolution), with or without the use of Brew, but none of these show me how to embed the libraries in the executable.
I assume this is an option to be set for linking.
I'm not sure if it's appropriate to ask this question for Win, Mac and Linux at the same time - if it's not appropriate: I'd like to start with doing this for the Mac version.
My system runs MacOS X 10.9, XCode 5.0.2, with commandline tools and brew installed. I'm only looking for an Intel binary, so not a universal or PowerPC binary.
For Windows I can use either Windows 8.1 or XP, for Linux I currently use Ubuntu 12, all of which are virtual machines.
You need to install FFmpeg (which includes FFProbe) with all the options; for example, on a Mac:
brew install ffmpeg --with-fdk-aac --with-ffplay --with-freetype --with-frei0r --with-libass --with-libvo-aacenc --with-libvorbis --with-libvpx --with-opencore-amr --with-openjpeg --with-opus --with-rtmpdump --with-schroedinger --with-speex --with-theora --with-tools
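Note that the brew formula links against shared libraries; the usual term for what the question asks for is static linking. When building FFmpeg from source, configure can be asked to link the external libraries statically, roughly like this (a sketch; the exact --enable-lib* options depend on which libraries you want baked in, --pkg-config-flags needs a reasonably recent FFmpeg, and a fully static binary against the system libraries is not really supported on OS X):
$ ./configure --enable-static --disable-shared --pkg-config-flags="--static" --enable-gpl --enable-libx264 --enable-libopenjpeg
$ make
ffprobe (and ffmpeg) are then produced in the top-level source directory.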

installing headers for 3.5 kernel in debian wheezy?

Yesterday, I compiled the 3.5 kernel on Debian wheezy (testing), on a ThinkPad Edge S430 (i5). I did it following this blog, with all the default options. It seemed successful, but then I tried to install the proprietary NVIDIA driver with m-a auto-install nvidia-kernel. The install cannot proceed until the correct headers are installed. However, I have tried manually installing both linux-headers-3.5.0-18 and the linux-headers-amd64 package, but module-assistant is not able to see them, showing the following message:
Bad luck, the kernel headers for the target kernel version could not be found and you did not specify other valid kernel headers to use.
There are other ways to install the driver, but I think that the problem with headers is broader.
Although I have been a Debian user for some years, I am far from being an expert, and I am not clear about the problems that I might face when compiling a 3.5 kernel on Debian testing, so any help and explanation will be much appreciated.
First run
sudo m-a prepare
which should report something like:
Getting source for kernel version: 3.8.5-ck1
Kernel headers available in /usr/src/linux-headers-3.8.5-ck1
Creating symlink..
Then do
sudo m-a a-i nvidia
and it should work.
Note that I did this on 3.8.5-ck1, but I built and installed that kernel in a similar fashion to how I wrote up the 3.5 build that you followed.
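If m-a prepare still cannot find headers, check that a headers package matching the running kernel is actually installed. For a stock Debian kernel that is usually just:
$ sudo apt-get install linux-headers-$(uname -r)
For a self-built 3.5 kernel, though, no such package exists in the archive; the headers have to come from the same build (for example, the linux-headers .deb produced by make deb-pkg), otherwise module-assistant will keep reporting that it cannot find them.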

Is CUDA installed correctly on my Ubuntu 10.04? Some samples don't run.

I am trying to install CUDA on a server running Ubuntu 10.04.
I followed the NVIDIA instructions and installed the "CUDA toolkit for Ubuntu Linux 10.04", the "GPU Computing SDK code samples", and the "Developer Drivers for Linux (260.19.26) (64 bit)"; my system is 64 bit. The installation seemed successful. Everything was downloaded from http://developer.nvidia.com/object/cuda_3_2_downloads.html#Linux
Following the messages from the installation packages, I added /usr/local/cuda/bin to PATH and /usr/local/cuda/lib64:/usr/local/cuda/lib to LD_LIBRARY_PATH.
Then I tried to run the sample programs. The strange thing is, some of them run and some of them don't, even though they all build with no problem.
For example:
- convolutionSeparable just hangs without any message; I can kill it with Ctrl+C.
- matrixMul outputs one line,
  Device 0: "Quadro 5000" with Compute 2.0 capability
  and then hangs; again it can be killed with Ctrl+C.
- clock works and outputs:
  PASSED
  time = 12574
  Press ENTER to exit...
- simpleMultiCopy outputs PASSED
- MonteCarlo outputs PASSED
- simpleZeroCopy outputs PASSED
- bandwidthTest hangs with a blinking cursor forever.
What is wrong here?! How can I check whether my CUDA installation is successful? What is wrong with the programs that don't run? They don't even print an error message.
I would start by upgrading the driver to 260.19.36, which can be found here. Then I would suggest running nvidia-smi -a to see if the driver is happy. Then I second the suggestion to run deviceQuery to see if the CUDA Toolkit 3.2 is working.
If deviceQuery output appears nominal, then I would start adding printf's to see where things go awry in matrixMul.
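When instrumenting matrixMul, it is usually more informative to check the return status of every CUDA runtime call than to rely on printf's alone; a minimal sketch of such a check (CHECK_CUDA is a made-up helper name, not part of the SDK):
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with a readable message as soon as any CUDA call fails.
#define CHECK_CUDA(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,        \
                    cudaGetErrorString(err));                         \
            exit(1);                                                  \
        }                                                             \
    } while (0)

int main() {
    int count = 0;
    CHECK_CUDA(cudaGetDeviceCount(&count));
    printf("CUDA devices visible: %d\n", count);
    // Wrap the suspect calls in matrixMul (cudaMalloc, cudaMemcpy, and
    // cudaGetLastError after each kernel launch) the same way.
    return 0;
}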
What does deviceQuery say? Also check the output of dmesg right after you run one of the hanging programs, to see if you can figure out what's up.
Another tip, if you are still having issues, is to try running:
strace ./deviceQuery 2> out.txt
Then check out.txt to see if you can find any clues about why this error is occurring.
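Failed attempts to open the NVIDIA device nodes usually stand out in that trace; for example (just an illustrative filter):
$ grep -E '/dev/nvidia|ENOENT|EACCES' out.txt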
I had a similar problem, but solved it by updating the kernel and drivers.
Install a newer kernel on 10.04:
linux-image-generic-pae-lts-backport-natty
linux-headers-generic-pae-lts-backport-natty
Download the latest NVIDIA driver
from http://www.nvidia.com/Download/index.aspx?lang=en-us
Install the latest CUDA (at the moment, 4.0) from
http://developer.nvidia.com/cuda-toolkit-40
CUDA Toolkit for Ubuntu Linux 10.10 32-bit
CUDA Tools SDK 32-bit
GPU Computing SDK code samples
Then I passed all the SDK example tests.
(ThinkPad W520, Quadro 1000, on Ubuntu 10.04)
