I am trying to install the Intel OpenMP runtime on my Linux box so that I can enable OpenMP support in the clang compiler. I am following this guide: http://clang-omp.github.io/.
I've gotten to the point where the compiler is built, recognizes -fopenmp, and knows where omp.h is. Now the only thing missing is that the linker can't find -liomp5, which I assume is the Intel OpenMP runtime. The instructions at the aforementioned website say that I need to do this:
To run (rather than just compile) code you need to get and build an Intel® OpenMP* Runtime Library.
So I downloaded the runtime, and was presented with the following choice of compilers in the readme:
Supported Architectures: IA-32 architecture, Intel(R) 64, and
Intel(R) Many Integrated Core Architecture
--------------------------------------------------------------------------
|              |           icc/icl            |           gcc            |
|--------------|------------------------------|--------------------------|
| Linux* OS    | Yes(1,5)                     | Yes(2,4)                 |
| OS X*        | Yes(1,3,4)                   | No                       |
| Windows* OS  | Yes(1,4)                     | No                       |
--------------------------------------------------------------------------
(1) On IA-32 architecture and Intel(R) 64, icc/icl versions 12.x are
supported (12.1 is recommended).
(2) gcc version 4.6.2 is supported.
(3) For icc on OS X*, OS X* version 10.5.8 is supported.
(4) Intel(R) Many Integrated Core Architecture not supported.
(5) On Intel(R) Many Integrated Core Architecture, icc/icl versions 13.0
or later are required.
Which, unless my reading comprehension skills are worse than I thought, suggests that I can use gcc 4.6.2 to compile the runtime. Unfortunately, when I proceed to build it, I get:
$ make compiler=gcc
[...]
----- 1/1 --- making lib inc common -----
Build : 00000000 (development)
check-tools.pl: (!) No "gfortran" found in PATH.
check-tools.pl: (!) No "icc" found in PATH.
check-tools.pl: (!) No "icpc" found in PATH.
check-tools.pl: (!) No "ifort" found in PATH.
../../tools/src/common-checks.mk:59: *** Development tools not found: icc, icpc, ifort. Stop.
I am really at a loss here. Has anyone had success using clang-omp, and is this step of building the Intel runtime actually required? Either I am really bad at following instructions or something else is going wrong; I could use some assistance. Thank you.
The first error message is the key one. You need to install gfortran, which is the optional Fortran compiler that is part of the GNU Compiler Collection (which is what GCC, the project, now stands for - as opposed to gcc, the executable, which is a C compiler).
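If you are on a Debian/Ubuntu-style system (an assumption; package names vary by distribution), installing gfortran and retrying should get past the check:

$ sudo apt-get install gfortran   # package name varies by distribution
$ make compiler=gcc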
I am very confused by the different CUDA versions shown by running which nvcc and nvidia-smi. I have both cuda-9.2 and cuda-10 installed on my Ubuntu 16.04 machine, and I have set PATH to point to cuda-9.2. So when I run
$ which nvcc
/usr/local/cuda-9.2/bin/nvcc
However, when I run
$ nvidia-smi
Wed Nov 21 19:41:32 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.72 Driver Version: 410.72 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... Off | 00000000:01:00.0 Off | N/A |
| N/A 53C P0 26W / N/A | 379MiB / 6078MiB | 2% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1324 G /usr/lib/xorg/Xorg 225MiB |
| 0 2844 G compiz 146MiB |
| 0 15550 G /usr/lib/firefox/firefox 1MiB |
| 0 19992 G /usr/lib/firefox/firefox 1MiB |
| 0 23605 G /usr/lib/firefox/firefox 1MiB |
So am I using cuda-9.2, as which nvcc suggests, or cuda-10, as nvidia-smi suggests? I saw this answer, but it does not directly resolve the confusion; it just asks us to reinstall the CUDA toolkit, which I have already done.
CUDA has two primary APIs: the runtime API and the driver API. Both have a corresponding version (e.g. 8.0, 9.0, etc.).
The necessary support for the driver API (e.g. libcuda.so on linux) is installed by the GPU driver installer.
The necessary support for the runtime API (e.g. libcudart.so on linux, and also nvcc) is installed by the CUDA toolkit installer (which may also have a GPU driver installer bundled in it).
In any event, the (installed) driver API version may not always match the (installed) runtime API version, especially if you install a GPU driver independently from installing CUDA (i.e. the CUDA toolkit).
The nvidia-smi tool gets installed by the GPU driver installer, and generally has the GPU driver in view, not anything installed by the CUDA toolkit installer.
Recently (somewhere between 410.48 and 410.73 driver version on linux) the powers-that-be at NVIDIA decided to add reporting of the CUDA Driver API version installed by the driver, in the output from nvidia-smi.
This has no connection to the installed CUDA runtime version.
nvcc, the CUDA compiler-driver tool that is installed with the CUDA toolkit, will always report the CUDA runtime version that it was built to recognize. It doesn't know anything about what driver version is installed, or even if a GPU driver is installed.
Therefore, by design, these two numbers don't necessarily match, as they are reflective of two different things.
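If you want to see both numbers programmatically, the CUDA runtime API exposes them via cudaRuntimeGetVersion and cudaDriverGetVersion (versions are encoded as 1000*major + 10*minor). A minimal sketch, with error handling trimmed:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtimeVersion = 0, driverVersion = 0;
    // The CUDA runtime version this program was built/linked against
    cudaRuntimeGetVersion(&runtimeVersion);
    // The highest CUDA version the installed GPU driver supports
    cudaDriverGetVersion(&driverVersion);
    printf("runtime: %d.%d, driver: %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10,
           driverVersion / 1000, (driverVersion % 100) / 10);
    return 0;
}

Compiled with nvcc (e.g. nvcc version_check.cu -o version_check, where the file name is arbitrary), the two numbers it prints correspond to what nvcc -V and nvidia-smi report, respectively.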
If you are wondering why nvcc -V displays a version of CUDA you weren't expecting (e.g. it displays a version other than the one you think you installed), or doesn't display anything at all version-wise, it may be because you haven't followed the mandatory instructions of the CUDA Linux install guide (step 7 prior to CUDA 11, step 6 in the CUDA 11 Linux install guide).
Note that although this question mostly has linux in view, the same concepts apply to windows CUDA installs. The driver has a CUDA driver version associated with it (which can be queried with nvidia-smi, for example). The CUDA runtime also has a CUDA runtime version associated with it. The two will not necessarily match in all cases.
In most cases, if nvidia-smi reports a CUDA version that is numerically equal to or higher than the one reported by nvcc -V, this is not a cause for concern. That is a defined compatibility path in CUDA (newer drivers/driver API support "older" CUDA toolkits/runtime API). For example if nvidia-smi reports CUDA 10.2, and nvcc -V reports CUDA 10.1, that is generally not cause for concern. It should just work, and it does not necessarily mean that you "actually installed CUDA 10.2 when you meant to install CUDA 10.1"
If the nvcc command doesn't report anything at all (e.g. Command 'nvcc' not found...) or if it reports an unexpected CUDA version, this may also be due to an incorrect CUDA install, i.e. the mandatory steps mentioned above were not performed correctly. You can start to figure this out by using a Linux utility like find or locate (use the man pages to learn how, please) to find your nvcc executable. Assuming there is only one, the path to it can then be used to fix your PATH environment variable. The CUDA Linux install guide also explains how to set this. You may need to adjust the CUDA version in the PATH variable to match your actual desired/installed CUDA version.
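For example (the paths below are illustrative; substitute whatever find reports on your system):

$ find /usr/local -name nvcc 2>/dev/null
/usr/local/cuda-10.1/bin/nvcc
$ export PATH=/usr/local/cuda-10.1/bin${PATH:+:${PATH}}
$ nvcc -V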
Similarly, when using docker, the nvidia-smi command will generally report the driver version installed on the base machine, whereas other version methods like nvcc --version will report the CUDA version installed inside the docker container.
Similarly, if you have used another installation method for the CUDA "toolkit" such as Anaconda, you may discover that the version indicated by Anaconda does not "match" the version indicated by nvidia-smi. However, the above comments still apply. Older CUDA toolkits installed by Anaconda can be used with newer versions reported by nvidia-smi, and the fact that nvidia-smi reports a newer/higher CUDA version than the one installed by Anaconda does not mean you have an installation problem.
Here is another question that covers similar ground. The above treatment does not in any way indicate that this answer is only applicable if you have installed multiple CUDA versions intentionally or unintentionally. The situation presents itself any time you install CUDA. The version reported by nvcc and nvidia-smi may not match, and that is expected behavior and in most cases quite normal.
nvcc is in the CUDA bin folder, so check whether the CUDA bin folder has been added to your $PATH.
Specifically, ensure that you have carried out the CUDA Post-Installation actions (e.g. from here):
Add the CUDA bin directory to $PATH (i.e. add the following line to your ~/.bashrc):
export PATH=/usr/local/cuda-10.1/bin:/usr/local/cuda-10.1/NsightCompute-2019.1${PATH:+:${PATH}}
PS. Ensure that the two paths above exist first: /usr/local/cuda-10.1/bin and /usr/local/cuda-10.1/NsightCompute-2019.1 (the NsightCompute path could have a slightly different ending depending on the version of Nsight Compute installed).
Update $LD_LIBRARY_PATH (i.e. add the following line to your ~/.bashrc):
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64\
${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
After this, both nvcc and nvidia-smi (or nvtop) report the same version of CUDA...
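To pick up the changes in your current shell and verify them (a quick check; the source step is only needed once per shell):

$ source ~/.bashrc
$ which nvcc
$ nvcc -V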
If you are using CUDA 10.2:
export PATH=/usr/local/cuda-10.2/bin:/opt/nvidia/nsight-compute/2019.5.0${PATH:+:${PATH}}
might help, because when I checked, there was no nsight-compute directory under cuda-10.2.
I am not sure whether this was just a problem on my machine; otherwise, why wouldn't they mention it in the official documentation?
Adding onto Robert Crovella's answer...
The difference between the device driver and the CUDA runtime is that with the device driver alone you can run compiled CUDA C code; that is, you can download CUDA-powered applications and they will successfully execute their code on your GPU.
With the runtime (installed as part of the toolkit) you can also compile CUDA C code, which is then executed on your GPU with the help of the device driver.
Section 2.2.3 - CUDA Development Toolkit
nvidia-smi can show a “different CUDA version” from the one reported by nvcc because the two report different things:
nvidia-smi shows the maximum CUDA version supported by the installed GPU driver.
nvcc -V reports the CUDA version that is currently installed and being used by the system.
In short
nvidia-smi shows the highest version of CUDA supported by your driver. nvcc -V shows the version of the current CUDA installation. As long as your driver-supported version is higher than your installed version, it's fine. You can even have several versions of CUDA installed at the same time.
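One common way to keep several toolkits side by side is to install each under /usr/local and select one via PATH (a sketch, assuming the default install locations; the directory listing is illustrative):

$ ls -d /usr/local/cuda-*
/usr/local/cuda-9.2  /usr/local/cuda-10.0
$ export PATH=/usr/local/cuda-9.2/bin${PATH:+:${PATH}}
$ nvcc -V    # now reports the 9.2 toolkit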
For my current project, I need to use CUDA together with the Intel C/C++ compilers. (I rely on the SSYEV implementation in Intel's MKL, which takes roughly ten times as long when using GCC+MKL instead of ICC+MKL: ~3 ms with GCC vs. ~300 µs with ICC.)
icc -v
icc version 12.1.5
NVIDIA states that Intel ICC 12.1 is supported (http://docs.nvidia.com/cuda/cuda-samples/index.html#linux-platforms-supported), but even after downgrading to Intel ICC 12.1.5 (installed as part of Intel Composer XE 2011 SP1 Update 3), I am still running into this issue:
nvcc -ccbin=icc src/test.cu -o test
/usr/local/cuda-5.5/bin//..//include/host_config.h(72): catastrophic error: #error directive: -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
Unfortunately, it seems as if NVIDIA is merely tolerating the use of ICC; I would hardly call it "support", given the lack of information NVIDIA provides about using ICC together with CUDA.
I am running Ubuntu 12.10 x86_64 and CUDA 5.5. Telling icc to mimic the behavior of the stock GCC 4.7.2 using the -Xcompiler -gcc-version=470 option did not help either. Searching online, I was only able to find threads from the NVIDIA forums dealing with CUDA 3.x and Intel ICC 11.1, and I was unable to transfer that information to current CUDA releases.
I would be very grateful for any suggestion on how to solve this issue :-)
Referring to the file referenced in the error you received, it's specifically looking for an ICC compiler with a particular build date:
#if defined(__ICC)
#if !(__INTEL_COMPILER == 9999 && __INTEL_COMPILER_BUILD_DATE == 20110811) || !defined(__GNUC__) || !defined(__LP64__)
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#endif
#endif /* __ICC */
The solution would be to use the Intel compiler that actually matches the specified build date. As indicated, ICC 12.1 proper, i.e. version 12.1.0.233, instead of ICC 12.1.5, should do the trick.
The narrow focus is at least partly due to a test limitation. In this case, a particular ICC variant was tested with the CUDA toolkit before it was released, and so that host config check has this test in it.
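You can check which build date your installed compiler reports by dumping its predefined macros (this assumes your icc accepts the gcc-style -E/-dM preprocessor flags, as gcc-compatible releases do):

$ icc -E -dM -x c /dev/null | grep __INTEL_COMPILER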
I confronted this problem when compiling madagascar-1.5 with icc 2013 and ifort 2013. I resolved it by downloading ICC 2011 update 7; based on the __INTEL_COMPILER_BUILD_DATE of 20110811, I was able to pick the correct release. The icc whose build date matches 20110811 is the right one.
According to the NVIDIA Programming Guide:
Any PTX code loaded by an application at runtime is compiled further
to binary code by the device driver. This is called just-in-time
compilation. Just-in-time compilation increases application load time,
but allows applications to benefit from latest compiler improvements.
...
Setting CUDA_FORCE_PTX_JIT to 1 forces the device driver to ignore any
binary code embedded in an application (see Section 3.1.4) and to
just-in-time compile embedded PTX code instead; if a kernel does not
have embedded PTX code, it will fail to load
I've compiled my simple vectorAdd with the following flags:
nvcc -o vectorAdd -gencode arch=compute_20,code=sm_20 vectorAdd.cu
When the CUDA_FORCE_PTX_JIT environment variable is unset, I get correct results. But when I set CUDA_FORCE_PTX_JIT=1, I get the following error from cudaGetErrorString:
invalid device function
How can I fix this issue and get CUDA_FORCE_PTX_JIT working? Maybe my compilation flags do not embed any PTX code.
Thanks in advance.
Further information:
CUDA Driver Version: 295.41
CUDA Toolkit version: 4.0
OS: Ubuntu 10.04
Hardware: GTX 480, or Tesla C2050
I found a workaround for the issue: during compilation, do not specify the target GPU at all (remove the -arch and -gencode flags). The driver then generates the device binary at run time.
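The suspicion in the question is right: code=sm_20 embeds only the device binary (SASS), not PTX. Another option (not from the original answer, but standard -gencode usage) is to keep the binary and also embed PTX by naming the virtual architecture in a second code clause:

$ nvcc -o vectorAdd -gencode arch=compute_20,code=sm_20 -gencode arch=compute_20,code=compute_20 vectorAdd.cu

You can confirm that PTX made it into the executable with cuobjdump -ptx vectorAdd; with PTX embedded, CUDA_FORCE_PTX_JIT=1 has something to JIT-compile.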
I've spent a lot of time setting up the CUDA toolchain on a machine running Ubuntu Linux (11.04). The rig has two NVIDIA Tesla GPUs, and I'm able to compile and run test programs from the NVIDIA GPU Computing SDK such as deviceQuery, deviceQueryDrv, and bandwidthTest.
My problems arise when I try to compile basic sample programs from books and online sources. I know you're supposed to compile with NVCC, but I get compile errors whenever I use it. Basically any sort of include statement involving CUDA libraries gives a missing file/library error. An example would be:
#include <cutil.h>
Do I need some sort of makefile to direct the compiler to these libraries or are there additional flags I need to set when compiling with NVCC?
I followed these guides:
http://hdfpga.blogspot.com/2011/05/install-cuda-40-on-ubuntu-1104.html
http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Getting_Started_Linux.pdf
To fix the include problems add the cuda include directory to your compilation options (assuming it is /usr/local/cuda/include):
nvcc -I/usr/local/cuda/include -L/usr/local/cuda/lib test.cu -o test
cutil is not part of the CUDA toolkit; it's part of the CUDA SDK. So, assuming you have followed the instructions and added the PATH and LIB directories to your environment variables, you still need to point the compiler at the CUDA SDK include and library directories.
To use that library manually, you must pass its paths to the compiler:
nvcc -I/CUDA_SDK_PATH/C/common/inc -L/CUDA_SDK_PATH/C/lib ...
Although I personally prefer not to use the CUDA SDK libraries, you will probably find it easier to start a project from a CUDA SDK example.
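If you would rather avoid the SDK dependency altogether, a minimal program that needs nothing beyond the toolkit headers might look like this (a sketch; file and kernel names are arbitrary, and nvcc finds its own headers, so no -I flag is required):

#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel: add 1 to each element
__global__ void addOne(int *x) { x[threadIdx.x] += 1; }

int main() {
    int h[4] = {0, 1, 2, 3};
    int *d = 0;
    cudaMalloc(&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    addOne<<<1, 4>>>(d);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    // Report any error from the kernel launch or the copies
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));
    printf("%d %d %d %d\n", h[0], h[1], h[2], h[3]);
    cudaFree(d);
    return 0;
}

Build and run with nvcc test.cu -o test && ./test; no SDK include or library paths are involved.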
I hit an error when installing JikesRVM, namely:
skipping incompatible /usr/lib64/gcc/x86_64-suse-linux/4.4/libstdc++.so when searching for -lstdc++
So I am trying to install/update gcc to a later version. The machine already has:
gcc (SUSE Linux) 4.4.1 [gcc-4_4-branch revision 150839]
I am new to openSUSE; could you help? Thanks!
You need to install 32 bit support for the GNU C/C++ compiler since JikesRVM on x86_64 currently supports only the 32 bit architecture.
To install this support on openSUSE 11.2, type:
sudo zypper install gcc44-32bit gcc-32bit libstdc++44-devel-32bit
The first two provide runtime support for the C language and the 32-bit version of libgcc, the GCC low level runtime library. The third provides the 32-bit version of libstdc++, both the static import library and the dynamic library. It is the static libstdc++.a that was missing for JikesRVM.
To verify that the 32bit C++ build system is installed correctly you can test it with the following
echo "int main(){}" | g++ -x c++ -m32 -
Note: official support for openSUSE 11.2 has ended. Evergreen support will be available through 2013, but it is reasonable to update to 11.3 or a later version soon.