I'm trying to compile Dolphin, a Nintendo emulator. I downloaded the trunk and it compiled fine, but it won't link. Instead, I get this error:
Linking CXX shared library ../../../Binaries/plugins/libPlugin_VideoOGL.so
/usr/bin/ld: cannot find -lOpenCL
collect2: ld returned 1 exit status
make[2]: *** [Binaries/plugins/libPlugin_VideoOGL.so] Error 1
make[1]: *** [Source/Plugins/Plugin_VideoOGL/CMakeFiles/Plugin_VideoOGL.dir/all] Error 2
make: *** [all] Error 2
It looks like I need some sort of OpenCL library. I did some research and somehow got the idea that I needed to install fglrx. It did not go smoothly: when I tried to run fglrxinfo, I got a segfault... nice. My video wasn't totally hosed, so I tried make again and got the same OpenCL error above. I just want to get Dolphin working. What are my options?
Here are my specs:
Ubuntu 10.04
Radeon X1200 (RS690M)
You need to install the ATI Stream SDK. I think Ubuntu 10.10 includes the packages; if not, you can find the installer here.
Also your GPU doesn't support OpenCL, so you will get only the CPU OpenCL device, without much performance improvement.
Although your card doesn't support OpenCL, the answer to the question is that you need to have libOpenCL.so in /usr/lib/.
-lOpenCL is a flag passed to the linker telling it to link against the OpenCL library.
This library is included in both the ATI Stream SDK (for ATI users) and the NVIDIA GPU Computing SDK (for NVIDIA users).
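As a quick sanity check (the paths below are assumptions; adjust them to wherever your SDK actually installed its libraries), you can verify that the linker can see libOpenCL.so and, if it lives somewhere non-standard, make it visible:
# See whether any libOpenCL is registered with the runtime linker
ldconfig -p | grep -i libopencl
# Hunt for a copy the SDK may have dropped outside the default search path
find /usr /opt -name 'libOpenCL.so*' 2>/dev/null
# Example only: if the SDK installed to /opt/ati-stream-sdk (assumed path),
# symlink the library into /usr/lib so -lOpenCL can resolve
sudo ln -s /opt/ati-stream-sdk/lib/x86/libOpenCL.so /usr/lib/libOpenCL.so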
I have an old GPU and PyTorch says it is too old to be supported:
Found GPU0 GeForce GTX 670 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
First question: what do this 3.0 and 3.5 refer to? Clearly not the CUDA, NVIDIA driver, or PyTorch version?
Secondly, I know I can build PyTorch from source to get broader support, but it's unclear what hardware PyTorch supports when built from source, or even when installed with pip. I'm at the point where the PyTorch version I likely need requires Python 2 and CUDA 9, so I'm spending solid time changing a bunch of software versions without knowing whether any of it will work.
Also, if I were to get a newer GPU, I don't know whether it would currently be supported.
Is there a list of NVIDIA hardware supported by PyTorch? Ideally for each release, but at least for the current one?
I run an NVIDIA GTX 670 with NVIDIA driver 430.50. I changed from CUDA 10 to 9, but since I might have to build PyTorch 0.3.1, I think CUDA 8 is needed, because when running python setup.py install in v0.3.1 I get:
CMakeFiles/THC.dir/build.make:560: recipe for target 'CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o' failed
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o] Error 1
CMakeFiles/Makefile2:72: recipe for target 'CMakeFiles/THC.dir/all' failed
make[1]: *** [CMakeFiles/THC.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
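For what it's worth, the 3.0 / 3.5 in that message is the card's CUDA compute capability, not a CUDA toolkit, driver, or PyTorch version. A quick way to query it from an installed build, and to pin the target architectures when compiling from source, might look like the sketch below; the 3.0 value matches a GTX 670, but whether a given PyTorch tag still accepts it is something you'd have to verify against that release:
# Print the compute capability of GPU 0 as reported by PyTorch
python -c "import torch; print(torch.cuda.get_device_capability(0))"
# When building from source, limit compilation to the architectures you need,
# e.g. compute capability 3.0 for a GTX 670 (requires a CUDA toolkit and
# PyTorch tag that still support sm_30)
TORCH_CUDA_ARCH_LIST="3.0" python setup.py install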
I am doing research on 4G SDN-LTE (software-defined networking / Long Term Evolution) using Ubuntu. First I ran an .sh file, which gave me an error that kernels newer than 3.14.x are not supported. I installed kernels 3.10, 3.13.11, and 3.14, which fixed that error. I'm currently running Linux kernel 3.13.11, and now it gives me an error that the stack protector is enabled but the compiler is broken.
I am building a 4G evolved packet core (EPC). The code is available on GitHub: https://github.com/networkedsystemsIITB/SDN_LTE_EPC/find/master. They used Ubuntu 14.04 for this project; they haven't mentioned anything else.
My compiler version is 4:7.3.0 ubuntu 2.1
/usr/src/linux-headers-3.13.11-031311ckt39-generic/arch/x86/Makefile:96: stack-protector enabled but compiler support broken
Makefile:604: Cannot use CONFIG_CC_STACKPROTECTOR: -fstack-protector not supported by compiler
Makefile:604: *** missing separator. Stop.
make[4]: Leaving directory '/usr/src/linux-headers-3.13.11-031311ckt39-generic'
Makefile.main:70: recipe for target 'default' failed
make[3]: *** [default] Error 2
make[3]: Leaving directory '/home/shahrukh/shfiles/openvswitch-2.3.2/datapath/linux'
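One thing worth checking (the header path is taken from the log above; the rest is a generic sketch) is whether gcc really lacks -fstack-protector or whether the old kernel tree's detection script is what trips up on a newer compiler, assuming that script is shipped with your headers package:
# Does gcc itself accept -fstack-protector?
echo 'int main(void){return 0;}' | gcc -fstack-protector -x c - -o /dev/null \
    && echo "gcc supports -fstack-protector"
# Run the kernel headers' own detection script (prints y or n); use the
# x86_32 variant instead if your kernel is 32-bit
sh /usr/src/linux-headers-3.13.11-031311ckt39-generic/scripts/gcc-x86_64-has-stack-protector.sh gcc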
I'm using gsrc to compile glibc in order to use multiple versions of it on my machine. I'm using Debian, and when I try make -C pkg/gnu/libc install -j8 it just reports this error:
Segmentation fault make: *** [sysinstall-install] Error 1
Does anyone have ideas on what this could be?
I'm using Debian, and when I try make -C pkg/gnu/libc install -j8 it just reports this error
You didn't say how this libc was configured. In any case, incorrectly installing GLIBC on a system is a very easy way to render your system unbootable. This is not recommended for novices (and you appear to be a novice).
compile glibc in order to use multiple versions of it on my machine
See this answer on how that can be achieved.
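The usual way to keep an extra glibc around without endangering the system copy is to give it its own prefix and run programs through its dynamic loader explicitly. A minimal sketch, assuming an x86-64 Debian box and made-up paths (/opt/glibc-test, ./myprog):
# Build and install glibc into an isolated prefix (never into /usr or /lib)
mkdir glibc-build && cd glibc-build
/path/to/glibc-source/configure --prefix=/opt/glibc-test
make -j8
make install          # everything lands under /opt/glibc-test only
# Run a test program against the alternate glibc via its own loader
/opt/glibc-test/lib/ld-linux-x86-64.so.2 \
    --library-path /opt/glibc-test/lib:/usr/lib/x86_64-linux-gnu ./myprog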
I am new to the Linux kernel and trying to build my first kernel image. I am building Linux kernel 3.4.28 for the i686 architecture (32-bit).
After one and a half hours of building, I got the following errors:
BUILD arch/x86/boot/bzImage
Setup is 16540 bytes (padded to 16896 bytes).
System is 4667 kB
CRC c2376a1f
Kernel: arch/x86/boot/bzImage is ready (#1)
Building modules, stage 2.
MODPOST 3508 modules
ERROR: "__sync_fetch_and_and_4" [drivers/staging/line6/line6usb.ko] undefined!
ERROR: "__sync_fetch_and_or_4" [drivers/staging/line6/line6usb.ko] undefined!
WARNING: modpost: Found 28 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
But if I go and look in the "arch/x86/boot/compressed" directory, I find a vmlinux.bin image with an ELF header.
I wanted to know whether I can use this vmlinux binary for my purpose, or whether a completely successful build is required for this binary to be fully functional.
As long as nothing relies on the kernel module line6usb.ko, or on any modules built after it, you are likely OK.
My answer is the same as JimR's: the line6usb.ko kernel module is not that important. Disable it by first running make mrproper and then make menuconfig.
In menuconfig, find the driver under the staging drivers and disable it. Then cross-compile the kernel again.
Disable the module in the kernel configuration by choosing:
Device Drivers --->
    Staging drivers
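If you'd rather not navigate the menus, the kernel tree's scripts/config helper can switch the option off from the command line. The symbol name LINE6_USB is my assumption for the drivers/staging/line6 driver; confirm it against drivers/staging/line6/Kconfig in your tree:
# From the top of the kernel source tree
make mrproper                          # wipes previous build state, including .config
make defconfig                         # or copy back a saved .config instead
scripts/config --disable LINE6_USB     # assumed symbol for the line6usb staging driver
make oldconfig                         # let Kconfig resolve anything that depended on it
make -j4 bzImage modules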
I've spent a lot of time setting up the CUDA toolchain on a machine running Ubuntu Linux (11.04). The rig has two NVIDIA Tesla GPUs, and I'm able to compile and run test programs from the NVIDIA GPU Computing SDK such as deviceQuery, deviceQueryDrv, and bandwidthTest.
My problems arise when I try to compile basic sample programs from books and online sources. I know you're supposed to compile with NVCC, but I get compile errors whenever I use it. Basically any sort of include statement involving CUDA libraries gives a missing file/library error. An example would be:
#include <cutil.h>
Do I need some sort of makefile to direct the compiler to these libraries or are there additional flags I need to set when compiling with NVCC?
I followed these guides:
http://hdfpga.blogspot.com/2011/05/install-cuda-40-on-ubuntu-1104.html http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Getting_Started_Linux.pdf
To fix the include problems, add the CUDA include directory to your compilation options (assuming it is /usr/local/cuda/include):
nvcc -I/usr/local/cuda/include -L/usr/local/cuda/lib test.cu -o test
cutil is not part of the CUDA toolkit; it's part of the CUDA SDK. So, assuming you have followed the instructions and added the PATH and LIB directories to your environment variables, you still need to point to the CUDA SDK include and library directories.
In order to use that library manually, you must pass its paths to the compiler:
nvcc -I/CUDA_SDK_PATH/C/common/inc -L/CUDA_SDK_PATH/C/lib ...
Although I personally prefer not to use the CUDA SDK libraries, you will probably find it easier to start a project from a CUDA SDK example.
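Putting both answers together, a complete command line for a sample that includes cutil.h might look like the sketch below. The SDK path, the lib64 directory, the cutil library name, and test.cu are assumptions for a 64-bit CUDA 4.0-era install; adjust them to your setup:
# Assumed default SDK location for CUDA 4.0-era installs
export CUDA_SDK=$HOME/NVIDIA_GPU_Computing_SDK
# Run make inside $CUDA_SDK/C once first so libcutil_x86_64.a gets built
nvcc -I/usr/local/cuda/include -I$CUDA_SDK/C/common/inc \
     -L/usr/local/cuda/lib64 -L$CUDA_SDK/C/lib \
     test.cu -o test -lcutil_x86_64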