Building pytorch on the NVIDIA Jetson Nano developer kit - pytorch

I'm working on a Jetson Nano, and trying to install pytorch 1.4.0 onto it to run some toy experiments.
However, I'm running into a lot of trouble with this. After failing to get the prebuilt wheels working, I've resorted to building from source, but after a couple of hours the build fails with the following error.
[3249/3931] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o
FAILED: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o
cd /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda && /usr/bin/cmake -E make_directory /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/. && /usr/bin/cmake -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o -D generated_cubin_file:STRING=/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o.cubin.txt -P /home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o.Release.cmake
Killed
CMake Error at torch_cuda_generated_Unique.cu.o.Release.cmake:281 (message):
Error generating file
/home/workingdir/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o
Does anyone know how to interpret this? Did I run out of memory/swap space?
Additionally, if anyone knows of an easier way to get pytorch>=1.1.0 on my nano, any tips would be appreciated :)
I followed this thread here both for the prebuilt installation and the scratch installation: https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available/72048
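In case it matters, here is roughly what I plan to try next, on the assumption that the bare "Killed" line means the kernel's out-of-memory killer ended the nvcc process (the commands below are a sketch, untested on my board, and the 4G swap size is just a guess):
# Check whether the OOM killer ended the compiler process
dmesg | grep -iE 'killed process|out of memory'
free -h
# Add a swap file so the build has more headroom
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Rebuild with a single compile job to keep peak memory down
# (pytorch's build should respect MAX_JOBS)
export MAX_JOBS=1
python3 setup.py bdist_wheel   # or whichever build command you were using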

Related

How to install valgrind on linux

[screenshot of the ./configure output]
I have basically tried every tutorial out there and still can't run valgrind.
So far:
I downloaded valgrind from their website into a directory called 'memcheck' and extracted it with:
tar xvf valgrind-3.18.1.tar.bz2
The screenshot above shows the outcome of ./configure; I can't tell whether it was successful or not.
Then the command make gives: make: *** No targets specified and no makefile found. Stop.
The same happens for make install.
That is what I have tried so far. How do I install valgrind properly?
If the output from configure contains "configure: error:" then it failed.
Installing with your package manager will be easiest.
Otherwise, to build from source you will need the following (a sketch of the usual commands follows this list):
A C compiler (e.g., gcc or clang), always.
GNU make, always.
Perl, probably always.
Sed and awk, always.
Autotools and m4, if you are regenerating the configure script.
Lots of packages if you want to generate the docs.
A C++ compiler (g++ or clang++) if you want to build and run the regression tests.
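The "No targets specified and no makefile found" message just means make was run in a directory with no Makefile, i.e. not the directory where configure actually ran. As a rough sketch of the from-source route (the memcheck/valgrind-3.18.1 path is assumed from your description, and the prerequisite package names are the Debian/Ubuntu ones):
# Prerequisites from the list above (package names assumed for Debian/Ubuntu)
sudo apt-get install build-essential perl
# Configure, build and install from inside the extracted source tree
cd memcheck/valgrind-3.18.1
./configure --prefix=/usr/local
make
sudo make install
# Or simply use the distribution package instead
sudo apt-get install valgrind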

Install openmpi and compilation failed with linking mpi_cxx

Hi, all
I am currently installing openmpi-4.1.1 on Ubuntu 18.04 from the tar.gz file.
However, when I use the nvcc (CUDA 11.2.2) compiler with -lmpi_cxx, it reports that this library does not exist.
Is there anything wrong with how I am building and installing openmpi?
I used the following commands to build openmpi with CUDA-aware support:
./configure --with-cuda
make -j8 install
If I remove -lmpi_cxx and keep only -lmpi, the linker reports errors like:
undefined reference to `MPI::Comm::Comm()'
Thanks a lot!
I just figured this out myself.
I needed to enable the C++ bindings of MPI when building openmpi.
Here are the commands:
./configure --enable-mpi-cxx --with-cuda
make all install
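For anyone wanting to double-check a rebuild like this, the following commands should confirm the C++ bindings are present (the /usr/local prefix and the file name my_prog.cu are only assumptions, and ompi_info's exact wording may differ between versions):
# The wrapper compiler prints the link flags it would use;
# after --enable-mpi-cxx this should include -lmpi_cxx
mpicxx --showme:link
# ompi_info summarises how the installed library was configured
ompi_info | grep -i bindings
# Linking with nvcc directly: point it at the Open MPI install
# and keep -lmpi_cxx ahead of -lmpi
nvcc my_prog.cu -o my_prog -I/usr/local/include -L/usr/local/lib -lmpi_cxx -lmpi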

arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file

If you have an answer for this, or further information, I'd welcome it. I'm following advice from here to offer some unsolicited help by posting this question and then an answer I've already found for it.
I have a bare-metal ARM board for which I'm building a cross-toolchain, from sources for GNU binutils, gcc and gdb, and for SourceWare's Newlib. I got those four working and cross-built a DoNothing.c into an ELF file - but I couldn't disassemble it with this:
$ arm-none-eabi-objdump -S DoNothing.elf
The error was:
$ arm-none-eabi-objdump: error while loading shared libraries: libdebuginfod.so.1: cannot open shared object file: No such file or directory
I'll follow up with a solution.
The error was correct - my system didn't have libdebuginfod.so.1 installed - but I have another cross-binutils, installed from binary for a different target, and its objdump -S works fine on the same host. Why would one build of objdump complain about missing that shared library, when clearly not all builds of objdump need it?
First I tried rebuilding cross binutils, specifying --without-debuginfod as a configure option. No change, which seems odd: surely that should build tools that not only don't use debuginfod but which don't depend on it in any way. (If someone can answer that, or point out what I've misunderstood, it may help people.)
Next I figured debuginfod was inescapable (for my cross-tools built from source at least), so I'd install it to get rid of the error. It's a component of the elfutils package, but installing the latest elfutils available for my Ubuntu 20.04 system didn't bring libdebuginfod.so.1 with it.
I found a later one, for Arch Linux, whose package contents suggested it would - but its package format doesn't match Ubuntu's and installing it was going to involve a lot of work. Instead I opted to build it from the Arch Linux source package. However, running ./configure on that gave a couple of infuriatingly similar errors:
configure: checking libdebuginfod dependencies, --disable-libdebuginfod or --enable-libdebuginfo=dummy to skip
...
configure: error: dependencies not found, use --disable-libdebuginfod to disable or --enable-libdebuginfod=dummy to build a (bootstrap) dummy library.
No combination of those suggestions would allow configure for elfutils-0.182 to run to completion.
The problem of course was my own lack of understanding. The solution came from the Linux From Scratch project: what worked was to issue configure with both of the suggested options, like this:
$ ./configure --prefix=/usr \
--disable-debuginfod \
--enable-libdebuginfod=dummy \
--libdir=/lib
That gave a clean configure; make worked first time, as did make check and then sudo make install which of course installed libdebuginfod.so.1 as required. I then had an arm-none-eabi-objdump which disassembles cross-compiled ELF files without complaining.
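For completeness, the way I checked what was going on, and later that the fix had worked, was simply to inspect the shared-library dependencies; these are plain ldd/ldconfig commands, nothing specific to binutils:
# Which shared libraries does the self-built objdump depend on?
ldd $(which arm-none-eabi-objdump) | grep -i debuginfod
# Is libdebuginfod.so.1 known to the dynamic linker at all?
ldconfig -p | grep debuginfod
# After installing the dummy library from elfutils, the previously
# "not found" entry should resolve to a real path
ldd $(which arm-none-eabi-objdump)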

problems building CodeLite

Having a heck of a time trying to build CodeLite for an ARM-based Ubuntu Linux target. (Build instructions here: http://codelite.org/Developers/Linux). I get an error from CMake that says Could not locate GTK2. Looking in the CMakeLists.txt file, I can see that this is a result of find_package(GTK2) failing to find GTK2. I believe I installed GTK as the CodeLite build instructions describe, using the command sudo apt-get install libgtk2.0-dev.
In terms of cmake, I don't understand what a "package" is. How would I [manually] locate this package on my filesystem and how do I get cmake to find it?
For my aarch64 ubuntu 17.04, the libraries and headers were under /usr/lib/aarch64-linux-gnu, so invoking cmake with them produced the correct build files:
cmake -DCMAKE_INCLUDE_PATH=/usr/lib/aarch64-linux-gnu/ -DCMAKE_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/ -DCMAKE_BUILD_TYPE=Release .. -DCOPY_WX_LIBS=1
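If find_package(GTK2) still fails after installing libgtk2.0-dev, a few generic checks can show where the development files actually live and let you point CMake at them (package and path names below are the Ubuntu ones; nothing is CodeLite-specific):
# Where did libgtk2.0-dev put the headers and shared library?
dpkg -L libgtk2.0-dev | grep -E 'gtk/gtk\.h|libgtk.*\.so$'
# pkg-config reports the include and library flags for GTK2
pkg-config --cflags --libs gtk+-2.0
# If they are in a non-standard prefix, tell CMake where to search
cmake -DCMAKE_PREFIX_PATH=/usr/lib/aarch64-linux-gnu ..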

How to compile c++ programs in the new c++ driver provided by Datastax in Linux

I am new to Cassandra. I installed the C++ driver from Datastax. Can someone please provide the steps, like in which path I have to create the '.c' file and how I can compile it? I can see some example programs in the examples folder. Can anyone please tell me how to compile the example programs?
The cpp-driver uses cmake and depends on libuv. So the first steps would be to ensure you have cmake installed as well as libuv. Depending on your Linux distribution it may be as simple as using a package manager like apt or yum (e.g. sudo apt-get install cmake libuv-dev).
Building is just a matter of running the following steps in the cpp-driver directory:
cmake .
make
sudo make install
This will install libcassandra.so somewhere in your lib path. You can then link by providing '-lcassandra' in your parameters to clang or gcc (e.g. clang myfile.c -o myfile -lcassandra).
There is very comprehensive documentation on building from source here.
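As a rough sketch of the compile-and-run cycle for one of the bundled examples (the examples/basic/basic.c path is only illustrative, and /usr/local is assumed as the install prefix):
# Compile an example against the installed driver
gcc examples/basic/basic.c -o basic -I/usr/local/include -L/usr/local/lib -lcassandra
# If the binary cannot find libcassandra.so at run time,
# refresh the linker cache or set LD_LIBRARY_PATH
sudo ldconfig
LD_LIBRARY_PATH=/usr/local/lib ./basic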
