What is the toolchain to be used to compile Berkeley bootloader (bbl)? - riscv

I have to run riscv-tests and SPEC2006 on riscv-linux (booted) on an FPGA. I would like to know which compilation toolchain to use for this flow.
I understand that riscv-linux has to be compiled with riscv64-linux-gcc. However, I'm unclear about riscv-tests. Can riscv-elf-gcc be used to compile riscv-tests so that they run on riscv-linux? I have read some of the posts on Stack Overflow about SPEC2006 and bbl (both compiled with riscv-linux-gcc). I want to run riscv-tests as well. Should they also be compiled with riscv-linux-gcc?
Thanks!

To compile bbl or bare-metal applications like riscv-tests, you should use riscv64-unknown-elf- or riscv32-unknown-elf- (with Newlib). The riscv64-linux toolchain pulls in more libraries, which complicates the build of bare-metal code; we mainly use riscv64-linux to compile applications that run on riscv-linux.
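To make the split concrete, here is a minimal sketch (hello.c is a hypothetical file; the prefixes are the standard upstream toolchain names):

# bare-metal (bbl, riscv-tests): linked against Newlib, no OS assumed
riscv64-unknown-elf-gcc -O2 -o hello_bare hello.c
# Linux userspace (SPEC2006, anything run under riscv-linux): linked against glibc
riscv64-unknown-linux-gnu-gcc -O2 -o hello_linux hello.c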

Related

GCC linking wrong libpthread for build.rs

I'm attempting to cross-compile from Linux (NixOS) to Windows and encountering some frustrations.
There seem to be two parts that together are breaking the build:
Code in my Rust project requires multithreading, and as such requires a version of libpthread for Windows.
To build properly, I need a build.rs file. For some reason, Rust requires a version of libpthread for Linux for that.
What's the problem? Well, the build.rs has to be built with regular GCC and not MinGW because it needs to execute on my system. But for some reason, GCC is attempting to link to the Windows libpthread library instead of the system one, and as such is failing with an error about not supporting the library format.
(Failed) Alternatives
If I remove the build.rs, the project builds fine. Unfortunately, I need it for full functionality.
If I remove the Windows version of libpthread the build.rs builds and runs correctly, but MinGW fails with a missing library error when building the rest of the project.
Solution Paths?
Either I have to figure out why GCC is linking to the wrong version of libpthread, or I have to disable -lpthread entirely for the build.rs. I have no idea why it would need pthreads at all, considering that for testing I stripped it down to just fn main() {}.
I have no idea where to start on either of these, and I've already spent a couple of days getting the problem down to this. I'd appreciate some help!
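One place to start (my own suggestion, not from the original post): Cargo can pin a linker per target in .cargo/config.toml, so MinGW and its Windows libpthread are used only for the cross-compiled artifacts, while build.rs is linked with the host toolchain. A sketch, assuming the usual x86_64 triples:

# .cargo/config.toml (hypothetical; adjust triples and paths to your setup)
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"   # cross linker, sees the Windows libpthread

[target.x86_64-unknown-linux-gnu]
linker = "cc"                       # host linker, used for build scripts

If GCC still picks up the Windows library after this split, the MinGW library directory is probably leaking into LIBRARY_PATH or a -L flag shared by both builds.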

Clang huge compilation?

Good Morning.
I am compiling Clang, following the instructions here Getting Started: Building and Running Clang
I am on Linux and the compilation goes smoothly. But I think I am missing something...
I want to compile ONLY Clang, not all the related libraries. The option -DLLVM_ENABLE_PROJECTS=clang seems to do what I want (check LLVM_ENABLE_PROJECTS here).
If I use the instructions written there, I can compile, but I think I am compiling too much... a build directory of 70 GB seems excessive to me...
I tried downloading the official Debian source and compiling the Debian package (same source code! just using the "Debian way" to create a package from the official Debian source), just to compare... The compilation goes smoothly, is very fast, and the build directory is much, much smaller, as I expected...
I noticed in the first link I provided the phrase "This builds both LLVM and Clang for debug mode."...
So, does anyone know whether my problem is due to the fact that I am compiling a "debug mode" version? If so, how could I compile the default version? And is there a way to compile ONLY Clang without LLVM?
Yes, debug mode binaries are typically much larger than release mode binaries.
CMake normally uses CMAKE_BUILD_TYPE to determine the build type. It can be set from the command line with -DCMAKE_BUILD_TYPE="Release" or -DCMAKE_BUILD_TYPE="Debug" (sometimes there are other build types as well).
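For example, a release-only Clang configuration might look like this (a sketch assuming the LLVM monorepo layout and an out-of-tree build directory):

cmake -G Ninja -DLLVM_ENABLE_PROJECTS=clang -DCMAKE_BUILD_TYPE=Release ../llvm-project/llvm
ninja clang

Note that Clang cannot be built entirely without LLVM, since it links against the LLVM libraries, but a Release build shrinks the build tree dramatically compared to a Debug one.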

Error running cross-compiled code with pthread

I'm using an ARM EABI cross-compiler to compile code that makes use of pthreads, to run on an ARM Cortex-A9 simulation.
While I'm able to compile it with no problems (just as I did with other, non-pthread applications, which ran fine in the simulation), I get an error message when trying to run my pthread application on the simulated ARM (which runs Linux as its OS). It's the following:
./pttest.exe: /lib/libpthread.so.0: no version information available (required by ./pttest.exe)
I did my research and found out that this is because libpthread is a dynamic library, and I'm linking the application against a newer version than the one available on my simulator.
My question is: how do I force my cross-compiler to link the application against the same pthread library version as my simulator? Is there anywhere I can download different versions of pthreads? And how do I set that up?
Sorry, I'm quite a newbie in that area.
Try compiling your application statically, e.g.
gcc -static -pthread -o myapplication myapplication.c
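Static linking embeds the pthread code in the binary itself, so the target's libpthread.so.0 and its symbol versions no longer matter. A quick check before copying the binary over (not part of the original answer):

file myapplication   # should report "statically linked"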

Compiling Basic C-Language CUDA code in Linux (Ubuntu)

I've spent a lot of time setting up the CUDA toolchain on a machine running Ubuntu Linux (11.04). The rig has two NVIDIA Tesla GPUs, and I'm able to compile and run test programs from the NVIDIA GPU Computing SDK such as deviceQuery, deviceQueryDrv, and bandwidthTest.
My problems arise when I try to compile basic sample programs from books and online sources. I know you're supposed to compile with NVCC, but I get compile errors whenever I use it. Basically any sort of include statement involving CUDA libraries gives a missing file/library error. An example would be:
#include <cutil.h>
Do I need some sort of makefile to direct the compiler to these libraries or are there additional flags I need to set when compiling with NVCC?
I followed these guides:
http://hdfpga.blogspot.com/2011/05/install-cuda-40-on-ubuntu-1104.html http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Getting_Started_Linux.pdf
To fix the include problems, add the CUDA include directory to your compilation options (assuming it is /usr/local/cuda/include):
nvcc -I/usr/local/cuda/include -L/usr/local/cuda/lib test.cu -o test
cutil is not part of the CUDA toolkit; it's part of the CUDA SDK. So, assuming you have followed the instructions and added the PATH and LIB directories to your environment variables, you still need to point to the CUDA SDK include and library directories.
In order to include that lib manually you must pass the paths to the compiler:
nvcc -I/CUDA_SDK_PATH/C/common/inc -L/CUDA_SDK_PATH/C/lib ...
Although I personally prefer not to use the CUDA SDK libraries, you will probably find it easier to start a project from a CUDA SDK example.
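For what it's worth, plain CUDA code that avoids cutil needs nothing beyond the toolkit. A minimal sketch (minimal.cu is a hypothetical file, built with just: nvcc minimal.cu -o minimal):

#include <cstdio>

// trivial kernel: adds one to a single int on the device
__global__ void add_one(int *x) { *x += 1; }

int main() {
    int h = 41;
    int *d;
    cudaMalloc(&d, sizeof(int));
    cudaMemcpy(d, &h, sizeof(int), cudaMemcpyHostToDevice);
    add_one<<<1, 1>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("%d\n", h);  // prints 42
    return 0;
}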

Cross-compile glibc for ARM

Good day
Currently, I'm working on an embedded device based on arm-linux. I want to build GCC for my target architecture with Glibc. GCC builds successfully, but I'm having trouble with the Glibc build.
I use the latest version of Glibc (ftp.gnu.org/gnu/glibc/glibc-2.12.1.tar.gz) and the corresponding ports add-on (ftp.gnu.org/gnu/glibc/glibc-ports-2.12.1.tar.gz).
My configure line:
../../glibc-2.12.1/configure --host=arm-none-linux-gnueabi --prefix=/home/anatoly/Desktop/ARM/build/glibc-build --enable-add-ons --with-binutils=/home/anatoly/Desctop/ARM/toolchain/arm/bin/
The configure script works fine, but I get a compile error:
...
/home/anatoly/Desktop/ARM/src/glibc-2.12.1/malloc/libmemusage_pic.a(memusage.os): In function `me':
/home/anatoly/Desktop/ARM/src/glibc-2.12.1/malloc/memusage.c:253: undefined reference to `__aeabi_read_tp'
...
I also tried using older versions (2.11, 2.10) but got the same error.
Does anybody know the solution for this problem?
Use a precompiled toolchain, like those provided by CodeSourcery.
If you want to build your own, optimised toolchain (premature optimization is the root of all evil), use crosstool-NG, a tool dedicated to building cross-compilation toolchains.
If you are not convinced, and want to do everything with your own hands, ask your question on the crosstool-NG mailing list.
Try substituting arm-linux-gnueabi for arm-none-linux-gnueabi. Check that a compiler, loader, etc. with the prefix you used for --host exist on your PATH.
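A quick way to verify (the prefix shown is the suggested substitute; use whatever you pass to --host):

which arm-linux-gnueabi-gcc arm-linux-gnueabi-ld   # both should resolve to your toolchain's bin/

If they don't resolve, either add the toolchain's bin directory to your PATH or pass configure the triplet that matches the tools you actually have.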
