Compiling software for an embedded Linux distro (OpenWrt)

I have a problem with embedded Linux (OpenWrt).
I need to compile a piece of software for the MIPS architecture.
The software consists of two .c files and four header files under /include/ ...
I have built a toolchain for OpenWrt with the "new" GCC (for MIPS) and compiled the two .c files one by one, getting two .o files as output. How can I create a single binary file to run the software?
Second question: is it correct to compile the files one by one?
Thanks, I hope my question is clear.

You can create an OpenWrt package. See this tutorial. You can point OpenWrt to the folder where your files are and let it compile the project there instead of pulling it from a VCS repo.
See this SO answer about compiling multiple files with GCC.
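For the linking step itself, here is a minimal sketch; the toolchain prefix mips-openwrt-linux- and the file names main.c and util.c are placeholders for whatever your toolchain and sources are actually called:
mips-openwrt-linux-gcc -c main.c -o main.o
mips-openwrt-linux-gcc -c util.c -o util.o
mips-openwrt-linux-gcc main.o util.o -o myprogram
Compiling each .c file separately is perfectly normal; the final invocation runs the linker and combines the object files into one executable. You can also pass both .c files to gcc in a single command and skip the intermediate .o files.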

Related

How to link mach-o format object files on linux?

I have been attempting to link a Mach-O formatted object file on Linux, but I have failed miserably. So far, I have created the object file by running:
nasm -fmacho -o machoh.o hello.asm
I have tried linking using:
clang --target=x86_64-apple-darwin machoh.o
but that failed. I have attempted using GCC, LD, and other linkers but I have still failed miserably. Are there any ideas on how I could solve my problem?
Thank you very much.
The most accessible solution would be lld, the LLVM linker.
lld does not ship with clang, but is a separate package.
sudo apt install lld
If you installed a version of clang that isn't the default (e.g. clang-12 explicitly), then you should use the same version for lld (i.e. lld-12).
Get a MacOS SDK from somewhere. This GitHub repo archives them.
If you're uncomfortable using the above, the "legitimate" way of obtaining it without a Mac would be:
Create an Apple ID
Go to https://developer.apple.com/download/all/
Download the "Command Line Tools for Xcode <version>"
Mount or extract the dmg
Extract the XAR package
For each ".pkg" folder inside, run pbzx <Payload | cpio -i
Find the Library/Developer/CommandLineTools/SDKs/MacOSX.sdk inside.
Feed both of the above to clang:
clang --target=x86_64-apple-darwin -fuse-ld=lld --sysroot=path/to/MacOSX.sdk machoh.o
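If that link succeeds, a quick sanity check (assuming the default output name a.out) is to confirm the result really is a Mach-O binary:
file a.out
file should report something like "Mach-O 64-bit executable x86_64".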
I have tried linking using: clang --target=x86_64-apple-darwin machoh.o
but that failed.
Failed how? Details matter.
Anyway, there are 3 commonly used linkers on Linux: BFD-ld, Gold, and (newest) LLD.
Of these, Gold is an ELF-only linker, and will not work for Mach-O.
On my distribution, BFD-ld is configured to support only a few emulations (use ld --help to see which ones). BFD itself does appear to support Mach-O, so it is probably possible to build a Linux BFD-ld cross-linker with such support.
LLD should support Mach-O out of the box, but you are probably not using LLD.
So your first step should be to figure out which linker clang --target=x86_64-apple-darwin ... uses, and then make it use the one which does support Mach-O.
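One way to see which linker the clang driver would invoke is to add -### (or -v), which prints the underlying commands instead of hiding them; -fuse-ld=lld then forces LLD, as in the earlier answer (file name taken from the question):
clang --target=x86_64-apple-darwin -### machoh.o
clang --target=x86_64-apple-darwin -fuse-ld=lld machoh.o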

Cross-compilation with libraries

I am working on Windows 7, using Eclipse DS-5, to cross-compile projects for an Altera SoC (FPGA + ARM). The toolchain is supplied by the Altera tools, and it looks as follows:
GCC C++ Compiler 4 [arm-linux-gnueabihf]
GCC C Compiler 4 [arm-linux-gnueabihf]
GCC Assembler 4 [arm-linux-gnueabihf]
GCC C Linker 4 [arm-linux-gnueabihf]
GCC C++ Linker 4 [arm-linux-gnueabihf]
GCC Archiver 4 [arm-linux-gnueabihf]
The Altera SoC board is running Angstrom Linux distribution on ARM.
I need to add some libraries (e.g. libcURL) and set up the Eclipse project settings to include the library in the build.
MY UNDERSTANDING:
Libraries in general consist of two components: the headers and the library files themselves (in binary form). The compiler needs the header files; the linker then links against the library files.
(If anything above is wrong, please correct me).
MY QUESTIONS:
1) In case the binaries are not supplied for the ARM processor, do I need to use the Altera tools to compile the library source code on my Windows 7 machine with the ARM compiler?
I believe I should use the Altera-supplied compiler from a terminal and run ./configure and make.
2) For widely used libraries such as libcURL, there are pre-compiled binaries for different platforms. How do I know what the compiled library looks like? What files are necessary for Eclipse to compile the whole project (please be specific: *.lib, *.a, *.h, ...)?
SUMMARY:
I am perplexed by cross-compilation; I am not sure which compiler is required and which library files are required.
The most common error I have come across is:
cannot find -lcurl
Does that mean the compiler can see the *.h files, but the linker is not able to locate the binary library files?
Finally, I did the following:
I copied the library source files to my target platform (Altera De_nano_SoC ARM board) and compiled the library there (Angstrom Linux, compiler arm-angstrom-linux-gnueabi). This requires setting up the configuration in the library folder and running the make and make install commands.
Once compiled, I copied the output files (the *.h headers and either the static *.a or shared *.so library files, depending on the build configuration) back to my host machine (Windows 7). Then I added the files to my Eclipse DS-5 project.
Eclipse needs the path to the .../include folder with the *.h header files and the .../lib folder containing the *.a or *.so files.
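The usual alternative to building on the target is to cross-compile the library on the host with the same toolchain Eclipse uses and point the linker at the result. A rough sketch for libcurl, where the install prefix is an arbitrary placeholder and the configure options will vary by library:
./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm-sysroot
make
make install
In the Eclipse project settings you would then add $HOME/arm-sysroot/include to the include paths (-I), $HOME/arm-sysroot/lib to the library search paths (-L), and curl to the linked libraries (-l). A "cannot find -lcurl" error means exactly that: the -L search paths contain no libcurl.a or libcurl.so for the target; missing headers would instead show up as compile-time errors.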

how to make executable files based on source code

I have written some programs in C on Linux.
I want to run these programs on many remote computers, which run Fedora or Ubuntu.
I compiled a program with gcc on my local machine; however, the executable does not work on the remote machines.
For example, I use
gcc -o udp_server udp_server.c
on the local machine to get an executable binary udp_server. I then copy it to a remote machine and run it there, and the error is:
-bash: ./udp_server: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
The local machine: Fedora release 16 (Verne), kernel 3.6.10-2.fc16.x86_64 GNU/Linux
The remote machine: Fedora release 12 (Constantine), kernel 2.6.32-36.onelab.x86_64 GNU/Linux
On these remote machines there is no gcc compiler, so I hope I can produce executable files that can be run there.
What kind of executable files should I make, and how do I make them?
Any recommended tools or procedures?
Thanks!
To run a program written in C, you must first compile it to produce an executable file. On Linux, the C compiler is typically the "Gnu C Compiler", or gcc.
If you compile a C program on Linux, it should usually run on any other Linux computer. However, a few conditions must be met for this to work:
A compiled executable is built for a specific processor architecture. For example, if you compile for x86-64, the program will not run on 32-bit x86 or PowerPC.
If the program uses shared libraries, these must be installed on the target system. The C library, "libc", is installed everywhere; other libraries may not be.
As to how to compile: For a simple program, you can invoke gcc directly. For more complex programs, some build tool is advisable. There are many to choose from; two popular choices are GNU make (the traditional solution), and CMake.
To distribute the program: If it is only a single executable, you can just copy this executable around. If the program consists of multiple files (images, data files, etc.), you should package it as a software package. This allows users to install it using a package manager such as RPM or dpkg. How to do this is explained in various packaging guides for the different Linux distributions.
Finally, a piece of advice: you seem to know very little about software development in general and about C in particular. Consider reading a tutorial on programming in C - this will answer these (and many other) questions. There are countless books and online tutorials - I can recommend "The C Book", by gbdirect.
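As a concrete illustration of the build-tool suggestion, here is a minimal GNU make sketch for the single-file program from the question; the flags are just reasonable defaults, and recipe lines must be indented with a tab:
CC = gcc
CFLAGS = -Wall -O2

udp_server: udp_server.o
	$(CC) $(CFLAGS) -o $@ $^

udp_server.o: udp_server.c
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f udp_server udp_server.o
Running make then rebuilds only what has changed, which starts to matter once a project grows beyond one file.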
The issue you see is that you are missing a dynamic library on the target machine. To see which libraries you need, use the ldd program. Example (I just ran it against the standard program test, which is in every Linux distribution):
$ ldd /usr/bin/test
linux-vdso.so.1 => (0x00007fff5fdfe000)
libc.so.6 => /lib64/libc.so.6 (0x00000032d0600000)
/lib64/ld-linux-x86-64.so.2 (0x00000032cfe00000)
On Fedora and RHEL you can find which RPM package you want to install using the following command
$ rpm -q --whatprovides /lib64/ld-linux-x86-64.so.2
glibc-2.16-28.fc18.x86_64
And then you need to install it:
$ yum -y install glibc-2.16-28.fc18.x86_64
I don't use Ubuntu/Debian, so I'm not sure how to do this there. Please note that on 32-bit systems the 64-bit libraries are not available, but on 64-bit systems the 32-bit libraries usually carry an i686 tag and are installable.
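For what it's worth, the rough Debian/Ubuntu equivalents would be dpkg -S to find the package that owns a file and apt-get to install it, for example:
$ dpkg -S /lib64/ld-linux-x86-64.so.2
$ sudo apt-get install libc6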
Usually, you can execute your program on different machines as long as you keep the same architecture. E.g. you cannot execute a 64-bit program on a 32-bit machine, and vice versa (you can work around the 32-bit-on-64-bit case by installing 32-bit libraries, but that may be too involved).
If you have different distributions, or different versions of the same Linux distribution, this might be a problem - you need to make sure you have all the dependencies in the same major versions.
Or you can link the libraries statically, which is not common in the Linux world, but you can do it. Learn how to use GCC and you will find out how to do that.
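A minimal sketch of that static-linking option, using the udp_server.c example from the question (note that glibc still warns about a few features, such as name resolution, that want shared libraries at run time):
gcc -static -o udp_server udp_server.c
file udp_server
file should report the binary as "statically linked", and it will then run without needing the target's /lib64/ld-linux-x86-64.so.2 interpreter.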

How to run a *.o file in Android

I have compiled a set of C source code on Linux and the output is a *.o file, i.e. an object file. This supposedly does image compression. Now I want to use/test this on Android.
Is this possible? I have only tried the NDK examples from the Android NDK developer site. I have not come across any reference on how this can be done.
Thanks In Advance,
Perumal
You don't run object code files (*.o). You need to turn them into an executable. To do this, assuming you are using GCC, you would run gcc file1.o file2.o -o executable, which links a two-file program consisting of file1.o and file2.o into an executable called executable.
Object files (ending in .o) usually contain code that is incomplete. For example, if your program uses some library to print something on screen, then to produce an executable you must link your compiled code (the .o file) with that library, so that when the operating system loads the executable it knows all the code that will be used. You do this linking with a linker (such as ld on Linux, or /system/bin/linker on Android). In your case, it's easier to let gcc call the linker for you, as Jalfor notes.
The answer is yes, but you have to do a fair amount of work to see it running on Android.
1) If you are compiling on a Linux desktop, the object file or the final executable is most likely being built for an x86 or x86-64 processor. But almost all mobile devices have ARM processors. So even though you have an executable, you will not be able to run it on Android if it is not built for the ARM CPU. This is exactly what the Android NDK takes care of.
2) So we have to build the same code again for Android (ARM), for which we need a cross-compiler and the source code of the object files you are talking about.
3) If you have the source code available, you can do one of two things:
Include it in the jni folder, build a shared library, and then do the usual JNI calling from Java.
Build the code into an executable (note: the code needs a main function) using the Android NDK and then push the executable onto the device using adb (a sketch of this follows below).
Finally, you can log in to the device and check the result. In case anything is not clear, please do let me know; I won't mind explaining. Thanks.
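A rough sketch of that second option, using ndk-build; the module name, source file name, and ABI directory below are placeholders and may differ in your setup:
# jni/Android.mk
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := compressor
LOCAL_SRC_FILES := compressor.c
include $(BUILD_EXECUTABLE)
Then, from the project directory:
ndk-build
adb push libs/armeabi-v7a/compressor /data/local/tmp/
adb shell chmod 755 /data/local/tmp/compressor
adb shell /data/local/tmp/compressor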

Compiled gcc4.4.6 on one machine, how to let another machine use it?

I built gcc 4.4.6 (to use CUDA) on a fast server; it takes about 10 minutes. However, on my own desktop it takes practically forever to compile.
Both machines are 64-bit Linux, although one is Ubuntu while the other is Arch Linux; the Arch Linux machine has a newer kernel version.
On the server I installed the built gcc-4.4.6 to /opt, and I simply copied /opt/gcc-4.4.6 to /opt/gcc-4.4.6 on my PC.
It doesn't quite work; when I tried
./x86_64-unknown-linux-gnu-gcc ~/Development/c/hello/hello.c
it shows
x86_64-unknown-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory
So what can I do now?
Thanks,
Alfred
If the systems are similar enough, you could compile GCC on the big machine (don't forget that GCC needs to be configured and built in a directory outside of its source tree), then run make -j3 all followed by make install DESTDIR=/tmp/gccinst/, copy that /tmp/gccinst directory to your small machine, and finally copy its contents into the root filesystem (on the small machine).
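A minimal sketch of that sequence; the build directory, prefix, and language list are just illustrative choices:
mkdir gcc-build && cd gcc-build
../gcc-4.4.6/configure --prefix=/opt/gcc-4.4.6 --enable-languages=c,c++
make -j3 all
make install DESTDIR=/tmp/gccinst/
The files end up under /tmp/gccinst/opt/gcc-4.4.6 (DESTDIR is prepended to the configured prefix); the whole tree, including the libexec directory where internal programs such as cc1 live, needs to end up at the same prefix on the other machine.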
However, GCC 4.4.6 is quite old today; if you are compiling GCC, try to compile GCC 4.6.2 (or at least 4.6.1).
And (shameless plug for my work) if you compile a GCC 4.6, please enable plugins on it; then you might try the GCC MELT [meta-] plugin (MELT is a high-level domain-specific language to ease the development of GCC extensions).
