I write some programs on Linux in C.
I want to run these programs on many remote computers, which have Fedora or Ubuntu installed.
I compiled a program with gcc on my local machine, but the executable does not work on the remote machines.
For example, I use
gcc -o udp_server udp_server.c
on the local machine to get an executable binary udp_server, then I copy it to a remote machine and run it there. The error is:
-bash: ./udp_server: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
The local machine (Fedora):
Fedora release 16 (Verne)
3.6.10-2.fc16.x86_64 GNU/Linux
The remote machine:
Fedora release 12 (Constantine)
2.6.32-36.onelab.x86_64 GNU/Linux
There is no gcc compiler on these remote machines, so I hope I can produce executable files that can be run on them.
What kind of executable files should I make, and how do I make them?
Any recommended tools or procedures?
Thanks!
To run a program written in C, you must first compile it to produce an executable file. On Linux, the C compiler is typically the "GNU C Compiler", or gcc.
If you compile a C program on Linux, it should usually run on any other Linux computer. However, a few conditions must be met for this to work:
A compiled executable is built for a specific processor architecture. For example, if you compile for x86-64, the program will not run on 32-bit x86 or on PowerPC.
If the program uses shared libraries, these must be installed on the target system. The C library ("libc") is installed everywhere; other libraries may not be. A quick check for both points is sketched below.
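A quick way to check both points (udp_server is the binary name taken from the question; adjust as needed) is roughly:

$ file ./udp_server        # shows the architecture and the dynamic linker the binary expects
$ uname -m                 # run on the target machine: shows its architecture
$ ldd ./udp_server         # run on the target machine: lists the shared libraries it needs

If file reports x86-64 but uname -m on the target says i686, or if ldd reports libraries as "not found", the binary cannot run there as-is.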
As to how to compile: For a simple program, you can invoke gcc directly. For more complex programs, some build tool is advisable. There are many to choose from; two popular choices are GNU make (the traditional solution), and CMake.
To distribute the program: If it is only a single executable, you can just copy this executable around. If the program consists of multiple files (images, data files, etc.), you should package it as a software package. This allows users to install it using a package manager such as RPM or dpkg. How to do this is explained in various packaging guides for the different Linux distributions.
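As a very rough sketch of the dpkg route (the package name and layout here are made up for illustration; real packages need more metadata, see the packaging guides):

$ mkdir -p udp-server/DEBIAN udp-server/usr/bin
$ cp udp_server udp-server/usr/bin/
$ $EDITOR udp-server/DEBIAN/control    # fill in Package, Version, Architecture, Maintainer, Description
$ dpkg-deb --build udp-server          # produces udp-server.deb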
Finally, a piece of advice: you seem to know very little about software development in general and about C in particular. Consider reading a tutorial on programming in C - it will answer these (and many other) questions. There are countless books and online tutorials; I can recommend "The C Book" by GBdirect.
The issue you see is that you are missing a dynamic library on the target machine. To see which libraries you need, use the ldd program. Example (I just run it against the standard program "test", which is in every Linux distribution):
$ ldd /usr/bin/test
linux-vdso.so.1 => (0x00007fff5fdfe000)
libc.so.6 => /lib64/libc.so.6 (0x00000032d0600000)
/lib64/ld-linux-x86-64.so.2 (0x00000032cfe00000)
On Fedora and RHEL you can find which RPM package you want to install using the following command
$ rpm -q --whatprovides /lib64/ld-linux-x86-64.so.2
glibc-2.16-28.fc18.x86_64
And then you need to install it:
$ yum -y install glibc-2.16-28.fc18.x86_64
I don't use Ubuntu/Debian, so I'm not sure how to do this there. Please note that on 32-bit systems the 64-bit libraries are not available, but on 64-bit systems the 32-bit libraries usually have an i686 tag and are installable.
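For reference, a rough Debian/Ubuntu equivalent of the query above would be dpkg -S for files already installed, or apt-file (which must itself be installed and updated first) for files in not-yet-installed packages:

$ dpkg -S /lib64/ld-linux-x86-64.so.2
$ apt-file search ld-linux-x86-64.so.2
$ sudo apt-get install libc6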
Usually, you can execute your program on different machines as long as you keep the same architecture. E.g. you cannot execute a 64-bit program on a 32-bit machine, and by default not the other way around either (you can work around that by installing 32-bit libraries, but that may be too much hassle).
If you have different distributions, or different versions of the same Linux distribution, this might be a problem - you need to make sure you have all the dependencies in the same major versions.
Alternatively, you can link the libraries statically, which is not at all usual in the Linux world, but you can do it. Learn how to use GCC and you will find out how to do that.
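A minimal sketch of that, reusing the example from the question (any additional libraries would each need their static .a variants installed as well):

$ gcc -static -o udp_server udp_server.c
$ file udp_server     # should now say "statically linked"

The resulting binary no longer needs the shared C library at run time, at the cost of a much larger executable.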
Related
I'm trying to install gcc 4.9 on a SUSE system without an internet connection. I compiled gcc on an Ubuntu machine and installed it into a prefix, then copied the prefix folder to the SUSE machine. When I tried to run it, gcc complained about not finding GLIBC_2.14, so I downloaded an rpm for libc6 online and unpacked it into the prefix folders. My LD_LIBRARY_PATH includes prefix/lib and prefix/lib64. When I try to run any program now (ls, cp, cat, etc.) I get the error: error while loading shared libraries: /home/***/prefix/lib64/libc.so.6: unexpected reloc type 0x25.
Is there any way I can fix this so that I can get gcc4.9 up and running on this system?
As an alternative, is it possible to build gcc statically so that I don't have to worry about linking at all when I transfer it between computers?
my LD_LIBRARY_PATH includes prefix/lib and prefix/lib64
See this answer for explanation of why this can't work.
Is there any way I can fix this so that I can get gcc4.9 up and running on this system?
Your best bet is to install whatever GCC package comes with the SuSE system, then use that GCC to configure and install gcc-4.9 on it.
If for some reason you can't do that, this answer has some of the ways in which you can build gcc-4.9 on a newer system and have it still work on an older one.
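If you take the first route, the build itself is the usual out-of-tree GCC build. A rough sketch, assuming the gcc-4.9 sources and their prerequisites (GMP, MPFR, MPC) have already been copied onto the machine (version number and prefix are placeholders):

$ tar xf gcc-4.9.4.tar.gz
$ mkdir gcc-build && cd gcc-build
$ ../gcc-4.9.4/configure --prefix=$HOME/gcc-4.9 --disable-multilib --enable-languages=c,c++
$ make -j4 && make install

Built this way against the system's own glibc, the result does not need a foreign libc.so.6, and the LD_LIBRARY_PATH workaround becomes unnecessary.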
is it possible to build gcc statically so that I don't have to worry about linking at all when I transfer it between computers?
Contrary to popular belief, fully-static binaries are generally less portable than dynamic ones on Linux.
I am using a piece of software for graph mining.
I have the binary of that software in two folders, one for Linux and one for SunOS, but I don't have the source.
I am able to run the binary on a Linux machine.
But when I try to run the binary on a Mac I get "command not found" for the binaries in both the Linux and SunOS folders.
Could someone suggest whether it is possible to run this on a Mac by any means, like using a Linux shell or something?
Gaurav
EDIT: I get a "cannot execute binary" error when I set chmod to u+x.
You'll need to recompile it for OS X or use a VM.
A "command not found" error just means you're not executing it right: make sure it is chmod u+x and that it's either on your PATH, or that you specify the path explicitly.
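For example, a minimal check looks something like this (graphminer is a placeholder for your binary's actual name):

$ chmod u+x ./graphminer    # make it executable
$ ./graphminer              # run it via an explicit path rather than relying on PATH
$ file ./graphminer         # shows what kind of binary it really is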
If you use the file command you will see the difference. On the Linux executable you'll get something like:
ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically
linked, for GNU/Linux 2.6.15, not stripped
and something like this for OS X executables:
command: Mach-O universal binary with 2 architectures
command (for architecture x86_64): Mach-O 64-bit executable x86_64
command (for architecture i386): Mach-O executable i386
Operating systems generally don't support executing object code in foreign formats. If Mac OS X had descended from Solaris or Linux, there could be some incentive for legacy support, but in general you should assume binaries are incompatible when they were compiled for a different architecture and platform. There are a few places where you inherit backwards compatibility - running 32-bit code on 64-bit OSes, or PPC code support on Intel Macs - but I suspect that both of those, especially the latter, were non-trivial engineering tasks.
Here are your options:
Get the source and compile it on the Mac; if it compiles on Linux and Solaris, there is a good chance it will compile and run fine on the Mac.
Run it through an emulator, a virtual machine, or Boot Camp.
I built gcc 4.4.6 (to use CUDA) on a fast server; it takes about 10 minutes. On my own desktop, however, compiling it takes practically forever.
Both machines are 64-bit Linux, although one is Ubuntu while the other is Arch Linux. The Arch Linux machine has a newer kernel version.
So on the server I installed the built gcc-4.4.6 into /opt, and then I simply copied /opt/gcc-4.4.6 to /opt/gcc-4.4.6 on my PC.
Hmm, it doesn't quite seem to work; when I tried
./x86_64-unknown-linux-gnu-gcc ~/Development/c/hello/hello.c
it shows
x86_64-unknown-linux-gnu-gcc: error trying to exec 'cc1': execvp: No such file or directory
So what can I do now?
Thanks,
Alfred
If the systems are similar enough, you could compile GCC on the big machine (don't forget that GCC needs to be configured and built in a directory outside of its source tree), run make -j3 all and then make install DESTDIR=/tmp/gccinst/, copy that /tmp/gccinst directory to your small machine, and finally copy its contents into the root filesystem (on the small machine).
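A rough sketch of that sequence, assuming the GCC sources are unpacked in ~/src/gcc-4.4.6 (all paths here are placeholders):

$ mkdir -p ~/build/gcc && cd ~/build/gcc
$ ~/src/gcc-4.4.6/configure --prefix=/opt/gcc-4.4.6 --enable-languages=c,c++
$ make -j3 all
$ make install DESTDIR=/tmp/gccinst/

Then copy /tmp/gccinst to the small machine and merge its contents into /, so the compiler ends up at /opt/gcc-4.4.6 again. Note that helper programs such as cc1 live under libexec/gcc/<target>/<version>/ inside the install prefix, so whatever you copy over must include that directory - otherwise the driver cannot find cc1 and fails with errors like the one above.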
However, GCC 4.4.6 is quite old today; if you are compiling GCC, try to compile GCC 4.6.2 (or at least 4.6.1).
And (shameless plug for my work) if you compile a GCC 4.6, please enable plugins on it, then you might try the GCC MELT [meta-] plugin (MELT is a high level domain specific language to ease the development of GCC extensions).
How can I install gcc on a system that does not have any C compiler?
The system is a Linux-based firewall and has no C compiler.
I guess you have an appliance running Linux with shell access, but neither a package manager nor a compiler is installed.
So you need to cross-compile gcc and the whole toolchain (at least binutils) - this is quite simple, because the ./configure scripts of gcc, binutils, gdb etc. support cross-compiling with the --target= option. So all you have to do is find out the target architecture (uname helps), then download and unpack the gcc sources on a Linux host and run ./configure --target=$YOUR_TARGET.
With this, you can now build a cross-compiling gcc - it still runs on your host, but produces binaries for your target (the firewall appliance).
This may already be sufficient for you: a typical desktop PC is much faster than a typical appliance, so it may make sense to compile everything you need on the desktop PC with the cross-compiler and cross-binutils.
But if you really wish to do so, you can then also use your cross-compiler to compile a gcc that runs on your target (set this as the --host= option) and compiles for your target (set this as the --target= option).
You can find details about allowed host/targets and examples in the gcc documentation: http://gcc.gnu.org/install/specific.html.
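A very rough sketch of the shape of it (the target triplet arm-unknown-linux-gnu and the version numbers are placeholders; a fully usable cross toolchain also needs the target's C library and kernel headers, which this sketch glosses over):

$ ../binutils-2.22/configure --target=arm-unknown-linux-gnu --prefix=$HOME/cross && make && make install
$ ../gcc-4.6.2/configure --target=arm-unknown-linux-gnu --prefix=$HOME/cross --enable-languages=c && make && make install

Afterwards the cross tools appear as $HOME/cross/bin/arm-unknown-linux-gnu-gcc, arm-unknown-linux-gnu-as and so on.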
It depends on the distribution; if it's based on Debian or one of the other big ones, you can install gcc through apt-get or a similar tool.
If it's a more basic system, you need to compile gcc yourself on another computer and copy it over. This will be easiest if you have another computer with the same architecture (i386, ARM or x86_64, for example).
You might also want to compile it statically, so that you don't have dependencies on external libraries.
How do you plan to get all the source code needed for GCC loaded onto your machine? Could you mount the ISO image onto this machine and install from there?
Since you are using Endian Firewall, see "Building a development box" at the following link:
http://alumnus.caltech.edu/~igormt/endian/tips.html
If it's a Debian-based distribution, you can use
sudo apt-get install gcc
Note: you may need to replace "gcc" with a specific version of the Debian package.
Cheers,
I want to avoid problems with compiling my code on amd64, yet I don't have a 64-bit CPU available and have no hope of getting my machine upgraded any time soon. I have no dreams of testing the code (although that should theoretically be possible using qemu-system), but I'd like to at least compile it using gcc -m64.
Basic idea works:
CFLAGS=-m64 CXXFLAGS=-m64 ./configure --host x86_64-debian-linux
However, the code depends on some libraries that I typically install from Debian packages, such as libsdl1.2-dev, libgmp3-dev and the like. Obviously, getting 64-bit versions of those packages installed alongside the 32-bit versions is not a one-liner.
What would be your practices for installing the 64-bit packages? Where would you put them, how would you get them there and how would you use them?
To repeat, I don't have 64-bit CPU and cannot afford getting a new machine.
I have already set up amd64-libs-dev to give some basic push to gcc's -m64.
Attempted so far:
Setting up a 64-bit chroot jail with debootstrap in order to simplify installation of 64-bit development packages for the libraries. Failed, since finishing the setup (and installing anything afterwards!) requires a 64-bit CPU.
Installing gcc-multilib and g++-multilib. This appears to do nothing besides depending on libc6-dev-amd64, which I had already installed through amd64-libs-dev.
If you're using Debian, before you can use gcc -m64 you need to install gcc-multilib and g++-multilib. This will also install all the files needed to link and create a 64-bit binary.
You don't need a 64-bit capable CPU for this either.
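The install step itself is just the usual package installation (package names as above):

$ sudo apt-get install gcc-multilib g++-multilib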
Then you can call GCC as follows:
$ gcc -m64 source.c -o source
As for external libraries, Debian takes care of that if you have multilib installed. I have a 32-bit machine that compiles 64-bit code for another machine and links in a handful of libraries (libpng and libz, for example). It works great and the executables run (Debian to Debian).
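A hedged example of what that looks like (the source file and the particular libraries are only illustrative):

$ gcc -m64 main.c -o main -lpng -lz

The -m64 flag makes gcc generate 64-bit code and link against the 64-bit variants of the libraries, assuming those variants are available, which is what the multilib setup described above provided in this case.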
You want to look into the dchroot package to set up a simple chroot(8) environment - that way you can compile real amd64 binaries in a real 64-bit setting with the proper libraries and dependencies. This certainly works in the other direction (I use i386 chroots on amd64 hosts), and I don't see why it wouldn't work your way too, provided your CPU supports amd64.
Edit: Now that you stress that you do not have an amd64-capable CPU, it gets a little trickier. "In theory" you could just rebuild gcc from source as a cross-compiler. In practice, that may be too much work. Maybe you can just get another headless box for a few dollars and install amd64 on that?
Check out this fine article, which describes how to easily create a 32-bit chroot where you can install all the 32-bit tools (gcc and libs).
Doesn't Debian distinguish between lib32 and lib64 directories? In that case, you can just grab the packages and force them to install, regardless of architecture.
If that does not work (or would hose your system!) I would set up a chroot environment and apt-get the 64-bit libraries into there.
Check out pbuilder; it can create build environments for many architectures. Some instructions are here.
Try cross-compiling SDL, GMP and the other libraries yourself, or manually extract the files you need from the Debian packages.