How can I configure cabal to use different folders for 32-bit and 64-bit packages?

I'm doing some testing of 64-bit GHC on Windows, in tandem with migrating code forward to GHC 7.6.1. This means that I have both the 32-bit and 64-bit versions of GHC 7.6.1 installed, so I can distinguish 64-bit specific problems from general problems with 7.6.1.
My cabal config file ($APPDATA/cabal/config) contains
libsubdir: $pkgid\$compiler
which means that both 32-bit and 64-bit versions of packages I install are ending up in e.g. zip-archive-0.1.1.8/ghc-7.6.1, and overwriting each other.
Is there any variable like $compiler but distinguishing 32 and 64 bit, or other technique I can use to get it to keep the packages apart?

You can use $arch (and/or $os) with recent enough Cabal versions, which will be replaced by a string such as x86_64 (see the Cabal documentation section "Path variables in the simple build system" for more details).
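For example, the libsubdir line from the question could become (a sketch; any layout that includes $arch keeps the trees apart):
libsubdir: $pkgid\$compiler\$arch
With that, the 32-bit and 64-bit builds of zip-archive-0.1.1.8 land in separate directories such as zip-archive-0.1.1.8\ghc-7.6.1\i386 and zip-archive-0.1.1.8\ghc-7.6.1\x86_64.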

This is probably not the Right Way to do it, but on my laptop where I boot into 32-bit and 64-bit operating systems I have a hack set up to deal with this. Basically, I have two directories, .cabal-i386 and .cabal-x86_64, and I switch back and forth via symlinks. In my .zshrc:
CabalDir=$HOME/.cabal-$(uname -m)
if [ ! -d "$CabalDir" ]; then
    echo "WARNING: no cabal directory yet for $(uname -m), creating one."
    mkdir -p "$CabalDir"/{bin,lib,logs,share}
fi
ln -sft $HOME/.cabal $CabalDir/{bin,lib,logs,share}
Perhaps you can adopt a similar strategy, giving yourself a short command to switch out some symlinks (or whatever the Windows analog of symlinks is).

Related

How to execute the CMU binary bomb in Ubuntu Linux?

I'm trying to do CMU's binary bomb as an independent project to learn some x86 Assembly and reverse engineering. (It's not an auto-graded version tied to a class.)
I downloaded bomb.tar from http://csapp.cs.cmu.edu/public/labs.html.
From CMU's lab description:
A "binary bomb" is a program provided to students as an object code
file. When run, it prompts the user to type in 6 different strings. If
any of these is incorrect, the bomb "explodes," printing an error
message and logging the event on a grading server. Students must
"defuse" their own unique bomb by disassembling and reverse
engineering the program to determine what the 6 strings should be. The
lab teaches students to understand assembly language, and also forces
them to learn how to use a debugger. It's also great fun. A legendary
lab among the CMU undergrads.
Here's a Linux/IA32 binary bomb that you can try out for yourself. The
feature that notifies the grading server has been disabled, so feel
free to explode this bomb with impunity.
After saving it into an appropriate folder I ran this command in the Terminal:
tar xvf bomb.tar
It did extract a file called bomb (no file extension), but I thought it would also give me bomb.c, which would be helpful for reference.
I can't get "bomb" to run. Here's what I've tried:
bomb
bomb: command not found
./bomb
bash: ./bomb: No such file or directory
While I realize solving it requires stepping through it in gdb, I can't even run it in bash and blow myself up with wrong answers yet! A little help would be fantastic.
As the other answers have suggested, this appears to be a CPU architecture compatibility issue. I was able to resolve it on Ubuntu 15.04 64-bit by installing the packages suggested in the AskUbuntu question "How to run 32-bit programs on a 64-bit system".
Specifically, the following command helped.
sudo apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0
Since Fabio A. Correa ran file on the bomb and found that it is a 32-bit LSB executable, it seems the failure is caused by some missing LSB components that should be loaded at startup.
Simply running sudo apt-get install lsb-core will fix this. After doing so, ldd bomb will also work.
Update:
Further ldd (after getting the LSB pieces in place) shows that the failure is actually caused by a missing libc.so.6 => /lib32/libc.so.6, which is the libc of the i386 architecture. You can try installing the libc6-i386 package directly instead.
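On Debian/Ubuntu-style systems that is a one-liner (package name as given above):
sudo apt-get install libc6-i386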
After that, you can run disassemble func_name in gdb. Since the binary is not stripped, all the function names are visible; strings might help you too.
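A typical session might look like this (a sketch; explode_bomb and phase_1 are the conventional symbol names in the CMU bomb - verify them with info functions):
gdb ./bomb
(gdb) info functions        # list symbols; the bomb is not stripped
(gdb) break explode_bomb    # stop before a wrong answer detonates
(gdb) run
(gdb) disassemble phase_1   # inspect the first phase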
Btw, this question probably belongs on Unix & Linux, I guess.
file bomb reports:
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.0.0, not stripped
You should be able to run it from bash by typing:
tar xvf bomb.tar
chmod +x bomb
./bomb
It worked on my 64-bit Kubuntu 14.04.

How to make executable files from source code

I write some programs in C on Linux, and I want to run them on many remote computers, which have Fedora or Ubuntu installed.
I compiled a program with gcc on my local machine, but the resulting executable does not work on the remote machines.
For example, I use
gcc -o udp_server udp_server.c
on the local machine to get an executable binary udp_server. When I copy it to a remote machine and run it there, the error is:
-bash: ./udp_server: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
The local machine runs Fedora release 16 (Verne), kernel 3.6.10-2.fc16.x86_64.
The remote machine runs Fedora release 12 (Constantine), kernel 2.6.32-36.onelab.x86_64.
There is no gcc compiler on these remote machines, so I need to produce executable files that can run on them. What kind of executables should I make, and how? Any recommended tools or procedures?
Thanks!
To run a program written in C, you must first compile it to produce an executable file. On Linux, the C compiler is typically the "GNU C Compiler", or gcc.
If you compile a C program on Linux, it should usually run on any other Linux computer. However, a few conditions must be met for this to work:
A compiled executable is built for a specific processor architecture. For example, if you compile for x86-64, the program will not run on 32-bit x86 or PowerPC.
If the program uses shared libraries, these must be installed on the target system. The C library, "libc", is installed everywhere; other libraries may not be.
As to how to compile: For a simple program, you can invoke gcc directly. For more complex programs, some build tool is advisable. There are many to choose from; two popular choices are GNU make (the traditional solution), and CMake.
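For instance, a minimal GNU make setup for the single-file program from the question might look like this (a sketch; file names taken from the question):
# Makefile
CC = gcc
CFLAGS = -Wall -O2

udp_server: udp_server.c
	$(CC) $(CFLAGS) -o $@ $<
Running make then rebuilds udp_server only when udp_server.c has changed.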
To distribute the program: If it is only a single executable, you can just copy this executable around. If the program consists of multiple files (images, data files, etc.), you should package it as a software package. This allows users to install it using a package manager such as RPM or dpkg. How to do this is explained in various packaging guides for the different Linux distributions.
Finally, a piece of advice: you seem to know very little about software development in general and C in particular. Consider reading a tutorial on programming in C - it will answer these (and many other) questions. There are countless books and online tutorials - I can recommend "The C Book", by GBdirect.
The issue you see is that you are missing a dynamic library on the target machine. To find out which libraries you need, use the "ldd" program. Example (run against the standard program "test", which is present in every Linux distribution):
$ ldd /usr/bin/test
linux-vdso.so.1 => (0x00007fff5fdfe000)
libc.so.6 => /lib64/libc.so.6 (0x00000032d0600000)
/lib64/ld-linux-x86-64.so.2 (0x00000032cfe00000)
On Fedora and RHEL you can find which RPM package you want to install using the following command
$ rpm -q --whatprovides /lib64/ld-linux-x86-64.so.2
glibc-2.16-28.fc18.x86_64
And then you need to install it:
$ yum -y install glibc-2.16-28.fc18.x86_64
I don't use Ubuntu/Debian, so I'm not sure how to do this there. Please note that on 32-bit systems the 64-bit libraries are not available, but on 64-bit systems the 32-bit libraries usually carry an i686 tag and are installable.
Usually, you can execute your program on different machines as long as you keep the architecture. E.g. you cannot execute a 64-bit program on a 32-bit machine, or vice versa (you can work around the latter by installing 32-bit libs, but that may be too much trouble).
If you have different distributions, or different versions of the same distribution, this might be a problem - you need to make sure you have all the dependencies in the same major versions.
Or you can link the libraries statically, which is not usual in the Linux world at all, but possible. Learn how to use GCC and you will find out how to do that.
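A static build of the program from the question would look like this (a sketch; the binary gets much larger but carries no shared-library dependencies):
gcc -static -o udp_server udp_server.c
file udp_server    # should report "statically linked"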

Using /etc/ld.so.preload in a multi arch setup

Is there some way to use ld.so.preload and cover both 32bit and 64bit binaries?
If I list both the 32-bit and 64-bit versions of the fault handler in ld.so.preload, the loader always complains that one of them fails to preload for whatever command I run. Not exactly earth-shaking, since the error is more a warning, but I could certainly do without the printout.
Instead of specifying an absolute path I tried specifying simply "segv_handler.so" in the hopes that the loader would choose the lib in the arch appropriate path (a 32bit version is in /lib and a 64bit version is in /lib64).
Not likely apparently.
Is there a way to setup ld.so.preload to be architecturally aware? Or if not is there some way to turn off the error message?
This works:
put the 32-bit library under /path/lib, and the 64-bit one under /path/lib64;
they should have the same name;
put the following line in /etc/ld.so.preload:
/path/$LIB/libname.so
$LIB will get the value "lib" (for 32bit) or "lib64" (for 64bit) automatically.
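You can check the expansion without touching /etc/ld.so.preload by trying the same string via LD_PRELOAD first (a sketch; /path and libname.so are the placeholders from above). The single quotes matter - $LIB must reach the dynamic loader unexpanded by the shell:
LD_PRELOAD='/path/$LIB/libname.so' /bin/true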
There's no reason to try to use ld.so.preload like this. By default ld is smart enough to know that if you're building a 64-bit app it should only look up 64-bit libs, and the same for 32-bit.
Case in point, if you have
/lib64/libawesome.so
/lib/libawesome.so
And you try
gcc -lawesome -o funtime funtime.c
It'll choose whatever the default that gcc wants to build, ld will skip libraries of incorrect bit size for that build.
gcc -m64 -lawesome -o funtime funtime.c will pick the 64bit one
gcc -m32 -lawesome -o funtime funtime.c will pick the 32bit one.
This presumes that /etc/ld.so.conf lists /lib and /lib64 by default.
Sadly, I think the answer might be "Don't do that."
From glibc, elf/rtld.c:
There usually is no ld.so.preload file, it should only be used for emergencies and testing. So the open call etc should usually fail. Using access() on a non-existing file is faster than using open(). So we do this first. If it succeeds we do almost twice the work but this does not matter, since it is not for production use.
You can provide the 32- and 64-bit libraries using special expansion keys in the path name.
For instance, you can use /lib/$PLATFORM/mylib.so and create /lib/i386/mylib.so and /lib/x86_64/mylib.so. The dynamic loader will choose the correct one for your executable.

How do shared libraries work in a mixed 64bit/32bit system?

Good morning,
on a 64-bit RedHat box we have to compile and run a 32-bit application. I managed to compile the gcc version needed (4.0.3) and all required runtime libraries in 32-bit, and set LD_LIBRARY_PATH to point to the 32-bit versions. But now, during the remaining build process, a small Java program needs to be executed which is installed in /usr/bin as a 64-bit program, and which now finds the 32-bit version of libgcc_s.so first.
In general, if I set the LD_LIBRARY_PATH to the 32bit versions, I break the 64bit programs and vice versa.
How is this supposed to work at all? I am certain I am not the first person with this problem at hand. How is it solved usually?
Regards,
Stefan
Add both the 32-bit and 64-bit directories to the LD_LIBRARY_PATH.
If you do this, then the ld.so for 32-bit or 64-bit will use the correct libraries.
e.g. A 32-bit test app "test32" and 64-bit test app "test", with a locally-installed copy of a (newer version of) gcc and binutils in a user homedir, to avoid clobbering the system-wide install of gcc:
=> export LD_LIBRARY_PATH=/home/user1/pub/gcc+binutils/lib:/home/user1/pub/gcc+binutils/lib64
=> ldd ./test32
libstdc++.so.6 => /home/user1/pub/gcc+binutils/lib/libstdc++.so.6 (0x00111000)
libgcc_s.so.1 => /home/user1/pub/gcc+binutils/lib/libgcc_s.so.1 (0x00221000)
=> ldd ./test
libstdc++.so.6 => /home/user1/pub/gcc+binutils/lib64/libstdc++.so.6 (0x00007ffff7cfc000)
libgcc_s.so.1 => /home/user1/pub/gcc+binutils/lib64/libgcc_s.so.1 (0x00007ffff7ad2000)
(Less interesting library paths removed)
This shows that the loaders know to ignore the libraries of the wrong architecture, at least on this Scientific Linux 6.3 (RHEL-derived) system. I would expect other distros to work similarly, but haven't tested this.
This may have only started being the case more recently than your (unspecified) distro version, however.
On Solaris one can use LD_LIBRARY_PATH_32 and LD_LIBRARY_PATH_64, but that isn't supported on Linux.
In general, you should just never need to set LD_LIBRARY_PATH at all in the first place:
either install needed libraries into /usr/lib32 or /usr/lib64 as appropriate, or
build your 32-bit application with -Wl,-rpath=/path/to/32-bit/libs (see the sketch below)
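The second option might look like this (a sketch; app.c and the library path are placeholders):
gcc -m32 -Wl,-rpath=/path/to/32-bit/libs -o app app.c
readelf -d app | grep -i rpath    # verify the RPATH entry was recorded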
As a workaround, wrap the Java call in a small shell script which unsets LD_LIBRARY_PATH and then calls the executable. Alternatively, this might also work:
LD_LIBRARY_PATH= java...
Note the space between "=" and the name of the executable.
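The wrapper variant could be as small as this (a sketch; the java path is an assumption):
#!/bin/sh
# run the 64-bit JVM without the 32-bit LD_LIBRARY_PATH from the build environment
unset LD_LIBRARY_PATH
exec /usr/bin/java "$@"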
Just set LD_LIBRARY_PATH to both paths (use colons to delimit). The linker will ignore the libraries that it cannot read.
I have faced this exact same problem when remastering a 32bit tinycore64 system running a 64bit kernel.
After much searching, I discovered what lies behind these comments:
"That would be nice, but - at least in my environment - it did not
appear to work. The loader did complain; it did not simply skip the
libraries that do not match the bit-ness. Sadly!" - struppi
"This is very strange, could you describe how things failed? Also,
perhaps post the output of ldd?" - Adam Goode
And why this comment might appear to be true but is actually incorrect.
The linker will ignore the libraries that it cannot read.
This link sheds some light on it.
http://www.markusbe.com/2009/09/about-running-32-bit-programs-on-64-bit-ubuntu-and-shared-libraries/
And more to the point, you will find the ld.so manpage enlightening.
It turns out the path name can make a difference in what the runtime linker ld.so chooses as the library to load. On my 64-bit Linux system I have a range of odd directory names in addition to the standard ones, e.g. /lib/x86_64-linux-gnu. I thought I'd experiment by moving the libraries in that path to /lib64. When I did that, guess what happened? Suddenly my 64-bit app (brctl in this case) didn't work and complained with "Wrong ELF class". Hello... now we're onto something.
Now I'm not 100% certain but the key seems to be related to rpath token expansion.
I suspect the ${PLATFORM} expansion may have something to do with it. And the name x86_64 must be part of that.
In any case, I found that when I put my 64-bit libraries in library paths named x86_64-linux-gnu, as opposed to just lib64, they were preferred over the 32-bit ones and things worked.
In your case, you probably want to do something very similar for 32bit libraries on 64. Try i386-linux-gnu.
So in my case where I am installing 64bit shared libraries onto a 32bit userland, I created the following paths:
mkdir /lib/x86_64-linux-gnu/
mkdir /usr/lib/x86_64-linux-gnu/
ln -s /lib/x86_64-linux-gnu /lib64
ln -s /usr/lib/x86_64-linux-gnu /usr/lib64
Add your 64bit libraries to the 64bit paths and 32bit libraries to the 32bit /lib & /usr/lib paths only.
Then add the 64bit-specific paths to ld.so.conf and update your cache with ldconfig.
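A sketch of that step (run as root; the paths are the ones created above):
cat >> /etc/ld.so.conf <<'EOF'
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu
EOF
ldconfig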
Now your 32-bit & 64-bit applications will run seamlessly.

Can you compile 32-bit Apache DSOs (Oracle HTTP Server) on a 64-bit machine?

I've migrated an Oracle database and Oracle HTTP server install from a 32-bit machine to a 64-bit machine - both machines running Linux. Oracle Database is 64-bit, but the (Apache) HTTP server is 32-bit.
I use some non-Oracle DSOs (mod_ntlm for one) but whenever I run the standard "make install" type thing I end up with a 64-bit module.
Is there a standard way to compile 32-bit Apache modules on a 64-bit machine?
As an alternative to Andrew Medico's answer, use '-m32' for 32-bit compilations and '-m64' for 64-bit compilations on PPC or SPARC or Intel machines - since you don't actually mention which chip architecture you are using and that notation works on all of these.
I often use:
CC="gcc -m32" ./configure
to ensure a 32-bit compilation (or, more frequently, CC="gcc -m64" to ensure 64-bit compilation).
Question: "Is CC an environment variable used by make?"
Answer: Yes, though in this case, it is also recognized by configure, which is a shell script generated by autoconf. The notation I used - which is what I use at the command line - sets CC in the environment while the configure command is run. The other answer suggests using:
./configure CC="gcc -m32"
I assume that works and achieves much the same effect, though I've not tried it myself.
If you run ./configure --help | less, you will see information (often just standard information) about how to use the script. And at the end, it will list (some of the) relevant environment variables, of which CC is one.
The advantage of setting the C compiler to "gcc -m32" is that the 32-bit flag is set every time the compiler is used - there is very little room for it to go wrong. If you set a flags variable (CFLAGS, etc), there is a chance that some command won't use it, and then things can go awry.
Also, going back to the question, make certainly uses a variable (macro) called CC. And you can set that on the make command line:
make CC="gcc -m32"
That overrides any setting in the makefile. By contrast, using an environment variable, the setting in the makefile overrides the value in the environment, so setting CC as an environment variable is less helpful. Although make -e gives the environment precedence over the makefile, it is usually a dangerous option to use - it can have unexpected side-effects.
./configure CFLAGS="-march=i686"
should do it
Along with the -m32 flag in gcc, you may need to include the -melf_i386 flag for ld to properly link the 32-bit object files to the 32-bit libraries if you have both the 32-bit and 64-bit libraries installed. The standard ld on 64-bit Linux boxes defaults to the 64-bit libraries, and you get a compatibility error when the linking occurs.
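For instance, if a module's makefile invokes ld directly instead of linking through the gcc driver, the emulation can be forced explicitly (a sketch, reusing mod_ntlm from the question):
# compile 32-bit position-independent object code
gcc -m32 -fPIC -c mod_ntlm.c -o mod_ntlm.o
# link as a 32-bit shared object, forcing the i386 ELF emulation
ld -m elf_i386 -shared -o mod_ntlm.so mod_ntlm.o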
