Issue with _ILP32 on a Unix machine with gcc 2.96 (RHEL)

I am using Red Hat Linux release 9 (kernel 2.4.20-8) on an i686 with gcc version 2.96. I have code that looks something like this:
include "stdio.h"
.....
ifndef _ILP32
return fopen64 (fname, dhtype);
else
return fopen (fname, dhtype);
endif
but I am getting an error saying:
`fopen64' undeclared (first use this function).
As far as I know my operating system is 32-bit, but _ILP32 is not being detected, so the preprocessor keeps the line return fopen64(fname, dhtype);, which it should not.
How can I make my compiler detect _ILP32?

Try adding
#define _ILP32
by hand. Adding -D_ILP32 to the compiler command line has the same effect.
Another possibility is to change every
#ifndef _ILP32
to
#ifdef __LP64__
since GCC defines __LP64__ (and _LP64) on 64-bit LP64 targets.
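Putting the two together, the check from the question could be inverted along these lines (a sketch: the wrapper function is hypothetical; fname and dhtype, the fopen mode string, come from the question; _LARGEFILE64_SOURCE is defined because glibc only declares fopen64 when large-file support is requested):
#define _LARGEFILE64_SOURCE   /* make glibc declare fopen64 */
#include <stdio.h>

/* Hypothetical wrapper around the snippet from the question. */
FILE *open_data_file(const char *fname, const char *dhtype)
{
#ifdef __LP64__                      /* defined by GCC on 64-bit LP64 targets */
    return fopen64(fname, dhtype);
#else
    return fopen(fname, dhtype);     /* 32-bit build */
#endif
}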

Related

Using LD_PRELOAD on Fedora 25 causes Segmentation Fault

I found some strange behavior while trying to use a library I wrote a long time ago. The main problem is that when a program is executed on Fedora 25 with my library injected via LD_PRELOAD, it raises a segmentation fault. I've reduced my old library to a small sample that makes the problem easy to understand.
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

extern void *__libc_malloc(size_t size);

void *malloc(size_t size)
{
    void *ptr = __libc_malloc(size);
    fprintf(stdout, "Malloc(%d)->(%p)\n", (int) size, ptr);
    return ptr;
}
This code was compiled by using these parameters:
gcc -c -fPIC -O3 -Wall -o libtest.o libtest.c
gcc -shared -o libtest.so libtest.o
The program was executed as follows:
LD_PRELOAD=./libtest.so ./progtest
I found that the fprintf line was causing the segfault. When I changed the output stream from stdout to stderr, the problem was gone.
I then tested the same code on another machine running CentOS 7, and there it worked fine with both stdout and stderr.
Given these results, I wonder what I am missing and why this is happening.
Toolchain versions (GNU ld and GCC) installed on Fedora 25:
GNU ld version 2.26.1-1.fc25
gcc (GCC) 6.3.1 20161221 (Red Hat 6.3.1-1)
Toolchain versions (GNU ld and GCC) installed on CentOS 7:
GNU ld version 2.25.1-22.base.el7
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
fprintf itself may need to call malloc under some circumstances. One possible reason for the difference is that stderr is unbuffered while stdout is line-buffered: fprintf(stdout) either already has a buffer or has to allocate one, and in the latter case it ends up calling your malloc, which calls back into fprintf, which is not re-entrant on the same FILE*.
You can prevent re-entrancy with a flag, such as (C11):
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>   /* thread_local macro; with an older glibc, the _Thread_local keyword can be used directly */

extern void *__libc_malloc(size_t size);

/* Per-thread flag so an allocation triggered by fprintf itself is not logged. */
thread_local bool inside_malloc = false;

void *malloc(size_t size) {
    void *ptr = __libc_malloc(size);
    if (!inside_malloc) {
        inside_malloc = true;
        fprintf(stdout, "Malloc(%zu)->(%p)\n", size, ptr);   /* %zu: size_t is unsigned */
        inside_malloc = false;
    }
    return ptr;
}
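With the guard in place, the library can be rebuilt and preloaded the same way as in the question (GCC 6 defaults to a C11 dialect, so no extra -std flag should be needed; if <threads.h> is missing from your glibc, the _Thread_local keyword noted in the comment avoids the include):
gcc -c -fPIC -O3 -Wall -o libtest.o libtest.c
gcc -shared -o libtest.so libtest.o
LD_PRELOAD=./libtest.so ./progtest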

Symbols defined with defsym give incorrect address on Ubuntu 16.10

If I define a symbol's address when linking a program on Ubuntu 16.10, it doesn't seem to produce the correct address when running the program. For example, take the following program
#include <stdio.h>

extern int mem_;

int main()
{
    printf("%p\n", (void *) &mem_);   /* cast to void * for %p */
}
and compiling with
gcc example.c -o example -Xlinker --defsym=mem_=0x80
then running the program prints 0x80 on older Ubuntu systems, but a random number on Ubuntu 16.10. However, nm shows that the symbol does end up in the executable at 0x80.
Any ideas what's causing this? I'm suspecting a security feature.
From the GCC section of the Ubuntu 16.10 (Yakkety Yak) release notes (https://wiki.ubuntu.com/YakketyYak/ReleaseNotes):
"We have modified GCC to by-default compile programs with position independent executable support, on the amd64 and ppc64el architectures, to improve the security benefits provided by Address Space Layout Randomization."
To disable this option, simply add -no-pie to GCC's flags.
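Applied to the command from the question, that becomes something like (assuming the stock GCC 6 toolchain on 16.10):
gcc example.c -o example -no-pie -Xlinker --defsym=mem_=0x80
with which the program should print 0x80 again, as it does on the older systems.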

Can an OMF file generated by Windows tools be linked into a GCC assembly in linux?

I am porting a Windows VC++ application to Linux that links against an assembler module currently produced by MASM. After changing its Windows ABI assumptions to the Linux ABI, I would like to keep assembling the module with MASM to OMF on Windows and feed that object file directly into the GCC build on Linux. This would greatly simplify maintenance over time and guarantee an identical assembly under both operating systems; the alternative is porting the assembler code to YASM/NASM, with all the complications that brings. The assembler code consists entirely of leaf routines (no calls), with no macros, no Unicode data and scant integer/real data; there are both 32-bit and 64-bit versions. Barring endian issues, does it really matter whose tool chain generates the OMF representation of this module?
I tested it with a simple test case and it worked fine when linked with the GNU linker under Linux, so you probably don't need to do anything special.
Here's the assembly file I tested it with:
_TEXT SEGMENT USE32
PUBLIC foo
foo:
    mov eax, 1234
    ret
_TEXT ENDS
END
And here's the C program:
#include <stdio.h>
extern int foo();
int
main() {
printf("%d\n", foo());
return 0;
}
I assembled the first file on Windows using MASM, copied the resulting .OBJ file to a Linux machine (Debian x86_64) and compiled/linked it with the following command:
gcc -m32 main.c foo.obj
Running the generated executable (a.out) produced the expected output: 1234. I also tested the equivalent 64-bit case and it worked as well.
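For reference, the Windows-side step is a plain assemble-only run, something like this (a sketch assuming MASM's ml.exe, which emits a COFF object by default, consistent with the PECOFF discussion below):
ml /c foo.asm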
Unless you depend on PECOFF-specific section (segment) ordering or other PECOFF-specific features, it looks like you shouldn't have any problems, at least as far as the object file format goes. Note that it's possible the version of the GNU linker installed on your Linux machine wasn't built with support for PECOFF; in that case you may need to build your own version from source.
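One way to check that up front (a sketch, assuming objdump comes from the same binutils build as your ld and therefore shares its BFD backends) is to list the object formats the tools were built with:
objdump -i | grep -iE 'pe|coff'
If nothing PE/COFF-related shows up, that points to the missing-support case described above.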

Why "detected recursion whilst expanding macro 'inline'"?

I am building a 32-bit (i386) Linux kernel on an x86_64 Linux host. Both the host and target kernel versions are 2.6.9. I am running the following from the Linux source tree:
make ARCH=i386 CFLAGS='-m32 -Iinclude/asm/mach-default' AFLAGS='--32' menuconfig
make ARCH=i386 CFLAGS='-m32 -Iinclude/asm/mach-default' AFLAGS='-m32 -Iinclude/asm/mach-default'
I hit the following error during the second make:
AS arch/i386/kernel/entry.o
In file included from include/linux/bitops.h:4,
from include/asm/cpufeature.h:10,
from include/asm/processor.h:16,
from include/asm/thread_info.h:16,
from arch/i386/kernel/entry.S:45:
include/asm/bitops.h:42: detected recursion whilst expanding macro "inline"
Line 42 of include/asm/bitops.h reads:
static inline void set_bit(int nr, volatile unsigned long * addr)
Since gcc supports inline functions, I don't understand why this "inline" was treated as a macro, or how there can be recursion while expanding "inline".
My gcc version is:
gcc --version
gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-11)
Any insights are appreciated.

"Compiler threading support is not turned on."

Normally I can google my way around and find solutions, but not this time.
I'm using 64-bit Ubuntu 11.04 to compile a 32-bit Windows application, and i586-mingw32msvc-gcc to compile my C++ files.
test.cpp:
#include <boost/asio.hpp>
makefile:
i586-mingw32msvc-gcc -c -m32 -mthreads -o test.o test.cpp
Error:
boost/asio/detail/socket_types.hpp pulls in
# include <sys/ioctl.h>
which doesn't exist when targeting Windows with MinGW.
Added to makefile: -DBOOST_WINDOWS
Error:
# warning Please define _WIN32_WINNT or _WIN32_WINDOWS appropriately
Ok, added to makefile: -D_WIN32_WINNT=0x0501
Error:
# error "Compiler threading support is not turned on. Please set the correct command line options for threading: -pthread (Linux), -pthreads (Solaris) or -mthreads (Mingw32)"
Yet I did specify -mthreads.
Adding -DBOOST_HAS_THREADS might be sufficient (see the # elif defined(__GNUC__) branch in the offending header). But it's likely that your Boost installation has been configured for your build environment rather than your target. Try building Boost yourself with your cross-compiling toolchain.
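Pulling the flags mentioned so far into one line gives something like this (a sketch; whether -DBOOST_HAS_THREADS is really needed depends on how your Boost was configured):
i586-mingw32msvc-gcc -c -m32 -mthreads -DBOOST_WINDOWS -D_WIN32_WINNT=0x0501 -DBOOST_HAS_THREADS -o test.o test.cpp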
It turned out that I had a set of #undef and #define lines forcing the GLIBC version to something that let me compile natively (not cross-compile) for Linux RHEL5, which otherwise gave me all kinds of other errors. It turns out that when cross-compiling for Windows with MinGW, force-feeding the GLIBC version sends Boost down a strange path that leaves various things undefined, including the behavior and availability of threading. I wrapped those lines in #ifndef _WIN32, which made the problem go away.
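Structurally, the guard described above looks roughly like this (the actual #undef/#define lines pinning the GLIBC version are not shown in the post, so the body is only a placeholder comment):
#ifndef _WIN32
/* #undef / #define lines that pin the GLIBC version for the native
   RHEL5 build go here; they are skipped when cross-compiling with
   MinGW, where _WIN32 is predefined. */
#endif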
Maybe that -mthreads argument needs to come last.
