How can I select a static library to be linked while ARM cross-compiling?

I have an ARM cross compiler on Ubuntu (arm-linux-gnueabi-gcc) whose default architecture is ARMv7. However, I want to compile an ARMv5 binary, which I do by giving the compiler the -march=armv5te option.
So far, so good. Since my ARM system uses BusyBox, I have to link my binary statically, so I also give gcc the -static option.
However, I have a problem with the libc.a that the linker links into my ARMv5 binary: it was compiled for the ARMv7 architecture. So even though I cross-compile my binary for ARMv5, I can't run it on my BusyBox-based ARMv5 box.
How can I solve this problem?
Where can I get the ARMv5 libc.a static library, and how can I link it?
Thank you in advance.

You have two choices:
1. Get the right compiler.
2. Write your own 'C' library.
Get the right compiler
You are always safest with a compiler that matches your system. This applies to x86 Linux and its various distributions too; you are lucky when binaries built with different compilers work. It is more difficult when you cross-compile, because the compiler is often not kept in sync with the target. Try running a program compiled on your 2014 Ubuntu system on a 1999 x86 Mandrake Linux box.
As well as instruction-set compatibility (which you have identified), there are ABI and OS dependencies. Specifically, the ARMv7 toolchain is most likely hard-float (it assumes an FPU and uses a floating-point register calling convention), whereas you need soft-float (emulated FPU). The specific glibc (or uClibc) also makes specific calls to, and has specific expectations of, the Linux kernel; for instance, the way threads work has changed over time.
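If you are unsure which float ABI a given cross compiler targets, a quick way to check is its predefined macros. A minimal sketch (compile it with the compiler in question): __ARM_PCS_VFP is the macro GCC defines on ARM when the hard-float calling convention is in effect.
#include <stdio.h>

int main(void)
{
#ifdef __ARM_PCS_VFP
    puts("hard-float ABI: FP arguments passed in VFP registers");
#else
    puts("soft-float ABI: FP arguments passed in integer registers");
#endif
    return 0;
}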
Write your own
You can always use -fno-builtin and -ffreestanding as well as -static. Then you cannot use any libc functions, but you can program them yourself.
There are external sources, like Mark Martinec's snprintf, and building blocks like write(), which is easy to implement:
#define _SYS_IOCTL_H 1
#include <linux/unistd.h>
#include <linux/ioctl.h>

static inline int write(int fd, void *buf, int len)
{
    int rval;
    asm volatile ("mov r0, %1\n\t"
                  "mov r1, %2\n\t"
                  "mov r2, %3\n\t"
                  "mov r7, %4\n\t"
                  "swi #0\n\t"
                  "mov %0, r0\n\t"
                  : "=r" (rval)
                  : "r" (fd),
                    "r" (buf),
                    "r" (len),
                    "Ir" (__NR_write)
                  : "r0", "r1", "r2", "r7");
    return rval;
}

static inline void exit(int status)
{
    asm volatile ("mov r0, %0\n\t"
                  "mov r7, %1\n\t"
                  "swi #0\n\t"
                  : : "r" (status),
                      "Ir" (__NR_exit)
                  : "r0", "r7");
}
You also have to provide your own start-up machinery, which is normally taken care of by the 'C' library:
/* Called from assembler startup. */
int main (int argc, char *argv[])
{
    write(1 /* stdout */, "Hello world\n", sizeof("Hello world\n"));
    return 0;
}

/* Wrapper for main return code. */
void __attribute__ ((unused)) estart (int argc, char *argv[])
{
    int rval = main(argc, argv);
    exit(rval);
}

/* Setup arguments for estart [like main()]. */
void __attribute__ ((naked)) _start (void)
{
    asm(" sub lr, lr, lr\n"  /* Clear the link register. */
        " ldr r0, [sp]\n"    /* Get argc... */
        " add r1, sp, #4\n"  /* ... and argv ... */
        " b estart\n"        /* Let's go! */
       );
}
If this is too daunting because you would need to implement a lot of functionality, you can instead take the source of various libraries and rebuild them with -fno-builtin, making sure they do not get linked against the Ubuntu libraries, which are incompatible.
Projects like crosstool-ng let you build a correct compiler (possibly with more advanced code generation) that suits the ARMv5 system exactly. This may seem like a pain, but the alternatives above aren't easy either.

Related

any way to stop unaligned access from c++ standard library on x86_64?

I am trying to check for any unaligned reads in my program. I enable the processor's unaligned-access exception via (using x86_64, g++, Linux kernel 3.19):
asm volatile("pushf \n"
"pop %%rax \n"
"or $0x40000, %%rax \n"
"push %%rax \n"
"popf \n" ::: "rax");
I do an optional forced unaligned read which triggers the exception, so I know it's working. After I disable that, I get an error in a piece of code which otherwise seems fine:
char fullpath[eMaxPath];
snprintf(fullpath, eMaxPath, "%s/%s", "blah", "blah2");
The stack trace shows a failure in __memcpy_sse2, which leads me to suspect that the standard library is using SSE to implement my memcpy but doesn't realize that I have now made unaligned reads unacceptable.
Is my thinking correct, and is there any way around this (i.e., can I make the standard library use alignment-safe sprintf/memcpy implementations instead)?
Thanks.
While I hate to discourage an admirable notion, you're playing with fire, my friend.
It's not merely SSE2 access but any unaligned access, even a simple int fetch.
Here's a test program:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <malloc.h>

void *intptr;

void
require_aligned(void)
{
    asm volatile("pushf \n"
                 "pop %%rax \n"
                 "or $0x00040000, %%eax \n"
                 "push %%rax \n"
                 "popf \n" ::: "rax");
}

void
relax_aligned(void)
{
    asm volatile("pushf \n"
                 "pop %%rax \n"
                 "andl $0xFFFBFFFF, %%eax \n"
                 "push %%rax \n"
                 "popf \n" ::: "rax");
}

void
msg(const char *str)
{
    int len;

    len = strlen(str);
    write(1, str, len);
}

void
grab(void)
{
    volatile int x = *(int *) intptr;
}

int
main(void)
{
    setlinebuf(stdout);

    // minimum alignment from malloc is [usually] 8
    intptr = malloc(256);
    printf("intptr=%p\n", intptr);

    // normal access to aligned pointer
    msg("normal\n");
    grab();

    // enable alignment check exception
    require_aligned();

    // access aligned pointer under check [will be okay]
    msg("aligned_norm\n");
    grab();

    // this grab will generate a bus error
    intptr += 1;
    msg("aligned_except\n");
    grab();

    return 0;
}
The output of this is:
intptr=0x1996010
normal
aligned_norm
aligned_except
Bus error (core dumped)
The program generated this simply because of an attempted 4-byte int fetch from address 0x1996011 [which is odd and not a multiple of 4].
So, once you turn on the AC [alignment check] flag, even simple things will break.
IMO, if you truly have some things that are not aligned optimally and are trying to find them, then using printf, instrumenting your code with debug asserts, or using gdb with some special watch commands or breakpoints with condition statements is a better/safer way to go.
UPDATE:
I am using my own custom allocator and am preparing my code to run on an architecture that doesn't support unaligned reads/writes, so I want to make sure my code will not break on that architecture.
Fair enough.
Side note: My curiosity has gotten the better of me, as the only [major] arches I can recall [at the moment] that have this issue are the Motorola mc68000 and older IBM mainframes (e.g. IBM System/370).
One practical reason for my curiosity is that for certain arches (e.g. ARM/android, MIPS) there are emulators available. You could rebuild the emulator from source, adding any extra checks, if needed. Otherwise, doing your debugging under the emulator might be an option.
I can trap unaligned reads/writes using either the asm or gdb, but both cause SIGBUS, which I can't continue from in gdb, and I'm getting too many false positives from the standard library (in the sense that their implementation would be aligned-access-only on the target).
I can tell you from experience that trying to resume from a signal handler after this doesn't work too well [if at all]. Using gdb is the best bet if you can eliminate the false positives by having AC off in the standard functions [see below].
Ideally I guess I would like to use something like perf to show me call stacks that have misaligned accesses, but so far no dice.
This is possible, but you'd have to verify that perf even reports them. To see, you could try perf against my original test program above. If it works, the "counter" should be zero before and one after.
The cleanest way may be to pepper your code with "assert" macros [that can be compiled in and out with a -DDEBUG switch].
However, since you've gone to the trouble of laying the groundwork, it may be worthwhile to see if the AC method can work.
Since you're trying to debug your memory allocator, you only need AC on in your functions. If one of your functions calls libc, disable AC, call the function, and then reenable AC.
A memory allocator is fairly low level, so it can't rely on too many standard functions. Most standard functions rely on being able to call malloc. So, you might also want to consider a vtable interface to the rest of the [standard] library.
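A rough sketch of that vtable idea, using the acon()/acoff() helpers defined below (names like libc_vtable and vt_memcpy are illustrative, not part of the code that follows):
// vtable.c -- route libc calls through one place and drop AC around them
#include <string.h>
#include <acbit.h>

struct libc_vtable {
    void *(*memcpy_fn)(void *dst, const void *src, size_t len);
    size_t (*strlen_fn)(const char *s);
};

static const struct libc_vtable vt = { memcpy, strlen };

void *
vt_memcpy(void *dst, const void *src, size_t len)
{
    void *ret;

    // libc's memcpy may legitimately use unaligned SSE loads/stores
    acoff();
    ret = vt.memcpy_fn(dst, src, len);
    acon();

    return ret;
}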
I've coded some slightly different AC bit set/clear functions. I put them into a .S file to eliminate inline asm hassles.
I've coded up a simple sample usage in three files.
Here are the AC set/clear functions:
// acbit/acops.S -- low level AC [alignment check] operations

#define AC_ON   $0x00040000
#define AC_OFF  $0xFFFFFFFFFFFBFFFF

    .text

// acpush -- turn on AC and return previous mask
    .globl  acpush
acpush:
    // get old mask
    pushfq
    pop     %rax

    mov     %rax,%rcx           // save to temp
    or      AC_ON,%ecx          // turn on AC bit

    // set new mask
    push    %rcx
    popfq

    ret

// acpop -- restore previous mask
    .globl  acpop
acpop:
    // get current mask
    pushfq
    pop     %rax

    and     AC_OFF,%rax         // clear current AC bit
    and     AC_ON,%edi          // isolate the AC bit in argument
    or      %edi,%eax           // lay it in

    // set new mask
    push    %rax
    popfq

    ret

// acon -- turn on AC
    .globl  acon
acon:
    jmp     acpush

// acoff -- turn off AC
    .globl  acoff
acoff:
    // get current mask
    pushfq
    pop     %rax

    and     AC_OFF,%rax         // clear current AC bit

    // set new mask
    push    %rax
    popfq

    ret
Here is a header file that has the function prototypes and some "helper" macros:
// acbit/acbit.h -- common control
#ifndef _acbit_acbit_h_
#define _acbit_acbit_h_

#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <malloc.h>

typedef unsigned long flags_t;

#define VARIABLE_USED(_sym) \
    do { \
        if (1) \
            break; \
        if (!! _sym) \
            break; \
    } while (0)

#ifdef ACDEBUG
#define ACPUSH \
    do { \
        flags_t acflags = acpush()

#define ACPOP \
        acpop(acflags); \
    } while (0)

#define ACEXEC(_expr) \
    do { \
        acoff(); \
        _expr; \
        acon(); \
    } while (0)

#else
#define ACPUSH /**/
#define ACPOP /**/
#define ACEXEC(_expr) _expr
#endif

void *intptr;

flags_t
acpush(void);

void
acpop(flags_t omsk);

void
acon(void);

void
acoff(void);

#endif
Here is a sample program that uses all of the above:
// acbit/acbit2 -- sample allocator
#include <acbit.h>
// mymalloc1 -- allocation function [raw calls]
void *
mymalloc1(size_t len)
{
flags_t omsk;
void *vp;
// function prolog
// NOTE: do this on all "outer" (i.e. API) functions
omsk = acpush();
// do lots of stuff ...
vp = NULL;
// encapsulate standard library calls like this to prevent false positives:
acoff();
printf("%p\n",vp);
acon();
// function epilog
acpop(omsk);
return vp;
}
// mymalloc2 -- allocation function [using helper macros]
void *
mymalloc2(size_t len)
{
void *vp;
// function prolog
ACPUSH;
// do lots of stuff ...
vp = NULL;
// encapsulate standard library calls like this to prevent false positives:
ACEXEC(printf("%p\n",vp));
// function epilog
ACPOP;
return vp;
}
int
main(void)
{
int x;
setlinebuf(stdout);
// minimum alignment from malloc is [usually] 8
intptr = mymalloc1(256);
intptr = mymalloc2(256);
x = *(int *) intptr;
return x;
}
UPDATE #2:
I like the idea of disabling the check before any library calls.
If the AC H/W works and you wrap the library calls, this should yield no false positives. The only exception would be if the compiler made a call to its internal helper library (e.g. doing a 64-bit divide on a 32-bit machine).
Be aware/wary of the ELF loader (e.g. /lib64/ld-linux-x86-64.so.2) doing dynamic symbol resolution on "lazy" bindings of symbols. Shouldn't be a big problem. There are ways to force the relocations to occur at program start, if necessary.
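For example (a sketch, not from the answer's code): linking with -Wl,-z,now or setting LD_BIND_NOW=1 binds everything at program start; alternatively, you can simply touch the handful of libc functions you use once before the first acpush():
// prebind -- force lazy PLT bindings to resolve before AC checking starts
#include <stdio.h>
#include <string.h>

void
prebind_libc(void)
{
    char buf[8];

    // harmless calls whose only purpose is to make the dynamic loader
    // resolve these symbols now, while AC is still off
    snprintf(buf, sizeof(buf), "%d", 0);
    memset(buf, 0, sizeof(buf));
}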
I have given up on perf for this as it seems to show me garbage even for a simple program like the one you wrote.
The perf code in the kernel is complex enough that it may be more trouble than it's worth. It has to communicate with the perf program with a pipe [IIRC]. Also, doing the AC thing is [probably] uncommon enough that the kernel's code path for this isn't well tested.
I'm using ocperf with misalign_mem_ref.loads and .stores, but either way the counters don't correlate at all. If I record and look at the call stacks, I get completely unrecognizable call stacks for these counters, so I suspect either the counter doesn't work on my hardware/perf or it doesn't actually count what I think it counts.
I honestly don't know if perf handles process reschedules to different cores properly [or not]--it should [IMO]. But, using sched_setaffinity to lock your program to a single core might help.
But, using the AC bit is more direct and definitive, IMO. I think that's the better bet.
I've talked about adding "assert" macros in the code.
I've coded some up below. These are what I'd use. They are independent of the AC code. But, they can also be used in conjunction with the AC bit code in a "belt and suspenders" approach.
These macros have one distinct advantage. When properly [and liberally] inserted, they can check for bad pointer values at the time they're calculated. That is, much closer to the true source of the problem.
With AC, you may calculate a bad value, but AC only kicks in [sometime] later, when the pointer is dereferenced [which may not happen in your API code at all].
I've done a complete memory allocator before [with overrun checks and "guard" pages, etc.]. The macro approach is what I used. And, if I had only one tool for this, it is the one I'd use. So, I recommend it above all else.
But, as I said, it can be used with the AC code as well.
Here's the header file for the macros:
// acbit/acptr.h -- alignment check macros
#ifndef _acbit_acptr_h_
#define _acbit_acptr_h_

#include <stdio.h>

typedef unsigned int u32;

// bit mask for given width
#define ACMSKOFWID(_wid) \
    ((1u << (_wid)) - 1)

#ifdef ACDEBUG2
#define ACPTR_MSK(_ptr,_msk) \
    acptrchk(_ptr,_msk,__FILE__,__LINE__)
#else
#define ACPTR_MSK(_ptr,_msk) /**/
#endif

#define ACPTR_WID(_ptr,_wid) \
    ACPTR_MSK(_ptr,(_wid) - 1)

#define ACPTR_TYPE(_ptr,_typ) \
    ACPTR_WID(_ptr,sizeof(_typ))

// acptrfault -- pointer alignment fault
void
acptrfault(const void *ptr,const char *file,int lno);

// acptrchk -- check pointer for given alignment
static inline void
acptrchk(const void *ptr,u32 msk,const char *file,int lno)
{
#ifdef ACDEBUG2
#if ACDEBUG2 >= 2
    printf("acptrchk: TRACE ptr=%p msk=%8.8X file='%s' lno=%d\n",
        ptr,msk,file,lno);
#endif
    if (((unsigned long) ptr) & msk)
        acptrfault(ptr,file,lno);
#endif
}

#endif
Here's the "fault" handler function:
// acbit/acptr -- alignment check macros
#include <acbit/acptr.h>
#include <acbit/acbit.h>

#include <stdlib.h>

// acptrfault -- pointer alignment fault
void
acptrfault(const void *ptr,const char *file,int lno)
{
    // NOTE: it's easy to set a breakpoint on this function
    printf("acptrfault: pointer fault -- ptr=%p file='%s' lno=%d\n",
        ptr,file,lno);

    exit(1);
}
And, here's a sample program that uses them:
// acbit/acbit3 -- sample allocator using check macros
#include <acbit.h>
#include <acptr.h>

static double static_array[20];

// mymalloc3 -- allocation function
void *
mymalloc3(size_t len)
{
    void *vp;

    // get something valid
    vp = static_array;

    // do lots of stuff ...
    printf("BEF vp=%p\n",vp);

    // check pointer
    // NOTE: these can be peppered after every [significant] calculation
    ACPTR_TYPE(vp,double);

    // do something bad ...
    vp += 1;
    printf("AFT vp=%p\n",vp);

    // check again -- this should fault
    ACPTR_TYPE(vp,double);

    return vp;
}

int
main(void)
{
    int x;

    setlinebuf(stdout);

    // minimum alignment from malloc is [usually] 8
    intptr = mymalloc3(256);

    x = *(int *) intptr;

    return x;
}
Here's the program output:
BEF vp=0x601080
acptrchk: TRACE ptr=0x601080 msk=00000007 file='acbit/acbit3.c' lno=22
AFT vp=0x601081
acptrchk: TRACE ptr=0x601081 msk=00000007 file='acbit/acbit3.c' lno=29
acptrfault: pointer fault -- ptr=0x601081 file='acbit/acbit3.c' lno=29
I left off the AC code in this example. On your real target system, the dereference of intptr in main would/should fault on alignment, but notice how much later that is in the execution timeline.
Like I commented on the question, that asm isn't safe, because it steps on the red zone. Instead, use:
asm volatile ("add $-128, %rsp\n\t"
"pushf\n\t"
"orl $0x40000, (%rsp)\n\t"
"popf\n\t"
"sub $-128, %rsp\n\t"
);
(-128 fits in a sign-extended 8-bit immediate, but 128 doesn't, hence using add $-128 to subtract 128.)
Or in this case, there are dedicated instructions for setting and clearing that bit, like there are for the carry and direction flags (note that stac and clac are privileged, so they are only usable from kernel code):
asm("stac"); // Set AC flag
asm("clac"); // Clear AC flag
It's a good idea to have some idea when your code uses unaligned memory. It's not necessarily a good idea to change your code to avoid it in every case. Sometimes better locality from packing data closer together is more valuable.
Given that you shouldn't necessarily aim to eliminate all unaligned accesses anyway, I don't think this is the easiest way to find the ones you do have.
Modern x86 hardware has fast hardware support for unaligned loads/stores. When they don't span a cache-line boundary, or lead to store-forwarding stalls, there's literally no penalty.
What you might try is looking at performance counters for some of these events:
misalign_mem_ref.loads [Speculative cache line split load uops dispatched to L1 cache]
misalign_mem_ref.stores [Speculative cache line split STA uops dispatched to L1 cache]
ld_blocks.store_forward [This event counts loads that followed a store to the same address, where the data could not be forwarded inside the pipeline from the store to the load.
The most common reason why store forwarding would be blocked is when a load's address range overlaps with a preceding smaller uncompleted store.
See the table of not-supported store forwards in the Intel 64 and IA-32 Architectures Optimization Reference Manual.
The penalty for blocked store forwarding is that the load must wait for the store to complete before it can be issued.]
(from ocperf.py list output on my Sandybridge CPU).
There are probably other ways to detect unaligned memory access. Maybe valgrind? I searched on valgrind detect unaligned and found this mailing list discussion from 13 years ago. Probably still not implemented.
The hand-optimized library functions do use unaligned accesses because it's the fastest way for them to get their job done. e.g. copying bytes 6 to 13 of a string to somewhere else can and should be done with just a single 8-byte load/store.
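For illustration (a sketch, not from the library source): a fixed-size memcpy like the one below typically compiles to a single unaligned 8-byte load and store on x86, with the hardware absorbing the misalignment.
#include <string.h>
#include <stdint.h>

// copy bytes src[6..13] to dst with one 8-byte load/store pair
void
copy8_from_offset6(char *dst, const char *src)
{
    uint64_t tmp;

    memcpy(&tmp, src + 6, sizeof(tmp));   // 8-byte load from an odd offset
    memcpy(dst, &tmp, sizeof(tmp));       // 8-byte store
}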
So yes, you would need special slow&safe versions of library functions.
If your code would have to execute extra instructions to avoid using unaligned loads, it's often not worth it. Esp. if the input is usually aligned, having a loop that does the first up-to-alignment-boundary elements before starting the main loop may just slow things down. In the aligned case, everything works optimally, with no overhead of checking alignment. In the unaligned case, things might work a few percent slower, but as long as the unaligned cases are rare, it's not worth avoiding them.
Esp. if it's not SSE code, since non-AVX legacy SSE can only fold loads into memory operands for ALU instructions when alignment is guaranteed.
The benefit of having good-enough hardware support for unaligned memory ops is that software can be faster in the aligned case. It can leave alignment-handling to hardware, instead of running extra instructions to handle pointers that are probably aligned. (Linus Torvalds had some interesting posts about this on the http://realworldtech.com/ forums, but they're not searchable so I can't find them.)
You're not going to like it, but there is only one answer: don't link against the standard libraries. By changing that setting you have changed the ABI and the standard library doesn't like it. memcpy and friends are hand-written assembly so it's not a matter of compiler options to convince the compiler to do something else.
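If you do take that route, the replacements can be as simple as byte-wise loops; a sketch (safe_memcpy is an illustrative name, not an existing function):
#include <stddef.h>

// never issues a load or store wider than one byte, so it cannot trip AC
void *
safe_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    while (n--)
        *d++ = *s++;

    return dst;
}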

GCC Segmentation fault with -O1 and inline assembler

I have detected a strange segmentation fault in my code and I would like to hear your opinion on whether it could be a GCC bug or is just my fault!
The function looks like that:
void testMMX( ... ) {
    unsigned long a = ...;
    unsigned char const* b = ...;
    unsigned long c = ...;

    __asm__ volatile (
        "pusha;"
    );
    __asm__ volatile ( "mov %0, %%eax;" : : "m"( a ) : "%eax" ); // with "r"( a ) it just works fine!
    __asm__ volatile ( "add %0, %%eax;" : : "m"( b ) : "%eax" );
    __asm__ volatile ( "mov %0, %%esi;" : : "m"( c ) : "%eax", "%esi" );
    __asm__ volatile (
        "sub %eax, %esi;"
        "dec %esi;"
        "movd (%esi), %mm0;"
        "popa;"
    );
}
If I compile this with -O0 it just works fine, but it segfaults with -O1 and -O2. It took me a long time to figure out that the segfault was caused by frame pointer omission. The pusha instruction grows the stack by 32 bytes (eight 4-byte registers on x86_32), so ESP changes accordingly, but gcc does not account for this. If I add the ESP fix manually
__asm__("add $32, %esp")
or use the "-fno-omit-frame-pointer" flag in gcc I can compile and run it with -O1 and -O2 without any errors!
So my question now is: why does gcc not adjust ESP for push/pop operations in inline assembler when frame pointer omission is enabled? Is this a gcc bug? Is gcc even capable of detecting this? Am I missing something?
It would be very interesting to solve this.
Thanks in advance!
No - gcc is not capable of detecting this. It doesn't perform any analysis of the instructions that appear in the asm block. It is your responsibility to inform the compiler of any side effects. Can you explain what test you are performing?
Also, you should consider using a single asm block for this code; volatile may prevent reordering of the asm blocks, but you cannot assume this yields consecutive instructions.
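As a sketch of what that might look like (not the poster's exact test), the address arithmetic can be left to the compiler, and the single remaining asm block told exactly what it reads and clobbers, so no pusha/popa or manual ESP fix is needed:
void testMMX(unsigned long a, unsigned char const *b, unsigned long c)
{
    // same address the original computed: c - a - b - 1
    const void *p = (const void *)(c - a - (unsigned long)b - 1);

    __asm__ volatile (
        "movd (%0), %%mm0\n\t"   // the 4-byte MMX load under test
        "emms"                   // leave the FPU/MMX state clean
        :
        : "r" (p)
        : "mm0", "memory");
}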

what's the difference between gcc __sync_bool_compare_and_swap and cmpxchg?

To use CAS, GCC provides some useful functions such as
__sync_bool_compare_and_swap
but we can also use asm code like cmpxchg:
bool ret;
__asm__ __volatile__(
    "lock cmpxchg16b %1;\n"
    "sete %0;\n"
    :"=m"(ret),"+m" (*(volatile pointer_t *) (addr))
    :"a" (old_value.ptr), "d" (old_value.tag), "b" (new_value.ptr), "c" (new_value.tag));
return ret;
I have grepped the source code of GCC 4.6.3 and found that __sync_bool_compare_and_swap is implemented using
typedef int (__kernel_cmpxchg_t) (int oldval, int newval, int *ptr);
#define __kernel_cmpxchg (*(__kernel_cmpxchg_t *) 0xffff0fc0)
It seems that 0xffff0fc0 is the address of some kernel helper function.
But in GCC 4.1.2 there is no such code as __kernel_cmpxchg_t, and I can't find the implementation of __sync_bool_compare_and_swap there.
So what's the difference between __sync_bool_compare_and_swap and cmpxchg?
Is __sync_bool_compare_and_swap implemented with cmpxchg?
And with the kernel helper function __kernel_cmpxchg, is it implemented with cmpxchg?
Thanks!
I think __kernel_cmpxchg is a fallback which Linux makes available on some architectures that don't have native hardware support for CAS, e.g. ARMv5 or something like that.
Usually, GCC inline-expands the __sync* builtins. Unless you're really interested in GCC internals, an easier way to find out what it does is to write a simple C example and look at the asm the compiler generates.
Consider
#include <stdbool.h>

bool my_cmpchg(int *ptr, int oldval, int newval)
{
    return __sync_bool_compare_and_swap(ptr, oldval, newval);
}
Compiling this on an x86_64 Linux machine with GCC 4.4 generates the following asm:
my_cmpchg:
.LFB0:
    .cfi_startproc
    movl    %esi, %eax
    lock cmpxchgl   %edx, (%rdi)
    sete    %al
    ret
    .cfi_endproc

fesetround with MSVC x64

I'm porting some code to Windows (sigh) and need to use fesetround. MSVC doesn't support C99, so for x86 I copied an implementation from MinGW and hacked it about:
//__asm__ volatile ("fnstcw %0;": "=m" (_cw));
__asm { fnstcw _cw }
_cw &= ~(FE_TONEAREST | FE_DOWNWARD | FE_UPWARD | FE_TOWARDZERO);
_cw |= mode;
//__asm__ volatile ("fldcw %0;" : : "m" (_cw));
__asm { fldcw _cw }
if (has_sse) {
unsigned int _mxcsr;
//__asm__ volatile ("stmxcsr %0" : "=m" (_mxcsr));
__asm { stmxcsr _mxcsr }
_mxcsr &= ~ 0x6000;
_mxcsr |= (mode << __MXCSR_ROUND_FLAG_SHIFT);
//__asm__ volatile ("ldmxcsr %0" : : "m" (_mxcsr));
__asm { ldmxcsr _mxcsr }
}
The commented-out lines are the originals for gcc; the uncommented ones are for MSVC. This appears to work.
However, the x64 cl.exe doesn't support inline asm, so I'm stuck. Is there some code out there I can "borrow" for this? (I've spent hours with Google.) Or will I have to go on a two-week detour to learn some assembly and figure out how to use MASM? Any advice is appreciated. Thank you.
The VC++ runtime library does provide equivalents, e.g. _control87, _controlfp, and __control87_2, so you should be able to provide an implementation without resorting to assembler.
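A minimal sketch of what that could look like (untested; it assumes your port already defines the FE_* rounding macros, as the MinGW-derived code above does):
#include <float.h>   // _controlfp_s, _MCW_RC, _RC_NEAR, _RC_DOWN, _RC_UP, _RC_CHOP

static int my_fesetround(int mode)
{
    unsigned int rc, cur;

    switch (mode) {
    case FE_TONEAREST:  rc = _RC_NEAR; break;
    case FE_DOWNWARD:   rc = _RC_DOWN; break;
    case FE_UPWARD:     rc = _RC_UP;   break;
    case FE_TOWARDZERO: rc = _RC_CHOP; break;
    default:            return -1;
    }

    // on x64, _controlfp_s updates MXCSR, which is what MSVC-generated code uses
    return _controlfp_s(&cur, rc, _MCW_RC) == 0 ? 0 : -1;
}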

Linux assembler error "impossible constraint in ‘asm’"

I'm starting with assembler under Linux. I have saved the following code as testasm.c
and compiled it with: gcc testasm.c -otestasm
The compiler replies: "impossible constraint in ‘asm’".
#include <stdio.h>

int main(void)
{
    int foo=10,bar=15;
    __asm__ __volatile__ ("addl %%ebx,%%eax"
        : "=eax"(foo)
        : "eax"(foo), "ebx"(bar)
        : "eax"
    );
    printf("foo = %d", foo);
    return 0;
}
How can I resolve this problem?
(I've copied the example from here.)
Debian Lenny, kernel 2.6.26-2-amd64
gcc version 4.3.2 (Debian 4.3.2-1.1)
Resolution:
See the accepted answer - it seems the 'modified' clause is not supported any more.
__asm__ __volatile__ ("addl %%ebx,%%eax" : "=a"(foo) : "a"(foo), "b"(bar));
seems to work. I believe that the syntax for register constraints changed at some point, but it's not terribly well documented. I find it easier to write raw assembly and avoid the hassle.
The constraints are single letters (possibly with extra decorations), and you can specify several alternatives (e.g., an immediate operand or register is "ir"). So the constraint "eax" means constraints "e" (signed 32-bit integer constant), "a" (register eax), or "x" (any SSE register). That is a bit different from what the OP meant, and output to an "e" clearly doesn't make any sense. Also, if some operand (in this case an input and an output) must be the same as another, you refer to it by a number constraint. There is no need to say eax will be clobbered; it is an output. You can refer to the arguments in the inline code as %0, %1, ..., so there is no need to use explicit register names. The correct version of the code, as intended by the OP, would be:
#include <stdio.h>

int main(void)
{
    int foo=10, bar=15;
    __asm__ __volatile__ (
        "addl %2, %0"
        : "=a" (foo)
        : "0" (foo), "b" (bar)
    );
    printf("foo = %d", foo);
    return 0;
}
A better solution would be to allow %2 to be anything, and %0 a register (as x86 allows, but you'd have to check your machine manual):
#include <stdio.h>

int main(void)
{
    int foo=10, bar=15;
    __asm__ __volatile__ (
        "addl %2, %0"
        : "=r" (foo)
        : "0" (foo), "g" (bar)
    );
    printf("foo = %d", foo);
    return 0;
}
If one wants to use multiple lines, this will also work:
__asm__ __volatile__ (
    "addl %%ebx,%%eax; \
     addl %%eax, %%eax;"
    : "=a"(foo)
    : "a"(foo), "b"(bar)
);
The '\' is needed for the compiler to accept the multi-line string (the instruction sequence).
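Another common style, for what it's worth, is to end each instruction with "\n\t" and let the compiler concatenate the adjacent string literals, which avoids the backslash continuation entirely:
__asm__ __volatile__ (
    "addl %%ebx, %%eax\n\t"
    "addl %%eax, %%eax"
    : "=a"(foo)
    : "a"(foo), "b"(bar)
);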
