what is the right way to access the __builtin_bswap functions? - linux

I have an application that uses a database with data stored in big-endian order. To access this data portably across hardware platforms, I use 4 macros defined in a config.h module:
word(p) - gets a big-endian 16 bit value at pointer p as a native 16-bit value.
putword(p, w) - stores a native 16-bit variable (w) to pointer p as 16-bit big-endian.
dword(p) and putdword(p, d) do the same for 32-bit values
This all works fine, but the macros on a little-endian machine use the brute-force 'shift and mask' approach.
Anyway, it looks like there are __builtin_bswap16 and __builtin_bswap32 functions on linux that may do this more efficiently (as inline assembler code?). So what's the right way to code my word/putword macros so that they use these builtin functions on an x86_64 linux machine? Would coding my macros as htons/htonl function calls do the same thing as efficiently, and is it necessary to enable compiler optimization to get any of these solutions to work? I'd rather not optimize if it renders gdb useless.

Hmmm. I wrote a trivial test program using no special include files and simply calling the __builtin_bswap... functions directly (see the 'fast...' macros below). It all just works. When I disassemble the code in gdb, I see that the fast... macros do in 4-5 assembler instructions what takes up to 27 instructions for the worst-case 'dword' macro. Pretty neat improvement for almost no effort.
#include <stdio.h>

typedef unsigned char uchar;
typedef unsigned short ushort;
typedef unsigned int uint;

/* Portable "shift and mask" versions. */
#define word(a) (ushort) ( (*((uchar *)(a)) << 8) | \
                           (*((uchar *)(a) + 1)) )
#define putword(a,w) *((char *)(a)) = (char) (((ushort)((w) >> 8)) & 0x00ff), \
                     *((char *)(a)+1) = (char) (((ushort)((w) >> 0)) & 0x00ff)
#define dword(a) (uint) ( ((uint)(word(a)) << 16) | \
                          ((uint)(word(((uchar *)(a) + 2)))) )
#define putdword(a,d) *((char *)(a)) = (char) (((uint)((d) >> 24)) & 0x00ff), \
                      *((char *)(a)+1) = (char) (((uint)((d) >> 16)) & 0x00ff), \
                      *((char *)(a)+2) = (char) (((uint)((d) >> 8)) & 0x00ff), \
                      *((char *)(a)+3) = (char) (((uint)((d) >> 0)) & 0x00ff)

/* Builtin-based versions. Note these dereference the pointer as a
   ushort/uint, which assumes suitable alignment and tolerates the
   type-punning cast; fine for this test on x86_64. */
#define fastword(a) (ushort) __builtin_bswap16(*((ushort *)(a)))
#define fastputword(a, w) (*((ushort *)(a)) = __builtin_bswap16((ushort)(w)))
#define fastdword(a) (uint) __builtin_bswap32(*((uint *)(a)))
#define fastputdword(a, d) (*((uint *)(a)) = __builtin_bswap32((uint)(d)))

int main()
{
    unsigned short s1, s2, s3;
    unsigned int i1, i2, i3;

    s1 = 0x1234;
    putword(&s2, s1);
    s3 = word(&s2);
    i1 = 0x12345678;
    putdword(&i2, i1);
    i3 = dword(&i2);
    printf("s1=%x, s2=%x, s3=%x, i1=%x, i2=%x, i3=%x\n", s1, s2, s3, i1, i2, i3);

    s1 = 0x1234;
    fastputword(&s2, s1);
    s3 = fastword(&s2);
    i1 = 0x12345678;
    fastputdword(&i2, i1);
    i3 = fastdword(&i2);
    printf("s1=%x, s2=%x, s3=%x, i1=%x, i2=%x, i3=%x\n", s1, s2, s3, i1, i2, i3);
    return 0;
}

I would just use htons, htonl and friends. They're a lot more portable, and it's very likely that the authors of any given libc will have implemented them as inline functions or macros that invoke __builtin intrinsics or inline asm or whatever, resulting in what should be a nearly-optimal implementation for that specific machine. See what is generated in godbolt's setup, which I think is some flavor of Linux/glibc.
You do need to compile with optimizations for them to be inlined, otherwise it generates an ordinary function call. But even -Og gets them inlined and should not mess up your debugging as much. Anyway, if you're compiling without optimizations altogether, your entire program will be so inefficient that the extra couple instructions to call htons must surely be the least of your worries.
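For illustration, here is roughly what such accessors could look like built on htons/htonl. This is a sketch, not the poster's code: the get_word/put_word names are made up, and the memcpy-based access is my own choice (it keeps unaligned pointers safe, and gcc/clang fold the memcpy away at -O1 and above):

#include <arpa/inet.h>  /* htons, htonl, ntohs, ntohl */
#include <string.h>     /* memcpy */

static inline unsigned short get_word(const void *p)
{
    unsigned short v;
    memcpy(&v, p, sizeof v);   /* read 2 bytes; safe for any alignment */
    return ntohs(v);           /* big-endian -> native */
}

static inline void put_word(void *p, unsigned short w)
{
    unsigned short v = htons(w);   /* native -> big-endian */
    memcpy(p, &v, sizeof v);
}

static inline unsigned int get_dword(const void *p)
{
    unsigned int v;
    memcpy(&v, p, sizeof v);
    return ntohl(v);
}

static inline void put_dword(void *p, unsigned int d)
{
    unsigned int v = htonl(d);
    memcpy(p, &v, sizeof v);
}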

Modifying Individual Bytes in Rust

In C, if I want to modify the most significant byte of an integer, I can do it in two different ways:
#include <stdint.h>
#include <stdio.h>

int main() {
    uint32_t i1 = 0xDDDDDDDD;
    printf("i1 original: 0x%X\n", i1);
    uint32_t i1msb = 0xAA000000;
    uint32_t i1_mask = 0x00FFFFFF;
    uint32_t i1msb_mask = 0xFF000000;
    i1 = (i1 & i1_mask) | (i1msb & i1msb_mask);
    printf("i1 modified: 0x%X\n", i1);

    uint32_t i2 = 0xDDDDDDDD;
    printf("i2 original: 0x%X\n", i2);
    uint32_t *i2ptr = &i2;
    char *c2ptr = (char *) i2ptr;
    c2ptr += 3;
    *c2ptr = 0xAA;
    printf("i2 modified: 0x%X\n", i2);
}
i1 original: 0xDDDDDDDD
i1 modified: 0xAADDDDDD
i2 original: 0xDDDDDDDD
i2 modified: 0xAADDDDDD
The masking approach works in both C and Rust, but I have not found any way to do the direct byte-manipulation approach in (safe) Rust. Although it has some endianness issues that masking does not, I think these can all be resolved at compile time to provide a safe, cross-platform interface. So why is direct byte manipulation not allowed in safe Rust? Or if it is, please correct me, as I have been unable to find a function for it. I am thinking of something like
fn change_byte(i: &u32, byte_num: usize, new_val: u8) -> Result<(), ErrorType> { ... }
which checks that byte_num is within range before performing the operation, and in which byte_num always goes from least significant byte to most significant.
Rust doesn't have a function to modify bytes directly. It does, however, have methods to convert integers to byte arrays, and byte arrays to integers. For example, one might write
fn set_low_byte(number: u32, byte: u8) -> u32 {
    let mut arr = number.to_be_bytes();
    arr[3] = byte;
    u32::from_be_bytes(arr)
}
There are also methods for little- and native-endian byte order. The bitmask approach is also perfectly doable, and the pointer approach will work as well if you don't mind unsafe.
playground link

why is typedef throwing the error "(S) Initializer must be a valid constant expression"

In the f1.h header I use a typedef for a structure. A sample code snippet is shown below:
typedef struct {
    int a;
    union u
    {
        int x;
        char y;
    } xyz;
} mystruct;
In the f2.h header I use the structure mystruct to get an offset. Code snippet shown below:
static mystruct ktt;
//#define OFFSET_T(b, c) ((int*)((&((mystruct*)0)->b)) - (int*)((&((mystruct*)0)->c)))
#define OFFSET_T(b, c) ((char*) &ktt.b - (char *) &ktt.c)
static struct Mystruct1 {
    int n;
} mystruct1 = {OFFSET_T(xyz,a)};
When I compile on an AIX machine using the xlc compiler, it throws the error "1506-221 (S) Initializer must be a valid constant expression".
I tried both macros, but both get the same error. Is there anything wrong with the f2.h macro that computes the offset of a member within the structure?
The expression in question needs to be an arithmetic constant expression in order to be portable. Neither macro qualifies, since operands of pointer type are involved and arithmetic constant expressions are restricted such that those operands are not allowed. In C11, this is found in subclause 6.6 paragraph 8.
That said, the code using the first macro (source reproduced below) does compile on multiple versions of the xlc compiler on AIX.
typedef struct {
    int a;
    union u
    {
        int x;
        char y;
    } xyz;
} mystruct;

static mystruct ktt;

#define OFFSET_T(b, c) ((int*)((&((mystruct*)0)->b)) - (int*)((&((mystruct*)0)->c)))
//#define OFFSET_T(b, c) ((char*) &ktt.b - (char *) &ktt.c)

static struct Mystruct1 {
    int n;
} mystruct1 = {OFFSET_T(xyz,a)};
The compiler invocation I used was:
xlc offsetcalc.c -c -o /dev/null
The version information for one of the older versions I tried is:
IBM XL C/C++ for AIX, V10.1
Version: 10.01.0000.0021
The version information for one of the newest versions I tried is:
IBM XL C/C++ for AIX, V13.1.3 (5725-C72, 5765-J07)
Version: 13.01.0003.0004
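As an aside, the portable way to get such an offset as a true integer constant expression is the standard offsetof macro from <stddef.h>. A minimal sketch of the f2.h initializer rewritten that way (since a is the first member, OFFSET_T(xyz, a) is the same value as offsetof(mystruct, xyz)):

#include <stddef.h>   /* offsetof expands to an integer constant expression */

/* offsetof(mystruct, xyz) is the byte offset of member xyz from the start
   of mystruct, valid in a static initializer on any conforming compiler. */
static struct Mystruct1 {
    int n;
} mystruct1 = { offsetof(mystruct, xyz) };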

DLL Floating point results differ according to caller

This is a follow-up question to my earlier one asked yesterday.
The problems were occurring in an MSVS 2008 C++ DLL that has over 4000 lines of code, but I have managed to produce a simple case that demonstrates the problem as it occurs on my CPU (an AMD Phenom II X6 1050T).
Will it show the problem occurring on another system? I'd really like to know!
Here is a simple class (Point.cpp); it needs to be compiled as a DLL:
#include <math.h>

#define EXPORT extern "C" __declspec(dllexport)

namespace Test {

    struct Point {
        double x;
        double y;

        /* Constructor for a Point object */
        Point(double xx, double yy) : x(xx), y(yy) {}

        /* Copy constructor */
        Point(const Point &rhs) : x(rhs.x), y(rhs.y) {}

        double mag() const;
        Point norm() const;
    };

    double Point::mag() const { return sqrt(x*x + y*y); }

    Point Point::norm() const {
        double m = mag();
        return Point(x/m, y/m);
    }

    EXPORT void __stdcall GetNorm(double x, double y, double *nx, double *ny)
    {
        Point P = Point(x, y);
        Point N = P.norm();
        *nx = N.x;
        *ny = N.y;
    }
}
Here is the test program (TestPoint.c), which needs to be linked to the lib created for the DLL:
#include <stdio.h>

#define IMPORT extern __declspec(dllimport)

IMPORT void __stdcall GetNorm(double x, double y, double *nx, double *ny);

void dhex(double x) { // print a double as hex
    union {
        unsigned long n[2];
        double d;
    } value;
    value.d = x;
    printf("(0x%08lx%08lx)\n", value.n[1], value.n[0]); // %08lx keeps leading zeros in each half
}

double i64tod(unsigned long long n) { // hex bit pattern to double
    double *DP = (double *) &n;
    return *DP;
}

int main(int argc, char **argv) {
    double vx, vy;
    double ux, uy;
    vx = i64tod(0xbfc7a30f3a53d351);
    vy = i64tod(0xc01b578b34e3ce1d);
    GetNorm(vx, vy, &ux, &uy);
    printf(" vx = %20.18f ", vx); dhex(vx);
    printf(" vy = %20.18f ", vy); dhex(vy);
    printf("\n");
    printf(" ux = %20.18f ", ux); dhex(ux);
    printf(" uy = %20.18f ", uy); dhex(uy);
    return 0;
}
On my system, with TestPoint compiled with VC++, the output is:
vx = -0.18466368053455054 (0xbfc7a30f3a53d351)
vy = -6.8354919685403077 (0xc01b578b34e3ce1d)
ux = -0.027005566159023012 (0xbf9ba758ddda1454)
uy = -0.99963528318903927 (0xbfeffd032227301b)
However, if the same code is compiled with gcc or, indeed, it seems, ANY equivalent program (e.g. VB6, PowerBasic), the results (ux and uy) are subtly but definitely different (in the last hex digit):
vx = -0.184663680534550540 (0xbfc7a30f3a53d351)
vy = -6.835491968540307700 (0xc01b578b34e3ce1d)
ux = -0.027005566159023008 (0xbf9ba758ddda1453)
uy = -0.999635283189039160 (0xbfeffd032227301a)
This might seem an insignificant difference, but when it occurs in a physics engine, these differences accumulate in an alarming fashion.
If the engine is going to get different results depending on who calls it, I might have to abandon the use of VC++ altogether and try g++ instead.
Ok, I think I know how this happens. Looking at a disassembler listing of Point.dll, I noticed that the GetNorm function was pretty much what you'd expect, a couple of FMUL's and FDIV's. What was not present was an FLDCW instruction.
There weren't any FLDCW's in the MSVC calling program either, but I found FLDCW's in both the gcc and a PowerBasic versions of the calling program.
So I tweaked one of the executables (the PowerBasic EXE was the easiest to find the right place to tweak), and hey presto, I then got answers that matched MSVC. Presumably the FLDCW had changed the FPU rounding mode, hence the difference in the least significant bits.
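For anyone who wants to rule this out in their own DLL, the x87 control word can be pinned from C before doing the arithmetic, so results don't depend on what the caller's runtime loaded with FLDCW. A minimal sketch, assuming MSVC's _controlfp_s from <float.h> on 32-bit x86 (the helper name normalize_x87 is mine):

#include <float.h>

static void normalize_x87(void)
{
    unsigned int current;
    /* _PC_53 selects a 53-bit significand (double precision), the state
       the MSVC CRT normally runs in; _MCW_PC masks just the x87
       precision-control bits. Not applicable on x64, where SSE2 is used. */
    _controlfp_s(&current, _PC_53, _MCW_PC);
}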

Generating a comprehensive callgraph using GCC & Egypt

I am trying to generate a comprehensive callgraph (complete with low-level calls to Linux, the runtime, the lot).
I statically compiled my source files with "-fdump-rtl-expand" and created RTL files, which I passed to a Perl script called Egypt (which I believe uses Graphviz/Dot) and generated a PDF file of the callgraph. This works perfectly, no problems at all.
Except that there are calls being made into some libraries that are getting shown as <built-in>. Is there a way for the callgraph not to print <built-in> and to show instead the real calls made into those libraries?
Please let me know if the question is unclear.
http://i.imgur.com/sp58v.jpg
Basically, I am trying to stop the callgraph from generating <built-in> nodes.
Is there a way to do that?
-------- CODE ---------
#include <cilk/cilk.h>
#include <stdio.h>
#include <stdlib.h>

unsigned long int t0, t5;
unsigned int NOSPAWN_THRESHOLD = 32;

int fib_nospawn(int n)
{
    if (n < 2)
        return n;
    else
    {
        int x = fib_nospawn(n-1);
        int y = fib_nospawn(n-2);
        return x + y;
    }
}

// spawning fibonacci function
int fib(long int n)
{
    long int x, y;
    if (n < 2)
        return n;
    else if (n <= NOSPAWN_THRESHOLD)
    {
        x = fib_nospawn(n-1);
        y = fib_nospawn(n-2);
        return x + y;
    }
    else
    {
        x = cilk_spawn fib(n-1);
        y = cilk_spawn fib(n-2);
        cilk_sync;
        return x + y;
    }
}

int main(int argc, char *argv[])
{
    int n;
    long int result;
    long int exec_time;

    n = atoi(argv[1]);
    NOSPAWN_THRESHOLD = atoi(argv[2]);
    result = fib(n);
    printf("%ld\n", result);
    return 0;
}
I compiled the Cilk Library from source.
I might have found a partial solution to the problem:
You need to pass the following option to egypt:
--include-external
This produced a slightly more comprehensive callgraph, although the <built-in> node is still visible:
http://i.imgur.com/GWPJO.jpg?1
Can anyone suggest how I can get more depth in the callgraph?
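For reference, the usual two-step pipeline (assuming egypt and Graphviz's dot are on the PATH; the exact name of the .expand dump file varies with the gcc version) looks something like:
gcc -c -fdump-rtl-expand myprog.c
egypt --include-external *.expand | dot -Tpdf -o callgraph.pdf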
You can use the GCC VCG Plugin: A gcc plugin, which can be loaded when debugging gcc, to show internal structures graphically.
gcc -fplugin=/path/to/vcg_plugin.so -fplugin-arg-vcg_plugin-cgraph foo.c
The call-graph is the place to store data needed for inter-procedural optimization. All data structures are divided into three components: local_info, which is produced while analyzing a function; global_info, which is the result of walking the call-graph at the end of compilation; and rtl_info, which is used by the RTL back-end to propagate data from already-compiled functions to their callers.

Weird SIGSEGV segmentation fault in std::string::assign() method from libstdc++.so.6

My program recently encountered a weird segfault while running. I want to know whether somebody has met this error before and how it could be fixed. Here is more info:
Basic info:
CentOS 5.2, kernel version 2.6.18
g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50)
CPU: Intel x86 family
libstdc++.so.6.0.8
My program will start multiple threads to process data. The segfault occurred in one of the threads.
Though it's a multi-threaded program, the segfault seemed to occur on a local std::string object. I'll show this in the code snippet later.
The program is compiled with -g, -Wall and -fPIC, and without -O2 or other optimization options.
The core dump info:
Core was generated by `./myprog'.
Program terminated with signal 11, Segmentation fault.
#0 0x06f6d919 in __gnu_cxx::__exchange_and_add(int volatile*, int) () from /usr/lib/libstdc++.so.6
(gdb) bt
#0 0x06f6d919 in __gnu_cxx::__exchange_and_add(int volatile*, int) () from /usr/lib/libstdc++.so.6
#1 0x06f507c3 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::assign(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/lib/libstdc++.so.6
#2 0x06f50834 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::operator=(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/lib/libstdc++.so.6
#3 0x081402fc in Q_gdw::ProcessData (this=0xb2f79f60) at ../../../myprog/src/Q_gdw/Q_gdw.cpp:798
#4 0x08117d3a in DataParser::Parse (this=0x8222720) at ../../../myprog/src/DataParser.cpp:367
#5 0x08119160 in DataParser::run (this=0x8222720) at ../../../myprog/src/DataParser.cpp:338
#6 0x080852ed in Utility::__dispatch (arg=0x8222720) at ../../../common/thread/Thread.cpp:603
#7 0x0052c832 in start_thread () from /lib/libpthread.so.0
#8 0x00ca845e in clone () from /lib/libc.so.6
Please note that the segfault begins within the basic_string::operator=().
The related code:
(I've shown more code than might be needed; please ignore the coding-style issues for now.)
int Q_gdw::ProcessData()
{
    char tmpTime[10+1] = {0};
    char A01Time[12+1] = {0};
    std::string tmpTimeStamp;

    // Get the timestamp from TP
    if((m_BackFrameBuff[11] & 0x80) >> 7)
    {
        for (i = 0; i < 12; i++)
        {
            A01Time[i] = (char)A15Result[i];
        }
        tmpTimeStamp = FormatTimeStamp(A01Time, 12); // Segfault occurs on this line
And here is the prototype of this FormatTimeStamp method:
std::string FormatTimeStamp(const char *time, int len)
I think such string assignment operations should be commonly used ones, but I just don't understand why a segfault could occur here.
What I have investigated:
I've searched the web for answers. I looked here. The reply says to try recompiling the program with the _GLIBCXX_FULLY_DYNAMIC_STRING macro defined. I tried, but the crash still happens.
I also looked here. It also says to recompile the program with _GLIBCXX_FULLY_DYNAMIC_STRING, but the author seems to be dealing with a different problem from mine, so I don't think his solution works for me.
Updated on 08/15/2011
Here is the original code of FormatTimeStamp. I understand the code doesn't look very nice (too many magic numbers, for instance), but let's focus on the crash issue first.
string Q_gdw::FormatTimeStamp(const char *time, int len)
{
    string timeStamp;
    string tmpstring;

    if (time) // It is guaranteed that "time" is correctly zero-terminated, so don't worry about any overflow here.
        tmpstring = time;

    // Get the current time point.
    int year, month, day, hour, minute, second;
#ifndef _WIN32
    struct timeval timeVal;
    struct tm *p;
    gettimeofday(&timeVal, NULL);
    p = localtime(&(timeVal.tv_sec));
    year = p->tm_year + 1900;
    month = p->tm_mon + 1;
    day = p->tm_mday;
    hour = p->tm_hour;
    minute = p->tm_min;
    second = p->tm_sec;
#else
    SYSTEMTIME sys;
    GetLocalTime(&sys);
    year = sys.wYear;
    month = sys.wMonth;
    day = sys.wDay;
    hour = sys.wHour;
    minute = sys.wMinute;
    second = sys.wSecond;
#endif

    if (0 == len)
    {
        // The "time" doesn't specify any time so we just use the current time
        char tmpTime[30];
        memset(tmpTime, 0, 30);
        sprintf(tmpTime, "%d-%d-%d %d:%d:%d.000", year, month, day, hour, minute, second);
        timeStamp = tmpTime;
    }
    else if (6 == len)
    {
        // The "time" specifies "day-month-year" with each being 2-digit.
        // For example: "150811" means "August 15th, 2011".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(4, 2) + "-" + tmpstring.substr(2, 2) + "-" +
                    tmpstring.substr(0, 2);
    }
    else if (8 == len)
    {
        // The "time" specifies "minute-hour-day-month" with each being 2-digit.
        // For example: "51151508" means "August 15th, 15:51".
        // As the year is not specified, the current year will be used.
        string strYear;
        stringstream sstream;
        sstream << year;
        sstream >> strYear;
        sstream.clear();
        timeStamp = strYear + "-" + tmpstring.substr(6, 2) + "-" + tmpstring.substr(4, 2) + " " +
                    tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ":00.000";
    }
    else if (10 == len)
    {
        // The "time" specifies "minute-hour-day-month-year" with each being 2-digit.
        // For example: "5115150811" means "August 15th, 2011, 15:51".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(8, 2) + "-" + tmpstring.substr(6, 2) + "-" + tmpstring.substr(4, 2) + " " +
                    tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ":00.000";
    }
    else if (12 == len)
    {
        // The "time" specifies "second-minute-hour-day-month-year" with each being 2-digit.
        // For example: "305115150811" means "August 15th, 2011, 15:51:30".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(10, 2) + "-" + tmpstring.substr(8, 2) + "-" + tmpstring.substr(6, 2) + " " +
                    tmpstring.substr(4, 2) + ":" + tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ".000";
    }

    return timeStamp;
}
Updated on 08/19/2011
This problem has finally been tracked down and fixed. The FormatTimeStamp() function has nothing to do with the root cause, in fact. The segfault is caused by a write overflowing a local char buffer.
The problem can be reproduced with the following simpler program (please ignore the bad naming of some variables for now):
(Compiled with "g++ -Wall -g main.cpp")
#include <string>
#include <iostream>
#include <cstring>  // memset
#include <cstdio>   // sprintf

void overflow_it(char * A15, char * A15Result)
{
    int m;
    int t = 0, i = 0;
    char temp[3];

    for (m = 0; m < 6; m++)
    {
        t = ((*A15 & 0xf0) >> 4) * 10;
        t += *A15 & 0x0f;
        A15++;
        std::cout << "m = " << m << "; t = " << t << "; i = " << i << std::endl;

        memset(temp, 0, sizeof(temp));
        sprintf((char *)temp, "%02d", t); // The buggy code: temp is not big enough when t is a 3-digit integer.
        A15Result[i++] = temp[0];
        A15Result[i++] = temp[1];
    }
}

int main(int argc, char * argv[])
{
    std::string str;
    {
        char tpTime[6] = {0};
        char A15Result[12] = {0};

        // Initialize tpTime
        for(int i = 0; i < 6; i++)
            tpTime[i] = char(154); // 154 would result in a 3-digit t in overflow_it().

        overflow_it(tpTime, A15Result);
        str.assign(A15Result);
    }
    std::cout << "str says: " << str << std::endl;
    return 0;
}
Here are two facts we should remember before going on:
1). My machine is an Intel x86 machine, so it uses little-endian byte order. Therefore, for a variable "m" of int type whose value is, say, 10, its memory layout might be like this:
Starting addr: 0xbf89bebc: m(byte#1): 10
0xbf89bebd: m(byte#2): 0
0xbf89bebe: m(byte#3): 0
0xbf89bebf: m(byte#4): 0
2). The program above runs within the main thread. When it comes to the overflow_it() function, the variable layout in the thread stack looks like this (showing only the important variables):
0xbfc609e9 : temp[0]
0xbfc609ea : temp[1]
0xbfc609eb : temp[2]
0xbfc609ec : m(byte#1) <-- Note that m follows temp immediately. m(byte#1) happens to be the byte temp[3].
0xbfc609ed : m(byte#2)
0xbfc609ee : m(byte#3)
0xbfc609ef : m(byte#4)
0xbfc609f0 : t
...(3 bytes)
0xbfc609f4 : i
...(3 bytes)
...(etc. etc. etc...)
0xbfc60a26 : A15Result <-- Data would be written to this buffer in overflow_it()
...(11 bytes)
0xbfc60a32 : tpTime
...(5 bytes)
0xbfc60a38 : str <-- Note that str takes up 4 bytes. Its starting address is 18 bytes past the start of A15Result.
My analysis:
1). m is the loop counter in overflow_it(), incremented by 1 on each iteration, and its value is never supposed to exceed 6. Its value therefore fits entirely in m(byte#1) (remember this is little-endian), which happens to be the byte at temp[3].
2). The buggy line: when t is a 3-digit integer, such as 109, the sprintf() call overflows the buffer, because serializing the number 109 to the string "109" actually requires 4 bytes: '1', '0', '9' and a terminating '\0'. Because temp[] is allocated only 3 bytes, the final '\0' is written to temp[3], which is exactly m(byte#1), the byte that stores m's value. As a result, m is reset to 0 every time.
3). The programmer's expectation, however, is that the for loop in overflow_it() executes only 6 times, with m incremented by 1 each time. Because m keeps being reset to 0, the loop actually executes far more than 6 times.
4). Now look at the variable i in overflow_it(): each time the loop body runs, i is incremented by 2 and A15Result[i] is written. If you compile and run this program, you'll see that i finally adds up to 24, which means overflow_it() writes data to the bytes ranging from A15Result[0] to A15Result[23]. Since the object str starts only 18 bytes past A15Result[0], overflow_it() has swept right through str and destroyed its correct memory layout.
5). Correct use of std::string, which is a non-POD type, depends on the instantiated std::string object having a valid internal state. But in this program, str's internal layout has been overwritten from outside. This should be why the assign() method call finally causes a segfault.
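For completeness, the minimal fix is to size temp for a 3-digit value plus its terminator, and to use snprintf so the write is bounded no matter what. A sketch of the corrected lines (keeping the original intent of copying the first two digit characters):

char temp[4];  /* was char temp[3]: now room for a 3-digit t plus '\0' */
/* ... inside the for loop ... */
memset(temp, 0, sizeof(temp));
snprintf(temp, sizeof(temp), "%02d", t);  /* bounded: can never write past temp */
A15Result[i++] = temp[0];
A15Result[i++] = temp[1];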
Update on 08/26/2011
In my previous update on 08/19/2011, I said that the segfault was caused by a method call on a local std::string object whose memory layout had been broken, turning it into a "destroyed" object. That is not always the whole story. Consider the C++ program below:
//C++
class A {
public:
    void Hello(const std::string& name) {
        std::cout << "hello " << name;
    }
};

int main(int argc, char** argv)
{
    A* pa = NULL; //!!
    pa->Hello("world");
    return 0;
}
The Hello() call would succeed. It would succeed even if you assigned an obviously bad pointer to pa. The reason is that the non-virtual methods of a class don't reside within the memory layout of its objects, according to the C++ object model. The C++ compiler turns the A::Hello() method into something like, say, A_Hello_xxx(A * const this, ...), which could be a global function. Thus, as long as you don't actually touch the "this" pointer, things can go pretty well.
This fact shows that a "bad" object is NOT, by itself, the root cause of the SIGSEGV segfault. The assign() method is not virtual in std::string, so merely calling it on the "bad" std::string object wouldn't cause the segfault. There must be some other reason that finally caused it.
I noticed that the segfault comes from the __gnu_cxx::__exchange_and_add() function, so I then looked into its source code in this web page:
static inline _Atomic_word
__exchange_and_add(volatile _Atomic_word* __mem, int __val)
{ return __sync_fetch_and_add(__mem, __val); }
The __exchange_and_add() finally calls the __sync_fetch_and_add(). According to this web page, the __sync_fetch_and_add() is a GCC builtin function whose behavior is like this:
type __sync_fetch_and_add (type *ptr, type value, ...)
{
    tmp = *ptr;
    *ptr op= value; // Here the "op=" means "+=" as this function is "_and_add".
    return tmp;
}
There it is! The passed-in ptr pointer is dereferenced here. In the 08/19/2011 program, the ptr is actually the "this" pointer of the "bad" std::string object within the assign() method. It is the dereference at this point that actually caused the SIGSEGV segmentation fault.
We could test this with the following program:
#include <bits/atomicity.h>

int main(int argc, char * argv[])
{
    __sync_fetch_and_add((_Atomic_word *)0, 10); // Would result in a segfault.
    return 0;
}
There are two likely possibilities:
1. some code before line 798 has corrupted the local tmpTimeStamp object;
2. the return value from FormatTimeStamp() was somehow bad.
The _GLIBCXX_FULLY_DYNAMIC_STRING is most likely a red herring and has nothing to do with the problem.
If you install debuginfo package for libstdc++ (I don't know what it's called on CentOS), you'll be able to "see into" that code, and might be able to tell whether the left-hand-side (LHS) or the RHS of the assignment operator caused the problem.
If that's not possible, you'll have to debug this at the assembly level. Going into frame #2 and doing x/4x $ebp should give you previous ebp, caller address (0x081402fc), LHS (should match &tmpTimeStamp in frame #3), and RHS. Go from there, and good luck!
I guess there could be some problem inside the FormatTimeStamp function, but without the source code it's hard to say anything. Try checking your program under Valgrind; that usually helps with this sort of bug.
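For example (assuming Valgrind is installed; memcheck is the default tool):
valgrind --leak-check=full ./myprog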
