DLL Floating point results differ according to caller - visual-c++

This is a follow-up question to my earlier one, asked yesterday.
The problems were occurring in a MSVS 2008 C++ DLL that has over 4000 lines of code, but I have managed to produce a simple case that demonstrates the problem as it occurs on my CPU (an AMD Phenom II X6 1050T).
Will it show the problem occurring on another system? I'd really like to know!
Here is a simple class (Point.cpp), it needs to be compiled as a DLL:
#include <math.h>
#define EXPORT extern "C" __declspec(dllexport)
namespace Test {
struct Point {
double x;
double y;
/* Constructor for a Point object */
Point(double xx, double yy) : x(xx), y(yy) {}
/* Copy constructor */
Point(const Point &rhs) : x(rhs.x), y(rhs.y) {}
double mag() const;
Point norm() const;
};
double Point::mag() const {return sqrt(x*x + y*y);}
Point Point::norm() const {
double m = mag();
return Point(x/m, y/m);
}
EXPORT void __stdcall GetNorm(double x, double y, double *nx, double *ny) {
Point P = Point(x, y);
Point N = P.norm();
*nx = N.x;
*ny = N.y;
}
}
Here is the test program (TestPoint.c), which needs to be linked to the lib created for the DLL:
#include <stdio.h>
#define IMPORT extern __declspec(dllimport)
IMPORT void __stdcall GetNorm(double x, double y, double *nx, double *ny);
void dhex(double x) { // double to hex
union {
unsigned long n[2];
double d;
} value;
value.d = x;
printf("(0x%0x%0x)\n", value.n[1], value.n[0]);
}
double i64tod(unsigned long long n) { // reinterpret a 64-bit bit pattern as a double
double *DP = (double *) &n;
return *DP;
}
int main(int argc, char **argv) {
double vx, vy;
double ux, uy;
vx = i64tod(0xbfc7a30f3a53d351);
vy = i64tod(0xc01b578b34e3ce1d);
GetNorm(vx, vy, &ux, &uy);
printf(" vx = %20.18f ", vx); dhex(vx);
printf(" vy = %20.18f ", vy); dhex(vy);
printf("\n");
printf(" ux = %20.18f ", ux); dhex(ux);
printf(" uy = %20.18f ", uy); dhex(uy);
return 0;
}
On my system, with TestPoint compiled with VC++, the output is:
vx = -0.18466368053455054 (0xbfc7a30f3a53d351)
vy = -6.8354919685403077 (0xc01b578b34e3ce1d)
ux = -0.027005566159023012 (0xbf9ba758ddda1454)
uy = -0.99963528318903927 (0xbfeffd032227301b)
However, if the same test program is compiled with gcc, or indeed, it seems, built with ANY equivalent tool (e.g. VB6, PowerBasic), the results (ux and uy) are subtly but definitely different, in the last hex digit:
vx = -0.184663680534550540 (0xbfc7a30f3a53d351)
vy = -6.835491968540307700 (0xc01b578b34e3ce1d)
ux = -0.027005566159023008 (0xbf9ba758ddda1453)
uy = -0.999635283189039160 (0xbfeffd032227301a)
This might seem an insignificant difference, but when it occurs in a physics engine, these differences accumulate in an alarming fashion.
If the engine is going to get different results depending on who calls it I might have to abandon the use of VC++ altogether and try g++ instead.

OK, I think I know how this happens. Looking at a disassembler listing of Point.dll, I noticed that the GetNorm function was pretty much what you'd expect, a couple of FMULs and FDIVs. What was not present was an FLDCW instruction.
There weren't any FLDCWs in the MSVC calling program either, but I found FLDCWs in both the gcc and the PowerBasic versions of the calling program.
So I tweaked one of the executables (the PowerBasic EXE was the easiest place to find the right bytes to patch), and hey presto, I then got answers that matched MSVC. Presumably the FLDCW had changed the FPU control word (rounding or precision mode), hence the difference in the least significant bits.
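A minimal sketch of one way to make the DLL immune to whatever control word the caller has loaded: save, set, and restore the x87 precision bits inside the exported function with _controlfp_s from <float.h>. This is only an illustration of the idea, assuming a 32-bit MSVC build where the x87 FPU is actually used; _PC_53 and _MCW_PC are the standard CRT constants:
#include <float.h>

EXPORT void __stdcall GetNorm(double x, double y, double *nx, double *ny) {
    unsigned int oldcw = 0, unused = 0;
    _controlfp_s(&oldcw, 0, 0);             // read the caller's control word
    _controlfp_s(&unused, _PC_53, _MCW_PC); // force 53-bit (double) precision
    Point P = Point(x, y);
    Point N = P.norm();
    *nx = N.x;
    *ny = N.y;
    _controlfp_s(&unused, oldcw, _MCW_PC);  // restore the caller's precision bits
}
Which precision you force matters less than the fact that the result no longer depends on whichever FLDCW (or absence of one) the host process happens to have executed.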

Related

using malloc in dgels function of lapacke

I am trying to use the dgels function of LAPACKE.
When I create the matrix with malloc, it does not give the correct values.
Can anybody please tell me what the mistake is when I use malloc to create the matrix?
Thank you.
/* Calling DGELS using row-major order */
#include <stdio.h>
#include <lapacke.h>
#include <conio.h>
#include <malloc.h>
int main ()
{
double a[3][2] = {{1,0},{1,1},{1,2}};
double **outputArray;
int designs=3;
int i,j,d,i_mal;
lapack_int info,m,n,lda,ldb,nrhs;
/* double outputArray[3][1] = {{6},{0},{0}}; */
outputArray = (double**) malloc(3* sizeof(double*));
for(i_mal=0;i_mal<3;i_mal++)
{
outputArray[i_mal] = (double*) malloc(1* sizeof(double));
}
for (i=0;i<designs;i++)
{
printf("put first value");
scanf("%lf",&outputArray[i][0]);
}
m = 3;
n = 2;
nrhs = 1;
lda = 2;
ldb = 1;
info = LAPACKE_dgels(LAPACK_ROW_MAJOR,'N',m,n,nrhs,*a,lda,*outputArray,ldb);
for(i=0;i<m;i++)
{
for(j=0;j<nrhs;j++)
{
printf("%lf ",outputArray[i][j]);
}
printf("\n");
}
getch();
return (info);
}
The problem may come from outputArray not being contiguous in memory. You may use something like this instead:
outputArray = (double**) malloc(3* sizeof(double*));
outputArray[0]=(double*) malloc(3* sizeof(double));
for (i=0;i<designs;i++){
outputArray[i]=&outputArray[0][i];
}
Don't forget to free the memory!
free(outputArray[0]);
free(outputArray);
Edit: contiguous means that you have to allocate the memory for all the values at once. See http://www.fftw.org/doc/Dynamic-Arrays-in-C_002dThe-Wrong-Way.html#Dynamic-Arrays-in-C_002dThe-Wrong-Way : some packages, like FFTW or LAPACK, require this layout for optimization. Because you were calling malloc three times, you created three separate blocks and things went wrong.
If you have a single right-hand side, there is no need for a 2D array (double**). outputArray[i] is a double*, that is, the start of the i-th row (row major). If you have many right-hand sides, the right line would be outputArray[i]=&outputArray[0][i*nrhs];.
With the code above you are building a 3-row, one-column array, that is, one right-hand side. The solution is of size n=2, so it ends up in outputArray[0][0] and outputArray[1][0]. I hope I am not too wrong; check this on simple cases!
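To make this concrete, here is a minimal sketch of the whole program with a contiguous right-hand-side array (same m, n, nrhs, lda and ldb as in the question; the hard-coded values 6, 0, 0 stand in for the scanf loop just to keep the sketch self-contained):
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

int main(void)
{
    double a[3][2] = {{1,0},{1,1},{1,2}};
    lapack_int m = 3, n = 2, nrhs = 1, lda = 2, ldb = 1, info;
    int i, j;

    /* One contiguous block of m*nrhs doubles, plus row pointers into it */
    double **outputArray = (double**) malloc(m * sizeof(double*));
    outputArray[0] = (double*) malloc(m * nrhs * sizeof(double));
    for (i = 1; i < m; i++)
        outputArray[i] = &outputArray[0][i * nrhs];

    outputArray[0][0] = 6.0;   /* right-hand side */
    outputArray[1][0] = 0.0;
    outputArray[2][0] = 0.0;

    info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs,
                         *a, lda, outputArray[0], ldb);

    /* On exit the first n rows hold the least-squares solution */
    for (i = 0; i < n; i++) {
        for (j = 0; j < nrhs; j++)
            printf("%lf ", outputArray[i][j]);
        printf("\n");
    }

    free(outputArray[0]);
    free(outputArray);
    return (int)info;
}
The contiguous block outputArray[0] is what gets handed to LAPACKE_dgels; the row pointers are only there so you can still write outputArray[i][j].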
Bye,

C++/CLI from tracking reference to (native) reference - wrapping

I need a C# interface to call some native C++ code via the CLI dialect. The C# interface uses the out attribute specifier in front of the required parameters. That translates to a % tracking reference in C++/CLI.
The method has the following signature and body (it calls another native method to do the job):
virtual void __clrcall GetMetrics(unsigned int %width, unsigned int %height, unsigned int %colourDepth, int %left, int %top) sealed
{
mRenderWindow->getMetrics(width, height, colourDepth, left, top);
}
Now the code won't compile because of a few compile-time errors, all related to not being able to convert parameter 1 from 'unsigned int' to 'unsigned int &'.
As a modest C++ programmer, C++/CLI looks to me the way Dutch must look to a German speaker. What can be done to make this wrapper work properly in C++/CLI?
As was also suggested in a (since deleted) answer, I did the obvious thing and used local variables to pass the relevant values around:
virtual void __clrcall GetMetrics(unsigned int %width, unsigned int %height, unsigned int %colourDepth, int %left, int %top) sealed
{
unsigned int w = width, h = height, c = colourDepth;
int l = left, t = top;
mRenderWindow->getMetrics(w, h, c, l, t);
width = w; height = h; colourDepth = c; left = l; top = t;
}
In hindsight it is fairly intuitive, given how tracking references work: the storage they refer to is managed by the garbage collector and can be moved elsewhere in memory, so it is not as stable as a normal & reference. Copying through locals is therefore the most reliable way to hand the values to the native call. Thanks to the initial answer.
If your parameters use 'out' on the C# side, you need to define your C++/CLI parameters like this: [Out] unsigned int ^%width
Here's an example:
virtual void __clrcall GetMetrics([Out] unsigned int ^%width)
{
width = gcnew UInt32(42);
}
Then on your C# side, you'll get back 42:
ValueType vt;
var res = cppClass.GetMetrics(out vt);
//vt == 42
In order to use the [Out] parameter on the C++/CLI side you'll need to include:
using namespace System::Runtime::InteropServices;
Hope this helps!
You can use pin_ptr so that 'width' doesn't move while native code writes to it. The managed side pays a price for pin_ptr, but I don't think you can get around that if you want native code to access the storage directly, without a local 'w'.
virtual void __clrcall GetMetrics(unsigned int %width, unsigned int %height, unsigned int %colourDepth, int %left, int %top) sealed
{
pin_ptr<unsigned int> pw = &width; //do the same for height
mRenderWindow->getMetrics(*pw, h, c, l, t);
}

Why I can't use global float constants in device code? [duplicate]

I am using CUDA 5.0. I noticed that the compiler will allow me to use host-declared int constants within kernels. However, it refuses to compile any kernels that use host-declared float constants. Does anyone know the reason for this seeming discrepancy?
For example, the following code runs just fine as is, but it will not compile if the final line in the kernel is uncommented.
#include <cstdio>
#include <cuda_runtime.h>
static int __constant__ DEV_INT_CONSTANT = 1;
static float __constant__ DEV_FLOAT_CONSTANT = 2.0f;
static int const HST_INT_CONSTANT = 3;
static float const HST_FLOAT_CONSTANT = 4.0f;
__global__ void uselessKernel(float * val)
{
*val = 0.0f;
// Use device int and float constants
*val += DEV_INT_CONSTANT;
*val += DEV_FLOAT_CONSTANT;
// Use host int and float constants
*val += HST_INT_CONSTANT;
//*val += HST_FLOAT_CONSTANT; // won't compile if uncommented
}
int main(void)
{
float * d_val;
cudaMalloc((void **)&d_val, sizeof(float));
uselessKernel<<<1, 1>>>(d_val);
cudaFree(d_val);
}
Using a compile-time constant number in device code is OK, but reading a number stored in host memory from device code is NOT.
Every reference to the static const int in your code can be replaced with the value 3 by the compiler/optimizer, because the address of that variable is never taken. In that case it behaves like #define HST_INT_CONSTANT 3, and no host memory is allocated for the variable.
For the float, however, host memory is allocated even though it is a static const float. Since the kernel cannot access host memory directly, the code using the static const float won't compile.
In C/C++, int constants can be folded away more aggressively than float constants.
That your code compiles and runs at all with the int constant (when the float line is commented out) could itself be seen as a bug in CUDA C, I think: the static const int is a host-side thing and should not be accessible to the device directly.
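A short sketch of the usual workarounds, assuming you just want the value 4.0f visible on the device (the macro and the extra kernel parameter below are purely illustrative, and the names are reused from the question): keep a device-side __constant__ copy as the code already does for DEV_FLOAT_CONSTANT, use a preprocessor define, or simply pass the value as a kernel argument.
#include <cuda_runtime.h>

// Option 1: a macro is substituted textually, so it works in both host and device code
#define HST_FLOAT_CONSTANT 4.0f

// Option 2: pass the value as a kernel argument
__global__ void uselessKernel(float *val, float hostConstant)
{
    *val = 0.0f;
    *val += HST_FLOAT_CONSTANT; // macro: fine on the device
    *val += hostConstant;       // plain argument: also fine
}

int main(void)
{
    float *d_val;
    cudaMalloc((void **)&d_val, sizeof(float));
    uselessKernel<<<1, 1>>>(d_val, 4.0f);
    cudaFree(d_val);
    return 0;
}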

Generating a comprehensive callgraph using GCC & Egypt

I am trying to generate a comprehensive callgraph (complete with low-level calls into Linux, the runtime, the lot).
I have statically compiled my source files with "-fdump-rtl-expand" to create RTL dump files, which I passed to a Perl script called Egypt (which, I believe, feeds Graphviz/Dot) to generate a PDF file of the callgraph. This works perfectly, no problems at all.
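Roughly, the pipeline looks like this (the file name fib.c is just for illustration, the dump-file suffix varies between GCC versions, and any flags your Cilk build needs come on top):
gcc -c -fdump-rtl-expand fib.c
egypt fib.c.*.expand | dot -Tpdf -o callgraph.pdf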
Except, there are calls being made into some libraries that are getting shown as <built-in>. I was looking to see if there is a way for those not to be printed as <built-in>, and instead have the real calls into the libraries shown.
Please let me know if the question is unclear.
http://i.imgur.com/sp58v.jpg
Basically, I am trying to stop the callgraph from containing <built-in> nodes.
Is there a way to do that?
-------- CODE ---------
#include <cilk/cilk.h>
#include <stdio.h>
#include <stdlib.h>
unsigned long int t0, t5;
unsigned int NOSPAWN_THRESHOLD = 32;
int fib_nospawn(int n)
{
if (n < 2)
return n;
else
{
int x = fib_nospawn(n-1);
int y = fib_nospawn(n-2);
return x + y;
}
}
// spawning fibonacci function
int fib(long int n)
{
long int x, y;
if (n < 2)
return n;
else if (n <= NOSPAWN_THRESHOLD)
{
x = fib_nospawn(n-1);
y = fib_nospawn(n-2);
return x + y;
}
else
{
x = cilk_spawn fib(n-1);
y = cilk_spawn fib(n-2);
cilk_sync;
return x + y;
}
}
int main(int argc, char *argv[])
{
int n;
long int result;
long int exec_time;
n = atoi(argv[1]);
NOSPAWN_THRESHOLD = atoi(argv[2]);
result = fib(n);
printf("%ld\n", result);
return 0;
}
I compiled the Cilk Library from source.
I might have found a partial solution to the problem:
You need to pass the following option to egypt:
--include-external
This produced a slightly more comprehensive callgraph, although the <built-in> node is still visible:
http://i.imgur.com/GWPJO.jpg?1
Can anyone suggest how I can get more depth in the callgraph?
You can use the GCC VCG Plugin: A gcc plugin, which can be loaded when debugging gcc, to show internal structures graphically.
gcc -fplugin=/path/to/vcg_plugin.so -fplugin-arg-vcg_plugin-cgraph foo.c
The call-graph is the place to store data needed for inter-procedural optimization. All data structures are divided into three components: local_info, which is produced while analyzing a function; global_info, which is the result of the global walk over the call-graph at the end of compilation; and rtl_info, which is used by the RTL back-end to propagate data from already-compiled functions to their callers.

access array from struct in C

In my data.h file I have:
typedef struct {
double ***grid;
} Solver;
In my .c file I have
static Solver _solver;
which first calls a function that does some allocation on grid, such as:
_solver.grid = malloc(....);
//then makes a call to
GS_init(_solver.grid);
The GS_init function is declared in GS.h as:
void GS_init(double ***grid);
When I try to compile, I get two errors:
the struct "<unnamed>" has no field "grid"
GS_init(_solver.grid)
^
and
too many arguments in function call
GS_init(_solver.grid)
^
Any ideas what is going wrong here?
This code compiles with 'gcc -Wall -Werror -c':
data.h
typedef struct
{
double ***grid;
} Solver;
gs.h
extern void GS_init(double ***grid);
gs.c
#include "data.h"
#include "gs.h"
#include <stdlib.h>
static Solver _solver;
void anonymous(void)
{
_solver.grid = malloc(32 * sizeof(double **)); /* placeholder allocation, just to show the call compiles */
GS_init(_solver.grid);
}
Derek asked:
Why does this work? Is it because of the extern keyword?
The 'extern' is not material to making it work, though I always use it.
When I have to flesh out GS_init() in, say, compute.c, would I write: void GS_init(double ***grid) { /* loop over grid[i][j][k] setting it to zero */ }?
Sort of...yes, the GS_init() code could do that if the data structure is set up properly, which is going to need more information than there is currently visible in the structure.
For the compiler to process:
grid[i][j][k] = 0.0;
the code has to know the valid ranges for each of i, j, and k; assume the sizes in the three dimensions are Ni, Nj, and Nk. The data 'structure' pointed to by grid must be an array of Ni 'double **' values, which must be allocated. Each of those entries must point to an array of Nj 'double *' values, and each of those in turn must point to Nk doubles. So you have to do more allocation than a single malloc(), and you have to do more initialization than just setting everything to zero.
If you want to use a single array of doubles only, you will have to write a different expression to access the data:
grid[(i * Nj + j) * Nk + k] = 0.0;
And under this scenario, grid would be a simple double * and not a triple pointer.
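For the triple-pointer layout, a minimal sketch of the allocation described above might look like this (Ni, Nj, Nk are the assumed dimension sizes, alloc_grid is just an illustrative helper name, and error handling is omitted):
#include <stdlib.h>

/* Allocate an Ni x Nj x Nk grid usable as grid[i][j][k], zero-initialized. */
double ***alloc_grid(size_t Ni, size_t Nj, size_t Nk)
{
    size_t i, j;
    double ***grid = malloc(Ni * sizeof(double **));
    for (i = 0; i < Ni; i++) {
        grid[i] = malloc(Nj * sizeof(double *));
        for (j = 0; j < Nj; j++)
            grid[i][j] = calloc(Nk, sizeof(double)); /* Nk zeroed doubles */
    }
    return grid;
}
With that in place, GS_init(_solver.grid) can safely loop i, j, and k over those same ranges; just remember to free everything in the reverse order when you are done.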
