Calculation result: -1.#IND000000000000 in Visual C++ Express 2010 - visual-c++

I got a wrong result from an equation used in the following code:
dQ_rad = 0.7 * 5.67e-8 * rotor.dRotorOuterArea[iAxle] * (dT1*dT1*dT1*dT1 - dT2*dT2*dT2*dT2);
All the variables are declared as double, where
rotor.dRotorOuterArea[iAxle] = 0.052986887100527499
dT1 = 0;
dT2 = 293.0;
dQ_rad gets the result -1.#IND000000000000, which I really don't understand.
Then I evaluated the same equation in "QuickWatch", and the correct result can be seen as follows (!?):
0.7 * 5.67e-8 * rotor.dRotorOuterArea[iAxle] * (dT1*dT1*dT1*dT1 - dT2*dT2*dT2*dT2) -15.499582013297069 double
Does anyone know how this error happened and how I can avoid this kind of error?
I use Visual C++ Express 2010 and the code is compiled with the default Microsoft compiler.
Many thanks

-1.#IND000000000000 is Microsoft's representation of NaN. NaN can result from a variety of operations, such as sqrt(-1.0), log(-1.0), 0/0, 0*INF, and INF/INF. NaN also propagates: any operation on a double whose value is NaN will result in NaN. The following article provides more information about this (and other floating-point states), and may be valuable in debugging this problem:
http://www.johndcook.com/IEEE_exceptions_in_cpp.html
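As a quick illustration (a minimal sketch; the specific operations are only examples of NaN sources, not taken from your code), the following program produces and propagates NaN. Note that _isnan() comes from <float.h> and is MSVC-specific; C++11 compilers provide std::isnan instead.

#include <stdio.h>
#include <math.h>
#include <float.h>   // _isnan (MSVC-specific)

int main(void) {
    double neg = -1.0, zero = 0.0;
    double a = sqrt(neg);      // NaN: square root of a negative number
    double b = zero / zero;    // NaN: 0/0
    double c = a + 1000.0;     // NaN propagates through arithmetic
    printf("%f %f %f\n", a, b, c);          // prints -1.#IND... with the MS CRT
    printf("_isnan(c) = %d\n", _isnan(c));  // nonzero, since c is NaN
    return 0;
}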
Unfortunately, I wasn't able to reproduce your error in a trivial test case so it is difficult to provide more detailed information:
#include <stdio.h>

int main(void) {
    double rotor = 0.052986887100527499;
    double dT1 = 0;
    double dT2 = 293.0;
    double dQ_rad = 0.7 * 5.67e-8 * rotor * (dT1*dT1*dT1*dT1 - dT2*dT2*dT2*dT2);
    //fprintf(stderr, "%.12lf\n", dQ_rad);
    return 0;
}
One option you could try is to split your dQ_rad calculation into multiple steps, and verify the result of each operation to make sure it is not NaN. Something like:
double dQ_rad1 = 0.7 * 5.67e-8;
double dQ_rad2 = dQ_rad1 * rotor;
double dQ_rad3 = dT1*dT1*dT1*dT1;
double dQ_rad4 = dT2*dT2*dT2*dT2;
double dQ_rad5 = dQ_rad3 - dQ_rad4;
double dQ_rad = dQ_rad2 * dQ_rad5;
This may help to isolate which part of the calculation is resulting in a NaN.
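A self-contained sketch of that instrumentation (using the values from the question; the checked() helper is just an illustration, and _isnan() is MSVC-specific):

#include <stdio.h>
#include <float.h>   // _isnan (MSVC); C++11 code could use std::isnan

// Flag a NaN as soon as it appears in the chain of operations.
static double checked(double v, const char *name) {
    if (_isnan(v))
        fprintf(stderr, "%s is NaN\n", name);
    return v;
}

int main(void) {
    double rotor = 0.052986887100527499;
    double dT1 = 0.0, dT2 = 293.0;

    double dQ_rad1 = checked(0.7 * 5.67e-8, "dQ_rad1");
    double dQ_rad2 = checked(dQ_rad1 * rotor, "dQ_rad2");
    double dQ_rad3 = checked(dT1*dT1*dT1*dT1, "dQ_rad3");
    double dQ_rad4 = checked(dT2*dT2*dT2*dT2, "dQ_rad4");
    double dQ_rad5 = checked(dQ_rad3 - dQ_rad4, "dQ_rad5");
    double dQ_rad  = checked(dQ_rad2 * dQ_rad5, "dQ_rad");

    // With these values no NaN appears and this prints -15.499582...,
    // matching the QuickWatch result from the question.
    printf("dQ_rad = %.15f\n", dQ_rad);
    return 0;
}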


Why Multiply operator does not work with BigInteger?

Can someone explain why the multiply operator throws an IllegalAccessException, whereas all the other operators work as expected?
BigInteger plus = 10000000000000000000 + 100 // 10000000000000000100
BigInteger minus = 10000000000000000000 - 100 // 9999999999999999900
BigInteger div = 10000000000000000000 / 100 // 100000000000000000
BigInteger mod = 10000000000000000000 % 100 // 0
BigInteger pow = 10000000000000000000 ** 2 // 100000000000000000000000000000000000000
BigInteger star = 10000000000000000000 * 100 // java.lang.IllegalAccessException: Reflection is not allowed on java.math.BigInteger java.math.BigInteger.multiply(long)
Groovy version: 2.4.4
Java Version: 1.7.0
It's true that this does not happen on later Groovy 2.4.x releases (I'm trying 2.4.13, for example). In Groovy 2.4.4, and probably earlier, this works as you would expect:
new BigInteger("10000000000000000000") * new BigInteger("100")
However, there are still overflow risks even in 2.4.13. Consider:
BigInteger b = 100000000 * 100
println b
// outputs 1410065408 because the multiplication of ints happens first, overflows, and the result is converted to BigInteger
While Groovy can convert int to BigInteger by default, mathematical operations can happen before the conversion (and maybe should? I did not find clear guidance either way). In your example I'd guess the first operand is being converted and the second is not, and that is exposing a bug that has since been fixed.
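As a sanity check on the overflowed value above: 100000000 * 100 = 10000000000, which exceeds the 32-bit int range, and 32-bit wraparound gives

10000000000 mod 2^32 = 10000000000 - 2 * 4294967296 = 1410065408

which is exactly the printed result.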

Increasing mean deviation with increasing sample size on Excel's NORMINV()

I see strange behaviour in my attempt to code Excel's NORMINV() in C. For norminv() I took a function from a mathematician; it's probably correct, since I also tried different implementations with the same result. Here's the code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

double norminv(double p); /* inverse standard normal CDF, provided elsewhere */

double calculate_probability(double x0, double x1)
{
    return x0 + (x1 - x0) * rand() / ((double)RAND_MAX);
}

int main() {
    long double probability = 0.0;
    long double mean = 0.0;
    long double stddev = 0.001;
    long double change_percentage = 0.0;
    long double current_price = 100.0;
    srand(time(0));
    int runs = 0;
    long double prob_sum = 0.0;
    long double price_sum = 0.0;
    while (runs < 100000)
    {
        probability = calculate_probability(0.00001, 0.99999);
        change_percentage = mean + stddev * norminv(probability); /* norminv(p, mu, sigma) = mu + sigma * norminv(p) */
        current_price = current_price * (1.0 + change_percentage);
        runs++;
        prob_sum += probability;
        price_sum += current_price;
    }
    printf("\n\n%Lf %Lf\n", price_sum / runs, prob_sum / runs); /* %Lf, since the operands are long double */
    return 0;
}
Now I want to simulate Excel's NORMINV(rand(), 0, 0.001) where rand() is a value > 0 and < 1, 0 is the mean and 0.001 would be the standard deviation.
With 1000 values it looks okay:
100.729780 0.501135
With 10000 values it spreads too much:
107.781909 0.502301
And with 100000 values it sometimes spreads even more:
87.876500 0.498738
Now I don't know why that happens. My assumption was that the random number generator would have to be normally distributed, too. In my case the probability is calculated fine, since its mean is pretty much 0.5 all the time. So I don't know why the deviation of the mean keeps increasing. Can somebody help me?
You're doing something along the lines of a random walk, except your moves are with a multiplicative scaling factor rather than additive steps.
Consider two successive moves, the first of which gives 20% inflation, the second with 20% deflation. Starting with a baseline of 100, after the first step you're at 120. If you now take 80% of 120, you get 96 rather than the original 100. In other words, seemingly symmetric scaling factors are not actually symmetric. While your scaling factors are random, they are still being created symmetrically around 1, so I'm not surprised to see deviations accumulate.
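To see this concretely, here is a minimal standalone sketch of the effect (not taken from the code above):

#include <stdio.h>

int main(void) {
    double price = 100.0;
    price *= 1.20;   /* +20% -> 120 */
    price *= 0.80;   /* -20% -> 96, not back to 100 */
    printf("after +20%% then -20%%: %f\n", price);

    /* More generally, (1 + e) * (1 - e) = 1 - e^2 < 1 for any e != 0,
       so factors that are symmetric around 1 drift the product downward. */
    double e = 0.2;
    printf("(1+e)*(1-e) = %f\n", (1.0 + e) * (1.0 - e));
    return 0;
}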

How to calculate integral, numerically, in Rcpp

I've searched for an hour for methods of doing numerical integration. I'm new to Rcpp and am rewriting my old programs now. What I did in R was:
x = smpl.x(n, theta.true)
joint = function(theta) {
  # the joint distribution of all the random variables
  d = c()
  for (i in 1:n) {
    d[i] = den(x[i], theta)
  }
  return(prod(d) * dbeta(theta, a, b))
}
joint.vec = Vectorize(joint)  # vectorize the function, as required when using integrate()
margin = integrate(joint.vec, 0, 1)$value           # the normalizing constant in the denominator
area = integrate(joint.vec, 0, theta.true)$value    # the value in the numerator
The integrate() function in R is slow, and since I am integrating the posterior distribution of a sample of size n, the value of the integral is huge, with a large error.
I am trying to rewrite my code with the help of Rcpp, but I don't know how to deal with the integration. Should I include a C++ header file? Any suggestions?
You can code your function in C++ and call it, for instance, via the sourceCpp function, and then integrate it in R. Alternatively, you can call R's integrate function from within your C++ code by using Rcpp's Function class. See Dirk's book (Seamless R and C++ Integration with Rcpp), page 56, for an example of how to call R functions from C++. Another alternative (which I believe is the best for most cases) is to integrate your function written in C++, directly in C++, using the RcppGSL package.
As for the huge normalizing constant, it is sometimes better to rescale the function at its mode before integrating it (you can find the mode with, e.g., nlminb, optim, etc.). You then integrate the rescaled function, and to recover the original normalizing constant you multiply the result by the rescaling factor. Hope this helps!
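A minimal sketch of that rescaling idea (the integrand g and its mode below are hypothetical examples, not from the question): since I = integral of g = g(mode) * integral of g(x)/g(mode), the rescaled integrand is at most 1, which avoids overflow when the unnormalized posterior takes huge values.

#include <stdio.h>
#include <math.h>

// Example integrand: a sharp bump scaled by a huge constant.
static double g(double x) {
    return 1e100 * exp(-50.0 * (x - 0.3) * (x - 0.3));
}

// Composite Simpson's rule applied to f(x)/fmax, then rescaled back.
static double integrate_rescaled(double (*f)(double), double mode,
                                 double lo, double hi, int steps) {
    double fmax = f(mode);                 // rescaling factor
    double h = (hi - lo) / steps, s = 0.0;
    for (int i = 0; i < steps; i++) {
        double x = lo + h * i;
        s += (f(x) / fmax + 4.0 * f(x + h / 2.0) / fmax
              + f(x + h) / fmax) / 6.0;
    }
    return fmax * (h * s);                 // undo the rescaling
}

int main(void) {
    // The mode of g is at x = 0.3; in practice you would locate it
    // numerically (nlminb/optim in R, as suggested above).
    printf("%g\n", integrate_rescaled(g, 0.3, 0.0, 1.0, 1000));
    return 0;
}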
After reading @utobi's advice, I felt that programming it on my own might be easier. I simply use Simpson's formula to approximate the integral:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double den_cpp(double x, double theta) {
    return 2*x/theta*(x <= theta) + 2*(1-x)/(1-theta)*(theta < x);
}

// [[Rcpp::export]]
double joint_cpp(double theta, int n, NumericVector x, double a, double b) {
    double val = 1.0;
    for (int i = 0; i < n; i++) {
        val = val * den_cpp(x[i], theta);
    }
    val = val * R::dbeta(theta, a, b, 0);
    return val;
}

// [[Rcpp::export]]
List Cov_rate_raw(double theta_true, int n, double a, double b, NumericVector x) {
    // This function is used for testing; it is not used in the final one
    int steps = 1000;
    double s = 0;
    double start = 1.0e-4;
    std::cout << start << " ";
    double end = 1 - start;
    std::cout << end << " ";
    double h = (end - start) / steps;
    std::cout << "1st h = " << h << " ";
    double area = 0;
    double margin = 0;
    for (int i = 0; i < steps; i++) {
        double at_x = start + h * i;
        // Simpson's rule on the subinterval [at_x, at_x + h]
        double f_val = (joint_cpp(at_x, n, x, a, b)
                        + 4 * joint_cpp(at_x + h/2, n, x, a, b)
                        + joint_cpp(at_x + h, n, x, a, b)) / 6;
        s = s + f_val;
    }
    margin = h * s;
    s = 0;
    h = (theta_true - start) / steps;
    std::cout << "2nd h = " << h << " ";
    for (int i = 0; i < steps; i++) {
        double at_x = start + h * i;
        double f_val = (joint_cpp(at_x, n, x, a, b)
                        + 4 * joint_cpp(at_x + h/2, n, x, a, b)
                        + joint_cpp(at_x + h, n, x, a, b)) / 6;
        s = s + f_val;
    }
    area = h * s;
    double r = area / margin;
    int cover = (r >= 0.025) && (r <= 0.975);
    List ret;
    ret["s"] = s;
    ret["margin"] = margin;
    ret["area"] = area;
    ret["ratio"] = r;
    ret["if_cover"] = cover;
    return ret;
}
I'm not that good at C++, so the two for loops look kind of silly.
It generally works, but there are still several potential problems:
I don't really know how to choose the number of steps, i.e. how many subintervals I need to approximate the integrals. I took numerical analysis as an undergraduate, and I think I may need to check my book for the expression of the error term to decide the step length (see the note after this list).
I compared my results with those from R. The integrate() function in R can take care of the integral over the whole interval [0,1]. That helps, because my function is undefined at 0 and 1, where it takes infinite values. In my C++ code I can only integrate over [1e-4, 1-1e-4]. I tried other values like 1e-7 and 1e-10; however, 1e-4 was the one closest to R's results. What should I do about it?
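For reference on the step-size question above, the standard error bound for the composite Simpson's rule on $[a,b]$ with step $h$ is

$$|E| \le \frac{(b-a)\,h^4}{180} \max_{\xi \in [a,b]} \left|f^{(4)}(\xi)\right|,$$

so halving $h$ cuts the error by roughly a factor of 16. Note that the bound assumes the integrand is four times continuously differentiable, which fails here near the endpoints where the function blows up.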

Initial Conditions in OpenModelica

Will somebody please explain why the initial conditions are properly taken care of in the following OpenModelica model, compiled and simulated in OMEdit v1.9.1 beta2 on Windows, but if line 5 is commented out and line 6 uncommented, (x, y) is initialized to (0.5, 0)?
Thank you.
class Pendulum "Planar Pendulum"
  constant Real PI = 3.141592653589793;
  parameter Real m = 1, g = 9.81, L = 0.5;
  Real F "Force of the Rod";
  output Real x(start = L*sin(PI/4)), y(start = -0.35355);
  //output Real x(start = L * sin(PI / 4)), y(start = -L*sin(PI/4));
  output Real vx, vy;
equation
  m * der(vx) = -x / L * F;
  m * der(vy) = (-y / L * F) - m * g;
  der(x) = vx;
  der(y) = vy;
  x ^ 2 + y ^ 2 = L ^ 2;
end Pendulum;
The short answer is that initial values are treated merely as hints; you have to add the fixed=true attribute to force them, as in:
output Real x(start=L*cos(PI/4),fixed=true);
If the initialized variables are constrained, the fixed attribute should not be used on all of them, but only on a 'proper' subset; in this case, on just one.
The long answer can be found here

Spectrogram - Calculating is wrong

Ok, so basically, I am implementing the following algorithm:
1) Slice the signal into chunks of size 256 with an overlap of 128
2) Multiply each chunk by the Hanning window
3) Get the DFT
4) Compute the absolute value sqrt(re*re + im*im)
Plotting these values with imshow, I get the following result:
This looks OK; it clearly shows some differences, i.e. the spike where the signal has the most amplitude. However, in Python I get this result:
I know that I'm doing something right, but also something wrong. I just can't seem to find out where, which makes me doubt that I have done it correctly.
Any rough ideas as to where I could be going wrong here? I mean, is plotting the abs value the right way or not?
Thanks
EDIT:
Result after clamping:
UPDATE:
Code:
for (unsigned j = 0; j < stft_temp[i].size() / 2; j++)
{
    double v = 10 * log10(stft_temp[i][j].re * stft_temp[i][j].re + stft_temp[i][j].im * stft_temp[i][j].im);
    double pixe = 1.5 * (v + 100);
    STFT[i][j] = (int) pixe;
}
Typically you might want to use a log magnitude and then scale to the required range, which would usually be 0..255. In pseudo-code:
mag_dB = 10 * log10(re * re + im * im); // get log magnitude (dB)
pixel_intensity = 1.5 * (mag_dB + 100); // offset and scale
pixel_intensity = min(pixel_intensity, 255); // clamp to 0..255
pixel_intensity = max(pixel_intensity, 0);
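Applied to the loop from the question, that might look like the following sketch (stft_temp and STFT are assumed to have the types implied by the question's code):

for (unsigned j = 0; j < stft_temp[i].size() / 2; j++)
{
    double re = stft_temp[i][j].re;
    double im = stft_temp[i][j].im;
    double mag_dB = 10 * log10(re * re + im * im); // log magnitude (dB)
    double pixel = 1.5 * (mag_dB + 100);           // offset and scale
    if (pixel > 255.0) pixel = 255.0;              // clamp to the 0..255 range
    if (pixel < 0.0)   pixel = 0.0;
    STFT[i][j] = (int) pixel;
}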
