Why can't a float give an exact value of 0? - visual-c++

I have a question. Below is my code. I am wondering why my output for "err3" cannot give a value of 0. Is it because the datatype is float?
The output is as below:
the value of err1 is -7.03125e-06
the value of err2 is 7.03125e-06
the value of err3 is -4.54747e-13
The operation between err1 and err2 is +, so the value of err3 should be 0.
Can anyone help me explain and solve this problem? I have googled it but still did not find an answer.
Thanks in advance :)
void calnormal()
{
    long numcal;
    float indexNC0, indexNC1;
    float error;
    float aa0 = 0, ab0 = 0, ac0 = 0, ad0 = 0;
    float bb0 = 0, bc0 = 0, bd0 = 0;
    float cc0 = 0, cd0 = 0;
    float dd0 = 0;
    float aa, ab, ac, ad;
    float bb, bc, bd;
    float cc, cd;
    float dd;
    for(int i=0;i<noofvert;i++)
    {
        numcal = vlist[i].returnsizef();
        for(int j=0;j<numcal;j=j+2)
        {
            indexNC0 = vlist[i].returnindexf(j);
            u.ax = vlist[i].returnx() - vlist[indexNC0].returnx();
            u.ay = vlist[i].returny() - vlist[indexNC0].returny();
            u.az = vlist[i].returnz() - vlist[indexNC0].returnz();
            if(j == 0){v0 = u;}
            indexNC1 = vlist[i].returnindexf(j+1);
            v.ax = vlist[i].returnx() - vlist[indexNC1].returnx();
            v.ay = vlist[i].returny() - vlist[indexNC1].returny();
            v.az = vlist[i].returnz() - vlist[indexNC1].returnz();
            normal.ax = u.ay * v.az - u.az * v.ay;
            normal.ay = u.az * v.ax - u.ax * v.az;
            normal.az = u.ax * v.ay - u.ay * v.ax;
            normal.D = - vlist[i].returnx() * normal.ax - vlist[i].returny() * normal.ay - vlist[i].returnz() * normal.az;
            aa = normal.ax * normal.ax;
            ab = normal.ax * normal.ay;
            ac = normal.ax * normal.az;
            ad = normal.ax * normal.D;
            bb = normal.ay * normal.ay;
            bc = normal.ay * normal.az;
            bd = normal.ay * normal.D;
            cc = normal.az * normal.az;
            cd = normal.az * normal.D;
            dd = normal.D * normal.D;
            aa0 = aa0 + aa;
            ab0 = ab0 + ab; bb0 = bb0 + bb;
            ac0 = ac0 + ac; bc0 = bc0 + bc; cc0 = cc0 + cc;
            ad0 = ad0 + ad; bd0 = bd0 + bd; cd0 = cd0 + cd; dd0 = dd0 + dd;
        }
        double err1, err2, err3;
        err1 = cd0 * vlist[i].returnz();
        err2 = dd0;
        err3 = err2 + err1;
        cout << err1 << " " << err2 << " " << err3 << endl;
        cout << endl;
    }
    cout << endl;
}

err3 is very close to zero: it is on the order of 1e-13, while err1 and err2 are on the order of 1e-6.
A float can store about 7 decimal digits of precision.
If you had printed more decimals of err1 and err2, you would probably see that they are not exactly the same magnitude. Because err1 and err2 are computed from floats, they carry only about 7 significant digits (here, several zeros after the decimal point followed by about 7 significant digits). They most likely differ a little in the last digit, and adding two nearly opposite values cancels almost everything, giving a result of the form 0.000...00x (about 13 zeros).
A double has roughly 16 decimal digits of precision at most. When you do computations, you lose a little precision with every operation, because every result is rounded to about 16 decimal digits. If you end up with about 13 digits of precision at the end, that is completely normal.
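To see the effect in isolation, here is a minimal, self-contained sketch. The two values below are contrived to mimic the output above (they are not taken from the program): err1 and err2 should be exact negatives of each other, but err1 is off by one unit in the last place of the float, and the usual fix is to compare the sum against a small tolerance instead of expecting exactly 0.

#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
    float err2 = 7.03125e-06f;
    float err1 = -(err2 + 4.5e-13f);   // one float ulp away from -err2

    double err3 = (double)err2 + (double)err1;

    std::cout << std::setprecision(12)
              << "err1 = " << err1 << "\n"
              << "err2 = " << err2 << "\n"
              << "err3 = " << err3 << "\n";   // about -4.5e-13, not 0

    // Compare against a tolerance instead of testing for exactly 0.
    if (std::fabs(err3) < 1e-9)
        std::cout << "err3 is zero for all practical purposes\n";
    return 0;
}

Switching the accumulators (aa0, ab0, ...) from float to double would shrink the residual further, but some residual is expected whenever nearly equal values cancel.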

Related

Why does my cs50 sepia filter not compute the right pixel values?

For some reason the math portion of my sepia code does not seem to work. I get errors when I run check50, and it shows all the pixel values as being too high. I triple-checked the values for the filter, but all seems good.
void sepia(int height, int width, RGBTRIPLE image[height][width])
{
float org_red = 0;
float org_green = 0;
float org_blue = 0;
for (int i = 0; i < height; i++)
{
for (int j = 0; j < width; j++)
{
org_red = image[i][j].rgbtRed;
org_green = image[i][j].rgbtBlue;
org_blue = image[i][j].rgbtGreen;
long sepiaRed = (.393 * org_red + .769 * org_green + .189 * org_blue);
long sepiaGreen = (.349 * org_red) + .686 * org_green + .168 * org_blue;
long sepiaBlue = (.272 * org_red + .534 * org_green + .131 * org_blue);
if (sepiaRed > 255)
{
sepiaRed = 255;
}
if (sepiaGreen > 255)
{
sepiaGreen = 255;
}
if (sepiaBlue > 255)
{
sepiaBlue = 255;
}
image[i][j].rgbtRed = round(sepiaRed);
image[i][j].rgbtGreen = round(sepiaGreen);
image[i][j].rgbtBlue = round(sepiaBlue);
}
}
return;
}
The error I get says
:( sepia correctly filters single pixel
expected "56 50 39\n", not "84 75 58\n"
:( sepia correctly filters simple 3x3 image
expected "100 89 69\n100...", not "100 88 69\n100..."
:( sepia correctly filters more complex 3x3 image
expected "25 22 17\n66 5...", not "30 27 21\n71 6..."
:( sepia correctly filters 4x4 image
expected "25 22 17\n66 5...", not "30 27 21\n71 6..."

Best data type and rounding function for weight and currency variables

I need to multiply two values, weight and currency (Visual C++, MFC). E.g.:
a = 11.121;
b = 12.11;
c = a * b;
Next I have to round "c" to 2 digits after the decimal point (a currency value, e.g. 134.68). What are the best data types and rounding function for these variables? The rounding must be mathematically correct.
P.S. The problem was solved by a very ugly but working piece of code:
CString GetPriceSum(CString weight,CString price)
{
price.Replace(".", "");
price = price + "0";
if (weight.Find(".") == -1) { weight = weight + ".000"; }
weight.Replace(".", "");
unsigned long long int iprice = atoi(price);
unsigned long long int iweight = atoi(weight);
unsigned long long int isum = iprice * iweight;
CString sum = ""; sum.Format("%llu", isum);
CString r1 = sum.Right(1);
if (atoi(r1) >= 5) { isum += 10; }
CString r2 = sum.Mid(sum.GetLength() - 2, 1);
if (atoi(r2) >= 5) { isum += 100; sum.Format("%llu", isum);}
r2 = sum.Mid(sum.GetLength() - 3, 1);
if (atoi(r2) >= 5) { isum += 1000; sum.Format("%llu", isum);}
r2 = sum.Mid(sum.GetLength() - 4, 1);
if (atoi(r2) >= 5) { isum += 10000; sum.Format("%llu", isum);}
CString finsum = ""; finsum.Format("%llu", isum);
finsum.Insert(finsum.GetLength() - 6, ".");
finsum.Delete(finsum.GetLength() - 4, 4);
if (finsum.Left(1) == ".") { finsum = "0" + finsum; }
return finsum;
}
How about this: let's start from what you said:
"The API I use counts values using some other language, and they round the values in a mathematically correct way."
In your other question, you got those values as strings. You can construct an integer from those digits (remove the decimal point). Assuming the product fits in a 64-bit int, you can multiply the two numbers exactly. Then you can manually round to the desired precision and drop the unneeded digits.
Code example (you may want to add error checking):
#define _CRT_SECURE_NO_WARNINGS   // allow sscanf on MSVC
#include <cstdio>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>
int main()
{
    std::string a = "40.50";   // 2 decimal places
    std::string b = "0.490";   // 3 decimal places
    long long l1, dec1, l2, dec2;
    sscanf(a.data(), "%lld.%lld", &l1, &dec1);
    l1 = l1 * 100 + dec1;      // first value in hundredths
    sscanf(b.data(), "%lld.%lld", &l2, &dec2);
    l2 = l2 * 1000 + dec2;     // second value in thousandths
    long long r = l1 * l2;     // exact product, in units of 1e-5
    r /= 100;                  // drop to units of 1e-3
    int rem = r % 10;          // this digit decides the rounding
    r /= 10;                   // now in units of 1e-2 (cents)
    if (rem >= 5)
        r++;                   // round half up
    std::stringstream ss;
    ss << r / 100 << "." << std::setw(2) << std::setfill('0') << r % 100;
    std::cout << ss.str();
}
You can also use stringstream instead of sscanf to parse the strings.
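For completeness, a minimal sketch of that stringstream variant, assuming the same fixed number of decimal places as the sscanf version above:

#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string a = "40.50";
    std::istringstream in(a);

    long long whole = 0, frac = 0;
    char dot = 0;
    in >> whole >> dot >> frac;   // reads 40, '.', 50

    std::cout << whole << " " << frac << "\n";
    return 0;
}

As with the sscanf version, this assumes you already know how many digits follow the decimal point; "0.05" and "0.5" both parse to frac = 5.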

Unexpected result in color conversion from LAB to RGB

My goal is to create high-resolution images of color charts in LAB.
I'm a beginner in programming and I use Processing because it's the tool I know best. However, it works only in RGB or HSB, so I have to convert LAB to RGB in order to display it.
I used the formulas found on the web (LAB to XYZ and XYZ to RGB).
I included them in my code, then I use a "for" loop to determine the color of each pixel.
I saw a few topics on color conversion, but I'm kind of stuck, as I don't know where my problem is coming from.
So here it is: for a fixed value of L = 100, everything works perfectly, and I get this image as expected:
https://drive.google.com/file/d/0ByjuuWpChE01X3otSFRQNFUyVjA/edit?usp=sharing
But when I try to make another image for a fixed value of a = 0, I get a horizontal line at the bottom, as if there were a problem with lower values of L. Here it is:
https://drive.google.com/file/d/0ByjuuWpChE01RzJWUVZnR2U3VW8/edit?usp=sharing
Here is my code. I hope it is clear; ask me if you need anything, and thank you very much for your help.
// parameters for the code execution
void setup() {
noLoop();
size(10,10,P2D);
nuancier = createGraphics(taille,taille);
}
// final image file and size
PGraphics nuancier ;
int taille = 1000 ;
// Arrays for color values
float[] colorLAB = new float[3];
float[] colorXYZ = new float[3];
float[] colorRGB = new float[3];
// colors
float X;
float Y;
float Z;
float L;
float a;
float b;
float R;
float G;
float B;
// pixels
int x ;
int y ;
// function to convert Lab to XYZ
float[] LABtoXYZ() {
L = colorLAB[0];
a = colorLAB[1];
b = colorLAB[2];
float ntY = ( L + 16 ) / 116 ;
float ntX = a / 500 + ntY ;
float ntZ = ntY - b / 200 ;
if ( (pow(ntY,3)) > 0.008856 ) {
ntY = (pow(ntY,3)) ;
} else { ntY = ( ntY - 16 / 116 ) / 7.787 ; }
if ( (pow(ntX,3)) > 0.008856 ) {
ntX = (pow(ntX,3)) ;
} else { ntX = ( ntX - 16 / 116 ) / 7.787 ; }
if ( (pow(ntZ,3)) > 0.008856 ) {
ntZ = (pow(ntZ,3)) ;
} else { ntZ = ( ntZ - 16 / 116 ) / 7.787 ; }
X = 95.047 * ntX ; //ref_X = 95.047, Observer = 2°, Illuminant = D65
Y = 100 * ntY ; //ref_Y = 100.000
Z = 108.883 * ntZ ; //ref_Z = 108.883
colorXYZ[0] = X ;
colorXYZ[1] = Y ;
colorXYZ[2] = Z ;
return colorXYZ ;
}
// function to convert XYZ to RGB
float[] XYZtoRGB() {
X = colorXYZ[0];
Y = colorXYZ[1];
Z = colorXYZ[2];
float ntX = X / 100 ; //X between 0 and 95.047 (Observer = 2°, Illuminant = D65)
float ntY = Y / 100 ; //Y between 0 and 100.000
float ntZ = Z / 100 ; //Z between 0 and 108.883
float ntR = ntX * 3.2406 + ntY * (-1.5372) + ntZ * (-0.4986) ;
float ntG = ntX * (-0.9689) + ntY * 1.8758 + ntZ * 0.0415 ;
float ntB = ntX * 0.0557 + ntY * (-0.2040) + ntZ * 1.0570 ;
if ( ntR > 0.0031308 ) {
ntR = 1.055 * ( pow(ntR,( 1 / 2.4 )) ) - 0.055 ;
} else { ntR = 12.92 * ntR ; }
if ( ntG > 0.0031308 ) {
ntG = 1.055 * ( pow(ntG,( 1 / 2.4 )) ) - 0.055 ;
} else { ntG = 12.92 * ntG ; }
if ( ntB > 0.0031308 ) {
ntB = 1.055 * ( pow(ntB,( 1 / 2.4 )) ) - 0.055 ;
} else { ntB = 12.92 * ntB ; }
R = ntR * 255 ;
G = ntG * 255 ;
B = ntB * 255 ;
colorRGB[0] = R ;
colorRGB[1] = G ;
colorRGB[2] = B ;
return colorRGB ;
}
// I know that with RGB, not every visible color is possible
//so I just made this quick function, to bound RGB values between 0 and 255
float[] arrondirRGB () {
for (int i=0;i<3;i++) {
if (colorRGB[i]>255) {
colorRGB[i]=255 ;
}
if (colorRGB[i]<0) {
colorRGB[i]=0 ;
}
}
return colorRGB;
}
// operating section
void draw () {
nuancier.beginDraw();
nuancier.noSmooth();
nuancier.colorMode(RGB, 255);
nuancier.endDraw();
for (x=0;x<taille;x++) {
for (y=0;y<taille;y++) {
colorLAB[0] = (((taille-y)*100)/taille) ; // value 100 // formula ((x*100)/taille)
colorLAB[1] = 0 ; // value 0 // formula ((x*256)/taille)-127
colorLAB[2] = (((x)*256)/taille)-127 ; // value 0 // formula (((taille-y)*256)/taille)-127
println(colorLAB[0]);
LABtoXYZ () ;
XYZtoRGB () ;
arrondirRGB () ;
nuancier.beginDraw();
nuancier.stroke (colorRGB[0],colorRGB[1],colorRGB[2]);
nuancier.point (x,y);
nuancier.endDraw();
}
}
nuancier.save("nuancier.tiff");
println("done !");
}
OK, I found out!
The problem was integer division.
I don't know if it works like that in other languages, but in Processing, if you write
x = 2/5
the result will be x = 0 instead of x = 0.4. Because both operands are integers, the division is done in integer arithmetic and the result is truncated, so
x = 2/5.0
will give x = 0.4!
I had to put a ".0" after every integer I divide by, and turn into a float any integer data that takes part in a division.
The result is perfect, no more problems!
https://github.com/processing/processing/wiki/Troubleshooting#Why_does_2_.2F_5_.3D_0_instead_of_0.4.3F
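The same pitfall exists in C and C++ (the language of the first question on this page): when both operands of / are integers, the division truncates. A minimal sketch:

#include <iostream>

int main()
{
    int x = 2, y = 5;

    std::cout << x / y << "\n";                      // integer division: prints 0
    std::cout << x / 5.0 << "\n";                    // one operand is a double: prints 0.4
    std::cout << static_cast<float>(x) / y << "\n";  // cast an operand: prints 0.4
    return 0;
}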

How to multiply hex color codes?

I want to change color from 0x008000 (green) to 0x0000FF (blue).
If I multiply 0x008000 * 256 = 0x800000 (Google search acts as a calculator).
I need to find the correct multiplier so the result would be 0x0000FF.
To answer people below - I am doing this in order to make a color transition on a rectangle in pixi.js.
From what I've gathered, an RGB color code is divided into 3 parts - red, green, and blue - each on a scale of 0-FF in hex, or 0-255 in decimal. But how do I multiply correctly to get the desired result?
If you want a linear change from one color to another, I recommend something like this:
int startColor = 0x008000;
int endColor = 0x0000FF;
int startRed = (startColor >> 16) & 0xFF;
int startGreen = (startColor >> 8) & 0xFF;
int startBlue = startColor & 0xFF;
int endRed, endGreen, endBlue; //same code
int steps = 24;
int[] result = new int[steps];
for(int i=0; i<steps; i++) {
int newRed = ( (steps - 1 - i)*startRed + i*endRed ) / (steps - 1);
int newGreen, newBlue; //same code
result[i] = newRed << 16 | newGreen << 8 | newBlue;
}
This is for JavaScript:
var startColor = 0x008000;
var endColor = 0x0000FF;
var startRed = (startColor >> 16) & 0xFF;
var startGreen = (startColor >> 8) & 0xFF;
var startBlue = startColor & 0xFF;
var endRed = (endColor >> 16) & 0xFF;
var endGreen = (endColor >> 8) & 0xFF;
var endBlue = endColor & 0xFF;
var steps = 24;
var result = [];
for (var i = 0; i < steps; i++) {
var newRed = ((steps - 1 - i) * startRed + i * endRed) / (steps - 1);
var newGreen = ((steps - 1 - i) * startGreen + i * endGreen) / (steps - 1);
var newBlue = ((steps - 1 - i) * startBlue + i * endBlue) / (steps - 1);
var comb = newRed << 16 | newGreen << 8 | newBlue;
console.log(i + " -> " + comb.toString(16));
result.push(comb);
}
console.log(result);

Not able to set processor affinity

I'm trying to run this code on an 8-core cluster node. It has 2 sockets, each with 4 cores. I am trying to create 8 threads and set their affinity using the pthread_attr_setaffinity_np function. But when I look at the performance in VTune, it shows me that some 3969 threads are being created. I don't understand why or how! Above all, my performance is exactly the same as it was when no affinity was set (OS thread scheduling). Can someone please help me debug this problem? My code runs perfectly fine, but I have no control over the threads! Thanks in advance.
--------------------------------------CODE-------------------------------------------
const int num_thrd=8;
bool RCTAlgorithmBackprojection(RabbitCtGlobalData* r)
{
/* float O_L = r->O_L;
float R_L = r->R_L;
double* A_n = r->A_n;
float* I_n = r->I_n;
float* f_L = r->f_L; */
cpu_set_t cpu[num_thrd];
pthread_t thread[num_thrd];
pthread_attr_t attr[num_thrd];
for(int i =0; i< num_thrd; i++)
{
threadCopy[i].L = r->L;
threadCopy[i].O_L = r->O_L;
threadCopy[i].R_L = r->R_L;
threadCopy[i].A_n = r->A_n;
threadCopy[i].I_n = r->I_n;
threadCopy[i].f_L = r->f_L;
threadCopy[i].slice= i;
threadCopy[i].S_x = r->S_x;
threadCopy[i].S_y = r->S_y;
pthread_attr_init(&attr[i]);
CPU_ZERO(&cpu[i]);
CPU_SET(i, &cpu[i]);
pthread_attr_setaffinity_np(&attr[i], CPU_SETSIZE, &cpu[i]);
int rc=pthread_create(&thread[i], &attr[i], backProject, (void*)&threadCopy[i]);
if (rc!=0)
{
cout<<"Can't create thread\n"<<endl;
return -1;
}
// sleep(1);
}
for (int i = 0; i < num_thrd; i++) {
pthread_join(thread[i], NULL);
}
//s_rcgd = r;
return true;
}
void* backProject (void* parm)
{
copyStruct* s = (copyStruct*)parm; // retrive the slice info
unsigned int L = s->L;
float O_L = s->O_L;
float R_L = s->R_L;
double* A_n = s->A_n;
float* I_n = s->I_n;
float* f_L = s->f_L;
int slice1 = s->slice;
//cout<<"The size of volume is L= "<<L<<endl;
int from = (slice1 * L) / num_thrd; // note that this 'slicing' works fine
int to = ((slice1+1) * L) / num_thrd; // even if SIZE is not divisible by num_thrd
//cout<<"computing slice " << slice1<< " from row " << from<< " to " << to-1<<endl;
for (unsigned int k=from; k<to; k++)
{
double z = O_L + (double)k * R_L;
for (unsigned int j=0; j<L; j++)
{
double y = O_L + (double)j * R_L;
for (unsigned int i=0; i<L; i++)
{
double x = O_L + (double)i * R_L;
double w_n = A_n[2] * x + A_n[5] * y + A_n[8] * z + A_n[11];
double u_n = (A_n[0] * x + A_n[3] * y + A_n[6] * z + A_n[9] ) / w_n;
double v_n = (A_n[1] * x + A_n[4] * y + A_n[7] * z + A_n[10]) / w_n;
f_L[k * L * L + j * L + i] += (float)(1.0 / (w_n * w_n) * p_hat_n(u_n, v_n));
}
}
}
//cout<<" finished slice "<<slice1<<endl;
return NULL;
}
Alright, so I found out that the reason was the CPU_SETSIZE I was passing as an argument to pthread_attr_setaffinity_np. I replaced it with num_thrd. Apparently CPU_SETSIZE, which is declared under #define __USE_GNU, was not being picked up in my file! Sorry if I bothered any of y'all who were trying to debug this; thanks again!
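For reference, the man page for pthread_attr_setaffinity_np describes its second argument as the size in bytes of the CPU set, which is normally passed as sizeof(cpu_set_t). Here is a minimal, self-contained sketch of pinning each worker thread to one core with that convention (error checking omitted; the worker is a placeholder, not the backProject code above):

#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for pthread_attr_setaffinity_np and sched_getcpu
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void* worker(void* arg)
{
    int id = *static_cast<int*>(arg);
    // sched_getcpu() reports the core the thread is actually running on.
    std::printf("thread %d running on CPU %d\n", id, sched_getcpu());
    return nullptr;
}

int main()
{
    const int num_thrd = 8;
    pthread_t thread[num_thrd];
    int ids[num_thrd];

    for (int i = 0; i < num_thrd; i++)
    {
        cpu_set_t cpu;
        CPU_ZERO(&cpu);
        CPU_SET(i, &cpu);                 // allow only core i for this thread

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        // The second argument is the size in bytes of the cpu_set_t,
        // not a CPU or thread count.
        pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &cpu);

        ids[i] = i;
        pthread_create(&thread[i], &attr, worker, &ids[i]);
        pthread_attr_destroy(&attr);
    }

    for (int i = 0; i < num_thrd; i++)
        pthread_join(thread[i], nullptr);
    return 0;
}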
