iPhone/Objective-C float division incorrect output - iOS 4

In an iOS program I am trying to divide two float values, but the result is incorrect:
float a = 179.891891;
float b = 8.994595;
NSLog(@"Result %f", a / b);
On dividing the two (a/b), the output I get is 20.0000 instead of 19.9999989993991. I have tried using double instead of float, but the issue remains. The value of "b" keeps varying, as I obtain it from other calculations. I need the result to be precise, since it feeds into further calculations, and 20.0000 instead of 19.9999989993991 makes a big difference in the final output I get.
Any help on this would be really great :).

I don't know about Objective-C, but in C you should cast one operand: (float)a / b. Otherwise, if both operands are integers, it is integer division; note that (float)(a/b) casts too late, after the division has already happened in integer arithmetic.
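In this snippet, though, a and b are already floats, so a/b is floating-point division; the problem is precision, not truncation. The gap between 19.9999989993991 and 20 (about 1e-6) is smaller than the spacing of single-precision floats near 20 (2^-19, roughly 1.9e-6), so once the inputs themselves are rounded to float, the quotient lands on one of the two representable neighbors of the true value and can print as 20.0000. A minimal sketch in Python that round-trips the values through IEEE-754 single precision, the same representation as a C float (the as_float32 helper is my own):
import struct

def as_float32(x):
    # round-trip x through IEEE-754 single precision, like a C float
    return struct.unpack('f', struct.pack('f', x))[0]

a = as_float32(179.891891)    # the stored inputs are already inexact
b = as_float32(8.994595)

print(as_float32(a / b))      # the single-precision quotient, within one ulp of 20
print(179.891891 / 8.994595)  # double precision keeps 19.99999899...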

Related

How can I prevent Python from automatically converting int to float when I use pow and mod?

I am trying to encrypt with RSA using the formula c = m^p mod q.
The problem is that if the number is too large, Python 3 converts it to a float when doing the modulo.
I tried to stop the conversion by wrapping things in int:
c = int(int(pow(n, p) % q))
The problem is that when p is too big, the value has decimals, and Python thinks I am converting an integer to a float, which leads to this:
OverflowError: int too large to convert to float
Is there a way to solve this?
This may not solve your problem, but it does address the specific concerns you put forth in your question and suggests possible causes based on what you've told us.
The problem you're having isn't with %. As per the documentation,
The floor division and modulo operators are connected by the following identity: x == (x//y)*y + (x%y).
Given integer x and y, (x//y)*y is always an exact integer, so x - (x//y)*y == x%y must also be an integer.
Since you said you are using the built-in pow function, I suspect that your problem is that your inputs are floats instead of ints. In that case, both pow and ** will convert the other argument to float, which could be the source of your error. If this is the case, wrapping each argument in int will make the error go away, but your RSA implementation will be incorrect.
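If your inputs really are ints, note that the three-argument form of the built-in pow does the modular exponentiation for you without ever materializing m**p, which is what RSA code normally relies on. A minimal sketch (the values are made up):
# hypothetical small RSA-style values; real keys would be far larger
m, p, q = 42, 65537, 2**127 - 1

# pow(m, p, q) computes (m ** p) % q by modular exponentiation, so the
# huge intermediate power is never built and nothing is converted to float
c = pow(m, p, q)
print(c)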

How do I get rid of .0 in Python

a = 28.85
b = 2000
print(a*b)
Result 57700.0
select name from fake limit 57700.0, 10
This SQL statement is invalid: limit expects an integer, not 57700.0.
Multiplying a float by an int naturally gives you a float as the answer.
So, as @Rakesh suggested, truncate it with int(a*b).
Beware that you will lose everything after the dot...
You can avoid the truncation if you use format in your print statement to round away the digits after the dot:
print("{:0.0f}".format(a * b))

Vulkan - strange mapping of float shader color values to uchar values in a read buffer

I know that a float color value in the range [0..1] in a shader is mapped into the range [0..255] in a UCHAR buffer.
Based on this, I was expecting steps of size 1/255 in the shader color values for each change in the UCHAR buffer.
But the results were surprisingly different. Here are the first two steps:
Red float value in Shader -> UCHAR value in a read Buffer
0.000000 -> 0
0.002197 -> 0
0.002198 -> 1
0.006102 -> 1
0.006105 -> 2
The first two steps are around 0.002197 and 0.006102, which are different from the expected steps of 0.00392 and 0.00784.
So what is the mapping formula?
Unsigned integer normalization is based on the formula f = i/INT_MAX, where f is the floating-point value (after clamping to [0, 1]), i is the integer value, and INT_MAX is the maximum integer value for the integer's bit depth (255 in this case).
So if you have a float, and want the unsigned, normalized integer value of it, you use i = f * INT_MAX. Of course... integers do not have the same precision as floats. So if the result of f * INT_MAX is 0.5, what is the integer value of that? It could be 0, or it could be 1, depending on how things are rounded.
Implementations are permitted to round integer values in any way they prefer. They are encouraged to use nearest rounding (the post-conversion 0.49 would become 0, and 0.5 would become 1), but that is not a requirement. The only requirements are that it must pick one of the two nearest values (it can't turn 0.5 into 3) and that the exact floating-point values of 0.0 and 1.0 (which includes any values clamped to them) must be exactly represented as integer 0 and INT_MAX.
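As an illustration, here is a quick sketch of the nearest-rounding convention in Python (I assume ties round up; the hardware is allowed to behave differently, which is consistent with the thresholds observed in the question):
def unorm8_nearest(f):
    # i = round(clamp(f, 0, 1) * INT_MAX), with INT_MAX = 255
    f = min(max(f, 0.0), 1.0)
    return int(f * 255 + 0.5)

# with nearest rounding, the 0 -> 1 step sits at 0.5/255 ~= 0.00196,
# not at the 0.002197/0.002198 boundary seen in the question
print(unorm8_nearest(0.00196), unorm8_nearest(0.00197))   # prints: 0 1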
If you have an explicit need to have direct rounding, you can always do the normalization yourself. In fact, GLSL has specific functions to help you. The following assumes that you are trying to write to a texture with the Vulkan format R8G8B8A8_UNORM, and we're assuming you're writing to a storage image, not via outputs from the fragment shader (you can do that too, but you lose blending).
So, step 1 is to change your layout format to be r32ui. That is, you are now writing an unsigned 32-bit value, rather than 4 unsigned 8-bit normalized values. That's perfectly valid.
Step 2 is to employ the packUnorm4x8 function. This function does float-to-integer normalization, but the specification explicitly performs the rounding correctly. Use the return value of that function in your imageStore function, and you're fine.
If you want to write to a fragment shader output, that's a bit more complex. There, you will need to use a different image view, one that uses the R32_UINT format. So you're creating a 32-bit unsigned integer view of a 4x8-bit normalized texture. That has to become a render target, so you're going to have to do subpass surgery. From there, just write the result of packUnorm4x8.
Of course, you immediately lose blending and similar operations, since you're writing integer values. And since you had to do that subpass surgery, it's likely that any shader writing to it will need to do this too.
Also, note that in both cases, you will likely need to adjust the order of the components of the value you write. packUnorm4x8 is explicitly defined to be little endian, whereas (I believe?) R8G8B8A8 is specified to be in that order, most significant to least. So you'll probably need to do an endian swap with packUnorm4x8(value.abgr).
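To make the byte-order point concrete, here is a rough Python model of packUnorm4x8 (my own approximation of the GLSL function, using round-to-nearest): the first component lands in the least significant byte of the 32-bit result.
import struct

def pack_unorm4x8(r, g, b, a):
    # each component: clamp to [0, 1], scale by 255, round to nearest
    to_u8 = lambda f: int(min(max(f, 0.0), 1.0) * 255 + 0.5)
    # '<I' = little-endian uint32, so the first byte becomes the low-order byte
    return struct.unpack('<I', bytes(to_u8(c) for c in (r, g, b, a)))[0]

print(hex(pack_unorm4x8(1.0, 0.0, 0.0, 1.0)))  # 0xff0000ff: R in the low byte, A in the high byte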

Python3: long int too large to convert to float

I've been checking some topics around here about the same problem I'm getting, but they don't seem to help.
My problem is that when I try to execute the following code, I get the error found in the title. How do I get around this?
import math

n = 2   # note: n and f are not initialized in the snippet as posted;
f = 2   # these are placeholder values so the loop can run
d = 2
while n != 1:
    n = 2
    d = math.sqrt(2 + d)
    n = n / d
    f = f * n
print(f)
That's because math.sqrt, as a consequence of using the C sqrt function, works on floating-point numbers, which are limited in size. Python is unable to convert the long integer into a floating-point number because it is too big.
See this question on ways to take the square root of large integers.
Better, you could use the decimal module, which provides an arbitrary-precision number type stored in base 10. Use decimal.Decimal(number).sqrt() to find the square root of a number.
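A minimal sketch of both the failure and two workarounds (math.isqrt needs Python 3.8+):
import math
from decimal import Decimal, getcontext

big = 10 ** 400                # far beyond the largest double (~1.8e308)

try:
    math.sqrt(big)             # converts to float internally...
except OverflowError as e:
    print(e)                   # ...int too large to convert to float

print(math.isqrt(big))         # exact integer square root, Python 3.8+
getcontext().prec = 50         # digits of precision for Decimal
print(Decimal(big).sqrt())     # 1E+200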

Return the float representation of 2 floats being multiplied (not precise value)

Using Sybase ASE 12.5, I have the following situation:
Two values stored in float columns, when multiplied, give a value.
Converting that value to a varchar (or retrieving it with Java) gives the underlying precise value which the floats approximated.
My issue is that the value as represented by the floats is correct, but the precise value is causing issues (due to strict rounding rules).
For example
declare @a float, @b float
select @a = 4.047000, @b = 1033000.000000
select @a*@b as correct, str(@a*@b, 40, 20) as wrong
gives:
correct: 4180551.000000,
wrong: 4180550.9999999995343387
Similarly when
@a = 4.047000, @b = 1
...you get
correct: 4.047000,
wrong: 4.0469999999999997
(The same thing happens using convert(varchar(30), @a*@b) and cast(@a*@b as varchar(30)).)
I appreciate it would be easy to just round the first example in Java, but for various business reasons that cannot be done, and in any case it wouldn't work for the second.
I also cannot change the float column datatype.
Is there any way to get the float representation of the multiplication product, either as a string or as the actual 'correct' value above?
Thanks
Chris
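The same effect is easy to reproduce outside Sybase: the double holds one exact binary value, and different formatters simply choose how many of its digits to show. A quick sketch in Python (assuming ASE's float here is a 64-bit IEEE double, which the default float is):
from decimal import Decimal

product = 4.047 * 1033000.0      # the exact real-number product would be 4180551

print("{:.6f}".format(product))  # 4180551.000000 -- like the plain select, rounded to 6 places
print(Decimal(product))          # the exact stored double, a hair below 4180551 --
                                 # this is what str(@a*@b, 40, 20) is digging out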
