I am trying to solve some matrix calculations using the MathNet.Numerics libraries. It all works fine with double numbers. However, now I want to represent numbers as fractions and get the answers to the calculations as fractions. How can I do that?
What I am currently doing is this:
var M = Matrix<double>.Build;
var V = Vector<double>.Build;
double[,] x1 = {
    {0, 0, 0},
    {1.0/2, 0, 0},
    {1.0/2, 1.0, 1.0}
};
var m = M.DenseOfArray(x1);
These fractions get converted into doubles, and the final answer comes out in doubles. I want to retain fractions throughout the calculation.
There are no fractions in your code sample. The expression "1.0/2" in C# is not a fraction but simply a division of two doubles that evaluates to 0.5d. In fact, there is no fraction data type in the .NET Framework at all.
The F# extensions of Math.NET Numerics do provide a BigRational type which implements fractions based on BigIntegers, but Math.NET Numerics does not support vectors or matrices of this value type either. Math.NET Symbolics might support this in the future but it's not there yet.
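If you only need small, hand-rolled computations, you can always carry exact rationals through the arithmetic yourself (with BigRational or a similar type). Purely as a sketch of the concept, and not Math.NET code, here is what "keeping fractions throughout" looks like in Python with its built-in fractions module; the matrix and vector below are just made-up examples:
from fractions import Fraction

# The same matrix as in the question, but with exact rationals.
x1 = [
    [Fraction(0),    Fraction(0), Fraction(0)],
    [Fraction(1, 2), Fraction(0), Fraction(0)],
    [Fraction(1, 2), Fraction(1), Fraction(1)],
]

# An arbitrary vector of fractions to multiply by.
v = [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]

# Naive matrix-vector product; every entry stays an exact fraction.
print([sum(row[k] * v[k] for k in range(3)) for row in x1])
# -> [Fraction(0, 1), Fraction(1, 6), Fraction(5, 6)]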
I know that the float color range [0..1] in a shader is mapped to the range [0..255] in a UCHAR buffer.
Based on that, I was expecting steps of size 1/255 in the shader color value for each change in the UCHAR buffer.
But the results were surprisingly different. Here are the first two steps:
Red float value in Shader -> UCHAR value in a read Buffer
0.000000 -> 0
0.002197 -> 0
0.002198 -> 1
0.006102 -> 1
0.006105 -> 2
The first two steps occur around 0.002197 and 0.006102, which are different from the expected steps of 0.00392 and 0.00784.
So what is the mapping formula?
Unsigned integer normalization is based on the formula f = i/INT_MAX, where f is the floating-point value (after clamping to [0, 1]), i is the integer value, and INT_MAX is the maximum integer value for the integer's bit depth (255 in this case).
So if you have a float, and want the unsigned, normalized integer value of it, you use i = f * INT_MAX. Of course... integers do not have the same precision as floats. So if the result of f * INT_MAX is 0.5, what is the integer value of that? It could be 0, or it could be 1, depending on how things are rounded.
Implementations are permitted to round integer values in any way they prefer. They are encouraged to use nearest rounding (the post-conversion 0.49 would become 0, and 0.5 would become 1), but that is not a requirement. The only requirements are that it must pick one of the two nearest values (it can't turn 0.5 into 3) and that the exact floating-point values of 0.0 and 1.0 (which includes any values clamped to them) must be exactly represented as integer 0 and INT_MAX.
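As a concrete illustration of the formula and of the rounding latitude described above, here is a small sketch in Python (the helper names are made up for the example; they don't come from any graphics API):
def unorm8_nearest(f):
    # Clamp to [0, 1], then i = f * INT_MAX with round-to-nearest.
    f = min(max(f, 0.0), 1.0)
    return int(f * 255.0 + 0.5)

def unorm8_truncate(f):
    # Same formula, but truncating instead of rounding to nearest.
    f = min(max(f, 0.0), 1.0)
    return int(f * 255.0)

print(unorm8_nearest(0.0), unorm8_nearest(1.0))       # 0 255 (the required exact endpoints)
print(unorm8_nearest(0.002), unorm8_truncate(0.002))  # 1 0   (0.002 * 255 = 0.51, so the two rules disagree)
Which intermediate value you observe for a given float therefore depends on which rounding the implementation picked.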
If you have an explicit need to have direct rounding, you can always do the normalization yourself. In fact, GLSL has specific functions to help you. The following assumes that you are trying to write to a texture with the Vulkan format R8G8B8A8_UNORM, and we're assuming you're writing to a storage image, not via outputs from the fragment shader (you can do that too, but you lose blending).
So, step 1 is to change your layout format to be r32ui. That is, you are now writing an unsigned 32-bit value, rather than 4 unsigned 8-bit normalized values. That's perfectly valid.
Step 2 is to employ the packUnorm4x8 function. This function does float-to-integer normalization, but here the specification explicitly defines the rounding. Use the return value of that function in your imageStore call, and you're fine.
If you want to write to a fragment shader output, that's a bit more complex. There, you will need to use a different image view, one that uses the R32_UINT format. So you're creating a 32-bit unsigned integer view of a 4x8-bit normalized texture. That has to become a render target, so you're going to have to do subpass surgery. From there, just write the result of packUNorm4x8.
Of course, you immediately lose blending and similar operations, since you're writing integer values. And since you had to do that subpass surgery, it's likely that any shader writing to it will need to do this too.
Also, note that in both cases, you will likely need to adjust the order of the components of the value you write. packUnorm4x8 is explicitly defined to put the first component in the least significant bits, whereas (I believe?) R8G8B8A8 is specified to be in that order, most significant to least. So you'll probably need to do what amounts to an endian swap, with packUnorm4x8(value.abgr).
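To make the component-order point concrete, here is a rough Python model of what a packUnorm4x8-style pack does, with the first component landing in the least significant byte; this is only a sketch of the behaviour described above, not shader code:
def to_unorm8(f):
    # Clamp to [0, 1] and round to the nearest 8-bit value.
    f = min(max(f, 0.0), 1.0)
    return int(f * 255.0 + 0.5)

def pack_unorm_4x8(x, y, z, w):
    # The first argument ends up in the least significant byte, which is why
    # you may need to swizzle (e.g. value.abgr) before packing.
    return to_unorm8(x) | (to_unorm8(y) << 8) | (to_unorm8(z) << 16) | (to_unorm8(w) << 24)

print(hex(pack_unorm_4x8(1.0, 0.0, 0.0, 1.0)))  # 0xff0000ff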
Hello, I am trying to understand why, after this operation:
import numpy as np
from numpy.linalg import inv

a = np.array([[1, 2], [3, 4]])
ainv = inv(a)
print(np.dot(a, ainv))
I am getting:
[[1.0000000e+00 0.0000000e+00]
[8.8817842e-16 1.0000000e+00]]
Since I am multiplying a by its inverse matrix, I think that I should get:
[[1,0],[0,1]]
So I would like some help understanding the result.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
ainv = np.linalg.inv(a) #[[-2.0, 1.0],[1.5, -0.5]]
print(np.dot(a,ainv))
Yields as you discovered:
[[1.0000000e+00 0.0000000e+00]
[8.8817842e-16 1.0000000e+00]]
Let's look at the type of the array elements:
type(ainv[1][1])
Shows us that the type of the array is
numpy.float64
Let's look at the numpy precision for this type:
numpy.finfo(numpy.float64).precision
Numpy says the approximate number of decimal digits to which this kind of float is precise is 15:
15
Out of curiosity, we can also look at the machine epsilon for the type:
np.finfo(np.float64).eps
which yields the smallest number n for which 1 + n is distinguishable from 1:
2.220446049250313e-16
So even though the number you get is technically distinguishable from 0 for the data type, the overall precision is about 15 decimal digits, and calculations on large matrices can compound floating-point imprecision even further.
That is the identity matrix, almost. You are getting numbers very close to zero instead of zero, which is a common issue with floating point numbers since they are only a finite approximation of real numbers. For all practical purposes 8.8e-16 or 0.00000000000000088 is ~ zero.
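If you want to check this programmatically instead of by eye, numpy's allclose does exactly this kind of "equal up to floating-point noise" comparison:
import numpy as np
from numpy.linalg import inv

a = np.array([[1.0, 2.0], [3.0, 4.0]])
ainv = inv(a)

# Compare against the exact identity with a tolerance instead of ==.
print(np.allclose(np.dot(a, ainv), np.eye(2)))  # True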
I'm using the below two statements:
double foo = 20.00
float bar = 20.00
println foo == bar
And
double foo = 20.01
float bar = 20.01
println foo == bar
It gives the output as:
true
false
Does anyone know what makes the difference between these two statements?
double and float values don't have an exact internal representation for every value. The only two-decimal-place fractions that can be represented exactly in IEEE-754 binary floating point are .00, .25, .50 and .75. All the others are stored slightly off, and the small difference between the double and the float approximations is what creates this inequality behaviour.
This is not just valid for Groovy, but for Java as well.
For example:
double foo = 20.25
float bar = 20.25
println foo == bar
Output:
true
The .01 part of 20.01 repeats infinitely in binary; 20.01 =
10100.00000010100011110101110000101000111101011100001010001111010111...
floats are rounded (to nearest) to 24 significant bits; doubles are rounded to 53. That makes the float
10100.0000001010001111011
and the double
10100.000000101000111101011100001010001111010111000011
In decimal, those are
20.0100002288818359375 and
20.010000000000001563194018672220408916473388671875, respectively.
(You could see this directly using my decimal to floating-point converter.)
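You can also reproduce the same comparison outside Groovy if you want to poke at it; for example, numpy's float32 and float64 behave like Java's float and double here (just a quick sketch, not Groovy code):
import numpy as np

f = np.float32(20.01)   # plays the role of float
d = np.float64(20.01)   # plays the role of double

print(f == d)     # False: the two roundings of 20.01 differ
print(float(f))   # 20.010000228881836
print(float(d))   # 20.01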
Groovy floats aren't kept in memory precisely, and that is the main cause of the difference you are seeing.
In Groovy, truncating to a given number of digits after the decimal point can be done with the following method:
public float trunc(int precision)
precision - the number of decimal places to keep.
For more details please follow the Class Float documentation.
It is preferable to use the BigDecimal class for decimal numbers when using the Groovy language.
The conversion from Number to String is much easier, and there is the option to define the precision of the number in the constructor.
BigDecimal(BigInteger unscaledVal, int scale)
Translates a BigInteger unscaled value and an int scale into a BigDecimal.
For more details please follow the Java BigDecimal documentation, as the Groovy language is based on Java. Moreover, BigDecimal will represent the exact value of the number.
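The same idea exists in other languages; purely to illustrate the concept behind BigDecimal, here is the equivalent in Python's decimal module, where a value constructed from the string "20.01" really is exactly 20.01:
from decimal import Decimal

a = Decimal("20.01")
b = Decimal("20.01")
print(a == b)          # True: both hold exactly 20.01
print(Decimal(20.01))  # constructing from the double shows it is not exactly 20.01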
So, I've learned quite a few ways to control the precision when I'm dealing with floats.
Here is an example of 3 different techniques:
from decimal import Decimal

somefloat = 0.0123456789
print("{0:.10f}".format(somefloat))
print("%.5f" % somefloat)
print(Decimal(somefloat).quantize(Decimal(".01")))
This will print:
0.0123456789
0.01235
0.01
In all of the above examples, the precision itself is a fixed value, but how could I turn the precision into a variable that could be entered by the end user?
I mean, the fixed precision values are now inside quotation marks, and I can't seem to find a way to put a variable there. Is there a way?
I'm on Python 3.
Using format:
somefloat=0.0123456789
precision = 5
print("{0:.{1}f}".format(somefloat, precision))
# 0.01235
Using old-style string interpolation:
print("%.*f" % (precision, somefloat))
# 0.01235
Using decimal:
import decimal
D = decimal.Decimal
q = D(10) ** -precision
print(D(somefloat).quantize(q))
# 0.01235
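If you're on Python 3.6 or newer, the same nested-field trick also works in an f-string:
print(f"{somefloat:.{precision}f}")
# 0.01235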
There are two ways to normalize a Vector3 object: by calling Vector3.Normalize(), or by normalizing from scratch:
class Tester {
    static Vector3 NormalizeVector(Vector3 v)
    {
        float l = v.Length();
        return new Vector3(v.X / l, v.Y / l, v.Z / l);
    }

    public static void Main(string[] args)
    {
        Vector3 v = new Vector3(0.0f, 0.0f, 7.0f);

        Vector3 v2 = NormalizeVector(v);
        Debug.WriteLine(v2.ToString());

        v.Normalize();
        Debug.WriteLine(v.ToString());
    }
}
The code above produces this:
X: 0
Y: 0
Z: 1
X: 0
Y: 0
Z: 0.9999999
Why?
(Bonus points: Why Me?)
Look how they implemented it (e.g. in asm).
Maybe they wanted to be faster and produced something like:
float l = 1 / v.Length();
return new Vector3(v.X * l, v.Y * l, v.Z * l);
to trade two divisions for three multiplications (because they thought multiplications were faster than divisions, which on modern FPUs is often not true). This introduces one more rounded operation, hence the slightly lower precision.
This would be the often cited "premature optimization".
Don't worry about this. There's always some error involved when using floats. If you're curious, try changing to double and see if this still happens.
You should expect this when using floats, the basic reason being that the computer processes in binary and this doesn't map exactly to decimal.
For an intuitive example of issues between different bases, consider the fraction 1/3. It cannot be represented exactly in decimal (it's 0.333333...) but can be in ternary (as 0.1).
Generally these issues are a lot less obvious with doubles, at the expense of computing cost (twice the number of bits to manipulate). However, in view of the fact that float-level precision was enough to get man to the moon, you really shouldn't obsess :-)
These issues are sort of computer theory 101 (as opposed to programming 101, which you're obviously well beyond), and if you're heading towards DirectX code, where similar things come up regularly, I'd suggest picking up a basic computer theory book and reading it quickly.
You have here an interesting discussion about String formatting of floats.
Just for reference:
Your number requires 24 bits to be represented, which means that you are using up the whole mantissa of a float (23 bits + 1 implied bit).
Single.ToString() is ultimately implemented by a native function, so I cannot tell for sure what is going on, but my guess is that it uses the last digit to round the whole mantissa.
The reason behind this could be that you often get numbers that cannot be represented exactly in binary, so you would get a long mantissa; for instance, 0.01 is represented internally as 0.00999... as you can see by writing:
float f = 0.01f;
Console.WriteLine ("{0:G}", f);
Console.WriteLine ("{0:G}", (double) f);
By rounding at the seventh digit, you will get back "0.01", which is what you would have expected.
Given the above, numbers with only 7 digits will not show this problem, as you already saw.
Just to be clear: the rounding is taking place only when you convert your number to a string: your calculations, if any, will use all the available bits.
Floats are precise to about 7 significant decimal digits (9 are needed to round-trip the underlying binary value), so if you go above that, rounding (with potential quirks) kicks in automatically.
If you keep the float to 7 digits (for instance, 1 to the left, 6 to the right) then it will work out, and the string conversion will as well.
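To put numbers on that: assuming the 0.9999999 you printed is the binary32 value one ulp below 1.0, you can see the 7-digit versus 9-digit behaviour directly (a quick Python/numpy sketch, not the .NET formatter itself):
import numpy as np

z = np.float32(1.0) - np.float32(2.0 ** -24)  # one ulp below 1.0 in binary32
print("{:.7g}".format(float(z)))   # 0.9999999   (7 significant digits)
print("{:.9g}".format(float(z)))   # 0.99999994  (9 digits round-trip the underlying float)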
As for the bonus points:
Why you? Because this code was 'eager to blow on you'.
(Vulcan... blow... ok.
Lamest.
Pun.
Ever)
If your code is broken by minute floating point rounding errors, then I'm afraid you need to fix it, as they're just a fact of life.