I use the VTK library and vtkRectilinearGridWriter to write VTK output files in my scientific research, but the precision is low. So my question is:
How do I specify the number of digits after the decimal point when writing data with the vtkRectilinearGridWriter class? There seems to be no explicit setter method for that.
Thanks in advance!
The class vtkRectilinearGridWriter inherits from vtkAlgorithm, which has an enumeration for setting the desired output precision, vtkAlgorithm::DesiredOutputPrecision; you can set it to SINGLE_PRECISION or DOUBLE_PRECISION. See more about vtkAlgorithm here.
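A different workaround, not from the answer above but a sketch of something the legacy writer supports: writing the file in binary mode via vtkDataWriter (the writer's base class) sidesteps ASCII truncation entirely, since the raw float bytes are stored. Assuming the standard VTK Python bindings and an existing vtkRectilinearGrid named grid (hypothetical here):

import vtk

# 'grid' is a hypothetical, previously built vtkRectilinearGrid
writer = vtk.vtkRectilinearGridWriter()
writer.SetFileName("output.vtk")
writer.SetInputData(grid)
writer.SetFileTypeToBinary()  # binary mode writes the raw float bytes, so no ASCII rounding occurs
writer.Write()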
I've got this request: using alphabetical and numerical symbols, define a code in which each symbol of interest is associated with a corresponding binary configuration. The code must use 5 bits, be redundant, and have a Hamming distance of 1. Identify the cardinality of the defined code. Finally, does such a code exist?
I don't have a clear idea of how to approach this; could anyone help?
Thank you!
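Not a full answer, but a sketch that may help you experiment: given any candidate set of 5-bit codewords, you can check its cardinality and minimum pairwise Hamming distance directly. The candidate code below is my own arbitrary choice, not part of the assignment:

from itertools import combinations

def hamming(a, b):
    # number of bit positions in which two 5-bit words differ
    return bin(a ^ b).count("1")

# arbitrary candidate: the first 16 of the 32 possible 5-bit words
code = list(range(16))

min_dist = min(hamming(a, b) for a, b in combinations(code, 2))
print(len(code), min_dist)  # cardinality 16 (redundant, since 16 < 32), minimum distance 1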
I created a Cassandra table with a column of type DataType.FLOAT.
I execute my CQL statements using CqlSession:
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress(properties.getHost(), properties.getPort()))
        .withLocalDatacenter(properties.getDatacenter())
        .withAuthCredentials(properties.getUsername(), properties.getPassword())
        .build(); // keep the returned session; the original code discarded it
But when I insert float numbers, they get rounded:
12334.9999 -> 12335.0.
0.999999 -> 0.999999
12345.9999 -> 12346.0
It seems like Cassandra rounds the float based on the total number of significant digits, not just the digits after the decimal point.
What are the options to solve this problem? I know that I can use the Decimal data type, but maybe you have another solution?
I actually covered this issue with Apache Cassandra and DataStax Astra DB in an article I wrote last month:
The Guerilla Guide to Building E-commerce Product Services with DataStax Astra DB
So the problem here is that FLOAT is a fixed-precision floating point type: every value has to fit into 32 bits. When a numeric value is converted from base-10 (decimal) to base-2 (binary), each one of those binary digits must take a value (zero or one, obviously), and it's during this conversion between base-10 and base-2 that rounding errors occur. The likelihood of a rounding error increases with the number of significant digits in the value (on either side of the decimal point).
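You can see this directly by round-tripping the values through IEEE-754 single precision, the same 32-bit format Cassandra's FLOAT uses; a quick sketch in Python:

from struct import pack, unpack

def as_float32(x):
    # round-trip a Python float (64-bit) through IEEE-754 single precision (32-bit)
    return unpack("f", pack("f", x))[0]

print(as_float32(12334.9999))  # 12335.0 -- only about 7 significant decimal digits survive
print(as_float32(0.999999))    # 0.9999989867210388 -- displays as 0.999999 at 6 digits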
What are the options to solve this problem? I know that I can use the Decimal data type, but maybe you have another solution?
Well, you mentioned the best solution (IMO), which is to use a DECIMAL to store the value. This works because DECIMAL is an arbitrary-precision type. The values in a DECIMAL type are stored in base-10, so there's no conversion necessary and only the required precision is used.
Before arbitrary precision types came along, we used to use INTEGERs for things that had to be accurate. The first E-commerce team I worked on stored product prices in the DB as pennies, to prevent the rounding issue.
Yes, both INT and FLOAT are fixed-precision types, but an INT stores whole numbers, and all of its bits can be used for that. Therefore the usage patterns of the bits are quite different. While both INT and FLOAT allocate a bit for the sign (+/-), in a floating point number the remaining 31 bits are pre-allocated between the significand and its exponent (23 and 8 bits, respectively, in a 32-bit float).
So conceptually, your example of 12334.9999 is stored as a significand and an exponent:
123349999 x 10^-4
A DECIMAL stores exactly that base-10 pair, while a FLOAT must re-encode it as a base-2 significand and exponent, and 12334.9999 has no exact 24-bit base-2 representation, which is where the rounding comes from.
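If you want to see that base-10 significand-and-exponent pair directly, Python's decimal module (similar in spirit to Cassandra's DECIMAL and Java's BigDecimal) exposes it; a small sketch:

from decimal import Decimal

d = Decimal("12334.9999")   # stored as an unscaled base-10 integer plus an exponent
print(d)                    # 12334.9999 -- exact, no base-2 conversion happened
print(d.as_tuple())         # DecimalTuple(sign=0, digits=(1, 2, 3, 3, 4, 9, 9, 9, 9), exponent=-4)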
tl;dr:
Basically FLOATs use fixed precision to store values as a formula (significand and exponent) in base-2, and the conversion back to base-10 makes rounding errors likely.
You're right, use a DECIMAL type. When you need to be exact, that's the only real solution.
If you're interested, here are two additional SO answers which provide more detail on this topic:
Double vs. BigDecimal?
What is the difference between the float and integer data type when the size is the same?
I implemented my mathematical model using ILOG CPLEX ver. 2.7. The decimal part of the objective function is very small, so CPLEX returns 0 and effectively abandons that part of the objective (so the objective function is not really optimized). Is there a way to increase the accuracy so that CPLEX takes the full decimal part into account?
I created an .ops settings file to change the decimal precision from 4 to 10, but CPLEX still does not take the figures after the decimal point into account; to illustrate, see the image below.
In that part of the IDE you cannot change the display precision. But as said in this post, you may see more in the statistics tab.
Or you may use scripting to get any precision you need.
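For instance, with the docplex Python API (a sketch only; it assumes a working CPLEX installation, and the toy model is made up), the engine keeps the objective in full double precision even when the IDE display rounds it, so you can print it yourself:

from docplex.mp.model import Model

# made-up toy model whose objective is tiny on purpose
m = Model(name="toy")
x = m.continuous_var(name="x", ub=1)
m.maximize(0.0000001 * x)
s = m.solve()
print("%.10f" % s.get_objective_value())  # prints 0.0000001000 rather than a rounded 0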
In Python, how could I go about suppressing scientific notation with complete precision WITHOUT knowing the length of the number?
I need Python to dynamically return the number in normal form with exact precision, no matter how large it is, and without any trailing zeros. The numbers will always be integers, but they will get very large and I need them to be completely accurate. Even a single digit being rounded or changed would mess up my program.
Any ideas?
Use the decimal module:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
From https://docs.python.org/library/decimal.html
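A minimal sketch of that approach (the precision value and the example number are arbitrary):

from decimal import Decimal, getcontext

getcontext().prec = 100   # arbitrary: set it at least as large as the digit counts you expect
n = Decimal(2) ** 200     # a 61-digit integer, exact under this context
print(format(n, "f"))     # fixed-point output: every digit, no exponent, no trailing zeros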
I've computed the mean and variance of a set of values, and for each number in the set I want to pass along the value that represents its number of standard deviations from the mean. Is there a better term for this, or should I just call it num_of_std_devs_from_mean...?
Some suggestions here:
Standard score (z-value, z-score, normal score)
but "sigma" or "stdev_distance" would probably be clearer
The standard deviation is usually denoted with the letter σ (sigma). Personally, I think more people will understand what you mean if you do say number of standard deviations.
As for a variable name, as long as you comment the declaration you could shorten it to std_devs.
sigma is what you want, I think.
That is normalizing your values. You could just refer to it as the normalized value. Maybe norm_val would be more appropriate.
I've always heard it as number of standard deviations
Deviation may be what you're after. Deviation is the distance between a data point and the mean.