Real value printed with %f is 0.0000, but condition '>0' does not apply (after using $floor in a task) - verilog

I declare my variable:
real meas_diff_div;
I have a task, where I use this variable:
measure_task(meas_diff_div);
After that I filter for an error based on the value of this real:
if (meas_diff_div > 0) `error("error message. %f", meas_diff_div);
Sometimes the error is triggered, even if the printed value is 0.000000
At the task declaration, the 1st line looks like this:
task measure_task(output real output_value);
In the task, this real is filled in as follows (I use $floor here to work around the fact that the % modulo operator cannot be applied to reals):
output_value = realtime_val1 - realtime_val2 * $floor(realtime_val1/realtime_val2);

The problem is how you $display the real value.
%f shows only zeroes because its default precision does not print enough digits after the decimal point to show very small floating-point values. Use %g to print the number in scientific notation and see that it really is non-zero. Alternatively, keep %f but specify a large enough precision.
module tb;
  real meas_diff_div;
  initial begin
    meas_diff_div = 0.0000001;
    if (meas_diff_div > 0) $display("%f", meas_diff_div);
    if (meas_diff_div > 0) $display("%1.7f", meas_diff_div);
    if (meas_diff_div > 0) $display("%g", meas_diff_div);
  end
endmodule
Outputs:
0.000000
0.0000001
1e-07
As you can see, when the signal has a small non-zero value, like 0.0000001, the if evaluates to true since it is larger than 0.
Although not explicitly stated in the IEEE Std 1800-2017, %f seems to behave like %.6f (the default number of digits after the decimal point is 6). Since this syntax was borrowed from C, see also: What is c printf %f default precision?
For your filter you could do something like:
if (meas_diff_div > 0.001) `error("error message. %f", meas_diff_div);
In your code, the problem is not $floor. The difference of two real values can produce a tiny non-zero value, which %f then displays as 0.000000.
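The same effect is easy to reproduce outside of Verilog; %f and %g behave the same way in any C-style formatting. A minimal Python sketch (the values 0.1, 0.2 and 0.3 are illustrative, not from the original testbench):

```python
a = 0.1 + 0.2        # rounds to 0.30000000000000004
b = 0.3
diff = a - b         # tiny positive residue, about 5.55e-17

print("%f" % diff)   # 0.000000 -- the six default decimals hide it
print("%g" % diff)   # scientific notation reveals it is non-zero
```

Even though %f prints all zeroes, a `diff > 0` check still fires, which is exactly the symptom described in the question.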

Related

Round Function Erratic -- Micropython

I am working with an MPU-6050 3-axis accelerometer and using this code to read the current Z axis value with 1/10 second between readings:
az = round(imu.accel.z, 2) + 0.04  # 0.04 is the calibration value
print(str(az))
Most times the value displayed with the print statement is correct (i.e., 0.84). But sometimes the value printed is the full seven-decimal place value (0.8400001). Is there a way to correct this so the two-decimal place value is displayed consistently?
Simply perform the math with the calibration value first, then round:
az = round(float(imu.accel.z) + 0.04, 2)
print(str(az))
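The order-of-operations effect can be reproduced in standard Python with hypothetical reading and calibration values (0.1 and 0.2 here, chosen because their sum is a classic rounding example; they are not the sensor's actual numbers):

```python
reading = 0.1      # hypothetical raw sensor value
cal = 0.2          # hypothetical calibration offset

a = round(reading, 1) + cal    # round first, then calibrate
b = round(reading + cal, 1)    # calibrate first, then round

print(a)   # 0.30000000000000004 -- the addition reintroduces error
print(b)   # 0.3
```

Rounding last guarantees the stored value is the nearest float to the two-decimal result, so it prints consistently.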

Weight function calculation for dynamic output

I am trying to prepare a weight function whose output should lie in (min_output_value, max_output_value) and the output depends on the difference of actual and target value of y, i.e. (y_actual, y_target).
The output value should tend towards the max_output_value if (y_actual - y_target) is more and if the difference is less, the output value should tend to min_output_value.
Any links pointing to answers are also appreciated.
After some R&D, I came up with a solution, which is as follows:
y_diff = abs(y_target - y_actual)
denominator = 1 + exp(-(min_output_value / 10) * y_diff)
output = max_output_value / denominator
This keeps the output in the range [max_output_value/2, max_output_value): it equals max_output_value/2 when y_diff is 0 and tends to max_output_value as y_diff grows.
Rounding is optional.
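If the output really must span the full (min_output_value, max_output_value) range, here is a sketch in Python of one variant using tanh; the steepness constant k is an assumption, not part of the original formula:

```python
import math

def weight(y_actual, y_target, min_out, max_out, k=0.5):
    """Map |y_actual - y_target| into [min_out, max_out):
    min_out when the difference is 0, approaching max_out as it grows."""
    y_diff = abs(y_target - y_actual)
    # tanh rises smoothly from 0 toward 1 as y_diff grows
    return min_out + (max_out - min_out) * math.tanh(k * y_diff)

print(weight(5.0, 5.0, 1.0, 10.0))    # 1.0 (difference 0 -> min_out)
print(weight(50.0, 5.0, 1.0, 10.0))   # close to 10.0
```

Larger k makes the output saturate toward max_out for smaller differences.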

Strange result from Summation of numbers in Excel and Matlab [duplicate]

I am writing a program where I need to delete duplicate points stored in a matrix. The problem is that when it comes to check whether those points are in the matrix, MATLAB can't recognize them in the matrix although they exist.
In the following code, intersections function gets the intersection points:
[points(:,1), points(:,2)] = intersections(...
obj.modifiedVGVertices(1,:), obj.modifiedVGVertices(2,:), ...
[vertex1(1) vertex2(1)], [vertex1(2) vertex2(2)]);
The result:
>> points
points =
12.0000 15.0000
33.0000 24.0000
33.0000 24.0000
>> vertex1
vertex1 =
12
15
>> vertex2
vertex2 =
33
24
Two points (vertex1 and vertex2) should be eliminated from the result. It should be done by the below commands:
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
After doing that, we have this unexpected outcome:
>> points
points =
33.0000 24.0000
The outcome should be an empty matrix. As you can see, the first (or second?) pair of [33.0000 24.0000] has been eliminated, but not the second one.
Then I checked these two expressions:
>> points(1) ~= vertex2(1)
ans =
0
>> points(2) ~= vertex2(2)
ans =
1 % <-- It means 24.0000 is not equal to 24.0000?
What is the problem?
More surprisingly, I made a new script that has only these commands:
points = [12.0000 15.0000
33.0000 24.0000
33.0000 24.0000];
vertex1 = [12 ; 15];
vertex2 = [33 ; 24];
points = points((points(:,1) ~= vertex1(1)) | (points(:,2) ~= vertex1(2)), :);
points = points((points(:,1) ~= vertex2(1)) | (points(:,2) ~= vertex2(2)), :);
The result as expected:
>> points
points =
Empty matrix: 0-by-2
The problem you're having relates to how floating-point numbers are represented on a computer. A more detailed discussion of floating-point representations appears towards the end of my answer (The "Floating-point representation" section). The TL;DR version: because computers have finite amounts of memory, numbers can only be represented with finite precision. Thus, the accuracy of floating-point numbers is limited to a certain number of decimal places (about 16 significant digits for double-precision values, the default used in MATLAB).
Actual vs. displayed precision
Now to address the specific example in the question... while 24.0000 and 24.0000 are displayed in the same manner, it turns out that they actually differ by very small decimal amounts in this case. You don't see it because MATLAB only displays 4 significant digits by default, keeping the overall display neat and tidy. If you want to see the full precision, you should either issue the format long command or view a hexadecimal representation of the number:
>> pi
ans =
3.1416
>> format long
>> pi
ans =
3.141592653589793
>> num2hex(pi)
ans =
400921fb54442d18
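For readers without MATLAB, the same hexadecimal view of a double can be sketched in Python with the standard struct module (num2hex_py is a made-up helper name, not a real library function):

```python
import math
import struct

def num2hex_py(x):
    # Pack the float as a big-endian IEEE 754 double, then show its bytes
    return struct.pack('>d', x).hex()

print(num2hex_py(math.pi))   # 400921fb54442d18, matching num2hex(pi)
```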
Initialized values vs. computed values
Since there are only a finite number of values that can be represented for a floating-point number, it's possible for a computation to result in a value that falls between two of these representations. In such a case, the result has to be rounded off to one of them. This introduces a small machine-precision error. This also means that initializing a value directly or by some computation can give slightly different results. For example, the value 0.1 doesn't have an exact floating-point representation (i.e. it gets slightly rounded off), and so you end up with counter-intuitive results like this due to the way round-off errors accumulate:
>> a=sum([0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1]); % Sum 10 0.1s
>> b=1; % Initialize to 1
>> a == b
ans =
logical
0 % They are unequal!
>> num2hex(a) % Let's check their hex representation to confirm
ans =
3fefffffffffffff
>> num2hex(b)
ans =
3ff0000000000000
How to correctly handle floating-point comparisons
Since floating-point values can differ by very small amounts, any comparisons should be done by checking that the values are within some range (i.e. tolerance) of one another, as opposed to exactly equal to each other. For example:
a = 24;
b = 24.000001;
tolerance = 0.001;
if abs(a-b) < tolerance, disp('Equal!'); end
will display "Equal!".
You could then change your code to something like:
points = points((abs(points(:,1)-vertex1(1)) > tolerance) | ...
(abs(points(:,2)-vertex1(2)) > tolerance),:)
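The same tolerance-based idea is built into Python as math.isclose; this is only a cross-language sketch with the same example values:

```python
import math

a = 24.0
b = 24.000001
# abs_tol plays the role of the tolerance variable in the MATLAB snippet
print(math.isclose(a, b, abs_tol=0.001))   # True
print(a == b)                              # False -- exact comparison fails
```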
Floating-point representation
A good overview of floating-point numbers (and specifically the IEEE 754 standard for floating-point arithmetic) is What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
A binary floating-point number is actually represented by three integers: a sign bit s, a significand (or coefficient/fraction) b, and an exponent e. For double-precision floating-point format, each number is represented by 64 bits laid out in memory, from most significant to least significant, as 1 sign bit, 11 exponent bits, and 52 significand bits.
The real value of a normal number can then be found with the following formula:
value = (-1)^s * (1 + b * 2^-52) * 2^(e - 1023)
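As a sketch, the formula value = (-1)^s * (1 + b * 2^-52) * 2^(e - 1023) for normal doubles can be checked in Python by unpacking a value's raw bit pattern (-0.75 is an arbitrary example):

```python
import struct

x = -0.75
bits = struct.unpack('>Q', struct.pack('>d', x))[0]  # raw 64-bit pattern

s = bits >> 63                  # 1 sign bit
e = (bits >> 52) & 0x7FF        # 11 exponent bits (biased by 1023)
b = bits & ((1 << 52) - 1)      # 52 significand bits

value = (-1) ** s * (1 + b * 2.0 ** -52) * 2.0 ** (e - 1023)
print(value)   # -0.75
```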
This format allows for number representations in the range 10^-308 to 10^308. For MATLAB you can get these limits from realmin and realmax:
>> realmin
ans =
2.225073858507201e-308
>> realmax
ans =
1.797693134862316e+308
Since there are a finite number of bits used to represent a floating-point number, there are only so many finite numbers that can be represented within the above given range. Computations will often result in a value that doesn't exactly match one of these finite representations, so the values must be rounded off. These machine-precision errors make themselves evident in different ways, as discussed in the above examples.
In order to better understand these round-off errors it's useful to look at the relative floating-point accuracy provided by the function eps, which quantifies the distance from a given number to the next largest floating-point representation:
>> eps(1)
ans =
2.220446049250313e-16
>> eps(1000)
ans =
1.136868377216160e-13
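Python exposes the same quantity as math.ulp (available since Python 3.9), which agrees with MATLAB's eps values:

```python
import math

# Distance from a value to the next representable double
print(math.ulp(1.0))     # 2^-52, matches eps(1)
print(math.ulp(1000.0))  # 2^-43, matches eps(1000)
```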
Notice that the precision is relative to the size of a given number being represented; larger numbers will have larger distances between floating-point representations, and will thus have fewer digits of precision following the decimal point. This can be an important consideration with some calculations. Consider the following example:
>> format long % Display full precision
>> x = rand(1, 10); % Get 10 random values between 0 and 1
>> a = mean(x) % Take the mean
a =
0.587307428244141
>> b = mean(x+10000)-10000 % Take the mean at a different scale, then shift back
b =
0.587307428244458
Note that when we shift the values of x from the range [0 1] to the range [10000 10001], compute a mean, then subtract the mean offset for comparison, we get a value that differs for the last 3 significant digits. This illustrates how an offset or scaling of data can change the accuracy of calculations performed on it, which is something that has to be accounted for with certain problems.
Look at this article: The Perils of Floating Point. Though its examples are in FORTRAN, it applies to virtually any modern programming language, including MATLAB. Your problem (and its solution) is described in the "Safe Comparisons" section.
Type
format long g
This command will show the FULL value of the number. It's likely to be something like 24.00000021321 != 24.00000123124
Try writing
0.1 + 0.1 + 0.1 == 0.3.
Warning: You might be surprised about the result!
Maybe the two numbers are really 24.0 and 24.000000001 but you're not seeing all the decimal places.
Check out the Matlab EPS function.
Matlab uses floating point math up to 16 digits of precision (only 5 are displayed).

how to use decimal values in Math.pow()

I have a calculation:
(22,582 / 10,000)^(1/15) - 1
In C# I am writing it like this:
double i = Math.Pow(2.2582,1/15) - 1;
Response.Write(i);
But every time it returns 0 in i. I know (1/15) is causing the trouble, so I tried (.067) in place of (1/15), which gives 0.0560927980835855, but I am still far from the actual result. Can somebody please tell me the right approach?
The first calculation should be:
Math.Pow(22.582d / 10.000d, 1.0d / 15.0d) - 1.0d
Use the "d" suffix on literals to tell the compiler that the number should be a double. If you don't, the compiler treats 1/15 as one integer divided by another, resulting in 0.
So the last calculation should be:
double i = Math.Pow(2.2582d, 1.0d/15.0d) - 1.0d;
Response.Write(i);
This means that:
1/15 = 0
and
1.0d/15.0d = 0.06666667
Here 1 and 15 are treated as integers, so the division yields the integer result 1/15 = 0,
not the double result.
Try using 1f/15f instead of 1/15
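The integer-division pitfall, and the corrected calculation, can be sketched in Python; its floor-division operator // shows the same behavior C# gives for int/int (the final value is approximate):

```python
print(1 // 15)   # 0 -- what C# computes for the integer expression 1/15
print(1 / 15)    # 0.0666... -- the floating-point division

# The intended calculation, with floating-point throughout:
i = (22582 / 10000) ** (1 / 15) - 1
print(i)         # about 0.0558
```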

What is gnuplot's internal representation of floating point numbers?

I'm sure the answer is obvious but I can't find it without looking at the source.
What is gnuplot's internal representation of floating point numbers? Is it the platform's double? Does it use its own internal representation? Can it do arbitrary precision?
A quick Google search will turn up that calculations are done in double precision whenever possible; however, there's a little subtlety going on here. The range of an IEEE double-precision number should go up to more than 1.797e308, yet if you try to give gnuplot a number that big, it chokes:
gnuplot> plot '-' u 0:($1/2.)
input data ('e' ends) > 1.7976931348623157e+308
input data ('e' ends) > 30
input data ('e' ends) > e
Warning: empty x range [1:1], adjusting to [0.99:1.01]
Warning: empty y range [15:15], adjusting to [14.85:15.15]
Now if you show gnuplot's range variables:
gnuplot> show variables all
You'll see some things that are a little strange:
GPVAL_DATA_X2_MIN = 8.98846567431158e+307
GPVAL_DATA_X2_MAX = -8.98846567431158e+307
With this number repeated a few times. (Note that this number is roughly half the double-precision maximum):
gnuplot> !python -c 'import sys; print sys.float_info.max/2.'
8.98846567431e+307
(Python's float is the system's double precision.)
Now a little playing around:
gnuplot> a = 8.98846567431e+307
gnuplot> a = 8.98846567432e+307
^
undefined value
So presumably gnuplot's floating point numbers go up to the system's maximum for double precision (where possible) divided by 2.
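The Python 2 one-liner from the session above can be repeated in modern Python 3 to confirm the arithmetic (this only checks the host's double precision, not gnuplot itself):

```python
import sys

# DBL_MAX, and DBL_MAX / 2 -- the apparent gnuplot input limit
print(sys.float_info.max)       # 1.7976931348623157e+308
print(sys.float_info.max / 2)   # about 8.98846567431e+307
```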
