Why does EXIF geodata need so much precision?

According to the spec, EXIF stores latitude and longitude with 192 bits of precision each. But a simple calculation shows that you only need 32 bits to divide the circumference of the Earth into segments of 9 mm:
r = 6378 km = 6.378 × 10^6 m
C = 2πr = 4.007 × 10^7 m
stepSize = C / 2^32 = 0.009 m = 9 mm
That's assuming you store the data in steps of equal size, i.e. as an unsigned int. I can understand that this would make the handling code harder to write, so what the hell: let's use a 64-bit double instead. At that precision we can divide the Earth's circumference into steps of about 2 picometers. A helium atom has a diameter of 62 picometers. So at 64 bits we already have enough precision to divide the Earth's surface at subatomic scales.
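A quick Python sanity check of those step sizes (nothing here is EXIF-specific, just the arithmetic above):

import math

r = 6.378e6                # equatorial radius in metres
c = 2 * math.pi * r        # circumference, about 4.007e7 m

print(c / 2**32)           # ~0.0093 m  -> 9 mm steps with a 32-bit integer
print(c / 2**64)           # ~2.2e-12 m -> ~2 picometre steps with 64 bits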
What on Earth do we need 192 bits per angle for?

The format stores latitude and longitude each as six 32-bit integer values, which adds up to 192 bits. The six integers encode degrees, minutes and seconds, each as a rational number with a numerator and a denominator.
Why this format? Presumably it's designed for very simple processors that can't handle floating point and might not even be able to do division. The format is more than 25 years old (though I'm not sure when GPS data was added), and cameras weren't as smart back then. Cameras needed to store lots of data (pictures are big), but they didn't need to do many mathematical operations on it. So they wasted some bits to make manipulation easier.
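For illustration, here is a minimal Python sketch of how such a degrees/minutes/seconds triple of rationals can be turned into decimal degrees. The tuple layout and function name are assumptions made for this example, not the API of any particular EXIF library:

from fractions import Fraction

def dms_rationals_to_degrees(dms):
    # dms is assumed to be ((deg_num, deg_den), (min_num, min_den), (sec_num, sec_den)),
    # i.e. the six 32-bit integers the GPS tags hold for one angle.
    degrees, minutes, seconds = (Fraction(num, den) for num, den in dms)
    return float(degrees + minutes / 60 + seconds / 3600)

# Example: 52 deg 13' 26.70", with the seconds stored as the rational 2670/100
print(dms_rationals_to_degrees(((52, 1), (13, 1), (2670, 100))))  # ~52.224083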

Related

Python returns wrong result when I multiply float with int

I have a multiplication in Python 3.7.3.
When I run 0.58 * 100 it returns 57.99999999999999.
Then I found that Java has the same result, but C can return the right number. I don't know what is happening with them. Sorry if it looks basic.
It's actually not the wrong answer, just an unexpected one.
If we think a bit about the problem, there are infinitely many numbers between 0 and 1, and you cannot represent all of them with a finite number of bits. So some numbers simply can't be represented (in fact, most of the infinitely many numbers between 0 and 1 cannot be represented exactly).
Following the floating-point standard (IEEE 754), 0.58 is really 0.57999999999999996003197111349436454474925994873046875, which is the closest number to 0.58 that can be represented with 64-bit floating point.
Check it with Python:
>>> from decimal import Decimal
>>> Decimal(0.58)
Decimal('0.57999999999999996003197111349436454474925994873046875')
If you want 58.0 you can quantize it to two decimals with the Decimal class.
>>> Decimal(100 * 0.58).quantize(Decimal('.01'))
Decimal('58.00')
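If full Decimal arithmetic is more than you need, plain rounding (or a tolerance check) is usually enough for the original 0.58 * 100 case; a small sketch:

import math

x = 0.58 * 100
print(x)                      # 57.99999999999999
print(round(x, 2))            # 58.0
print(math.isclose(x, 58.0))  # True -- compare floats with a tolerance, not ==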

Why divide sample standard deviation by sqrt(sample size) when calculating z-score?

I have been following Khan Academy videos to gain understanding of hypothesis testing, and I must confess that all my understanding thus far is based on that source.
Now, the following videos talk about z-score/hypothesis testing:
Hypothesis Testing
Z-statistic vs T-statistic
Now, coming to my doubt, which is all about the denominator in the z-score:
For the z-score formula, which is z = (x – μ) / σ,
we use this directly when the standard deviation of the population (σ) is known.
But when it's unknown and we use a sampling distribution,
then we have z = (x – μ) / (σ / √n), and we estimate σ with σs, where σs is the standard deviation of the sample and n is the sample size.
Then the z-score = (x – μ) / (σs / √n). Why are we dividing by √n when σs is already known?
Even in the Hypothesis Testing video, Sal divides the sample's standard deviation by √n. Why are we doing this, when σs is directly given?
Please help me understand.
I tried applying this on the following question, and faced the problems below:
Question: Yardley designed new perfumes. The Yardley company claimed that an average new perfume bottle lasts 300 days. Another company randomly selects 35 new perfume bottles from Yardley for testing. The sampled bottles last an average of 190 days, with a standard deviation of 50 days. If Yardley's claim were true, what is the probability that 35 randomly selected bottles would have an average life of no more than 190 days?
So, the above question, when I do the following:
z = (190 - 300) / (50/√35), we get z = -13.05, which is not a possible score, since a z-score should be between ±3.
And when I do z = (190 - 300) / 50, or rather z = (x – μ) / σ, I seem to be getting an acceptable answer.
Please help me figure out what I am missing.
I think the origin of the 1/√n is simply whether you're calculating the standard deviation of the lifetime of a single bottle or the standard deviation of the (sample) mean of a set of bottles.
The question indicates that 50 days is the standard deviation of the lifetimes of the set of 35 bottles. That implies that the estimated mean lifetime (190 days) has a margin of error of about 50/√35 days. Assuming a similar margin of error applies to the claimed 300-day lifetime, one can calculate the probability that a set of 35 bottles would be measured at 190 days or less, using the complementary error function.
Your z = -13.05 looks about right, implying that it is extremely unlikely that the claimed 300-day lifetime is consistent with what was seen in the 35-bottle experiment.
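As a sanity check on those numbers, here is a small Python sketch (standard library only) that computes the z-statistic and the lower-tail probability via the complementary error function; scipy.stats.norm.cdf would give the same answer:

import math

mu = 300     # claimed mean lifetime (days)
xbar = 190   # sample mean (days)
s = 50       # sample standard deviation (days)
n = 35       # sample size

z = (xbar - mu) / (s / math.sqrt(n))
p = 0.5 * math.erfc(-z / math.sqrt(2))   # P(Z <= z) for a standard normal

print(z)   # about -13.0
print(p)   # astronomically small: the 300-day claim is not credible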

A more natural color representation: Is it possible to convert RGBA color to a single float value?

Is it possible to represent an RGBA color as a single value that resembles the retinal stimulation it causes? The idea is something like:
0.0 value for black (no stimulation)
1.0 for white (full stimulation)
The RGBA colors in between should be represented by values that capture the amount of stimulation they cause to the eye like:
a very light yellow should have a very high value
a very dark brown should have a low value
Any ideas on this? Is converting to grayscale the only solution?
Thanks in advance!
Assign specific bits of a single number to each part of RGBA to represent your number.
If each part is 8 bits, the first 8 bits can be assigned to R, the second 8 bits to G, the third 8 bits to B, and the final 8 bits to A.
Let's say your RGBA values are 15, 4, 2, 1, and each one is given 4 bits.
In binary, R is 1111, G is 0100, B is 0010, A is 0001.
In a simple concatenation, your final number would be 1111010000100001 in binary, which is 62497. To get G back out, divide 62497 by 256, truncate to an integer, then take it modulo 16. 256 is 16 to the second power because G sits two positions from the right (R would need the third power, B the first power). 16 is 2 to the fourth power because I used 4 bits.
62497 / 256 = 244, 244 % 16 = 4.
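A minimal Python sketch of that packing and unpacking, using bit shifts instead of division (4 bits per channel, as in the example above):

def pack_rgba4(r, g, b, a):
    # Pack four 4-bit channel values (0-15) into one integer.
    return (r << 12) | (g << 8) | (b << 4) | a

def unpack_rgba4(value):
    # Recover the four 4-bit channels from the packed integer.
    return (value >> 12) & 0xF, (value >> 8) & 0xF, (value >> 4) & 0xF, value & 0xF

packed = pack_rgba4(15, 4, 2, 1)
print(packed)                 # 62497 (0b1111010000100001)
print(unpack_rgba4(packed))   # (15, 4, 2, 1)
print((packed // 256) % 16)   # 4 -- the divide-then-modulo route described above

Note that a packed integer like this preserves the channels but is not a perceptual brightness scale; for the "retinal stimulation" idea in the question, a grayscale/luminance conversion is the usual route.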

Normalized values, when summed, are more than 1

I have two files:
File 1:
TOPIC:topic_0 1294
aa 234
bb 123
TOPIC:topic_1 2348
aa 833
cc 239
bb 233
File 2:
0.1 0.2 0.3 0.4
This is just the format of my files. Basically, when the second column (omitting the first "TOPIC" line) is summed for each topic, it adds up to 1, since these are normalized values. Similarly, in file 2 the values are normalized, so they also sum to 1.
I perform multiplication of the values from file 1 and 2. The resulting output file looks like:
aa 231
bb 379
cc 773
The second column of the output file, when summed, should give 1. But a few files have values a little over 1, like 1.1 or 1.00038. How can I get exactly 1 in the output file? Is there some rounding I should do, or something else?
PS: The formats are just examples; the values and words are different. This is just for understanding purposes. Please help me sort this out.
Python stores floating-point numbers in base-2.
https://docs.python.org/2/tutorial/floatingpoint.html
This means that some numbers terminate in base-10 but repeat in base-2, hence the floating-point error when you add them up.
This gets into some math, but imagine trying to express the value 2/6 in base-10. When you eliminate the common factors from the numerator and denominator, it's 1/3.
That's 0.333333... repeating forever. I'll explain why in a moment, but for now, understand that if you only store the first 16 digits of the decimal, for example, then when you multiply the number by 3 you won't get 1, you'll get 0.9999999999999999, which is a little off.
This rounding error occurs whenever there's a repeating decimal.
Here's why your numbers don't repeat in base-10, but they do repeat in base-2.
Decimal numbers are base-10, and 10 factors into 2 × 5. Therefore, for a ratio to terminate in base-10, its denominator must factor into a combination of 2's and 5's, and nothing else.
Now let's get back to Python. Every float is stored in binary. This means that for a ratio to terminate in base-2, the denominator must factor into 2's and nothing else.
Your numbers repeat in base-2.
1/10 has (2*5) in the denominator.
2/10 reduces to 1/5 which still has five in the denominator.
3/10... well you get the idea.
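A small Python sketch of both points: exact fractions show why a denominator containing a 5 cannot terminate in base-2, and math.fsum (or rounding the final total) keeps a sum of normalized values from drifting past 1:

import math
from fractions import Fraction

values = [0.1] * 10            # ten normalized weights that should sum to 1

print(sum(values))             # 0.9999999999999999 -- accumulated binary error
print(math.fsum(values))       # 1.0 -- correctly rounded floating-point sum
print(round(sum(values), 9))   # 1.0 -- or simply round the final total

# 1/10 has a factor of 5 in its denominator, so it repeats in base-2;
# the double actually stored for 0.1 is a nearby dyadic rational:
print(Fraction(0.1))           # Fraction(3602879701896397, 36028797018963968)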

How to calculate growth with a positive and negative number?

I am trying to calculate percentage growth in Excel with a positive and a negative number.
This Year's value: 2434
Last Year's value: -2
The formula I'm using is:
(This_Year - Last_Year) / Last_Year
=(2434 - -2) / -2
The problem is that I get a negative result. Can an approximate growth number be calculated, and if so, how?
You could try shifting the number space upward so that both values become positive.
To calculate a gain between any two positive or negative numbers, you're going to have to keep one foot in the magnitude-growth world and the other foot in the volume-growth world. You can lean to one side or the other depending on how you want the resulting gains to appear, and there are consequences to each choice.
Strategy
Create a shift equation that generates a positive number relative to the old and new numbers.
Add the custom shift to the old and new numbers to get new_shifted and old_shifted.
Take (new_shifted - old_shifted) / old_shifted to get the gain.
For example:
old -> new
-50 -> 30 //Calculate a shift like (2*(50 + 30)) = 160
shifted_old -> shifted_new
110 -> 190
= (new-old)/old
= (190-110)/110 = 72.73%
How to choose a shift function
If your shift function shifts the numbers too far upward, for example adding 10000 to each number, you always get a tiny growth/decline. But if the shift is just big enough to get both numbers into positive territory, you'll get wild swings in the growth/decline on edge cases. You'll need to dial in the shift function so it makes sense for your particular application. There is no totally correct solution to this problem; you must take the bitter with the sweet.
Add this to your Excel sheet to see how the numbers and gains move about:
shift function
old new abs_old abs_new 2*(abs(old)+abs(new)) shiftedold shiftednew gain
-50 30 50 30 160 110 190 72.73%
-50 40 50 40 180 130 220 69.23%
10 20 10 20 60 70 80 14.29%
10 30 10 30 80 90 110 22.22%
1 10 1 10 22 23 32 39.13%
1 20 1 20 42 43 62 44.19%
-10 10 10 10 40 30 50 66.67%
-10 20 10 20 60 50 80 60.00%
1 100 1 100 202 203 302 48.77%
1 1000 1 1000 2002 2003 3002 49.88%
The gain percentage is affected by the magnitude of the numbers. The numbers above are a bad example and result from a primitive shift function.
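For reference, a small Python sketch of that shift-based gain, using the same shift the table uses, 2*(abs(old)+abs(new)); it reproduces the rows above:

def shifted_gain(old, new):
    # Gain after shifting both values up by 2 * (|old| + |new|).
    shift = 2 * (abs(old) + abs(new))
    return ((new + shift) - (old + shift)) / (old + shift)

for old, new in [(-50, 30), (-50, 40), (10, 20), (1, 1000)]:
    print(old, new, format(shifted_gain(old, new), ".2%"))
# -50 30 72.73%
# -50 40 69.23%
# 10 20 14.29%
# 1 1000 49.88%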
You have to ask yourself which critter has the most productive gain:
Evaluate the growth of critters A, B, C, and D:
A used to consume 0.01 units of energy and now consumes 10 units.
B used to consume 500 units and now consumes 700 units.
C used to consume -50 units (Producing units!) and now consumes 30 units.
D used to consume -0.01 units (Producing) and now consumes -30 units (producing).
In some ways, arguments can be made that each critter is the biggest grower in its own way. Some people will say B is the best grower; others will say D shows the bigger gain. You have to decide for yourself which is better.
The question becomes: can we map this intuitive feel for what we label as growth onto a continuous function that tells us what humans tend to regard as "awesome growth" vs "mediocre growth"?
Growth is a mysterious thing
You then have to take into account that critter B may have had a far more difficult time than critter D. Critter D may have far more prospects in the future than the others. It had an advantage! How do you measure the opportunity, difficulty, velocity and acceleration of growth? To be able to predict the future, you need to have an intuitive feel for what constitutes a "major home run" and a "lame advance in productivity".
The first and second derivatives of a function will give you the "velocity of growth" and "acceleration of growth". Learn about those in calculus, they are super important.
Which is growing more? A critter that is accelerating its growth minute by minute, or a critter that is decelerating its growth? What about high and low velocity and high/low rate of change? What about the notion of exhausting opportunities for growth. Cost benefit analysis and ability/inability to capitalize on opportunity. What about adversarial systems (where your success comes from another person's failure) and zero sum games?
There is exponential growth, linear growth, and unsustainable growth. Cost-benefit analysis and fitting a curve to the data. The world is far queerer than we can suppose. Fitting a perfect line to the data does not tell you which data point comes next, because of the black swan effect. I suggest everyone listen to this lecture on growth; the University of Colorado at Boulder gave a fantastic talk on growth, what it is, what it isn't, and how humans completely misunderstand it. http://www.youtube.com/watch?v=u5iFESMAU58
Fit a line to the temperature of heated water: once you think you've fit a curve, a black swan happens and the water boils. This effect happens all throughout our universe, and your primitive function (new-old)/old is not going to help you.
Here is Java code that accomplishes most of the above notions in a neat package that suits my needs:
Critter growth - (a critter can be "radio waves", "beetles", "oil temperature", "stock options", anything).
public double evaluate_critter_growth_return_a_gain_percentage(
        double old_value, double new_value) throws Exception {
    double abs_old = Math.abs(old_value);
    double abs_new = Math.abs(new_value);

    // This is your shift function; fool around with it and see how
    // it changes. Have a full battery of unit tests before you fiddle.
    double biggest_absolute_value = (Math.max(abs_old, abs_new) + 1) * 2;

    if (new_value <= 0 || old_value <= 0) {
        new_value = new_value + (biggest_absolute_value + 1);
        old_value = old_value + (biggest_absolute_value + 1);
    }
    if (old_value == 0 || new_value == 0) {
        old_value += 1;
        new_value += 1;
    }
    if (old_value <= 0)
        throw new Exception("This should never happen.");
    if (new_value <= 0)
        throw new Exception("This should never happen.");

    return (new_value - old_value) / old_value;
}
Result
It behaves kind-of sort-of like humans have an instinctual feel for critter growth. When our bank account goes from -9000 to -3000, we say that is better growth than when the account goes from 1000 to 2000.
1->2 (1.0) should be bigger than 1->1 (0.0)
1->2 (1.0) should be smaller than 1->4 (3.0)
0->1 (0.2) should be smaller than 1->3 (2.0)
-5-> -3 (0.25) should be smaller than -5->-1 (0.5)
-5->1 (0.75) should be smaller than -5->5 (1.25)
100->200 (1.0) should be the same as 10->20 (1.0)
-10->1 (0.84) should be smaller than -20->1 (0.91)
-10->10 (1.53) should be smaller than -20->20 (1.73)
-200->200 should not be in outer space (say more than 500%):(1.97)
handle edge case 1-> -4: (-0.42)
1-> -4: (-0.42) should be bigger than 1-> -9:(-0.45)
The simplest solution is the following:
=(NEW/OLD-1)*SIGN(OLD)
The SIGN() function will result in -1 if the value is negative and 1 if the value is positive. So multiplying by that will conditionally invert the result if the previous value is negative.
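A quick Python check of how that formula behaves with the numbers from the question and a couple of ordinary cases (the function name is mine, and OLD must not be zero, just as the Excel version would hit a #DIV/0! error):

def sign_adjusted_growth(old, new):
    # Mirrors =(NEW/OLD-1)*SIGN(OLD)
    sign = 1 if old > 0 else -1 if old < 0 else 0
    return (new / old - 1) * sign

print(sign_adjusted_growth(-2, 2434))   # 1218.0 -> reported as 121800% growth
print(sign_adjusted_growth(25, 75))     # 2.0    -> 200%
print(sign_adjusted_growth(-50, -25))   # 0.5    -> 50% improvement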
Percentage growth is not a meaningful measure when the base is less than 0 and the current figure is greater than 0:
Yr 1   Yr 2   % Change (abs val base)
-1     10     1100%
-10    10     200%
The above calculation reveals the weakness in this measure: if the base year is negative and the current year is positive, the result should really be reported as N/A.
It is true that this calculation does not make sense from a strict mathematical perspective; however, if we are looking at financial data it is still a useful metric. The formula could be the following:
=if(lastyear>0, thisyear/lastyear-1, (thisyear+abs(lastyear))/abs(lastyear))
Let's verify the formula empirically with simple numbers:
thisyear=50 lastyear=25 growth=100% makes sense
thisyear=25 lastyear=50 growth=-50% makes sense
thisyear=-25 lastyear=25 growth=-200% makes sense
thisyear=50 lastyear=-25 growth=300% makes sense
thisyear=-50 lastyear=-25 growth=-100% makes sense
thisyear=-25 lastyear=-50 growth=50% makes sense
again, it might not be mathematically correct, but if you need meaningful numbers (maybe to plug them in graphs or other formulas) it's a good alternative to N/A, especially when using N/A could screw all subsequent calculations.
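Translated into a small Python sketch of the same piecewise rule (the function name is mine), checked against the example pairs above:

def financial_growth(last_year, this_year):
    # Growth that stays meaningful when last_year is negative (last_year must not be 0).
    if last_year > 0:
        return this_year / last_year - 1
    return (this_year + abs(last_year)) / abs(last_year)

cases = [(25, 50), (50, 25), (25, -25), (-25, 50), (-25, -50), (-50, -25)]
for last, this in cases:
    print(last, this, format(financial_growth(last, this), "+.0%"))
# 25 50 +100%, 50 25 -50%, 25 -25 -200%, -25 50 +300%, -25 -50 -100%, -50 -25 +50%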
You should be getting a negative result - you are dividing by a negative number. If last year was negative, then you had negative growth. You can avoid this anomaly by dividing by Abs(Last Year)
Let me lay out the scenario.
From -303 to 183, what is the percentage change?
Treat -303 as -100% and +303 as +100%, with 0 in the middle; 183 then sits at +60.396% along that scale.
(183 - -303) / |-303| * 100 = 160.396%
The total percent change is approximately 160%.
Note: no matter how negative the starting value is, it is treated as -100%.
The best way to solve this issue is to use the formula for a slope:
(y1 - y2) / (x1 - x2)
Define x1 as the first moment, so its value will be C4 = 1;
define x2 as the second moment, so its value will be C5 = 2.
To get the percentage growth we can then use:
=(((B4-B5)/(C4-C5))/ABS(B4))*100
Works perfectly!
The simplest method is the one I would use:
=(ThisYear - LastYear)/(ABS(LastYear))
However, it only works in certain situations. With certain values the results will be inverted.
It really does not make sense to shift both values into the positive if you want a growth figure that is comparable with the normal growth computed from two positive numbers. If I want to see the growth of two positive numbers, I don't want the shifting.
It does make sense, however, to invert the growth for two negative numbers. Going from -1 to -2 is mathematically a growth of 100%, but that reads as something positive when in fact it is a decline.
So I have the following function, which allows inverting the growth for two negative numbers:
void setGrowth(Quantity q1, Quantity q2, boolean fromPositiveBase) {
    if (q1.getValue().equals(q2.getValue()))
        setValue(0.0F);
    else if (q1.getValue() <= 0 ^ q2.getValue() <= 0)   // opposite signs: growth makes no sense
        setNaN();
    else if (q1.getValue() < 0 && q2.getValue() < 0)    // both negative, option to invert
        setValue((q2.getValue() - q1.getValue()) / ((fromPositiveBase ? -1 : 1) * q1.getValue()));
    else                                                // both positive
        setValue((q2.getValue() - q1.getValue()) / q1.getValue());
}
These answers address the question of "how should I?" without considering the question "should I?". A change in the value of a variable that takes both positive and negative values is fairly meaningless, statistically speaking. The suggestion to "shift" might work well for some variables (e.g. temperature, which can be moved to a Kelvin scale to take care of the problem), but very poorly for others where negativity has a precise implication for direction, such as net income or losses. Operating at a loss (negative income) has a precise meaning in this context, and moving from -50 to 30 is not in any way the same thing as moving from 110 to 190, as a previous post suggests. These percentage changes should most likely be reported as "NA".
Just change the divisor to an absolute value, i.e.:
   A         B        C        D
1  25,000    50,000   75,000   200%
2  (25,000)  50,000   25,000   200%
The formula in D2 is =(C2-A2)/ABS(A2). Compared with the all-positive row, the result is the same (when the absolute base number is the same). Without the ABS in the formula the result would be -200%.
Use this formula:
=IFERROR((This Year/Last Year)-1,IF(AND(D2=0,E2=0),0,1))
The first part, IFERROR, gets rid of the N/A issues when there is a negative or a 0 value. It does this by looking at the values in E2 and D2 and making sure they are not both 0. If they are both 0 it will place 0%; if only one of the cells is 0, it will place 100% or -100%, depending on where the 0 value falls. The second part, (E2/D2)-1, is the same as (This Year - Last Year)/Last Year.
I was fumbling for answers today, and think this would work:
=IF(C5=0, B5/1, IF(C5<0, (B5+ABS(C5))/1, IF(C5>0, (B5/C5)-1)))
C5 = Last Year, B5 = This Year
We have 3 IF statements in the cell.
IF Last Year is 0, then This Year divided by 1
IF Last Year is less than 0, then This Year + ABSolute value of Last Year divided by 1
IF Last Year is greater than 0, then This Year divided by Last Year minus 1
Use this formula:
=100% + (Year 2/Year 1)
The logic is that you recover 100% of the negative in year 1 (hence the initial 100%) plus any excess will be a ratio against year 1.
Short one:
=IF(D2>C2, ABS((D2-C2)/C2), -1*ABS((D2-C2)/C2))
or confusing one (my first attempt):
=IF(D2>C2, IF(C2>0, (D2-C2)/C2, (D2-C2)/ABS(C2)), IF(OR(D2>0,C2>0), (D2-C2)/C2, IF(AND(D2<0, C2<0), (D2-C2)/ABS(C2), 0)))
D2 is this year, C2 is last year.
The formula should be this one:
=(thisYear+IF(LastYear<0,ABS(LastYear),0))/ABS(LastYear)-100%
The IF adds ABS(LastYear) to your ThisYear value when LastYear is negative, to generate the real difference.
If LastYear > 0, nothing is added.
It seems to work in the different scenarios I checked.
This article offers a detailed explanation of why the (b - a)/ABS(a) formula makes sense. It is counter-intuitive at first, but once you play with the underlying arithmetic it starts to make sense. As you get used to it, it changes the way you look at percentages.
The aim is to get the rate of increase.
The idea is the following:
First calculate the absolute increase.
Then add the absolute increase to both this year's and last year's values, and calculate the increase rate based on the new values.
For example:
LastYear | ThisYear | AbsoluteIncrease | LastYear01 | ThisYear01 | Rate
-10      | 20       | 30 = (10+20)     | 20=(-10+30)| 50=(20+30) | 2.5=50/20
-20      | 20       | 40 = (20+20)     | 20=(-20+40)| 60=(20+40) | 3=60/20
=(This Year - Last Year) / (ABS(Last Year))
This only works reliably if this year and last year are always positive numbers.
For example last_year=-50 this_year = -1. You get -100% growth when in fact the numbers have improved a great deal.
