Removing camera distortion in Rust (computer vision)

I'm trying to remove distortion from a camera image. In Python you'd use OpenCV's calibrateCamera (as here), which generates a camera matrix and distortion coefficients that look something like this:
{
  "K": [
    [709.1235670254288, 0.0, 503.3777803177343],
    [0.0, 762.6123316058502, 325.17231267609463],
    [0.0, 0.0, 1.0]
  ],
  "d": [
    -0.3387691660064381,
    0.038463156829204495,
    -0.006743169019715634,
    -0.005326145779068505,
    0.14363490033086238
  ],
  "width": "800",
  "height": "600",
  "camera": "0"
}
I want to replicate this in Rust. Alternatively, without the chessboard approach, you can manually input at least 6 corresponding points to calculate the matrix. In Rust there's the dlt crate (link), which (if I understand correctly) can generate this matrix. However, it's in a different format and I'm unable to figure out how to use it further. To be honest, I've considered that it might work with imageproc::geometric_transformations::Projection::from_matrix, but that requires a 9-element array.
Any help would be appreciated. The OpenCV Rust bindings are not helpful; they're fairly unstable at this point.
DLT crate matrix output:
[
[0.927944628653911, -0.00396867519830716, -2.9083465040716445e-5],
[0.008799148019952768, 0.9597282376057464, 1.6532613813528404e-5],
[-1.304535025948496e-10, -1.61673678083447e-14, 0.0],
[18.28896299924497, 22.63046353374526, 1.0]
]
edit: transposing that and rounding to make the near-zero entries easy to spot:
[[ 0.92794, 0.0088 , -0. , 18.28896],
[-0.00397, 0.95973, -0. , 22.63046],
[-0.00003, 0.00002, 0. , 1. ]]
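For reference, the distortion model that OpenCV's coefficients d = [k1, k2, p1, p2, k3] encode is the Brown-Conrady model; undistorting an image amounts to inverting this mapping (OpenCV does that numerically). Here is a sketch of the forward (distorting) direction in Python/NumPy, using the K and d values from the question, as a starting point for a Rust port:

```python
import numpy as np

# Intrinsics and distortion coefficients from the question.
K = np.array([[709.1235670254288, 0.0, 503.3777803177343],
              [0.0, 762.6123316058502, 325.17231267609463],
              [0.0, 0.0, 1.0]])
d = [-0.3387691660064381, 0.038463156829204495,
     -0.006743169019715634, -0.005326145779068505,
     0.14363490033086238]  # [k1, k2, p1, p2, k3]

def distort(u, v, K, d):
    """Map an ideal (undistorted) pixel to its distorted position."""
    k1, k2, p1, p2, k3 = d
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Normalized image coordinates.
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Radial distortion factor.
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Tangential distortion terms.
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Back to pixel coordinates.
    return xd * fx + cx, yd * fy + cy

# The principal point (cx, cy) is a fixed point of the model.
print(distort(503.3777803177343, 325.17231267609463, K, d))
```

To undistort, you build a remap table by evaluating this model for each output pixel (or invert it iteratively); none of that depends on OpenCV, so it can be ported to Rust with plain arrays or ndarray.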

Related

How to exclude starting point from linspace function under numpy in python?

I want to exclude the starting point from an array in Python using NumPy. How can I do that? For example, I want to exclude 0 and continue from the next value greater than 0, starting from the following code: x = np.linspace(0, 2, 10)
Kind of an old question, but I thought I'd share my solution to the problem.
Assuming you want to get an array
[0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.]
you can make use of the endpoint option in np.linspace() and reverse the direction:
x = np.linspace(2, 0, 10, endpoint=False)[::-1]
the [::-1] reverses the array so that it ends up being in the desired sequence.
x = np.linspace(0, 2, 10)[1:]  # remove the first element by indexing
print(x)
[0.22222222 0.44444444 0.66666667 0.88888889 1.11111111 1.33333333
1.55555556 1.77777778 2. ]
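Note that the two approaches above produce different arrays: the reversed endpoint=False version keeps 10 points with step 0.2, while slicing with [1:] keeps the original grid's spacing of 2/9 and leaves 9 points. A quick side-by-side:

```python
import numpy as np

# Reversed endpoint=False: 10 evenly spaced values ending at 2.0,
# with 0 excluded and step 0.2.
a = np.linspace(2, 0, 10, endpoint=False)[::-1]

# Slicing: drops the first element of the original 10-point grid,
# leaving 9 values with step 2/9.
b = np.linspace(0, 2, 10)[1:]

print(a)  # 0.2, 0.4, ..., 2.0
print(b)  # 0.2222..., 0.4444..., ..., 2.0
```

Which one you want depends on whether you care about the step size or the point count.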

Plotting A0 to Z....var in a GeoJson layer using turfjs and leaflet

I am using turfjs and leaflet to plot a grid and label each square in this fashion:
[A0,A1,...,A23]
[B0,B1,...,B23]
[C0,C1,...,C23]
End goal:
To know the coordinates of the corner points of each cell. That is, I want the coordinates of the 4 corners of A0 (and of the other cells). This will then be fed to a JSON file with something like this:
[
{"A0": [
["x","y"],
["x","y"],
["x","y"],
["x","y"]
]},
{"A1": [
["x","y"],
["x","y"],
["x","y"],
["x","y"]
]}
]
Then, my app will ask the device for its GPS position and learn which "square" I'm in.
I have managed to plot the squares (fiddle), but I could not label them or even attach a click handler to log the corner coordinates to the console. I have logged the layers, but I'm not sure whether the GeoJSON layer is plotted from left to right. I have concluded that each layer contains 5 coordinates, which I suspect is the information I need, but a 5th coordinate does not make sense for a square grid cell, unless the 3rd coordinate is the center...
I was able to figure out the mystery of the GeoJson layer in leaflet.
The coordinates are returned like this:
[ 0, 3 , 6 ]
[ 1, 4 , 7 ]
[ 2, 5 , 8 ]
//will label this way:
A0 = 0 ( coordinate sets at 0 )
A1 = 1 ( coordinate sets at 1 )
A2 = 2
B0 = 3
B1 = 4
B2 = 5
...
I still don't know why there is a 5th coordinate in each layer plotted by Leaflet, but this is good enough for me. I can now label them as I want.
Thank you for the help.
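For what it's worth, the 5th coordinate is almost certainly the ring closure: GeoJSON (RFC 7946) requires a polygon's linear ring to repeat its first position as its last, so a square cell is stored as 5 coordinate pairs. A minimal sketch (with a hypothetical unit square) of recovering the 4 distinct corners:

```python
# A square cell as GeoJSON stores it: the first position is repeated
# at the end to close the linear ring (RFC 7946), giving 5 pairs.
cell = {
    "type": "Polygon",
    "coordinates": [[
        [0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]
    ]]
}

ring = cell["coordinates"][0]
assert ring[0] == ring[-1]  # the closing point duplicates the first
corners = ring[:-1]         # the 4 distinct corners
print(corners)              # [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
```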

How to interpret the return result of precisionByThreshold in Spark?

I am aware of the concept behind the precisionByThreshold method, but when I use Spark ML to implement linear regression binary classification and print out the analysis result of precisionByThreshold, I get results like this:
Threshold: 1.0, Precision: 0.7158351409978309
Threshold: 0.0, Precision: 0.22472464244616144
Why are there only two thresholds? And when the threshold is 1.0, no sample should be classified as positive, so the precision should be 0. Can anybody explain this result to me and tell me how to get more thresholds? Thanks in advance.
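No answer was recorded for this question, but the behavior can be illustrated. Spark's BinaryClassificationMetrics uses each distinct predicted score as a threshold, so a model whose output contains only the scores 0.0 and 1.0 (e.g. hard class predictions rather than probabilities) yields exactly two rows; and since a sample counts as positive when its score is greater than or equal to the threshold, threshold 1.0 still classifies the score-1.0 samples as positive. A small sketch of the same computation, with made-up scores and labels:

```python
# Hypothetical scores/labels: the model emits only two distinct
# scores (0.0 and 1.0), as happens when hard class predictions are
# fed to the metrics instead of probabilities.
scores = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
labels = [1,   1,   0,   1,   0,   0,   0,   0]

def precision_by_threshold(scores, labels):
    # One threshold per distinct score; at threshold t, a sample is
    # predicted positive when its score >= t (inclusive).
    out = {}
    for t in sorted(set(scores), reverse=True):
        predicted_pos = [l for s, l in zip(scores, labels) if s >= t]
        out[t] = sum(predicted_pos) / len(predicted_pos)
    return out

print(precision_by_threshold(scores, labels))
# threshold 1.0 -> precision 2/3; threshold 0.0 -> precision 3/8
```

To get more thresholds, feed the metrics the positive-class probability for each sample rather than the hard 0/1 prediction.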

FFT Code Break Down

I am trying to implement the FFT in one of my projects. Unfortunately I feel like every site I go to is explaining things way over my head. I have looked at many different sites for clarification but alas it has so far eluded me.
Each of the sites I have gone to so far has either presented the code with no comments on the variables or other explanation, or has explained things at such a level that I cannot grasp it.
I would appreciate it if someone can break down each part of this code and the process in the most descriptive way possible.
First, I know that the input to the FFT is [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]. What do these numbers represent? Are they in hertz or volts?
Last, I know that the output from the FFT is 4.000 2.613 0.000 1.082 0.000 1.082 0.000 2.613. What do these numbers represent? What is the unit? How can they be used to get the magnitude or frequency from the data set?
Again, I am looking for every step to be explained, so commenting the following FFT code would also be very helpful. I would be eternally grateful if you can explain this well enough that a 5 year old would understand. (I feel about that age sometimes when looking at the articles).
Thanks for all the help in advance. You guys on here have helped me out a TON.
CODE SOURCE: http://rosettacode.org/wiki/Fast_Fourier_transform#Python
CODE:
from cmath import exp, pi

def fft(x):
    # I understand that we are taking the length of the array sent
    # into the function and assigning it to N. But I do not get why.
    N = len(x)
    # I get that we are returning itself if it is the only item.
    # What does x represent at this point?
    if N <= 1: return x
    # We are creating an even variable and assigning it to the fft of
    # the even terms of x. This is possibly because we can use this
    # to take advantage of the symmetry?
    even = fft(x[0::2])
    # We are now doing the same thing with the odd variable. It is
    # going to be the fft of the odd terms of x. Why would we need
    # both if we are using it to take advantage of the symmetry?
    odd = fft(x[1::2])
    T = [exp(-2j * pi * k / N) * odd[k] for k in range(N // 2)]
    return [even[k] + T[k] for k in range(N // 2)] + \
           [even[k] - T[k] for k in range(N // 2)]

# I understand that we are printing a join formatted as a float.
# I get that the float will have 3 numbers after the decimal place and
# will take up a total of 5 spots.
# I also understand that the abs(f) is what is being formatted and
# that the absolute value is getting rid of the imaginary portion
# that is often seen returned by the FFT.
print(' '.join("%5.3f" % abs(f)
               for f in fft([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])))
RETURNS:
4.000 2.613 0.000 1.082 0.000 1.082 0.000 2.613
An FFT is just a fast way to calculate a DFT (using a factoring trick).
Perhaps learn what a DFT does first, as the FFT factoring trick might be confusing the issue of what a DFT does. A DFT is just a basis transform (a type of matrix multiply). The units can be completely arbitrary (millivolts, inches, gallons, dollars, etc.), and any mapping of the results to frequencies depends on the sample rate of the input data.
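To make the "basis transform" point concrete, here is a naive O(N^2) DFT written as a direct sum; each output bin k is a dot product of the input with one complex sinusoid, and bin k corresponds to frequency k * sample_rate / N in whatever units the input has. For the input in the question it reproduces the FFT's output:

```python
from cmath import exp, pi

def dft(x):
    # Direct evaluation of X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N):
    # each output bin is a dot product of the input with one basis vector.
    N = len(x)
    return [sum(x[n] * exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

mags = [abs(f) for f in dft([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])]
print(' '.join("%5.3f" % m for m in mags))
# 4.000 2.613 0.000 1.082 0.000 1.082 0.000 2.613
```

The FFT gets the same numbers by recursively splitting this sum into even- and odd-indexed halves, which is exactly what the even/odd recursion in the question's code does.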

Underscore Aggregate Hashes by Keys

I work on machine learning application. I use underscorejs when I need to operate with arrays and hashes.
The question is the following: in ML there is a cross-validation approach, where you need to calculate performance over several folds.
For each fold, I have a hash of performance parameters, like the following:
{ 'F1': 0.8,
'Precision': 0.7,
'Recall':0.9
}
I push all hashes to the array, at the end I have an array of the hashes, like following
[ { 'F1': 0.8,
'Precision': 0.7,
'Recall':0.9
},
{ 'F1': 0.5,
'Precision': 0.6,
'Recall':0.4
},
{ 'F1': 0.4,
'Precision': 0.3,
'Recall':0.4
}
]
The question is: at the end I want to calculate the average for each parameter of the hash, i.e. I want to sum up all hashes by parameter and then divide each parameter by the number of folds, in my case 3.
Is there an elegant way to do this with Underscore and JavaScript?
One important point: sometimes I need to do this aggregation when the hash for a fold looks like the following:
{
label1:{ 'F1': 0.8,
'Precision': 0.7,
'Recall':0.9
},
label2:{ 'F1': 0.8,
'Precision': 0.7,
'Recall':0.9
},
...
}
The task is the same, average of F1, Precision, Recall for every label among all folds.
Currently I have an ugly solution that runs over the whole hash several times. I would appreciate any help, thank you.
If it is an array, just use the array. If it is not an array, use _.values to turn it into one and use that. Then, we can do a fold (or reduce) over the data:
_.reduce(data, function(memo, obj) {
    return {
        F1: memo.F1 + obj.F1,
        Precision: memo.Precision + obj.Precision,
        Recall: memo.Recall + obj.Recall,
        count: memo.count + 1
    };
}, {F1: 0, Precision: 0, Recall: 0, count: 0})
This returns a hash containing F1, Precision, and Recall, which are sums, and count, which is the number of objects. It should be pretty easy to get an average from those.
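The same sum-then-divide shape, sketched in Python for anyone porting the idea (the sample folds are the three hashes from the question):

```python
# The three per-fold hashes from the question.
folds = [
    {'F1': 0.8, 'Precision': 0.7, 'Recall': 0.9},
    {'F1': 0.5, 'Precision': 0.6, 'Recall': 0.4},
    {'F1': 0.4, 'Precision': 0.3, 'Recall': 0.4},
]

# Sum each key across folds, then divide by the number of folds --
# the same fold/reduce shape as the Underscore answer.
totals = {}
for fold in folds:
    for key, value in fold.items():
        totals[key] = totals.get(key, 0.0) + value
averages = {key: total / len(folds) for key, total in totals.items()}

print(averages)  # F1 ~ 0.567, Precision ~ 0.533, Recall ~ 0.567
```

For the nested label1/label2 variant, the same two loops run once per label, accumulating into a per-label totals hash.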