Non-linear optimization for rotation - graphics

I had a chat with an engineer the other day and we both were stumped on a question related to bundle adjustment. For a refresher, here is a good link explaining the problem:
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/ZISSERMAN/bundle/bundle.html
The problem requires optimization over 3n + 11m parameters: 3 for each of the n 3D points, and 11 for each of the m cameras (5 intrinsic camera parameters, 3 DOF for position (x, y, z), and 3 DOF for rotation: pitch, yaw and roll).
Now, when you actually go about implementing this algorithm, a rotation matrix amounts to an optimization over 9 numbers. Euler's Axis Theorem says these 9 numbers are related and there are only 3 degrees of freedom overall.
Suppose you represent the rotation using a normalized quaternion. Then you are optimizing over 4 numbers subject to a unit-norm constraint: the same 3 DOF.
Is one representation more computationally efficient and better than the other? Will you have fewer variables to optimize with a rotation quaternion than with a rotation matrix?
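For a concrete feel for the parameter counts, here is an illustrative sketch (not part of the original question) using scipy.spatial.transform; the particular rotation is arbitrary:
from scipy.spatial.transform import Rotation

r = Rotation.from_euler('xyz', [10, 20, 30], degrees=True)  # an arbitrary rotation
print(r.as_matrix())   # 9 numbers, but orthonormality leaves only 3 DOF
print(r.as_quat())     # 4 numbers with a unit-norm constraint, again 3 DOF
print(r.as_rotvec())   # 3 numbers (axis * angle), exactly 3 DOF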

You never optimize over 9 numbers! Of course this would be inefficient. One efficient representation, in which you only need 3 parameters, is to parametrize your rotation matrix R using the Lie algebra of the group SO(3). If you are not familiar with Lie algebras, here's a tutorial that explains everything in an intuitive (but sometimes oversimplified) manner. To explain it in a few short sentences: in this representation, each rotation matrix R is written as expmat(a*G_1 + b*G_2 + c*G_3), where expmat is the matrix exponential and the G_i are the "generators" of the Lie algebra of SO(3), i.e. the tangent space to SO(3) at the identity. Therefore, to estimate a rotation matrix, you only need to learn the three parameters a, b, c. This is roughly equivalent to decomposing your rotation matrix into three rotations around x, y, z and estimating the three angles of these rotations.
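As a minimal sketch of this parameterization (the G_i below are the standard basis of so(3); the coefficients a, b, c are arbitrary illustrative values):
import numpy as np
from scipy.linalg import expm

G1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])   # generator of rotations about x
G2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])   # about y
G3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])   # about z

a, b, c = 0.3, -0.2, 0.1
R = expm(a * G1 + b * G2 + c * G3)

print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthogonal
print(np.allclose(np.linalg.det(R), 1))  # True: det(R) = 1, so R is a proper rotation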

A solution not mentioned yet is to use axis-angle parameterization.
Basically, you represent the rotation as a single 3D vector. The direction v/|v| of the vector is the axis of rotation, and the norm |v| is the angle of rotation around that axis.
This representation has 3 parameters and 3 DOF directly, unlike a quaternion's 4 parameters. So with quaternions you need either constrained optimization or an additional parameterization to get down to 3 DOF.
I'm not familiar with @Ash's suggestion, but he does mention in the comment that it only works for small angles. The axis-angle representation doesn't have this limitation.
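As a hedged aside (not from the original answers), OpenCV's cv2.Rodrigues converts between the axis-angle vector and the rotation matrix, and also returns the Jacobian of the conversion, which can be handy inside an optimizer:
import numpy as np
import cv2

v = np.array([0.3, -0.2, 0.1]).reshape(3, 1)  # axis v/|v|, angle |v| (arbitrary values)
R, jac = cv2.Rodrigues(v)                     # 3x3 rotation matrix + Jacobian of the conversion
v_back, _ = cv2.Rodrigues(R)                  # convert back to the 3-vector
print(np.allclose(v, v_back))                 # True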

One option, as relatively_random suggests, is to optimize over the axis-angle parameterization. The derivative can then be computed relatively simply, as described in this paper. The only caveat is that numerical issues can arise for rotations close to the identity.
import numpy as np


def hat(v):
    """
    Vectorized version of the hat operator, creating for a vector its skew-symmetric matrix.
    Args:
        v (np.array<float>(..., 3, 1)): The input vector.
    Returns:
        (np.array<float>(..., 3, 3)): The output skew-symmetric matrix.
    """
    E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
    E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
    E3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
    return v[..., 0:1, :] * E1 + v[..., 1:2, :] * E2 + v[..., 2:3, :] * E3


def exp(v, der=False):
    """
    Vectorized version of the exponential map.
    Args:
        v (np.array<float>(..., 3, 1)): The input axis-angle vector.
        der (bool, optional): Whether to output the derivative as well. Defaults to False.
    Returns:
        R (np.array<float>(..., 3, 3)): The corresponding rotation matrix.
        [dR (np.array<float>(3, ..., 3, 3)): The derivative of each rotation matrix.
            The matrix dR[i, ..., :, :] corresponds to
            the derivative d R[..., :, :] / d v[..., i, :],
            so the derivative of the rotation R gained
            through the axis-angle vector v with respect
            to v_i. Note that this is not a Jacobian of
            any form but a vectorized version of derivatives.]
    """
    n = np.linalg.norm(v, axis=-2, keepdims=True)
    H = hat(v)
    with np.errstate(all='ignore'):
        R = np.identity(3) + (np.sin(n) / n) * H + ((1 - np.cos(n)) / n**2) * (H @ H)
    R = np.where(n == 0, np.identity(3), R)
    if der:
        sh = (3,) + tuple(1 for _ in range(v.ndim - 2)) + (3, 1)
        dR = np.swapaxes(np.expand_dims(v, axis=0), 0, -2) * H
        dR = dR + hat(np.cross(v, ((np.identity(3) - R) @ np.identity(3).reshape(sh)), axis=-2))
        dR = dR @ R
        n = n**2  # redefinition
        with np.errstate(all='ignore'):
            dR = dR / n
        dR = np.where(n == 0, hat(np.identity(3).reshape(sh)), dR)
        return R, dR
    else:
        return R


# generate two sets of points which differ by a rotation (plus a bit of noise)
np.random.seed(1001)
n = 100  # number of points
p_1 = np.random.randn(n, 3, 1)
v = np.array([0.3, -0.2, 0.1]).reshape(3, 1)  # the axis-angle vector
p_2 = exp(v) @ p_1 + np.random.randn(n, 3, 1) * 1e-2

# estimate v with least squares, so the objective function becomes:
#   minimize over v:  f(v) = sum_{1<=i<=n} ||p_2_i - exp(v) p_1_i||^2
# Due to the way least_squares is implemented we have to pass the
# individual residuals ||p_2_i - exp(v) p_1_i||^2 as ||p_2_i - exp(v) p_1_i||.
from scipy.optimize import least_squares


def loss(x):
    R = exp(x.reshape(1, 3, 1))
    y = p_2 - R @ p_1
    y = np.linalg.norm(y, axis=-2).squeeze(-1)
    return y


def d_loss(x):
    R, d_R = exp(x.reshape(1, 3, 1), der=True)
    y = p_2 - R @ p_1
    d_y = -d_R @ p_1
    d_y = np.sum(y * d_y, axis=-2) / np.linalg.norm(y, axis=-2)
    d_y = d_y.squeeze(-1).T
    return d_y


x0 = np.zeros(3)
res = least_squares(loss, x0, d_loss)
print('True axis-angle vector: {}'.format(v.reshape(-1)))
print('Estimated axis-angle vector: {}'.format(res.x))

Related

Generating histogram feature of 2D tensor from 3D Tensor feature set

I have a 3D tensor of shape (3, 4, 7), where each of the 4 elements along the second dimension has 7 attributes.
What I want is to take the 4th attribute of all 4 elements in each row and compute a histogram with 3 bins, storing only those bin counts, so that I end up with a 2D tensor of shape (3, 3). I have a small toy example for the task I am working on. My solution ends up with a tensor of shape (1, 3). Any hint or guidance will be appreciated.
import torch
torch.manual_seed(1)
feature = torch.randint(1, 50, (3, 4,7))
feature.type(torch.FloatTensor)
attrbute_val = feature[:,:,3:4]
print(attrbute_val.shape)
print(attrbute_val)
histogram_feature = torch.histc(torch.tensor(attrbute_val,dtype=torch.float32), bins=3, min=1, max=50)
print("histogram_feature",histogram_feature)
import torch
import matplotlib.pyplot as plt

torch.manual_seed(1)
bins = 3
feature = torch.randint(1, 50, (3, 4, 7))
attrbute_val = feature[:, :, 3].float()  # read all 4 elements in the 2nd dimension
                                         # and the fourth element in the 3rd dimension
final_tensor = torch.empty((bins, bins))
tuple_rows = torch.tensor_split(attrbute_val, 3, dim=0)
for i, row in enumerate(tuple_rows):
    final_tensor[i] = torch.histc(row, bins=bins, min=1, max=50)
    plt.bar(range(bins), final_tensor[i], align='center', color=['forestgreen'])
    plt.show()
# final_tensor = tensor([[3., 0., 1.],
#                        [4., 0., 0.],
#                        [0., 2., 2.]])
This is what I have come up with as a naive solution:
import numpy as np

attrbute_val = feature[:, :, 3:4]
print(attrbute_val.shape)
print(attrbute_val[:, :, 0])
final_feature = np.zeros((3, 3))
shape = attrbute_val[:, :, 0].shape
for row in range(shape[0]):
    print("element wise features", attrbute_val[:, :, 0][row])
    hist, _ = np.histogram(attrbute_val[:, :, 0][row], bins=3)
    print(row, ", hist values", hist)
    final_feature[row, :] = hist
print("final feature shape", final_feature.shape)
print("final feature", final_feature)
I wish PyTorch had a way to apply a custom function along a dimension.
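Since torch.histc has no dim argument, one workaround (a sketch along the lines of the loop above, not a built-in per-dimension feature) is a comprehension over the rows, stacked back into a single tensor:
import torch

torch.manual_seed(1)
feature = torch.randint(1, 50, (3, 4, 7))
attribute_val = feature[:, :, 3].float()

# one histogram per row, stacked into a (3, 3) tensor
hist = torch.stack([torch.histc(row, bins=3, min=1, max=50) for row in attribute_val])
print(hist.shape)  # torch.Size([3, 3])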

Meaning of grad_outputs in PyTorch's torch.autograd.grad

I am having trouble understanding the conceptual meaning of the grad_outputs option in torch.autograd.grad.
The documentation says:
grad_outputs should be a sequence of length matching output containing the “vector” in Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be None).
I find this description quite cryptic. What exactly do they mean by Jacobian-vector product? I know what the Jacobian is, but I'm not sure what product they mean here: element-wise, matrix product, something else? I can't tell from my example below.
And why is "vector" in quotes? Indeed, in the example below I get an error when grad_outputs is a vector, but not when it is a matrix.
>>> x = torch.tensor([1.,2.,3.,4.], requires_grad=True)
>>> y = torch.outer(x, x)
Why do we observe the following output; how was it computed?
>>> y
tensor([[ 1.,  2.,  3.,  4.],
        [ 2.,  4.,  6.,  8.],
        [ 3.,  6.,  9., 12.],
        [ 4.,  8., 12., 16.]], grad_fn=<MulBackward0>)
>>> torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))
(tensor([20., 20., 20., 20.]),)
However, why this error?
>>> torch.autograd.grad(y, x, grad_outputs=torch.ones_like(x))
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([4]) and output[0] has a shape of torch.Size([4, 4]).
If we take your example, we have a function f which takes as input x shaped (n,) and outputs y = f(x) shaped (n, n). The input is described as the column vector [x_i]_i for i ∈ [1, n], and f(x) is defined as the matrix [y_jk]_jk = [x_j*x_k]_jk for (j, k) ∈ [1, n]².
It is often useful to compute the gradient of the output with respect to the input (or sometimes w.r.t. the parameters of f; there are none here). In the more general case, though, we are looking to compute dL/dx and not just dy/dx, where dL/dx is the partial derivative of L, computed from y, w.r.t. x.
The computation graph looks like:
x.grad = dL/dx  <---------  dL/dy = y.grad
                  dy/dx
             x  --------->  y = x*xT
Then, if we look at dL/dx, it is, via the chain rule, equal to dL/dy*dy/dx. Looking at the interface of torch.autograd.grad, we have the following correspondences:
outputs <-> y,
inputs <-> x, and
grad_outputs <-> dL/dy.
Looking at the shapes: dL/dx should have the same shape as x (dL/dx can be referred to as the 'gradient' of x), while dy/dx, the Jacobian matrix, would be 3-dimensional. On the other hand dL/dy, which is the incoming gradient, should have the same shape as the output, i.e., y's shape.
We want to compute dL/dx = dL/dy*dy/dx. If we look more closely, we have
dy/dx = [dy_jk/dx_i]_ijk for i, j, k ∈ [1, n]³
Therefore,
dL/dx = [dL/d_x_i]_i, i ∈ [1,n]
= [sum(dL/dy_jk * dy_jk/dx_i for (j, k) ∈ [1, n]²)]_i, i ∈ [1, n]
Back to your example (where grad_outputs is all ones, i.e. dL/dy_jk = 1), this means, for a given i ∈ [1, n]: dL/dx_i = sum(dy_jk/dx_i for (j, k) ∈ [1, n]²). And dy_jk/dx_i = d(x_j*x_k)/dx_i equals x_j if i = k, x_k if i = j, and 2*x_i if i = j = k (because of the squared x_i). This being said, the matrix y is symmetric, so the result comes down to 2*sum(x_j) over j ∈ [1, n].
This means dL/dx is the column vector [2*sum(x)]_i for i ∈ [1, n].
>>> 2*x.sum()*torch.ones_like(x)
tensor([20., 20., 20., 20.])
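To make the "vector" in the Jacobian-vector product concrete, here is a short check (not in the original answer) with an arbitrary grad_outputs instead of ones; the manual expression follows from dy_jk/dx_i = x_k if i = j plus x_j if i = k:
import torch

x = torch.tensor([1., 2., 3., 4.], requires_grad=True)
y = torch.outer(x, x)

v = torch.arange(16.).reshape(4, 4)         # plays the role of dL/dy
g, = torch.autograd.grad(y, x, grad_outputs=v)

# manual vector-Jacobian product: (dL/dx)_i = sum_jk v_jk * dy_jk/dx_i = (v @ x + v.T @ x)_i
print(torch.allclose(g, v @ x + v.T @ x))   # True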
Stepping back look at this other graph example, here adding an additional operation after y:
x -------> y = x*xT --------> z = y²
If you look at the backward pass on this graph, you have:
dL/dx  <---------  dL/dy  <---------  dL/dz
         dy/dx              dz/dy
   x  --------->  y = x*xT  --------->  z = y²
With dL/dx = dL/dy*dy/dx = dL/dz*dz/dy*dy/dx which is in practice computed in two sequential steps: dL/dy = dL/dz*dz/dy, then dL/dx = dL/dy*dy/dx.
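As a sketch of that two-step chain (assuming L = z.sum(), so dL/dz is a matrix of ones; not part of the original answer):
import torch

x = torch.tensor([1., 2., 3., 4.], requires_grad=True)
y = torch.outer(x, x)
z = y ** 2

g, = torch.autograd.grad(z, x, grad_outputs=torch.ones_like(z))

# analytically dL/dx_i = 4 * x_i * sum(x**2); with x = [1, 2, 3, 4] that is [120, 240, 360, 480]
print(g)
print(4 * x * (x ** 2).sum())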

Is there a way to generate correlated variable array from an existing array in Python 3? [duplicate]

I have an existing (not generated) 1D NumPy array. For this example, we will use a generated one.
import numpy as np
arr1 = np.random.uniform(0, 100, 1_000)
I need an array that will be correlated 0.3 with it:
arr2 = '?'
print(np.corrcoef(arr1, arr2))
Out[1]: 0.3
I've adapted this answer by whuber on stats.SE to NumPy. The idea is to generate a second array noise randomly, and then compute the residuals of a least-squares linear regression of noise on arr1. The residuals necessarily have a correlation of 0 with arr1, and of course arr1 has a correlation of 1 with itself, so an appropriate linear combination of a*arr1 + b*residuals will have any desired correlation.
import numpy as np

def generate_with_corrcoef(arr1, p):
    n = len(arr1)
    # generate noise
    noise = np.random.uniform(0, 1, n)
    # least-squares linear regression for noise = m*arr1 + c
    m, c = np.linalg.lstsq(np.vstack([arr1, np.ones(n)]).T, noise, rcond=None)[0]
    # residuals have 0 correlation with arr1
    residuals = noise - (m*arr1 + c)
    # the right linear combination a*arr1 + b*residuals
    a = p * np.std(residuals)
    b = (1 - p**2)**0.5 * np.std(arr1)
    arr2 = a*arr1 + b*residuals
    # return a scaled/shifted result to have the same mean/sd as arr1
    # this doesn't change the correlation coefficient
    return np.mean(arr1) + (arr2 - np.mean(arr2)) * np.std(arr1) / np.std(arr2)
The last line scales the result so that the mean and standard deviation are the same as arr1's. However, arr1 and arr2 will not be identically distributed.
Usage:
>>> arr1 = np.random.uniform(0, 100, 1000)
>>> arr2 = generate_with_corrcoef(arr1, 0.3)
>>> np.corrcoef(arr1, arr2)
array([[1. , 0.3],
       [0.3, 1. ]])

Computation of gradients

I want to compute the gradient in the following scenario:
y = w_0*x + w_1 and z = w_2*x + (dy/dx)^2
w = torch.tensor([2.,1.,3.], requires_grad=True)
x = torch.tensor([0.5], requires_grad=True)
y = w[0]*x + w[1]
y.backward()
l = x.grad
l.requires_grad=True
w.grad.zero_()
z = w[2]*x + l**2
z.backward()
I expect [4, 0, 0.5], but instead I get [0, 0, 0.5]. I know that in this case I can replace l by w_0, but l can be a complex function of x, in which case it is important that I compute the gradients numerically instead of changing the expression for z. Please let me know what changes I need to make to get the correct gradient w.r.t. w.
You should print your gradients along the way; it makes things easier to follow.
I will comment on what's going on in the code:
import torch
w = torch.tensor([2.0, 1.0, 3.0], requires_grad=True)
x = torch.tensor([0.5], requires_grad=True)
y = w[0] * x + w[1]
y.backward()
l = x.grad
l.requires_grad = True
print(w.grad) # [0.5000, 1.0000, 0.0000] as expected
w.grad.zero_()
print(w.grad) # [0., 0., 0.] as you cleared the gradient
z = w[2] * x + l ** 2
z.backward()
print(w.grad) # [0., 0., 0.5] - see below
The last print(w.grad) works like that because you are using the last element of the tensor, and it is the only one taking part in the equation for z; it is multiplied by x, which is 0.5, hence the gradient is 0.5. You cleared the gradient before by issuing w.grad.zero_(). I can't see how you could get [4., 0., 0.5]. If you didn't clear the gradient, you would get tensor([0.5000, 1.0000, 0.5000]): the first two coming from the y equation and the last one from the z equation.
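One way to actually get the expected [4, 0, 0.5] (a sketch, not part of the answer above) is to keep dy/dx attached to the graph with create_graph=True instead of reading it back from x.grad:
import torch

w = torch.tensor([2.0, 1.0, 3.0], requires_grad=True)
x = torch.tensor([0.5], requires_grad=True)

y = w[0] * x + w[1]
# keep dy/dx as a differentiable tensor so it can take part in z
dy_dx, = torch.autograd.grad(y, x, create_graph=True)

z = w[2] * x + dy_dx ** 2
z.backward()
print(w.grad)  # tensor([4.0000, 0.0000, 0.5000])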

Apply an affine transform to a bounding rectangle

I am working on a pedestrian tracking algorithm using Python3 & OpenCV.
We can use SIFT keypoints as an identifier of a pedestrian silhouette on a frame and then perform brute force matching between two sets of SIFT keypoints (i.e. between one frame and the next one) to find the pedestrian in the next frame.
To visualize this on the sequence of frames, we can draw a bounding rectangle delimiting the pedestrian. This is what it looks like:
The main problem is about characterizing the motion of the pedestrian using the keypoints. The idea here is to find an affine transform (that is translation in x & y, rotation & scaling) using the coordinates of the keypoints on 2 successives frames. Ideally, this affine transform somehow corresponds to the motion of the pedestrian. To track this pedestrian, we would then just have to apply the same affine transform on the bounding rectangle coordinates.
That last part doesn't work well. The rectangle consistently shrinks over several frames until it inevitably disappears, or drifts away from the pedestrian, as you can see below or on the previous image.
To be specific, we characterize the bounding rectangle with 2 extreme points:
There are some built-in cv2 functions that can apply an affine transform to an image, like cv2.warpAffine(), but I want to apply it only to the bounding rectangle coordinates (i.e 2 points or 1 point + width & height).
To find the affine transform between the 2 sets of keypoints, I’ve written my own function (I can post the code if it helps), but I’ve observed similar results when using cv2.getAffineTransform() for instance.
Do you know how to properly apply an affine transform to this bounding rectangle?
EDIT: here's some explanation & code for better context:
The pedestrian detection is done with the pre-trained SVM classifier available in OpenCV: hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector()) & hog.detectMultiScale()
Once a first pedestrian is detected, the SVM returns the coordinates of the associated bounding rectangle (xA, yA, w, h) (we stop using the SVM after the 1st detection as it is quite slow, and we are focusing on one pedestrian for now)
We select the corresponding region of the current frame with image[yA: yA+h, xA: xA+w] and search for SURF keypoints within it with surf.detectAndCompute()
This returns the keypoints & their associated descriptors (an array of 64 characteristics for each keypoint)
We perform brute force matching, based on the L2-norm between the descriptors and the distance in pixels between the keypoints to construct pairs of keypoints between the current frame & the previous one. The code for this function is pretty long, but should be similar to cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
Once we have the matched pairs of keypoints, we can use them to find the affine transform with this function:
previousKpts = previousKpts[:5]  # select the 5 best matches
currentKpts = currentKpts[:5]

# build A matrix of shape [2 * Nb of keypoints, 4]
A = np.ndarray((2 * len(previousKpts), 4))
for idx, keypoint in enumerate(previousKpts):
    # keypoint.pt = (x-coord, y-coord)
    A[2 * idx, :] = [keypoint.pt[0], -keypoint.pt[1], 1, 0]
    A[2 * idx + 1, :] = [keypoint.pt[1], keypoint.pt[0], 0, 1]

# build b matrix of shape [2 * Nb of keypoints, 1]
b = np.ndarray((2 * len(previousKpts), 1))
for idx, keypoint in enumerate(currentKpts):
    b[2 * idx, :] = keypoint.pt[0]
    b[2 * idx + 1, :] = keypoint.pt[1]

# convert the numpy.ndarrays to matrices:
A = np.matrix(A)
b = np.matrix(b)

# solution of the form x = [x1, x2, x3, x4]' = ((A' * A)^-1) * A' * b
x = np.linalg.inv(A.T * A) * A.T * b

theta = math.atan2(x[1, 0], x[0, 0])  # rotation angle in [-pi, pi]
alpha = math.sqrt(x[0, 0] ** 2 + x[1, 0] ** 2)  # scaling parameter
bx = x[2, 0]  # translation along x-axis
by = x[3, 0]  # translation along y-axis
return theta, alpha, bx, by
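As an aside (not used in the pipeline above), OpenCV can estimate the same 4-DOF transform (rotation, uniform scale, translation) directly from the matched points, with RANSAC rejecting bad matches; previousKpts and currentKpts here are the matched keypoints from the step above:
import cv2
import numpy as np

prev_pts = np.float32([kp.pt for kp in previousKpts]).reshape(-1, 1, 2)
curr_pts = np.float32([kp.pt for kp in currentKpts]).reshape(-1, 1, 2)
M, inliers = cv2.estimateAffinePartial2D(prev_pts, curr_pts, method=cv2.RANSAC)
# M is the 2x3 matrix [[alpha*cos(theta), -alpha*sin(theta), bx],
#                      [alpha*sin(theta),  alpha*cos(theta), by]]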
We then just have to apply the same affine transform to the corner points of the bounding rectangle:
# define the 4 bounding points using xA, yA
xB = xA + w
yB = yA + h
rect_pts = np.array([[[xA, yA]], [[xB, yA]], [[xA, yB]], [[xB, yB]]], dtype=np.float32)

# warp the affine transform into a full perspective transform (homography)
affine_warp = np.array([[alpha * np.cos(theta), -alpha * np.sin(theta), bx],
                        [alpha * np.sin(theta),  alpha * np.cos(theta), by],
                        [0, 0, 1]], dtype=np.float32)

# apply the transform to the rectangle corners
rect_pts = cv2.perspectiveTransform(rect_pts, affine_warp)
xA = rect_pts[0, 0, 0]
yA = rect_pts[0, 0, 1]
xB = rect_pts[3, 0, 0]
yB = rect_pts[3, 0, 1]
return xA, yA, xB, yB
Finally, save the updated rectangle coordinates (xA, yA, xB, yB) and all current keypoints & descriptors, and iterate over the next frame: select image[yA: yB, xA: xB] using the (xA, yA, xB, yB) we previously saved, get the SURF keypoints, etc.
As Micka suggested, cv2.perspectiveTransform() is an easy way to accomplish this. You'll just need to turn your affine warp into a full perspective transform (homography) by adding a third row at the bottom with the values [0, 0, 1]. For example, let's put a box with w, h = 100, 200 at the point (10, 20) and then use an affine transformation to shift the points so that the box is moved to (0, 0) (i.e. shift 10 pixels to the left and 20 pixels up):
>>> xA, yA, w, h = (10, 20, 100, 200)
>>> xB, yB = xA + w, yA + h
>>> rect_pts = np.array([[[xA, yA]], [[xB, yA]], [[xA, yB]], [[xB, yB]]], dtype=np.float32)
>>> affine_warp = np.array([[1, 0, -10], [0, 1, -20], [0, 0, 1]], dtype=np.float32)
>>> cv2.perspectiveTransform(rect_pts, affine_warp)
array([[[   0.,    0.]],
       [[ 100.,    0.]],
       [[   0.,  200.]],
       [[ 100.,  200.]]], dtype=float32)
So that works perfectly as expected. You could also just simply transform the points yourself with matrix multiplication:
>>> rect_pts.dot(affine_warp[:2, :2].T) + affine_warp[:2, 2]
array([[[   0.,    0.]],
       [[ 100.,    0.]],
       [[   0.,  200.]],
       [[ 100.,  200.]]], dtype=float32)
