How to apply a translation to a coordinate vector? - graphics

I am trying to implement and understand how to perform a simple translation in GLSL.
In order to do that, I am making a simple test in Octave to ensure that I am understanding the transformation itself.
I have the following vector that represents a 2D coordinate embedded in a 4-dimensional vector:
candle = [1586266800, 11812, 0, 0]
Which means that the point is at x=1586266800 and y=11812.
I am trying to apply a translation using the following values:
priceBottom = 11800
timestampOrigin = 1586266800
Which means that the new origin of coordinates will be x=1586266800 and y=11800.
I build the following translation matrix:
[ 1 0 0 tx ]
[ 0 1 0 ty ]
[ 0 0 1 tz ]
[ 0 0 0 1 ]
translation1 = [1, 0, 0, -timestampOrigin; 0, 1, 0, -priceBottom; 0, 0, 1, 0; 0, 0, 0, 1]
Is this matrix correct?
How shall I apply it to the vector?
I have tried:
>> candle * translation1
ans =
1.5863e+009 1.1812e+004 0.0000e+000 -2.5162e+018
Which obviously does not work.

Your translation is wrong. From a mathematical point of view, the transformation you're after is
Translated_vector = Transformation * Augmented_vector
i.e. you need to 'augment' your vector with another dimension whose value is 1, so that the translation column can be added to each row during the matrix multiplication.
So, if I understood your example correctly
Initial_position = [1586266800; 11812; 0; 0] # note: vertical vector
Augmented_vector = [Initial_position; 1]
Translation_vector = [-1586266800; -11800; 0; 0] # i.e. [-timestampOrigin; -priceBottom; 0; 0], vertical vector
Transformation = eye(5);
Transformation( 1:4, 5 ) = Translation_vector
Translated_vector = Transformation * Augmented_vector;
Translated_vector = Translated_vector( 1:4, 1 )
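The same homogeneous-coordinate translation can be sketched in NumPy (a sketch using the question's own values; NumPy is an assumption here, since the original test is in Octave):

```python
import numpy as np

# The question's point and translation values
candle = np.array([1586266800.0, 11812.0, 0.0, 0.0])
timestampOrigin = 1586266800.0
priceBottom = 11800.0

# Augment the 4D point with a 1 so the matrix multiplication
# can add the translation to each component
augmented = np.append(candle, 1.0)          # shape (5,)

# 5x5 homogeneous translation matrix
T = np.eye(5)
T[:4, 4] = [-timestampOrigin, -priceBottom, 0.0, 0.0]

translated = (T @ augmented)[:4]
print(translated)  # x = 0, y = 12: the point relative to the new origin
```

Note that the matrix multiplies a column vector from the left; the original attempt `candle * translation1` put the row vector on the wrong side, which is why the result was garbage.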

Coin Change problem using Memoization (Amazon interview question)

def rec_coin_dynam(target, coins, known_results):
    '''
    INPUT: This function takes in a target amount and a list of possible coins to use.
    It also takes a third parameter, known_results, indicating previously calculated results.
    The known_results parameter should be initialised with [0] * (target+1).
    OUTPUT: Minimum number of coins needed to make the target.
    '''
    # Default output to target
    min_coins = target
    # Base case
    if target in coins:
        known_results[target] = 1
        return 1
    # Return a known result if it happens to be greater than 0
    elif known_results[target] > 0:
        return known_results[target]
    else:
        # For every coin value that is <= target
        for i in [c for c in coins if c <= target]:
            # Recursive call; note how we include the known results!
            num_coins = 1 + rec_coin_dynam(target - i, coins, known_results)
            # Reset minimum if we have a new minimum
            if num_coins < min_coins:
                min_coins = num_coins
        # Store the known result
        known_results[target] = min_coins
        return min_coins
This runs perfectly fine, but I have a few questions about it.
We give it the following input to run:
target = 74
coins = [1,5,10,25]
known_results = [0]*(target+1)
rec_coin_dynam(target,coins,known_results)
Why are we initialising known_results with zeros of length target+1? Why can't we just write
known_results = []
Notice that the code contains lines such as:
known_results[target] = 1
return known_results[target]
known_results[target] = min_coins
Now, let me demonstrate the difference between [] and [0]*something in the python interactive shell:
>>> a = []
>>> b = [0]*10
>>> a
[]
>>> b
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>>
>>> a[3] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list assignment index out of range
>>>
>>> b[3] = 1
>>>
>>> a
[]
>>> b
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
The exception IndexError: list assignment index out of range was raised because we tried to access cell 3 of list a, but a has size 0; there is no cell 3. We could put a value in a using a.append(1), but then the 1 would be at position 0, not at position 3.
There was no exception when we accessed cell 3 of list b, because b has size 10, so any index between 0 and 9 is valid.
Conclusion: if you know in advance the size that your array will have, and this size never changes during the execution of the algorithm, then you might as well begin with an array of the appropriate size, rather than with an empty array.
What is the size of known_results? The algorithm needs results for values ranging from 0 to target. How many results is that? Exactly target+1. For instance, if target = 2, then the algorithm will deal with results for 0, 1 and 2; that's 3 different results. Thus known_results must have size target+1. Note that in python, just like in almost every other programming language, a list of size n has n elements, indexed 0 to n-1. In general, in an integer interval [a, b], there are b-a+1 integers. For instance, there are three integers in interval [8, 10] (those are 8, 9 and 10).
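If preallocating a list feels awkward, an alternative worth knowing (not part of the original answer) is to let Python handle the memoization; this sketch uses functools.lru_cache instead of the known_results list, so no container has to be sized in advance:

```python
from functools import lru_cache

def min_coins(target, coins=(1, 5, 10, 25)):
    # coins must be hashable (a tuple) for the cached inner function
    @lru_cache(maxsize=None)
    def rec(t):
        if t == 0:
            return 0
        # Try every usable coin and keep the cheapest option
        return 1 + min(rec(t - c) for c in coins if c <= t)
    return rec(target)

print(min_coins(74))  # 8, e.g. 25+25+10+10+1+1+1+1
```

The cache plays exactly the role of known_results: each subtarget is solved once and looked up afterwards.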

MiniZinc: select a subset of products - problem category

After having watched the Coursera Basic Modelling course, I am trying to categorize my problem so as to choose a suitable model representation in MiniZinc.
I have a range of 10 products, each of them with its 4 special features/attributes, (a table 4x10). This table has fixed values. The user will give as input 4 parameters.
The constraints will be created in a way that the user input parameters will determine the product's attribute values.
The decision variable will be the subset of the products that match user's input.
From my understanding this is a problem of selecting a subset from a set of objects. Is there any example or suggestion available that corresponds to the above MiniZinc model description that I could have a look at?
I'm (still) not completely sure about the exact specification of the problem, but here is a model that identifies all the products that are "nearest" the input data. I've defined "nearest" simply as the sum of absolute differences between each feature of a product and the input array (calculated by the score function).
int: k; % number of products
int: n; % number of features
array[1..k, 1..n] of int: data;
array[1..n] of int: input;
% decision variables
array[1..k] of var int: x; % the closeness score for each product
array[1..k] of var 0..1: y; % 1: this product is nearest (as array)
% var set of 1..k: y; % products closest to input (as set)
var int: z; % the minimum score
function var int: score(array[int] of var int: a, array[int] of var int: b) =
let {
var int: t = sum([abs(a[i]-b[i]) | i in index_set(a)])
} in
t
;
solve minimize z;
constraint
forall(i in 1..k) (
x[i] = score(data[i,..], input) /\
(y[i] = 1 <-> z = x[i]) % array
% (i in y <-> x[i] = z) % set
)
/\
minimum(z, x)
;
output [
"input: \(input)\n",
"z: \(z)\n",
"x: \(x)\n",
"y: \(y)\n\n"
]
++
[
% using array representation of y
if fix(y[i]) = 1 then
"nearest: ix:\(i) \(data[i,..])\n" else "" endif
| i in 1..k
];
% data
k = 10;
n = 4;
% random features for the products
data = array2d(1..k,1..n,
[
3,6,7,5,
3,5,6,2,
9,1,2,7,
0,9,3,6,
0,5,2,4, % score 5
1,8,7,9,
2,0,2,3, % score 5
7,5,9,2,
2,8,9,7,
3,6,1,2]);
input = [1,2,3,4];
% input = [7,5,9,2]; % exact match for product 8
The output is:
input: [1, 2, 3, 4]
z: 5
x: [11, 10, 13, 10, 5, 15, 5, 17, 16, 10]
y: [0, 0, 0, 0, 1, 0, 1, 0, 0, 0]
nearest: ix:5 [0, 5, 2, 4]
nearest: ix:7 [2, 0, 2, 3]
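The score function (sum of absolute feature differences) is easy to cross-check outside MiniZinc; a small Python sketch over the same data table and input:

```python
# Same product table and input as the MiniZinc model
data = [
    [3, 6, 7, 5],
    [3, 5, 6, 2],
    [9, 1, 2, 7],
    [0, 9, 3, 6],
    [0, 5, 2, 4],
    [1, 8, 7, 9],
    [2, 0, 2, 3],
    [7, 5, 9, 2],
    [2, 8, 9, 7],
    [3, 6, 1, 2],
]
inp = [1, 2, 3, 4]

# Sum of absolute differences per product (the model's score function)
scores = [sum(abs(a - b) for a, b in zip(row, inp)) for row in data]
z = min(scores)
nearest = [i + 1 for i, s in enumerate(scores) if s == z]  # 1-based, like MiniZinc
print(scores)   # [11, 10, 13, 10, 5, 15, 5, 17, 16, 10]
print(nearest)  # [5, 7]
```

This reproduces the x, z and "nearest" values from the model's output above.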

How to filter a numpy array based on two conditions: one depending on the other?

I am using the cv2 library to detect key points of 2 stereo images and converted the resulting dmatches objects to a numpy array:
kp_left, des_left = sift.detectAndCompute(im_left, mask_left)
matches = bf.match(des_left, des_right) # according to assignment pdf
np_matches = dmatch2np(matches)
Then I want to filter the matches on the y-direction: the y-coordinates of matched key points should not differ by more than 3 pixels:
ind = np.where(np.abs(kp_left[np_matches[:, 0], 1] - kp_right[np_matches[:, 1], 1]) < 4)
AND the x-difference of those key points should also not be smaller than 0, since a negative difference would mean the key point is behind the camera.
ind = np.where((kp_left[np_matches[ind[0], 0], 0] - kp_right[np_matches[ind[0], 1], 0]) >= 0)
How to combine those 2 conditions?
The general form is this:
condition1 = x < 4
condition2 = y >= 100
result = np.where(condition1 & condition2)
The even more general form:
conditions = [...] # list of bool arrays
result = np.where(np.logical_and.reduce(conditions))
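A self-contained illustration with made-up coordinate arrays (the names and values below are placeholders, not the question's actual key-point data):

```python
import numpy as np

# Hypothetical y-coordinates of matched key points in both images
y_left  = np.array([10.0, 20.0, 30.0, 40.0])
y_right = np.array([11.0, 28.0, 30.5, 40.0])
# Hypothetical x-coordinates (disparity = x_left - x_right)
x_left  = np.array([5.0, 7.0, 2.0, 9.0])
x_right = np.array([4.0, 6.0, 3.0, 8.0])

condition1 = np.abs(y_left - y_right) < 4   # y-difference under 4 px
condition2 = (x_left - x_right) >= 0        # non-negative disparity
ind = np.where(condition1 & condition2)[0]
print(ind)  # [0 3]: matches 1 and 2 fail one condition each
```

Note the element-wise `&` (not Python's `and`), which is what makes the two boolean arrays combine per match.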

vtk projection matrix: from world to display

I'm trying to obtain a 4x4 projection matrix that transforms a point in the world to the display coordinates.
Having a pixel (x, y) and the corresponding z-value (from the zbuffer), I obtain its 3D world coordinates with vtkWorldPointPicker class. Let's denote the result by x.
According to documentation, I can compute the view coordinates of the world point by applying the matrix GetCompositeProjectionTransformMatrix to x. Next, I'm using the transformation from the view to the initial display coordinates by using the code found in vtkViewport::ViewToDisplay (*):
dx = (v[0] + 1.0) * (sizex*(v[2]-v[0])) / 2.0 + sizex*v[0];
dy = (v[1] + 1.0) * (sizey*(v[3]-v[1])) / 2.0 + sizey*v[1];
where sizex and sizey are the width and height of the image in pixels, and v are the computed view coordinates.
Unfortunately, the values I get back do not match the original:
display [0, 0, 0.716656] // x,y-pixel coordinates and the zbuffer
x = [0.0255492, -0.0392383, 0.00854707] // world coordinates (using vtkWorldPointPicker)
// camera->GetCompositeProjectionTransformMatrix
P = [
-1.84177 0 0 0
0 1.20317 1.39445 0
0 -757.134 653.275 -9.9991
0 -0.757126 0.653268 0 ]
v = [-0.0470559, -0.0352919, 25.2931, 0.0352919] // P*x
a = [7697.18, -0.597848] // using (*)
Is this approach (in general) correct, or is there a more conventional way to do this? Thanks for any help.
Edit: the provided snippet from vtkViewport::ViewToDisplay is incorrect. It should read:
dx = (v[0] + 1.0) * (sizex*(vp[2]-vp[0])) / 2.0 + sizex*vp[0];
dy = (v[1] + 1.0) * (sizey*(vp[3]-vp[1])) / 2.0 + sizey*vp[1];
Note, that v refers to the normalised view coordinates, vp is the viewport (by default, vp := [0, 0, 1, 1])!
The conversion is indeed valid, although there might be built-in ways to obtain the final matrix.
Assuming only one (default) viewport is used, the matrix converting the view into display coordinates is:
M = [X/2, 0, 0, X/2,
0, Y/2, 0, Y/2,
0, 0, 1, 0,
0, 0, 0, 1]
where X and Y are the width and height of the image in pixels.
Hence, given a point x in the world coordinates, the display coordinates in homogeneous form are:
c = M * P * x;
where P is the CompositeProjectionTransformMatrix. After normalising (c[i] /= c[3], i = 0,1,2) we arrive at the original pixel values.
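The full chain can be sketched in NumPy (a sketch only: X and Y stand for the image width and height in pixels, and P would come from GetCompositeProjectionTransformMatrix; the default viewport [0, 0, 1, 1] is assumed):

```python
import numpy as np

def world_to_display(P, world_point, X, Y):
    """Map a 3D world point to pixel coordinates via a 4x4 composite
    projection matrix P, assuming the default viewport [0, 0, 1, 1]."""
    # View-to-display matrix for the default viewport
    M = np.array([
        [X / 2, 0.0,   0.0, X / 2],
        [0.0,   Y / 2, 0.0, Y / 2],
        [0.0,   0.0,   1.0, 0.0  ],
        [0.0,   0.0,   0.0, 1.0  ],
    ])
    x = np.append(world_point, 1.0)   # homogeneous world point
    c = M @ P @ x
    return c[:3] / c[3]               # normalise by the homogeneous w

# Sanity check with an identity "projection": the world origin
# should land in the centre of a 640x480 image.
print(world_to_display(np.eye(4), [0.0, 0.0, 0.0], 640, 480))  # [320. 240. 0.]
```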

Fabricjs: how to get the starting point of a path?

Is there any built-in method in fabricjs that returns the coordinates of the starting point of a path ? I do not need the coordinates of the bounding rectangle.
Thanks!
To get the starting point you have to extract the point and calculate its actual position on the canvas. As of fabric 1.6.0 you have all the functions to do that; for previous versions you need a bit more logic:
example path:
var myPath = new fabric.Path('M 25 0 L 300 100 L 200 300 z');
point:
var x = myPath.path[0][1];
var y = myPath.path[0][2];
var point = {x: x, y: y};
Logic:
1) calculate path transformation matrix:
needs: path.getCenterPoint(); path.angle, path.scaleX, path.scaleY, path.skewX, path.skewY, path.flipX, path.flipY.
var degreesToRadians = fabric.util.degreesToRadians,
multiplyMatrices = fabric.util.multiplyTransformMatrices,
center = path.getCenterPoint(),
theta = degreesToRadians(path.angle),
cos = Math.cos(theta),
sin = Math.sin(theta),
translateMatrix = [1, 0, 0, 1, center.x, center.y],
rotateMatrix = [cos, sin, -sin, cos, 0, 0],
skewMatrixX = [1, 0, Math.tan(degreesToRadians(path.skewX)), 1],
skewMatrixY = [1, Math.tan(degreesToRadians(path.skewY)), 0, 1],
scaleX = path.scaleX * (path.flipX ? -1 : 1),
scaleY = path.scaleY * (path.flipY ? -1 : 1),
scaleMatrix = [scaleX, 0, 0, scaleY],
matrix = path.group ? path.group.calcTransformMatrix() : [1, 0, 0, 1, 0, 0];
matrix = multiplyMatrices(matrix, translateMatrix);
matrix = multiplyMatrices(matrix, rotateMatrix);
matrix = multiplyMatrices(matrix, scaleMatrix);
matrix = multiplyMatrices(matrix , skewMatrixX);
matrix = multiplyMatrices(matrix , skewMatrixY);
// at this point you have the transform matrix.
Now consider the rendering process for a path:
the canvas is transformed by the matrix, then the points of the path are drawn with an offset that you can find in path.pathOffset.x and path.pathOffset.y.
So take your first point and subtract the offset:
point.x -= path.pathOffset.x;
point.y -= path.pathOffset.y;
Then
var finalpoint = fabric.util.transformPoint(point, matrix);
In fabric 1.6.0 all this logic is wrapped in a function, so you can just run:
var matrix = path.calcTransformMatrix();
and then proceed with the transformPoint logic.
Checkout the Path.path property. It is a 2D array containing an element for each path command. The second array holds the command in the first element e.g. 'M' for move, the following elements contain the coordinates.
var myPath = new fabric.Path('M 25 0 L 300 100 L 200 300 z');
var startX = myPath.path[0][1];
var startY = myPath.path[0][2];
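For reference, the underlying math (a 2x3 affine matrix [a, b, c, d, e, f] applied to a point, which is what fabric.util.transformPoint does) can be sketched in Python; the pathOffset and centre values below are made-up examples, not values computed by fabric:

```python
import math

def transform_point(point, m):
    """Apply a 2D affine matrix [a, b, c, d, e, f] (fabric's layout)
    to a point, like fabric.util.transformPoint."""
    x, y = point
    return (m[0] * x + m[2] * y + m[4],
            m[1] * x + m[3] * y + m[5])

# Example: path starting point (25, 0) minus a hypothetical
# pathOffset of (162.5, 150), then a 90-degree rotation composed
# with a translation to a hypothetical centre (200, 175)
theta = math.radians(90)
cos, sin = math.cos(theta), math.sin(theta)
matrix = (cos, sin, -sin, cos, 200.0, 175.0)

point = (25 - 162.5, 0 - 150)   # subtract the path offset first
final = transform_point(point, matrix)
print(final)  # approximately (350.0, 37.5)
```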
