How can I rotate a line by only changing theta? - python-3.x

I was making an animation with manim and tried to make a line with one end fixed and the other end moving along an arc. My approach was to define each point in terms of theta and let the point move along the arc as theta changes. However, even when theta was changed, the line did not move. I then used rotate instead, but the length of the line did not change, so the line deviated from the arc and that failed too. How can I make the line follow the angle smoothly when theta is set to an arbitrary value, with theta as a parameter?
from manim import *
import numpy as np

class RotateVector(Scene):
    def construct(self):
        theta = 1/9*np.pi
        semicircle = Arc(angle=PI, radius=4, stroke_color=GREY, arc_center=np.array([0, -1.5, 0]))
        bottom_line = Line(start=np.array([-4.0, -1.5, 0]), end=np.array([4.0, -1.5, 0]), stroke_color=GREY)
        dot_A = np.array([-4.0, -1.5, 0])
        dot_A_text = Text('A', font_size=35).next_to(dot_A, LEFT+DOWN, buff=0.05)
        dot_P = np.array([4*np.cos(theta*2), 4*np.sin(theta*2)-1.5, 0])
        dot_P_text = Text('P', font_size=35).next_to(dot_P, RIGHT+UP, buff=0.05)
        line_AP = Line(start=np.array([-4.0, -1.5, 0]), end=np.array([4*np.cos(2*theta), 4*np.sin(2*theta)-1.5, 0]), stroke_color=GREY)
        self.play(Create(semicircle), Create(bottom_line), Create(dot_A_text))
        self.play(Create(line_AP), Create(dot_P_text))
        self.wait(3)

Your mobjects do not change because the dependency on theta is not retained after initializing the mobject -- but it doesn't take too much to make it work!
The solution to your problem is using a ValueTracker for the value of theta (to easily allow animating changes to it) and updater functions for the mobjects you would like to change. In other words:
theta_vt = ValueTracker(theta)
dot_P_text.add_updater(lambda mobj: mobj.next_to([4*np.cos(theta_vt.get_value()*2), 4*np.sin(theta_vt.get_value()*2)-1.5, 0], RIGHT+UP, buff=0.05))
line_AP.add_updater(lambda mobj: mobj.put_start_and_end_on([-4, -1.5, 0], [4*np.cos(theta_vt.get_value()*2), 4*np.sin(theta_vt.get_value()*2)-1.5, 0]))
self.play(theta_vt.animate.set_value(2*PI))
You could even avoid the value tracker and use an auxiliary (empty) mobject for positioning:
P = Mobject().move_to([4*np.cos(theta*2),4*np.sin(theta*2)-1.5,0])
and then change the updaters to use the position of P instead of theta, and make the point move along the desired arc. (Moving the point is actually the part that becomes slightly more complicated, while the updaters simplify a bit.)
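For illustration, here is a minimal sketch of that variant (assuming Manim Community's MoveAlongPath, and reusing the names from the code above; only the changed lines are shown):

# Hypothetical sketch: an invisible auxiliary mobject P carries the position.
P = Mobject().move_to([4*np.cos(theta*2), 4*np.sin(theta*2)-1.5, 0])

# The updaters now read P's position instead of a ValueTracker value.
dot_P_text.add_updater(lambda mobj: mobj.next_to(P.get_center(), RIGHT+UP, buff=0.05))
line_AP.add_updater(lambda mobj: mobj.put_start_and_end_on([-4, -1.5, 0], P.get_center()))

# Moving P along the arc drags the line and label with it.
self.play(MoveAlongPath(P, semicircle))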

Related

Ultimate struggle with a full 3d space controller

Sorry if I'm being stupid or something, but I have a deep dread of working on "full 3D" space movement.
I'm trying to make a "space ship" KinematicBody controller which uses its basis vectors as the rotation reference and has the ability to strafe/move left, right, up and down based on its facing direction.
The issue I'm having is that I want to use a Vector3 as storage for all the input variables (the input strength in particular), but I can't find a convenient way to orient this vector, or use its components, to apply it to the velocity.
I have a sort of cheap solution, which I don't like, of applying a rotation to the input vector so it "corresponds" to one of the basis vectors, but it starts to break at some angles.
Could somebody please suggest what I can change in my logic, or is there a way to use quaternion/matrix related methods/formulas?
I'm not sure I fully understand what you want to do, but I can give you something to work with.
I'll assume that you already have the input as a Vector3. If not, you want to see Input.get_action_strength, Input.get_axis and Input.get_vector.
I'm also assuming that the breaking situations you encountered are a case of gimbal lock. But since you are asking about applying velocity, not rotation, I won't go into that topic.
Since you are using a KinematicBody, I suppose you would be using move_and_slide or a similar method, which works in global space. But you want the input to be based on the current orientation. Thus, you would consider the Vector3 which represents the input to be in local space, and the issue is how to go from that local space to the global space that move_and_slide et al. need.
Transform
You might be familiar with to_local and to_global, which would interpret the Vector3 as a position:
var global_input_vector:Vector3 = to_global(input_vector)
And the opposite operation would be:
input_vector = to_local(global_input_vector)
The problem with these is that since they consider the Vector3 to be a position, they will translate the vector depending on where the KinematicBody is. We can undo that translation:
var global_vec:Vector3 = to_global(local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = to_local(global_vec + global_transform.origin)
By the way, this is another way to write the same code:
var global_vec:Vector3 = (global_transform * local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = global_transform.affine_inverse() * (global_vec + global_transform.origin)
Which I'm mentioning because I want you to see the similarity with the following approach.
Basis
I would rather not consider the Vector3 to be a position, but just a free vector. So we would transform it with only the Basis, like this:
var global_vec:Vector3 = global_transform.basis * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis * global_vec
This approach will not have the translation problem.
You can think of the Basis as a 3 by 3 matrix, and the Transform is that same matrix augmented with a translation vector (origin).
Quat
However, if you only want rotation, let us use quaternions instead:
var global_vec:Vector3 = global_transform.basis.get_rotation_quat() * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis.get_rotation_quat() * global_vec
Well, actually, let us invert just the quaternion:
local_vec = global_transform.basis.get_rotation_quat().inverse() * global_vec
These will only rotate the vector (no scaling, or any other transformation, just rotation) according to the current orientation of the KinematicBody.
Rotating a Transform
If you are trying to rotate a Transform, either…
Do this (quaternion):
transform = Transform(transform.basis * Basis(quaternion), transform.origin)
Or this (quaternion):
transform = transform * Transform(Basis(quaternion), Vector3.ZERO)
Or this (axis-angle):
transform = Transform(transform.basis.rotated(axis, angle), transform.origin)
Or this (axis-angle):
transform = transform * Transform.IDENTITY.rotated(axis, angle)
Or this (Euler angles):
transform = Transform(transform.basis * Basis(Vector3(pitch, yaw, roll)), transform.origin)
Or this (Euler angles):
transform = transform * Transform(Basis(Vector3(pitch, yaw, roll)), Vector3.ZERO)
Avoid this:
transform = transform.rotated(axis, angle)
The reason is that this rotation is always before translation (i.e. this rotates around the global origin instead of the current position), and you will end up with an undesirable result.
And yes, you could use rotate_x, rotate_y and rotate_z, or set rotation of a Spatial. But sometimes you need to work with a Transform directly.
See also:
Godot/Gdscript rotate + translate from local to world space.
How to LERP between 2 angles going the longest route or path in Godot.

How to match a geometric template of 2D boxes to fit another set of 2D boxes

I'm trying to find a match between a set of 2D boxes with coordinates (A) (from a template with known sizes and distances between boxes) and another set of 2D boxes with coordinates (B) (which may contain more boxes than A). They should match in the sense that each box from A corresponds to a single box in B. The boxes in A together form a "stamp" which is asymmetrical in at least one dimension.
Illustration of problem
Explanation: "Stanz" in the illustration is a box from set A.
One might even think of set A as only 2D points (the center point of each box) to make it simpler.
The end result will be to know which A box corresponds to which B box.
I can only think of very specific ways of doing this, tailored to a specific layout of boxes. Are there any known generic ways of dealing with this kind of matching/search problem, and what are they called?
Edit: Possible solution
I have come up with one possible solution: looking at all the possible rotations at each possible B center position for a single box from set A. Here, all of the points in A would be rotated and compared against the distance to B centers. Not sure if this is a good way.
Looking for the possible rotations at each B center point - solution
In your example, the transformation between the template and its presence in B can be entirely defined (actually, over-defined) by two matching points.
So here's a simple approach which is kind of performant. First, put all the points in B into a kD-tree. Now, pick a canonical "first" point in A, and hypothesize matching it to each of the points in B. To check whether it matches a particular point in B, pick a canonical "second" point in A and measure its distance to the "first" point. Then, use a standard kD proximity-bounding query to find all the points in B which are roughly that distance from your hypothesized matched "first" point in B. For each of those, determine the transformation between A and B, and for each of the other points in A, check whether there's a point in B at roughly the right place (again, using the kD-tree), early-outing at the first unmatched point.
The worst-case performance there can get quite bad with pathological cases (O(n^3 log n), I think) but in general I would expect roughly O(n log n) for well-behaved data with a low threshold. Note that the thresholding is a bit rough-and-ready, and the results can depend on your choice of "first" and "second" points.
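To make that concrete, here is a rough, untested sketch of the two-point hypothesis search using scipy's cKDTree (the function and parameter names are my own, and the thresholding is as rough-and-ready as described):

import numpy as np
from scipy.spatial import cKDTree

def match_template(A, B, tol=0.5):
    # Try to rigidly match template points A (n,2) into B (m,2).
    # Returns (R, t) such that each A point maps near some B point via
    # A @ R.T + t, or None if no anchor hypothesis works.
    tree = cKDTree(B)
    a0, a1 = A[0], A[1]                      # canonical "first" and "second" points
    d01 = np.linalg.norm(a1 - a0)
    for b0 in B:                             # hypothesize A[0] <-> b0
        # candidate partners for A[1]: B points roughly d01 away from b0
        for j in tree.query_ball_point(b0, d01 + tol):
            b1 = B[j]
            if abs(np.linalg.norm(b1 - b0) - d01) > tol:
                continue
            # rigid transform defined by the two hypothesized correspondences
            ang = np.arctan2(*(b1 - b0)[::-1]) - np.arctan2(*(a1 - a0)[::-1])
            c, s = np.cos(ang), np.sin(ang)
            R = np.array([[c, -s], [s, c]])
            t = b0 - R @ a0
            dists, _ = tree.query(A @ R.T + t)
            if np.all(dists < tol):          # every template point found a box
                return R, t
    return None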
This is more of an idea than an answer, but it's too long for a comment. I asked some additional questions in a comment above, but the answers may not be particularly relevant, so I'll go ahead and offer some thoughts in the meantime.
As you may know, point matching is its own problem domain, and if you search for 'point matching algorithm', you'll find various articles, papers, and other resources. It seems though that an ad hoc solution might be appropriate here (one that's simpler than more generic algorithms that are available).
I'll assume that the input point set can only be rotated, and not also flipped. If this idea were to work though, it should also work with flipping - you'd just have to run the algorithm separately for each flipped configuration.
In your example image, you've matched a point from set A with a point from set B so that they're coincident. Call this shared point the 'anchor' point. You'd need to do this for every combination of a point from set A and a point from set B until you found a match or exhausted the possibilities. The problem then is to determine if a match can be made given one of these matched point pairs.
It seems that for a given anchor point, a necessary but not sufficient condition for a match is that a point from set A and a point from set B can be found that are approximately the same distance from the anchor point. (What 'approximately' means would depend on the input, and would need to be tuned appropriately given that you're using integers.) This condition is met in your example image in that the center point of each point set is (approximately) the same distance from the anchor point. (Note that there could be multiple pairs of points that meet this condition, in which case you'd have to examine each such pair in turn.)
Once you have such a pair - the center points in your example - you can use some simple trigonometry and linear algebra to rotate set A so that the points in the pair coincide, after which the two point sets are locked together at two points and not just one. In your image that would involve rotating set A about 135 degrees clockwise. Then you check to see if every point in set B has a point in set A with which it's coincident, to within some threshold. If so, you have a match.
In your example, this fails of course, because the rotation is not actually a match. Eventually though, if there's a match, you'll find the anchor point pair for which the test succeeds.
I realize this would be easier to explain with some diagrams, but I'm afraid this written explanation will have to suffice for the moment. I'm not positive this would work - it's just an idea. And maybe a more generic algorithm would be preferable. But, if this did work, it might have the advantage of being fairly straightforward to implement.
[Edit: Perhaps I should add that this is similar to your solution, except for the additional step to allow for only testing a subset of the possible rotations.]
[Edit: I think a further refinement may be possible here. If, after choosing an anchor point, matching is possible via rotation, it should be the case that for every point p in B there's a point in A that's (approximately) the same distance from the anchor point as p is. Again, it's a necessary but not sufficient condition, but it allows you to quickly eliminate cases where a match isn't possible via rotation.]
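A quick sketch of that elimination test (the naming is mine, and it implements the stated condition literally):

import numpy as np

def anchor_is_plausible(A, anchor_a, B, anchor_b, tol=0.5):
    # Necessary (but not sufficient) condition: every B point's distance to
    # anchor_b must be approximately realized by some A point's distance
    # to anchor_a.
    dists_a = np.sort(np.linalg.norm(A - anchor_a, axis=1))
    dists_b = np.linalg.norm(B - anchor_b, axis=1)
    idx = np.clip(np.searchsorted(dists_a, dists_b), 1, len(dists_a) - 1)
    nearest = np.minimum(np.abs(dists_a[idx] - dists_b),
                         np.abs(dists_a[idx - 1] - dists_b))
    return bool(np.all(nearest < tol))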
Below follows a finished solution in Python, without a kD-tree and without early-outing candidates. A better way is to do the implementation yourself according to Sneftel's answer, but if you need something quick and with a plot, this might be useful.
The plot shows the different steps: it starts off with just the template as a collection of connected lines. Then it is translated to the point in B where the distances between A and B points fit best. Finally it is rotated.
In this example it was important to also match up which of the template positions was matched to which bounding box position, so there is an extra step at the end. There may be some deviations in the code compared to the outline above.
import numpy as np
import random
import math
import matplotlib.pyplot as plt

def to_polar(pos_array):
    x = pos_array[:, 0]
    y = pos_array[:, 1]
    length = np.sqrt(x ** 2 + y ** 2)
    t = np.arctan2(y, x)
    zip_list = list(zip(length, t))
    array_polar = np.array(zip_list)
    return array_polar

def to_cartesian(pos):
    # first element is the radius, second the angle (theta)
    radius = pos[0]
    theta = pos[1]
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    return x, y

def calculate_distance_points(p1, p2):
    return np.sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)

def find_closest_point_inx(point, neighbour_set):
    shortest_dist = None
    closest_index = -1
    # Find the point in the secondary array that is the closest
    for index, curr_neighbour in enumerate(neighbour_set):
        distance = calculate_distance_points(point, curr_neighbour)
        if shortest_dist is None or distance < shortest_dist:
            shortest_dist = distance
            closest_index = index
    return closest_index

# Find the sum of distances between each point in primary and the closest one in secondary
def calculate_agg_distance_arrs(primary, secondary):
    total_distance = 0
    for point in primary:
        closest_inx = find_closest_point_inx(point, secondary)
        dist = calculate_distance_points(point, secondary[closest_inx])
        total_distance += dist
    return total_distance

# Returns a dict of <primary_index, neighbour_index>
def pair_neighbours_by_distance(primary_set, neighbour_set, distance_limit):
    pairs = {}
    for num, point in enumerate(primary_set):
        closest_inx = find_closest_point_inx(point, neighbour_set)
        if calculate_distance_points(neighbour_set[closest_inx], point) > distance_limit:
            closest_inx = None
        pairs[num] = closest_inx
    return pairs

def rotate_array(array, angle, rot_origin=None):
    if rot_origin is not None:
        array = np.subtract(array, rot_origin)
    # clockwise rotation
    theta = np.radians(angle)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array(((c, -s), (s, c)))
    rotated = np.matmul(array, R)
    if rot_origin is not None:
        rotated = np.add(rotated, rot_origin)
    return rotated

# Finds a point in B_set and a rotation where the points in A_set align best with B_set.
def find_stamp_rotation(A_set, B_set):
    # Step 1
    anchor_point_A = A_set[0]
    # Step 2. Convert all points to polar coordinates with the anchor as origin
    A_anchor_origin = A_set - anchor_point_A
    anchor_A_polar = to_polar(A_anchor_origin)
    print(anchor_A_polar)
    # Step 3. For each point in B
    score_tuples = []
    for num_anchor, B_anchor_point_try in enumerate(B_set):
        # Step 3.1
        B_origin_rel_point = B_set - B_anchor_point_try
        B_polar_rp_origin = to_polar(B_origin_rel_point)
        # Step 3.3. Select an arbitrary point q from Ap
        point_Aq = anchor_A_polar[1]
        # Step 3.4. Test each rotation, where point_Aq is rotated onto each B point (except the B anchor point)
        for try_rot_point_B in [B_rot_point for num_rot, B_rot_point in enumerate(B_polar_rp_origin) if num_rot != num_anchor]:
            # positive rotation is clockwise
            # Step 4.1. Rotate Ap by the angle between q and n
            angle_to_try = try_rot_point_B[1] - point_Aq[1]
            rot_try_arr = np.copy(anchor_A_polar)
            rot_try_arr[:, 1] += angle_to_try
            cart_rot_try_arr = [to_cartesian(e) for e in rot_try_arr]
            cart_B_rp_origin = [to_cartesian(e) for e in B_polar_rp_origin]
            distance_score = calculate_agg_distance_arrs(cart_rot_try_arr, cart_B_rp_origin)
            score_tuples.append((B_anchor_point_try, angle_to_try, distance_score))
    # Step 4.3
    lowest = None
    for b_point, angle, distance in score_tuples:
        print("point:{} angle(deg):{} distance(sum):{}".format(b_point, 360*(angle/(2*math.pi)), distance))
        if lowest is None or distance < lowest[2]:
            lowest = b_point, 360*angle/(2*math.pi), distance
    return lowest

def test_example():
    ax = plt.subplot()
    ax.grid(True)
    plt.title('Fit Template to BBoxes by translation and rotation')
    plt.xlim(-20, 20)
    plt.ylim(-20, 20)
    ax.set_xticks(range(-20, 20), minor=True)
    ax.set_yticks(range(-20, 20), minor=True)
    template = np.array([[-10, -10], [-10, 10], [0, 0], [10, -10], [10, 10], [0, 20]])
    # Test bboxes are rotated 40 degrees and translated by (2, 2)
    rotated = rotate_array(template, 40)
    rotated = np.subtract(rotated, [2, 2])
    # Add some extra bounding boxes as noise
    for i in range(8):
        rotated = np.append(rotated, [[random.randrange(-20, 20), random.randrange(-20, 20)]], axis=0)
    # Scramble the entries in the array and record the position change.
    rnd_rotated = rotated.copy()
    np.random.shuffle(rnd_rotated)
    # After shuffling, look up which index each mark ended up at, for later
    # comparison to check that the algorithm found the correct answer.
    # This represents the actual case, where I will get a bunch of unordered bboxes.
    rnd_map = {}
    indexes_translation = [num2 for num, point in enumerate(rnd_rotated) for num2, point2 in enumerate(rotated) if point[0] == point2[0] and point[1] == point2[1]]
    for num, inx in enumerate(indexes_translation):
        rnd_map[num] = inx
    # algo part 1/4
    b_point, angle, _ = find_stamp_rotation(template, rnd_rotated)
    # Plot for visualization
    legend_list = np.empty((0, 2))
    leg_template = plt.plot(template[:, 0], template[:, 1], c='r')
    legend_list = np.append(legend_list, [[leg_template[0], '1. template-pattern']], axis=0)
    leg_bboxes = plt.scatter(rnd_rotated[:, 0], rnd_rotated[:, 1], c='b', label="scatter")
    legend_list = np.append(legend_list, [[leg_bboxes, '2. bounding boxes']], axis=0)
    leg_anchor = plt.scatter(b_point[0], b_point[1], c='y')
    legend_list = np.append(legend_list, [[leg_anchor, '3. Discovered bbox anchor point']], axis=0)
    # algo part 2/4
    # Superimpose A onto B by moving A[0] to b_point
    offset = b_point - template[0]
    super_imposed_A = template + offset
    # Plot superimposed, but not yet rotated
    leg_s_imposed = plt.plot(super_imposed_A[:, 0], super_imposed_A[:, 1], c='k')
    legend_list = np.append(legend_list, [[leg_s_imposed[0], '4. Templ superimposed on Bbox']], axis=0)
    print("Superimposed A on B by A[0] to {}".format(b_point))
    print(super_imposed_A)
    # Rotate; now the template should match the pattern of bboxes
    # algo part 3/4
    super_imposed_rotated_A = rotate_array(super_imposed_A, -angle, rot_origin=super_imposed_A[0])
    # Show the match in a final plot
    leg_s_imp_rot = plt.plot(super_imposed_rotated_A[:, 0], super_imposed_rotated_A[:, 1], c='g')
    legend_list = np.append(legend_list, [[leg_s_imp_rot[0], '5. final fit']], axis=0)
    plt.legend(legend_list[:, 0], legend_list[:, 1], loc="upper left")
    plt.show()
    # algo part 4/4
    pairs = pair_neighbours_by_distance(super_imposed_rotated_A, rnd_rotated, 10)
    print(pairs)
    for inx in range(len(pairs)):
        bbox_num = pairs[inx]
        print("template id:{}".format(inx))
        print("bbox#id:{}".format(bbox_num))
        #print("original_bbox:{}".format(rnd_map[bbox_num]))

if __name__ == "__main__":
    test_example()
Result on an actual image with bounding boxes. Here it can be seen that the scaling is incorrect, which makes the template a bit off, but it will still be able to pair up, and that's the desired end result in my case.

how to use vtkImageReSlice?

I am trying to use vtkImageReslice to extract a 2D slice from a 3D vtkImageData object, but I can't seem to get the recipe right. Am I doing it right?
I am also a bit confused about the ResliceAxes matrix. Does it represent a cutting plane? If I move the ResliceAxes origin, will it also move the cutting plane? When I call Update on the vtkImageReslice, the program crashes, but when I don't call it, the output is empty.
Here's what I have so far.
#my input is any vtkActor that contains a closed curve of type vtkPolyData
ShapePolyData = actor.GetMapper().GetInput()
boundingBox = ShapePolyData.GetBounds()
newBoundingBox = []  # this initialization was missing in the original snippet
for i in range(0, 6, 2):
    delta = boundingBox[i+1] - boundingBox[i]
    newBoundingBox.append(boundingBox[i] - 0.5*delta)
    newBoundingBox.append(boundingBox[i+1] + 0.5*delta)
voxelizer = vtk.vtkVoxelModeller()
voxelizer.SetInputData(ShapePolyData)
voxelizer.SetModelBounds(newBoundingBox)
voxelizer.SetScalarTypeToBit()
voxelizer.SetForegroundValue(1)
voxelizer.SetBackgroundValue(0)
voxelizer.Update()
VoxelModel =voxelizer.GetOutput()
ImageOrigin = VoxelModel.GetOrigin()
slicer = vtk.vtkImageReslice()
#Am I setting the cutting axes here? The x axis is set to (1,0,0), the y axis to (0,1,0) and the z axis to (0,0,1)
slicer.SetResliceAxesDirectionCosines(1,0,0,0,1,0,0,0,1)
#if I increase the z value, will the cutting plane move up?
slicer.SetResliceAxesOrigin(ImageOrigin[0],ImageOrigin[1],ImageOrigin[2])
slicer.SetInputData(VoxelModel)
slicer.SetInterpolationModeToLinear()
slicer.SetOutputDimensionality(2)
slicer.Update() #this makes the code crash
voxelSurface = vtk.vtkContourFilter()
voxelSurface.SetInputConnection(slicer.GetOutputPort())
voxelSurface.SetValue(0, .999)
voxelMapper = vtk.vtkPolyDataMapper()
voxelMapper.SetInputConnection(voxelSurface.GetOutputPort())
voxelActor = vtk.vtkActor()
voxelActor.SetMapper(voxelMapper)
Renderer.AddActor(voxelActor)
I have never used vtkImageReslice, but I have used vtkExtractVOI for vtkImageData, which allows you to achieve a similar result, I think. Here is your example modified with the latter, instead:
ImageOrigin = VoxelModel.GetOrigin()
slicer = vtk.vtkExtractVOI()
slicer.SetInputData(VoxelModel)
#With the setVOI method you can define which slice you want to extract
slicer.SetVOI(xmin, xmax, ymin, ymax, zslice, zslice)
slicer.SetSampleRate(1, 1, 1)
slicer.Update()
voxelSurface = vtk.vtkContourFilter()
voxelSurface.SetInputConnection(slicer.GetOutputPort())
voxelSurface.SetValue(0, .999)
voxelMapper = vtk.vtkPolyDataMapper()
voxelMapper.SetInputConnection(voxelSurface.GetOutputPort())
voxelActor = vtk.vtkActor()
voxelActor.SetMapper(voxelMapper)
Renderer.AddActor(voxelActor)
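Returning to vtkImageReslice itself, here is a hedged sketch of a typical axis-aligned configuration (untested against the bit-scalar output of vtkVoxelModeller, which I suspect is related to the crash; switching to voxelizer.SetScalarTypeToUnsignedChar() may be worth trying):

import vtk

def make_z_slice(image_data, z):
    # Extract a 2D slice of a vtkImageData at height z (illustrative helper).
    slicer = vtk.vtkImageReslice()
    slicer.SetInputData(image_data)
    # The reslice axes define the cutting plane: the x/y direction cosines
    # span the plane and the z column is its normal.
    slicer.SetResliceAxesDirectionCosines(1, 0, 0,  0, 1, 0,  0, 0, 1)
    origin = image_data.GetOrigin()
    # Moving the reslice axes origin along z moves the cutting plane.
    slicer.SetResliceAxesOrigin(origin[0], origin[1], z)
    slicer.SetOutputDimensionality(2)
    slicer.SetInterpolationModeToLinear()
    slicer.Update()
    return slicer.GetOutput()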

Filtering signal: how to restrict filter that last point of output must equal the last point of input

Please bear with my poor knowledge of signal processing.
I want to smoothen some data. Here is my code:
import numpy as np
from scipy.signal import butter, filtfilt

def testButterworth(nyf, x, y):
    b, a = butter(4, 1.5/nyf)
    fl = filtfilt(b, a, y)
    return fl

if __name__ == '__main__':
    positions_recorded = np.loadtxt('original_positions.txt', delimiter='\n')
    number_of_points = len(positions_recorded)
    end = 10
    dt = end/float(number_of_points)
    nyf = 0.5/dt
    x = np.linspace(0, end, number_of_points)
    y = positions_recorded
    fl = testButterworth(nyf, x, y)
I am pretty satisfied with the results, except for one point:
it is absolutely crucial to me that the start and end points of the returned values equal the start and end points of the input. How can I introduce this restriction?
UPD 15-Dec-14 12:04:
My original data looks like this:
Applying the filter and zooming into the last part of the graph gives the following result:
So, at the moment I just care about the last point, which must equal the original point. I tried appending a copy of the data to the end of the original list, this way:
The result is, as expected, even worse.
Then I tried to append the data this way:
And the slice where one period ends and the next one begins looks like this:
To do this, you're always going to cheat somehow, since the true filter applied to the true data doesn't behave the way you require.
One of the best ways to cheat with your data is to assume it's periodic. This has the advantages that: 1) it's consistent with the data you actually have, and all you're changing is appending data to the region you don't know about (so assuming it's periodic is as reasonable as anything else -- although it may violate some unstated or implicit assumptions); 2) the result will be consistent with your filter.
You can usually get by with this by appending copies of your data to the beginning and end of your real data, or just small pieces, depending on your filter.
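For instance, a minimal sketch of that concatenation trick (the helper name and filter parameters are mine):

import numpy as np
from scipy.signal import butter, filtfilt

def filter_periodic(y, order=4, wn=0.05):
    # Tile one full copy of the data on each side, filter, then keep only
    # the middle (original) section, which now sees "periodic" neighbours.
    b, a = butter(order, wn)
    filtered = filtfilt(b, a, np.concatenate((y, y, y)))
    n = len(y)
    return filtered[n:2*n]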
Since the FFT assumes that the data is periodic anyway, that's often a quick and easy approach, and it's fully accurate (whereas concatenating the data is an approximation of an infinitely periodic waveform). Here's an example of the FFT approach for a step waveform.
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 128)
y = (np.sin(.22*(x+10)) > 0).astype(float)  # np.float is deprecated; plain float works
# filter
y2 = np.fft.fft(y)
f0 = np.fft.fftfreq(len(x))
y2[(f0<-.25) | (f0>.25)] = 0
y3 = abs(np.fft.ifft(y2))
plt.plot(x, y)
plt.plot(x, y3)
plt.xlim(-10, 140)
plt.ylim(-.1, 1.1)
plt.show()
Note how the end points bend towards each other at either end, even though this is not consistent with the periodicity of the waveform (since the segments at either end are very truncated). This can also be seen by adjusting the waveform so that the ends are the same (here I used x+30 instead of x+10), in which case the ends don't need to bend to match up, so they stay level with the end of the data.
Note, also, that to have the endpoints actually be exactly equal you would have to extend this plot by one point (at either end), since it's periodic with exactly the wavelength of the original waveform. Doing this is not ad hoc though, and the result will be entirely consistent with your analysis, just representing one extra point of what was assumed to be infinite repeats all along.
Finally, this FFT trick works best with waveforms of length 2^n. Other lengths may be zero-padded in the FFT. In that case, just doing concatenations at either end, as I mentioned at first, might be the best way to go.
The question is how to filter data and require that the left endpoint of the filtered result matches the left endpoint of the data, and likewise for the right endpoint. (That is, in general, the filtered result should be close to most of the data points, but not necessarily exactly match any of them -- but what if you need a match at both endpoints?)
To make the filtered result exactly match the endpoints of a curve, one could add a padding of points at either end of the curve and adjust the y-position of this padding so that the endpoints of the valid part of the filter exactly matched the end points of the original data (without the padding).
In general, this can be done by either iterating towards a solution, adjusting the padding y-position until the ends line up, or by calculating a few values and then interpolating to determine the y-positions that would be required for the matched endpoints. I'll do the second approach.
Here's the code I used, where I simulated the data as a sine wave with two flat pieces on either side (note that these flat pieces are not the padding; I'm just trying to make data that looks a bit like the OP's).
import numpy as np
from scipy.signal import butter, filtfilt
import matplotlib.pyplot as plt

#### op's code
def testButterworth(nyf, x, y):
    b, a = butter(4, 1.5/nyf)
    fl = filtfilt(b, a, y)
    return fl

def do_fit(data):
    positions_recorded = data
    #positions_recorded = np.loadtxt('original_positions.txt', delimiter='\n')
    number_of_points = len(positions_recorded)
    end = 10
    dt = end/float(number_of_points)
    nyf = 0.5/dt
    x = np.linspace(0, end, number_of_points)
    y = positions_recorded
    fx = testButterworth(nyf, x, y)
    return fx

### simulate some data (op should have done this too!)
def sim_data():
    t = np.linspace(.1*np.pi, (2.-.1)*np.pi, 100)
    y = np.sin(t)
    c = np.ones(10, dtype=float)
    z = np.concatenate((c*y[0], y, c*y[-1]))
    return z

### code to find the required offset padding
def fit_with_pads(v, data, n=1):
    c = np.ones(n, dtype=float)
    z = np.concatenate((c*v[0], data, c*v[1]))
    fx = do_fit(z)
    return fx

def get_errors(data, fx):
    n = (len(fx)-len(data))//2
    return np.array((fx[n]-data[0], fx[-n]-data[-1]))

def vary_padding(data, span=.005, n=100):
    errors = np.zeros((4, n))  # Lpad, Rpad, Lerror, Rerror
    offsets = np.linspace(-span, span, n)
    for i in range(n):
        vL, vR = data[0]+offsets[i], data[-1]+offsets[i]
        fx = fit_with_pads((vL, vR), data, n=1)
        errs = get_errors(data, fx)
        errors[:, i] = np.array((vL, vR, errs[0], errs[1]))
    return errors

if __name__ == '__main__':
    data = sim_data()
    fx = do_fit(data)
    errors = vary_padding(data)
    plt.plot(errors[0], errors[2], 'x-')
    plt.plot(errors[1], errors[3], 'o-')
    oR = -0.30958
    oL = 0.30887
    fp = fit_with_pads((oL, oR), data, n=1)[1:-1]
    plt.figure()
    plt.plot(data, 'b')
    plt.plot(fx, 'g')
    plt.plot(fp, 'r')
    plt.show()
Here, for the padding I only used a single point on either side (n=1). Then I calculate the error for a range of values shifting the padding up and down from the first and last data points.
For the plots:
First I plot the offset vs error (between the fit and the desired data value). To find the offset to use, I just zoomed in on the two lines to find the x-value of the y zero crossing, but to do this more accurately, one could calculate the zero crossing from this data:
Here's the plot of the original "data", the fit (green) and the adjusted fit (red):
and zoomed in the RHS:
The important point here is that the red (adjusted fit) and blue (original data) endpoints match, even though the pure fit doesn't.
Is this a valid approach? Of the various options, this seems the most reasonable, since one isn't making any claims about data that isn't being shown, and the region that is shown has an accurately applied filter. For example, FFTs usually assume the data is zero or periodic beyond the boundaries. Certainly, though, to be precise one should explain what was done.

Finding the original position of a point on an image after rotation

I have the x, y coordinates of a point on an image rotated by a certain angle. I want to find the coordinates of the same point in the original, non-rotated image.
Please check the first image which is simpler:
UPDATED image, SIMPLIFIED:
OLD image:
Let's say the first point is A, the second is B and the last is C. I assume you have the rotation matrix R (see the Wikipedia article on rotation matrices if not) and the translation vector t, so that B = R*A and C = B+t.
It follows that C = R*A + t, and so A = R^-1*(C-t).
Edit: If you only need the non-rotated new point, simply do D = R^-1*C.
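As a concrete sketch of A = R^-1*(C - t) in 2D (a minimal numpy illustration; the helper name is mine):

import numpy as np

def original_point(C, angle, t):
    # Recover A from C = R*A + t, where R rotates counterclockwise by `angle`.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])
    return np.linalg.inv(R) @ (np.asarray(C) - np.asarray(t))

# Example: rotate A by 90 degrees, translate by (1, 0), then recover A.
A = np.array([2.0, 0.0])
t = np.array([1.0, 0.0])
ang = np.pi / 2
C = np.array([[0.0, -1.0], [1.0, 0.0]]) @ A + t   # B = R*A, then C = B + t
print(original_point(C, ang, t))                   # -> [2. 0.]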
The first thing to do is define the reference system (how "where the points lie with respect to each image" will be translated into numbers). I guess that you want to rely on a basic 2D reference system, given by a single point (a couple of X/Y values). For example: the left/lower corner (min. X and min. Y).
The algorithm is pretty straightforward:
1. Get the new defining reference point associated with the rotated shape (min. X and min. Y), that is, determine RefX_new and RefY_new.
2. Apply a basic conversion between reference systems:
X_old = X_new + (RefX_new - RefX_old)
Y_old = Y_new + (RefY_new - RefY_old)
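A toy illustration of this conversion (the names are mine, not from the post):

def to_old_frame(x_new, y_new, ref_new, ref_old):
    # Convert coordinates from the rotated image's frame back to the
    # original image's frame, given each frame's reference corner.
    x_old = x_new + (ref_new[0] - ref_old[0])
    y_old = y_new + (ref_new[1] - ref_old[1])
    return x_old, y_old

# "If I am at 8 inside something which starts at 5, I am at 3 with respect
# to the container" -- and going back: 3 + (5 - 0) = 8.
print(to_old_frame(3, 0, ref_new=(5, 0), ref_old=(0, 0)))  # -> (8, 0)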
----------------- UPDATE TO RELATE FORMULAE TO NEW CAR PIC
RefX_old = min X value of the CarFrame before being rotated.
RefY_old = max Y value of the CarFrame before being rotated.
RefX_new = min X value of the CarFrame after being rotated.
RefY_new = max Y value of the CarFrame after being rotated.
X_new = X of the point with respect to the CarFrame after being rotated. For example: if RefX_new = 5 with respect to absolute frame (0,0) and X of the point with respect to this absolute frame is 8, X_new would be 3.
Y_new = Y of the point with respect to CarFrame after being rotated (equivalently to point above)
X_old_C = X_new_C(respect to CarFrame) + (RefX_new(CarFrame_C) - RefX_old(CarFrame_A))
Y_old_C = Y_new_C(respect to CarFrame) + (RefY_new(CarFrame_C) - RefY_old(CarFrame_A))
These coordinates are with respect to the CarFrame, and thus you might have to update them with respect to the absolute frame (0,0, I guess), as explained above, that is:
X_old_D_absolute_frame = X_old_C + (RefX_new(CarFrame_C) + RefX_global(i.e., 0))
Y_old_D_absolute_frame = Y_old_C + (RefY_new(CarFrame_C) + RefY_global(i.e., 0))
(Although you should do that once the CarFrame is in its "definitive position" with respect to the global frame, that is, on picture D (the point has the same coordinates with respect to the CarFrame in both picture C and D, but different ones with respect to the global frame).)
It might seem a bit complex put this way, but it is really simple. You just have to think carefully through one case and create the algorithm performing all the actions. The idea is extremely simple: if I am at 8 inside something which starts at 5, I am at 3 with respect to the container.
------------ UPDATE IN THE METHODOLOGY
As said in the comment, these last pictures prove that the originally proposed calculation of the reference (max. Y/min. X) is not right: it shouldn't be the max./min. values of the CarFrame but the minimum distances to the closer sides (= the perpendicular line from the left/bottom side to the point).
------------ TRIGONOMETRIC CALCS FOR THE SPECIFIC EXAMPLE
The algorithm proposed is the one you should apply in any situation. In this specific case, though, the most difficult part is not moving from one reference system to the other, but defining the reference point in the rotated system. Once this is done, the application to the non-rotated case is immediate.
Here are some calcs to perform this action (I did them pretty quickly, so better take them as guidance and work them out on your own); also, I have only considered the case in the pictures, that is, rotation around the left/bottom point:
X_rotated = dx * Cos(alpha)
where dx = X_orig - (max_Y_CarFrame - Y_Orig) * Tan(alpha)
Y_rotated = dy * Cos(alpha)
where dy = Y_orig - X_orig * Tan(alpha)
NOTE: (max_Y_CarFrame - Y_Orig) in dx and X_orig in dy assume that the basic reference system is 0,0 (min. X and min. Y). If this is not the case, you would have to change these variables.
The X_rotated and Y_rotated give the perpendicular distance from the point to the closest side of the CarFrame (respectively, the left and bottom side). By applying these formulae (I insist: analyse them carefully), you get X_old_D_absolute_frame/Y_old_D_absolute_frame; that is, you just have to add the left/bottom values from the CarFrame (if it is located at 0,0, these would be the final values).
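A direct, untested transcription of those formulas (with the same caveats as above; the helper name is mine):

import math

def rotated_coords(x_orig, y_orig, max_y_carframe, alpha):
    # Rotation around the left/bottom point, reference system at (0, 0).
    dx = x_orig - (max_y_carframe - y_orig) * math.tan(alpha)
    dy = y_orig - x_orig * math.tan(alpha)
    return dx * math.cos(alpha), dy * math.cos(alpha)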
