2D problem: I measure the positions of the 3 corners of a triangle in a cartesian system. Now I move the system (the triangle) to another cartesian system and measure the positions of just two corners.
How can I identify the location of the 3rd corner based on this data?
Thanks! (and sorry for the bad English, it's my second language)
This question is from 8 years ago, but even though it is a little vague, I think it can be answered fairly concisely, and if I came across it then maybe someone else will too, and they may gain some benefit from an actual answer rather than the one which was accepted. (I apologize for accidentally upvoting the accepted "answer". I had originally downvoted it, then realized the question was actually a bit vague and tried to reverse my downvote. Sadly, because of my noob rep, that seems to have turned into an actual upvote. It deserved neither the downvote nor the upvote.)
Lead-Up
So, let's say you have a simple cartesian grid, or reference frame:
And within that 10x10 reference frame you have a triangle object:
I failed to label the images, but the (a, b, c) co-ordinates of this triangle are obviously a=(0,0), b=(0,4), and c=(4,0).
Now let's say that we move that triangle within our cartesian reference frame (grid):
We've moved the triangle by x → x+1 and y → y+1. Therefore, given that the new co-ordinates of "b" and "c" are b=(1,5) and c=(5,1), what is "a"?
It's pretty obvious that "a" would be (1,1), but looking at the math we can see that
Δb=b2-b1
Δb=(x2,y2)-(x1,y1)
Δb=(1,5)-(0,4)
Δb=(1-0, 5-4)
∴Δb=(1,1) or (+1,+1)
If we do the same for the two "c" co-ordinates we arrive at the same answer, Δc also equals (1,1), therefore it's a translation (a linear movement) and not a rotation, which means that Δa is also (1,1)! So:
a2=a1+Δa
a2=(0,0)+(+1,+1)
a2=(0+1,0+1)
∴a2=(1,1)
And if you take a look at the image you can clearly see that the new position of "a" is at (1,1).
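The delta computation above can be sketched in a few lines of Python (a minimal version, assuming a pure translation and the example's co-ordinates; the function names are mine):

```python
def infer_translation(p_old, p_new):
    """Return the displacement (dx, dy) that maps p_old onto p_new."""
    return (p_new[0] - p_old[0], p_new[1] - p_old[1])

def apply_translation(p, delta):
    """Shift point p by the displacement delta."""
    return (p[0] + delta[0], p[1] + delta[1])

b1, c1 = (0, 4), (4, 0)   # original positions of b and c
b2, c2 = (1, 5), (5, 1)   # their new positions

db = infer_translation(b1, b2)
dc = infer_translation(c1, c2)
assert db == dc           # equal deltas => pure translation, not a rotation

a2 = apply_translation((0, 0), db)
print(a2)                 # -> (1, 1)
```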
Translation
But that's just the lead-up. Your question was in converting from one cartesian reference frame to another. Consider that your 10x10 reference frame is within a larger reference frame:
We can call your 10x10 grid a "local" reference frame, and it exists within maybe a "global" reference frame. There may actually be a number of other "local" reference frames within this global reference frame:
But to keep it simple, of course we're going to just consider one cartesian reference frame within another:
Now we need to translate that "local" reference frame within the "global" reference frame:
So, "locally", the (a,b,c) co-ordinates of our triangle are still {(0,0),(0,4),(4,0)}, but the origin of our local reference frame is not aligned with the origin of the global reference frame! Our local reference frame has shifted (+3.5,+1.5)!
Now what's the position of our triangle?!
You basically approach it the same way. The position of our "local" reference frame relative to the global one is (+3.5,+1.5), which we will call Δf (for "difference in frames"). The triangle relative to the global origin is then ag=al+Δf, bg=bl+Δf, and cg=cl+Δf, where (ag,bg,cg) are the co-ordinates within the global reference frame and (al,bl,cl) are the co-ordinates within the local reference frame.
Three-Dimensional Cartesian System
It's exactly the same; you just add the third "z" co-ordinate to each point of the triangle.
Rotation
One of the assumptions I'm making from your original question is that you were actually asking about translation and were not interested in rotation at the time that you asked this question 8 years ago.
Very quickly, however: you need to use trig in order to rotate your 2D object within your reference frame, so you first need to determine the point about which you're rotating the object, which we'll call the axis of rotation. Then, once you've decided where that axis is, recalculate the (x,y) of each of the three points in your triangle:
x = r · cos θ
y = r · sin θ
where θ is the point's angle around the axis after the rotation (its original angle plus the angle by which we're rotating the object), "r" is the distance of that point from the axis of rotation, and "·" just means multiplication. Both co-ordinates are measured relative to the axis.
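An equivalent way to write those two formulas, which is often easier to code, is the standard rotation matrix applied relative to the axis. A minimal Python sketch (the function name is mine, not from any particular library):

```python
import math

def rotate_about(p, axis, theta_deg):
    """Rotate point p counter-clockwise by theta_deg around `axis`."""
    theta = math.radians(theta_deg)
    # Work relative to the axis of rotation.
    dx, dy = p[0] - axis[0], p[1] - axis[1]
    # Standard 2D rotation matrix, then shift back.
    return (axis[0] + dx * math.cos(theta) - dy * math.sin(theta),
            axis[1] + dy * math.cos(theta) + dx * math.sin(theta))

# Rotate the example triangle 30 degrees counter-clockwise around a = (0, 0):
a, b, c = (0, 0), (0, 4), (4, 0)
print([rotate_about(p, a, 30) for p in (a, b, c)])
```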
So, if we were to rotate our triangle 30° counter-clockwise around point "a", it might look similar to this:
But, again, that wasn't your question. Your question was, "given the locations of two of the points, determine the position of the third".
Without any explanation whatsoever, only because I don't think you were asking about rotation, what you do is you work backwards:
if x = r · cos θ, then θ = arccos(x/r) (though in practice θ = atan2(y, x) is safer, since arccos alone can't distinguish positive from negative y).
Now you have the angle of rotation, which you can apply to the original position of the missing point to find its (x,y). As with our original translational example, this also works from one cartesian reference frame to another: if your "local" reference frame rotates within the global reference frame, then even though nothing appears to have changed within your local frame, you can still plot the locations of your objects' points within the global frame.
And, again, that also works for 3D reference frames, as well.
And finally, if your local cartesian reference frame is both translated and rotated, which it almost certainly is, then you apply both methods to plot your points onto the other (global?) cartesian reference frame.
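For what it's worth, the combined case can be sketched directly: the rotation angle falls out of comparing the direction from "b" to "c" before and after the move, and the rest is the translation method from above. A minimal Python sketch, assuming a rigid motion (rotation plus translation, no reflection or scaling); the function name is mine:

```python
import math

def transform_third(b1, c1, a1, b2, c2):
    """Given old points b1, c1, a1 and the new positions b2, c2 of the
    first two, return the new position of a1, assuming a rigid motion."""
    # Rotation angle = change in direction of the segment b -> c.
    theta = (math.atan2(c2[1] - b2[1], c2[0] - b2[0])
             - math.atan2(c1[1] - b1[1], c1[0] - b1[0]))
    # Rotate a's offset from b, then re-attach it to b's new position.
    dx, dy = a1[0] - b1[0], a1[1] - b1[1]
    return (b2[0] + dx * math.cos(theta) - dy * math.sin(theta),
            b2[1] + dy * math.cos(theta) + dx * math.sin(theta))

# The pure-translation example from above: a comes out at (1, 1).
print(transform_third((0, 4), (4, 0), (0, 0), (1, 5), (5, 1)))  # -> (1.0, 1.0)
```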
Real-World Application
Oh, so many! Our brains do this so intuitively every day when we're driving or walking down the street that I don't know where to begin!
Translation is easy, but rotation gets a little hairy when crossing axes. One trick to make things easier is to translate the object from one frame of reference to another in order to make the trig more straightforward.
Telling the story in pictures:
And that's just the start...
I hope that helps.
This is a pretty vague question, but if I'm reading it right, then you need even less information than that. If you have the transformation of the first coordinate system to the second, then apply that to each of the three points to find each of the 3 equivalent points.
Otherwise, if you don't have the transformation, I would think it's impossible in general: an infinite number of possible transformations of a coordinate system can produce the same two locations for two of the points yet different locations for the third. (If you know the motion is rigid, i.e. rotation plus translation, two point correspondences do pin it down, up to a reflection.)
Related
I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from the measurement of positions of a road. This road has straight sections as well as curve sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are, as well, transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this:
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, this is, the transition parameters as well as the circle's center and radius.
I have written some code using a nonlinear optimization algorithm, in which a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure:
However, I can't find a suitable way to take the transitions into account. After giving it some thought, I came to believe this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where the curve "begins" and "ends", and, moreover, it is even less clear where the entry transition, the circle proper, and the exit transition each begin and end.
Is there any work on this subject which I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there's no standard method to fit a clothoid-circle-clothoid sequence to a given set of points.
The basic facts are that two points define a straight line, and three points define a unique circle.
The clothoid is far more complex, because you need: the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (the angle with the X-axis) at that point.
Those are the five parameters you can use to find the solution.
Because clothoid co-ordinates are calculated from series expansions of the Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation), followed by a translation and rotation, there's no easy way to fit this spiral to a set of given points.
When I've had to deal with this issue, what I've done is:
Calculate the radius for each triplet of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc.
Observe the sequence of radii. Similar values mean a circle; steadily increasing or decreasing values mean a clothoid; very large values mean a straight section.
For each basic element (line, circle), find the most probable characteristics (angles, vertices, radius) by hand or by some regression method. Often common sense works best.
For a spiral, you can start with approximate values taken from the adjacent elements; these would typically be the initial angle and point, and the initial and final radii. Then iterate, playing with the Fresnel integrals and the change of co-ordinates, until you find a "good" parameter A. Then repeat with small variations of the other values you took from the adjacent elements.
Make whatever adjustments seem sensible. For example, many values (A, radii) tend to be integers, with no decimals, simply because that was easier for the designer to type.
A small applet that performs these steps is enough. Typical road-design software helps, but doesn't spare you the iteration process.
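The triplet-radius scan in the first step above can be sketched as follows (a minimal Python version using the circumradius formula R = abc/4K; real measurement data will need some smoothing before the sequence of radii is readable):

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three points (inf if collinear)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    # Twice the triangle's area, via the cross product.
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    if area2 == 0:
        return math.inf          # collinear => straight section
    # R = abc / (4K), and 4K == 2 * area2.
    return a * b * c / (2 * area2)

def radii(points):
    """Circumradius for each consecutive triplet p[i], p[i+1], p[i+2]."""
    return [circumradius(points[i], points[i + 1], points[i + 2])
            for i in range(len(points) - 2)]
```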
If the points are dense compared to the effective radii of curvature, estimate the local curvature by least-squares fitting a circle to a small number of points, taking into account that the curvature is zero most of the time.
You will obtain a plot of constant values connected by ramps. You can use an estimate of the slope at the inflection points to figure out the transition points.
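One common way to do that local least-squares circle fit is the algebraic (Kåsa) method, which is linear in the unknowns. A minimal sketch, assuming NumPy is available (the function name is mine):

```python
import math
import numpy as np

def fit_circle(points):
    """Least-squares (Kåsa) circle fit: returns (cx, cy, r).

    Fits x^2 + y^2 + D*x + E*y + F = 0, which is linear in D, E, F;
    the center is (-D/2, -E/2) and r^2 = cx^2 + cy^2 - F."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    return cx, cy, math.sqrt(cx ** 2 + cy ** 2 - F)
```

Applied to a sliding window of a handful of consecutive points, 1/r gives the local curvature estimate the answer describes.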
I am currently working on a project that involves measuring distances all around a robot with a laser module; the robot then has to move based on the points it gets.
I currently have access to 360 points that represent the distance from the center for each of the corresponding angles. (a distance for 0°, a distance for 1°, etc)
Here's an example of what the points look like when displayed on a 2D surface:
Circular representation of the points
What I'd like to be able to do is, rather than feeding the robot all 360 points, to feed it segments containing multiple points. For instance, the bottom part of the image would be a single segment even though the points are not completely aligned.
My question to you is, is there an existing algorithm that would help me achieve what I am trying to do?
(I'm working in python but that shouldn't really be a factor)
Thanks a lot.
Assuming your points are ordered:
For each point, look ahead by two points: if the middle point is less than some distance away from the segment between the two outer points, push your endpoint one point further, and check that both middle points are still within that distance of the line segment. Keep going until the check fails, then roll back one point and emit a segment, and make the end of that segment the start of the next one. You could also consider angles instead of just distances; there are cases where that is preferable. Finally, if no segment can be made from a certain start point after several attempts, push the start point forward by one (not everything is going to simplify into segments).
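That look-ahead procedure might be sketched roughly like this (a minimal Python version using perpendicular distance to the chord; it assumes the points are already ordered and converted to Cartesian, and it omits the angle test and the skip-ahead for unsimplifiable points):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    if length2 == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection onto the segment to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def grow_segments(points, tol):
    """Greedily split an ordered point list into near-collinear runs:
    extend the current segment while every interior point stays within
    `tol` of the chord from its start to its end.  Returns index pairs."""
    segments, start = [], 0
    while start < len(points) - 1:
        end = start + 1
        while end + 1 < len(points) and all(
                point_segment_dist(points[k], points[start], points[end + 1]) <= tol
                for k in range(start + 1, end + 1)):
            end += 1
        segments.append((start, end))
        start = end          # the end of one segment starts the next
    return segments
```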
Alternatively, you could convert the readings to Cartesian points and use the Hough transform to detect lines in the resulting point cloud.
Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js and using a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it requires either changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead positioning the camera to match the data. I've run into a problem with this solution, though: for some reason, with really large data the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values around my dataset, the problem went away. Since the dataset is user-dependent, I need an algorithm to generate these values dynamically.
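One way to derive those values dynamically is from the bounding sphere of the data: the near and far planes only need to bracket the sphere as seen from the camera. A sketch of the arithmetic in plain Python (the same calculation can be applied to a Three.js PerspectiveCamera's near/far before updating its projection matrix; the function name and min_near floor are my own choices):

```python
import math

def near_far(camera_pos, center, radius, min_near=0.1):
    """Choose near/far planes that tightly bracket a bounding sphere.

    Keeping far/near small concentrates depth-buffer precision on the
    data and reduces z-fighting between nearby particles."""
    d = math.dist(camera_pos, center)       # camera to sphere center
    near = max(min_near, d - radius)        # never at or behind the camera
    far = d + radius
    return near, far
```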
I noticed that in the sol#2 the flickering only occurs when the camera is moving. One possible reason can be that, when the camera position is changing rapidly, different transforms get applied to different particles. So if a camera moves from X to X + DELTAX during a time step, one set of particles get the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means that you should apply the same transform to all the particles and the edges connecting them, by locking (not updating ) the transform matrix until the rendering loop is done.
This question already has answers here:
An algorithm for inflating/deflating (offsetting, buffering) polygons
(15 answers)
Closed 8 years ago.
This is a question that appears to be easy, but I'm having a hard time getting it to work properly.
I have a (nonconvex) polygon defined by a list of vertices. I would like to create another polygon, where every point is shifted outward by a certain amount. I tried scaling the points and then shifting back to the original origin, but that didn't have the effect I want.
I would like for each point to be "outside" of the original point. But "outside" appears to be very difficult to compute, given only a list of points. Is there an easy way to do this?
It seems that you want an offset of the polygon, that is, the set of all points that are outside the polygon and whose distance to the polygon is some given number. Note that the boundary of the offset is not a polygon, however: at each convex vertex it contains a circular arc.
Perhaps you could just scale all vertices with respect to the centroid of the polygon.
I think you're right that inside and outside are hard to define as a global property. But with each component line segment individually, there is a clear definition of left and right (at least, within the context of traversing the path).
So, I think if you traverse your segments counter-clockwise and add segments offset to the right of the current segment, this may be close to what you want. Or traverse clockwise and add segments offset to the left. It may create degenerate shapes at concavities.
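Per segment, that offset is just a shift along the edge normal. A minimal Python sketch of the traversal idea (it produces one offset segment per edge and deliberately leaves out joining the segments at the corners, which is where the degenerate cases live; the function name is mine):

```python
import math

def offset_edges(vertices, dist):
    """Offset each edge of a polygon outward by `dist`.

    Assumes the vertices are in counter-clockwise order, so the outward
    normal of each edge points to the right of the direction of travel.
    Returns one offset segment per edge; joining them (miter or arc
    joins, and handling self-intersections at concavities) is the hard
    part of a full polygon offset."""
    n = len(vertices)
    segments = []
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        length = math.hypot(dx, dy)
        nx, ny = dy / length, -dx / length   # right-hand (outward) normal
        segments.append(((x1 + nx * dist, y1 + ny * dist),
                         (x2 + nx * dist, y2 + ny * dist)))
    return segments
```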
I know this is more high-school math (wow, it's been a long time since I was there), but I am trying to solve this programmatically, so I am reaching out to the collective knowledge of Stack Overflow.
Given this layout:
The midpoint is my reference point, and in an array I have the vector positions of all the other points (P).
I can get to the state shown, with the light blue area, by breaking the plane into four quadrants and doing a lame bubble sort to find the largest (y) or lowest (x) value in each quadrant.
I need to find only the quadrants whose outer border fully hits red, with no white space. For example, the lower-left and the upper-right don't have any white space touching the light blue rectangle.
I am sure my terminology is all off here, and I'm not looking for any specific code, but could someone point me to a more optimized solution for this problem, or to the next step beyond what I already have?
Thank you
I might do some BFI solution first, then perhaps look to generalize it, or at least reduce it to a table-driven loop.
So, if it's exactly these shapes, and not a general solution, I think you should proceed sort of like this:
1. Derive the coordinates of the blue rectangle. I suspect one thing that's confusing you is that you have each individual x and y for the blue rect, but you can't easily loop through them.
2. Derive the coordinates of the midpoint of each rectangle edge. You're going to need this because you care about quadrants. It will be trivial to do this once you have done step 1.
3. Write separate code for each half of each rectangle edge. There is no doubt a cleverer way, but this will get you working code.
4. Make it more elegant afterwards if you care. I bet you can reduce the rules to an 8-row table full of things like 1, -1, or something like that.
First, you can't define the red area by a single vector, since it's disjoint. You need as many vectors as there are distinct red regions.
Second, do we assume that different red figures neither intersect nor share a border? In the next point I do.
Third, under the assumption in point 2, a quadrant will have an entirely red outer side iff there exists a contiguous red figure that intersects both of its axes (i.e. rays). To determine this for all quadrants, you only need to traverse all the (P) points in the order they're given. This takes linear time and solves the problem.