Bezier curve through three points - geometry

I have read similar topics in order to find a solution, but with no success.
What I'm trying to do is build the same tool that can be found in CorelDraw, named "Pen Tool". I did it by connecting cubic Bezier curves, but I'm still missing one feature: dragging the curve itself (not a control point) in order to edit its shape.
I can successfully determine the "t" parameter at the point on the curve where the dragging begins, but I don't know how to recalculate the control points of that curve.
Here I want to highlight some things related to CorelDraw's Pen Tool behaviour that may be used as constraints. I've noticed that when dragging the curve strictly vertically or horizontally, the control points of that Bezier curve behave accordingly, i.e. they move along their verticals or horizontals, respectively.
So, how can I recalculate the positions of the control points while dragging the curve?

I've just looked into the Inkscape sources and found this code; maybe it will help you:
// Magic Bezier Drag Equations follow!
// "weight" describes how the influence of the drag should be distributed
// among the handles; 0 = front handle only, 1 = back handle only.
double weight, t = _t;
if (t <= 1.0 / 6.0) weight = 0;
else if (t <= 0.5) weight = (pow((6 * t - 1) / 2.0, 3)) / 2;
else if (t <= 5.0 / 6.0) weight = (1 - pow((6 * (1-t) - 1) / 2.0, 3)) / 2 + 0.5;
else weight = 1;
Geom::Point delta = new_pos - position();
Geom::Point offset0 = ((1-weight)/(3*t*(1-t)*(1-t))) * delta;
Geom::Point offset1 = (weight/(3*t*t*(1-t))) * delta;
first->front()->move(first->front()->position() + offset0);
second->back()->move(second->back()->position() + offset1);
In your case, "first->front()" and "second->back()" would be the two inner control points of the curve.
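Here is a minimal self-contained sketch of the same idea outside of Inkscape's Geom library (the Point struct and the function name dragCurve are my own, and t is assumed to lie strictly inside (0, 1) so the divisions are safe):

#include <cmath>

struct Point { double x, y; };
Point operator+(Point a, Point b) { return {a.x + b.x, a.y + b.y}; }
Point operator-(Point a, Point b) { return {a.x - b.x, a.y - b.y}; }
Point operator*(double s, Point p) { return {s * p.x, s * p.y}; }

// P[0..3] are the control points of the cubic, t is the grabbed parameter,
// oldPos = B(t) before the drag, newPos = the current mouse position.
void dragCurve(Point P[4], double t, Point oldPos, Point newPos) {
    double weight;
    if (t <= 1.0 / 6.0)      weight = 0;
    else if (t <= 0.5)       weight = std::pow((6 * t - 1) / 2.0, 3) / 2;
    else if (t <= 5.0 / 6.0) weight = (1 - std::pow((6 * (1 - t) - 1) / 2.0, 3)) / 2 + 0.5;
    else                     weight = 1;

    Point delta = newPos - oldPos;
    // Split delta between the two inner control points so that the point
    // B(t) on the curve moves exactly by delta.
    P[1] = P[1] + ((1 - weight) / (3 * t * (1 - t) * (1 - t))) * delta;
    P[2] = P[2] + (weight / (3 * t * t * (1 - t))) * delta;
}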

The Bezier curve is nothing more than two polynomials: X(t), Y(t).
The cubic one:
x = ax*t^3 + bx*t^2 + cx*t + dx
y = ay*t^3 + by*t^2 + cy*t + dy
0 <= t <= 1
So if you have a curve, you have the polynomial coefficients. If you move your point and you know its t parameter, then you can simply recalculate the polynomial's coefficients: it will be a system of 6 linear equations for the coefficients (for each of the points). The system subdivides into two systems (x and y) and can be solved exactly or using numerical methods; these are not hard either.
So your task now is to calculate the control points of your curve when you know the explicit equation of your curve.
This can also be reduced to a linear system. I don't know how to do it for a generalized Bezier curve, but it is not hard for cubic or quadratic curves.
The cubic curve via control points:
B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
All you have to do is produce the standard polynomial form (just expand the brackets) and equate the coefficients. That will provide the final system for the control points!
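For the cubic case that system is triangular, so it can be solved in closed form. A minimal sketch for one axis, assuming polynomial coefficients a, b, c, d as above (the function name is mine; run it once for x and once for y):

// Expand B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3 and
// equate it with a*t^3 + b*t^2 + c*t + d; solving for the control points gives:
void coefficientsToControlPoints(double a, double b, double c, double d,
                                 double& P0, double& P1, double& P2, double& P3)
{
    P0 = d;                            // the constant term
    P1 = d + c / 3.0;                  // from c = 3*(P1 - P0)
    P2 = d + 2.0 * c / 3.0 + b / 3.0;  // from b = 3*P0 - 6*P1 + 3*P2
    P3 = d + c + b + a;                // from a = -P0 + 3*P1 - 3*P2 + P3
}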

When you click on the curve, you already know the position of the current control point. So you can calculate the X and Y offsets from that point to the mouse position. On mouse move, you can then recalculate the new control point with the help of those X/Y offsets.
Sorry for my English.

Related

Does Piecewise Bezier curve pass vertical line test?

Consider a piecewise cubic Bezier curve with N segments, defined in terms of 4N control points. How can I determine whether this curve passes the Vertical Line Test? That is: do there exist points x,y1,y2 such that y1!=y2 and both (x,y1) and (x,y2) lie on the curve? Additionally, it would be nice but not essential to return the values of the points x,y1,y2 if such points exist.
In principle, one could just evaluate the curve explicitly at a large number of points, but in my application the curves can have a large number of segments so this is prohibitively slow. Thus I am looking for an algorithm that operates on the control points only, and does not rely on explicitly evaluating the curves at a large number of points.
Closed curves will fail your test by definition, so let's look at open curves: for a curve to fail your vertical line test, at least one segment in your piecewise cubic polyBezier needs to "reverse direction" along the x-axis, which means its x component needs to have extrema in the inclusive interval [0,1].
(The original answer illustrates these cases with figures that are not reproduced here. Each figure shows a curve alongside its x component function, with global extrema in red and inflections in purple: a curve whose x component does not reverse direction in [0,1] passes the test, one curve passes it only just, curves whose x component reverses direction fail (once or even twice), and a curve "just past its cusp" fails as well, which is easier to see when the cusp is exaggerated. Also note that the figures only draw the [0,1] interval; if we were to extend it, the global extrema wouldn't actually be at t=0 and t=1. This is in fact critically important, and we'll see why below.)
This means we need to find the first derivative, find out where it's zero, and then make sure that (because Beziers can have cusps) the sign of that derivative flips at that point (or points, as there can be two).
Turns out, the derivative is trivially not-even-really-computed, so for each segment we have:
w = [x1, x2, x3, x4]
Bx(t) = w[0] * (1-t)³ + 3 * w[1] * t(1-t)² + 3 * w[2] * t²(1-t) + w[3] * t³
// bezier form of the derivative:
v = [
  3 * (w[1] - w[0]),
  3 * (w[2] - w[1]),
  3 * (w[3] - w[2])
]
Bx'(t) = v[0] * (1-t)² + 2 * v[1] * t(1-t) + v[2] * t²
Which we can trivially rewrite to polynomial form:
a = v[0] - 2*v[1] + v[2]
b = 2 * (v[1] - v[0])
c = v[0]
Bx'(t) = a * t² + b * t + c
So we find the roots for that, which is a matter of applying the quadratic formula, which gives us 0, 1, or 2 roots.
if (a == 0) the derivative is linear: its only root is t = -c / b (no roots if b == 0)
denominator = 2 * a
discriminant = b * b - 4 * a * c
if (discriminant < 0) there are no (real) roots
d = sqrt(discriminant)
t1 = -(b + d) / denominator
if (0 ≤ t1 ≤ 1) then t1 is a valid root
t2 = (d - b) / denominator
if (0 ≤ t2 ≤ 1) then t2 is a valid root
If there are no (real) roots, then this segment does not cause the curve to fail your vertical line test, and we move on to the next segment and repeat until we've either found a failure, or we've run out of segments to test.
If it's 1 or 2 roots, we check that the signs of Bx'(t-ε) and Bx'(t+ε) for some very small value of ε differ (because we want to make sure we don't conclude that the above curve with a cusp fails the test: it has a zero derivative but it "keeps going in the same direction" across that root instead of flipping direction). If they do, this segment is (one of) the segment(s) that makes your curve fail your vertical line test.
Also, note that we're testing even if the root is at 0 or 1: the piecewise curve might inflect "across segments", and we can take advantage of the fact that we can evaluate Bx'(t) for t = -ε or t = 1+ε to see if we flip direction, even if we never draw the curve prior to t=0 or after t=1.
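Putting this together, a compact sketch of the per-segment check might look like the following in C++ (the function name and the ε value are my own choices, not code from the original answer):

#include <cmath>
#include <vector>

// True if the segment with x control values x1..x4 reverses direction
// along the x axis, i.e. makes the piecewise curve fail the test.
bool segmentReversesX(double x1, double x2, double x3, double x4) {
    // bezier form of the derivative, then polynomial form
    double v0 = 3 * (x2 - x1), v1 = 3 * (x3 - x2), v2 = 3 * (x4 - x3);
    double a = v0 - 2 * v1 + v2, b = 2 * (v1 - v0), c = v0;

    auto dx = [&](double t) { return a * t * t + b * t + c; };
    auto flips = [&](double t) {              // does the sign change across t?
        const double eps = 1e-6;
        return dx(t - eps) * dx(t + eps) < 0; // cusps keep the same sign
    };

    std::vector<double> roots;
    if (a == 0) {                             // derivative is linear
        if (b != 0) roots.push_back(-c / b);
    } else {
        double disc = b * b - 4 * a * c;
        if (disc >= 0) {
            double d = std::sqrt(disc);
            roots.push_back((-b - d) / (2 * a));
            roots.push_back((-b + d) / (2 * a));
        }
    }
    for (double t : roots)
        if (t >= 0 && t <= 1 && flips(t)) return true;
    return false;
}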
Leaving a solution here for reference. It's not especially elegant, but it gets the job done. Assume that the curve is parametrized from left to right. The key observation is that the curve intersects a vertical line at two points if and only if there is a point at which the derivative x'(t) of the x component is strictly negative. For a cubic Bezier curve the derivative is a quadratic polynomial, x'(t) = at^2 + bt + c. So we just need to check whether the minimum of this quadratic over the interval 0 <= t <= 1 is negative, which is straightforward (if a bit tedious) using basic algebra.
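A sketch of that minimum check, assuming the quadratic coefficients a, b, c have already been computed as in the previous answer (the function name is mine):

// True if x'(t) = a*t^2 + b*t + c dips below zero somewhere on [0, 1].
bool xDerivativeGoesNegative(double a, double b, double c) {
    if (c < 0) return true;              // endpoint t = 0: x'(0) = c
    if (a + b + c < 0) return true;      // endpoint t = 1: x'(1) = a + b + c
    if (a > 0) {                         // the interior vertex is a minimum
        double t = -b / (2 * a);
        if (t > 0 && t < 1 && c - b * b / (4 * a) < 0) return true;
    }
    return false;                        // for a <= 0 the endpoints suffice
}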

What is the focal length and image plane distance from this raytracing formula

I have a 4x4 camera matrix comprised of right, up, forward and position vectors.
I raytrace the scene with the following code, which I found in a tutorial but don't entirely understand:
for (int i = 0; i < m_imageSize.width; ++i)
{
    for (int j = 0; j < m_imageSize.height; ++j)
    {
        u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
        v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
        Ray ray(cameraPosition, normalize(u * cameraRight + v * cameraUp + 1 / tanf(m_verticalFovAngleRadian) * cameraForward));
I have a couple of questions:
How can I find the focal length of my raytracing camera?
Where is my image plane?
Why does cameraForward need to be multiplied by 1 / tanf(m_verticalFovAngleRadian)?
Focal length is a property of lens systems. The camera model that this code uses, however, is a pinhole camera, which does not use lenses at all. So, strictly speaking, the camera does not really have a focal length. The corresponding optical properties are instead expressed as the field of view (the angle that the camera can observe; usually the vertical one). You could calculate the focal length of a camera that has an equivalent field of view with the following formula (see Wikipedia):
FOV = 2 * arctan(x / (2*f))
where FOV is the diagonal field of view, x is the diagonal of the film (by convention 24x36 mm, so x = 43.266 mm), and f is the focal length.
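As a worked example (the numbers are my own, not from the question), a 60° diagonal field of view on conventional 24x36 mm film corresponds to roughly a 37 mm lens:

#include <cmath>

double fovRadians   = 60.0 * M_PI / 180.0;  // diagonal field of view
double filmDiagonal = 43.266;               // mm, diagonal of 24x36 mm film
double f = filmDiagonal / (2.0 * std::tan(fovRadians / 2.0));  // ≈ 37.5 mm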
There is no unique image plane. Any plane that is perpendicular to the view direction can be seen as the image plane. In fact, the projected images differ only in their scale.
For your last question, let's take a closer look at the code:
u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
These formulas calculate u/v coordinates between -0.5 and 0.5 for every pixel, assuming that the entire image fits in the box between -0.5 and 0.5.
u*cameraRight + v*cameraUp
... is just placing the x/y coordinates of the ray on the pixel.
... + 1 / tanf(m_verticalFovAngleRadian) *cameraForward
... is defining the depth component of the ray and ultimately the depth of the image plane you are using. Basically, this is making the ray steeper or shallower. Assume that you have a very small field of view, then 1/tan(fov) is a very large number. So, the image plane is very far away, which produces exactly this small field of view (when keeping the size of the image plane constant since you already set the x/y components). On the other hand, if the field of view is large, the image plane moves closer. Note that this notion of image plane is only conceptual. As I said, all other image planes are equally valid and would produce the same image. Another way (and maybe a more intuitive one) to specify the ray would be
u * tanf(m_verticalFovAngleRadian) * cameraRight
  + v * tanf(m_verticalFovAngleRadian) * cameraUp
  + 1 * cameraForward
As you see, this is exactly the same ray (just scaled). The idea here is to set the conceptual image plane to a depth of 1 and scale the x/y components to adapt the size of the image plane. tan(fov) (with fov being the half field of view) is exactly the size of the half image plane at a depth of 1. Just draw a triangle to verify that. Note that this code is only able to produce square image planes. If you want to allow rectangular ones, you need to take into account the ratio of the side lengths.
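A sketch of the rectangular variant, reusing the names from the question's code and assuming (as above) that m_verticalFovAngleRadian holds the half vertical field of view:

float aspect = (float)m_imageSize.width / (float)m_imageSize.height;
float halfH  = tanf(m_verticalFovAngleRadian);  // half the image-plane height at depth 1
float halfW  = aspect * halfH;                  // half the image-plane width at depth 1

// u and v still run from -0.5 to 0.5, so scale them by the full plane size
Ray ray(cameraPosition, normalize(2 * u * halfW * cameraRight +
                                  2 * v * halfH * cameraUp +
                                  cameraForward));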

How to draw the normal to the plane in PCL

I have the plane equation describing the points belonging to a plane in 3D, and the origin of the normal at X, Y, Z. This should be enough to be able to generate something like a 3D arrow. In PCL this is possible via the viewer, but I would like to actually store those 3D points inside the cloud. How can I generate them, then? A cylinder with a cone on top?
To generate a line perpendicular to the plane:
You have the plane equation. This gives you the direction of the normal to the plane. If you used PCL to get the plane, this is in ModelCoefficients. See the details here: SampleConsensusModelPerpendicularPlane
The first step is to make a line along the normal, starting at the point you mention, (X, Y, Z). Let (NORMAL_X, NORMAL_Y, NORMAL_Z) be the normal you got from your plane equation. Something like:
pcl::PointXYZ pnt_on_line;
for (double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT) {
    pnt_on_line.x = X + distfromstart * NORMAL_X;
    pnt_on_line.y = Y + distfromstart * NORMAL_Y;
    pnt_on_line.z = Z + distfromstart * NORMAL_Z;
    my_cloud.points.push_back(pnt_on_line);
}
Now you want to put a hat on your arrow, and pnt_on_line conveniently contains the end of the line, exactly where you want to put it. To make the cone you could loop over angle and distance along the arrow, calculate a local x, y and z from those, and convert them to points in point cloud space: the z part is converted into your point cloud's frame of reference by multiplying with the normal vector as above, while the x and y parts are multiplied by vectors perpendicular to this normal vector. To get these, choose an arbitrary unit vector perpendicular to the normal vector (for your x axis) and take its cross product with the normal vector to find the y axis.
The second part of this explanation is fairly terse but the first part may be the more important.
Update
So possibly the best way to describe how to do the cone is to start with a cylinder, which is an extension of the line described above. In the case of the line, there is (part of) a one-dimensional manifold embedded in 3D space; that is, we have one variable that we loop over, adding points. The cylinder is a two-dimensional object, so we have to loop over two dimensions: the angle and the distance. In the case of the line we already have the distance. So the above loop would now look like:
for (double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT) {
    for (double angle = 0.0; angle < 2 * M_PI; angle += M_PI / 8) {
        // calculate coordinates of point and add to cloud
    }
}
Now, in order to calculate the coordinates of the new point: we already have the point on the line, so we just need to add a vector to it that moves the point away from the line in the direction given by the angle. Let's say the radius of our cylinder will be 0.1, and let perpendicular_1 and perpendicular_2 be an orthonormal basis perpendicular to the normal of the plane, which we will see how to calculate later (that is, two vectors of length 1, perpendicular to each other and to the vector (NORMAL_X, NORMAL_Y, NORMAL_Z)):
// calculate coordinates of point and add to cloud
pnt_on_cylinder.x = pnt_on_line.x + perpendicular_1.x * 0.1 * cos(angle) + perpendicular_2.x * 0.1 * sin(angle);
pnt_on_cylinder.y = pnt_on_line.y + perpendicular_1.y * 0.1 * cos(angle) + perpendicular_2.y * 0.1 * sin(angle);
pnt_on_cylinder.z = pnt_on_line.z + perpendicular_1.z * 0.1 * cos(angle) + perpendicular_2.z * 0.1 * sin(angle);
my_cloud.points.push_back(pnt_on_cylinder);
Actually, this is a vector summation; if we were to write the operation with vectors it would look like:
pnt_on_cylinder = pnt_on_line + 0.1 * (perpendicular_1 * cos(angle) + perpendicular_2 * sin(angle))
Now I said I would talk about how to calculate perpendicular_1 and perpendicular_2. Let K be any unit vector that is not parallel to (NORMAL_X,NORMAL_Y,NORMAL_Z) (this can be found by trying e.g. (1,0,0) then (0,1,0)).
Then
perpendicular_1 = normalize(K × N)
perpendicular_2 = normalize(perpendicular_1 × N)
where N = (NORMAL_X, NORMAL_Y, NORMAL_Z), × is the vector cross product, and the results are normalized so that the basis really is orthonormal. Note also that the original calculation of pnt_on_line involved a scalar multiplication and a vector summation (I am just writing this for completeness of the exposition).
If you can manage this, then the cone is easy: just change a couple of things in the double loop. The radius shrinks along the arrow's length until it is zero at the end of the loop, and distfromstart no longer starts at 0.
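A sketch of that double loop, using Eigen types for the vector math (the constants, names and Eigen usage are my own choices; n is the unit plane normal and p1, p2 are the perpendicular_1/perpendicular_2 basis from above):

#include <cmath>
#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void addArrowHead(pcl::PointCloud<pcl::PointXYZ>& cloud,
                  const Eigen::Vector3f& origin,  // (X, Y, Z)
                  const Eigen::Vector3f& n,       // unit plane normal
                  const Eigen::Vector3f& p1,
                  const Eigen::Vector3f& p2)
{
    const float LINE_LENGTH = 1.0f;  // the cone tip sits at the line's end
    const float CONE_START  = 0.8f;  // distfromstart no longer starts at 0
    const float CONE_RADIUS = 0.1f;
    for (float d = CONE_START; d < LINE_LENGTH; d += 0.01f) {
        // the radius tapers linearly to zero at the tip
        float r = CONE_RADIUS * (LINE_LENGTH - d) / (LINE_LENGTH - CONE_START);
        Eigen::Vector3f onLine = origin + d * n;
        for (float angle = 0.0f; angle < 2 * M_PI; angle += M_PI / 8) {
            Eigen::Vector3f p = onLine + r * (std::cos(angle) * p1 +
                                              std::sin(angle) * p2);
            cloud.points.push_back(pcl::PointXYZ(p.x(), p.y(), p.z()));
        }
    }
}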

Issues with bullet entry points for "shoulder mounted" guns

I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center-based registration point: 0,0 is the center of the ship.
When I fire the main cannon, I make a bullet, set its x and y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions, fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list too, but with the x, y values offset from the ship's position (by ±10 and +15). This creates a sort of triangle of bullet entry points.
Similar to this:
   ▲
▲   ▲
The game tick function adjusts the avatar's rotation to always point at the cursor; this is my aiming method. When I shoot straight up, all 3 bullets fire up in the expected pattern. However, when I rotate and face right, the entry points do not rotate with the ship. This is not an issue for the center-point main cannon.
My question is: how do I use the current center position (this.x, this.y) and adjust it based on my current rotation to place a new bullet so that it is angled correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180) );
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi / 180) );
this.x += (_xSpeed * _bulletSpeed );
this.y += (_ySpeed * _bulletSpeed );
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship and sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices unless you already have a matrix library handy, in which case you should use it: calculate the ship's coordinate transformation matrix once per frame and then use it for all bullets etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described transformations in general. In the particular case of the ship bullets, it works out to this in particular:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
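To make that concrete, here is a small self-contained sketch of the position formulas (in C++ rather than ActionScript; Vec2 and gunToWorld are my own names):

#include <cmath>

struct Vec2 { double x, y; };

// Rotate the gun's offset (given in ship coordinates) by the ship's angle,
// then translate by the ship's world position.
Vec2 gunToWorld(Vec2 shipPos, double shipAngle, Vec2 gunOffset) {
    double c = std::cos(shipAngle), s = std::sin(shipAngle);
    return { shipPos.x + c * gunOffset.x - s * gunOffset.y,
             shipPos.y + s * gunOffset.x + c * gunOffset.y };
}

// usage: left shoulder cannon at offset (10, 15)
// Vec2 muzzle = gunToWorld({shipX, shipY}, shipAngleRadians, {10, 15});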
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that — just multiply the offset of the gun position from the ship center by sin/cos and only then add that to the ship position. Also, you are using x x; y y on the two lines, where you should be using x y; x y. Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).

centroid of contour/object in opencv in c?

Is there some good (and better) way to find the centroid of a contour in OpenCV, without using the built-in functions?
While Sonaten's answer is perfectly correct, there is a simple way to do it: use the dedicated OpenCV function for that, moments()
http://opencv.itseez.com/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=moments#moments
It returns not only the centroid, but some more statistics about your shape as well. And you can send it a contour or a raster shape (binary image), whichever best fits your need.
EDIT
Example (modified) from "Learning OpenCV" by Gary Bradski:
CvMoments moments;
double M00, M01, M10;

// compute the spatial moments of the contour
cvMoments(contour, &moments);
M00 = cvGetSpatialMoment(&moments, 0, 0);  // area
M10 = cvGetSpatialMoment(&moments, 1, 0);  // first-order moment in x
M01 = cvGetSpatialMoment(&moments, 0, 1);  // first-order moment in y

// centroid = (M10/M00, M01/M00)
centers[i].x = (int)(M10 / M00);
centers[i].y = (int)(M01 / M00);
What you get in your current piece of code is of course the centroid of your bounding box.
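For completeness, the same thing with the OpenCV 2.x C++ API looks roughly like this (assuming contour is a std::vector<cv::Point>):

#include <opencv2/imgproc/imgproc.hpp>

cv::Moments m = cv::moments(contour);
cv::Point2f centroid((float)(m.m10 / m.m00),
                     (float)(m.m01 / m.m00));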
"If you have a bunch of points(2d vectors), you should be able to get the centroid by averaging those points: create a point to add all the other points' positions into and then divide the components of that point with accumulated positions by the total number of points." - George Profenza mentions
This is indeed the right approach for the exact centroid of any given object in two-dimensional space.
On wikipedia we have some general forms for finding the centroid of an object.
http://en.wikipedia.org/wiki/Centroid
Personally, I would ask myself what I need from this program. Do I want a thorough but performance-heavy operation, or do I want to make some approximations? I might even be able to find an OpenCV function that deals with this correctly and efficiently.
I don't have a working example, so I'm writing this in pseudocode, starting with the thorough method on a simple 5-pixel example.
x_centroid = (pixel1_x + pixel2_x + pixel3_x + pixel4_x +pixel5_x)/5
y_centroid = (pixel1_y + pixel2_y + pixel3_y + pixel4_y +pixel5_y)/5
centroidPoint(x_centroid, y_centroid)
The same, looped over j pixels:
for (int i = 0; i < j; i++)
{
    x_centroid = pixel[i]_x + x_centroid
    y_centroid = pixel[i]_y + y_centroid
}
x_centroid = x_centroid/j
y_centroid = y_centroid/j
centroidPoint(x_centroid, y_centroid)
Essentially, you have the contours in a vector of the type
vector<vector<Point>>
in OpenCV 2.3. I believe you have something similar in earlier versions, and you should be able to go through each blob in your picture with the first index of this "double vector", and go through each pixel in the inner vector.
Here is a link to documentation on the contour function
http://opencv.itseez.com/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contours#cv.DrawContours
note: you've tagged your question as c++ and visual. I'd suggest that you use the C++ syntax in OpenCV 2.3 instead of the C one. The first and good reason to use 2.3 is that it is more class-based, which in this case means that the class Mat (unlike IplImage) does not leak memory. One does not have to write destroy commands all the live long day :)
I hope this sheds some light on your problem. Enjoy.
I've used Joseph O'Rourke's excellent polygon centroid algorithm to great success.
See http://maven.smith.edu/~orourke/Code/centroid.c
Essentially:
1. For each point in the contour, find the signed triangle area from the current polygon point to the next two polygon points, e.g. ((X1 - X0) * (Y2 - Y0) - (X2 - X0) * (Y1 - Y0)) / 2. (Keep the sign rather than taking the absolute value, so that non-convex contours are handled correctly.)
2. Add this triangle area to a list, TriAreas.
3. Sum the triangle areas and store the total in SumT.
4. Find the centroid CTx and CTy of the current triangle: CTx = (X0 + X1 + X2) / 3 and CTy = (Y0 + Y1 + Y2) / 3.
5. Store these two centroid values in two other lists, CTxs and CTys.
6. Finally, after performing this with all points in the contour, find the contour's centroid x and y using the two lists from step 5: a weighted sum of the triangle centroids, weighted by each triangle's share of the total area:
for (Int32 Index = 0; Index < CTxs.Count; Index++)
{
CentroidPointRet.X += CTxs[Index] * (TriAreas[Index] / SumT);
}
// now find centroid Y value
for (Int32 Index = 0; Index < CTys.Count; Index++)
{
CentroidPointRet.Y += CTys[Index] * (TriAreas[Index] / SumT);
}
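For reference, a compact self-contained C++ version of the same triangle-fan idea (my own names; it assumes a simple closed contour given as an ordered list of points):

#include <vector>

struct Pt { double x, y; };

Pt polygonCentroid(const std::vector<Pt>& poly) {
    double sumA = 0, cx = 0, cy = 0;
    const Pt& p0 = poly[0];  // fan all triangles out from the first vertex
    for (size_t i = 1; i + 1 < poly.size(); ++i) {
        const Pt& p1 = poly[i];
        const Pt& p2 = poly[i + 1];
        // signed area of triangle (p0, p1, p2)
        double A = ((p1.x - p0.x) * (p2.y - p0.y) -
                    (p2.x - p0.x) * (p1.y - p0.y)) / 2.0;
        sumA += A;
        cx += A * (p0.x + p1.x + p2.x) / 3.0;  // centroid weighted by area
        cy += A * (p0.y + p1.y + p2.y) / 3.0;
    }
    return { cx / sumA, cy / sumA };
}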
