How to move a vector towards another vector in Cocos Creator?

How can I move a Vec2 instance towards another Vec2? Let's say, move it halfway between the two positions. I'm looking for an equivalent method (preferably a built-in one) to the Unity engine's Vector2.MoveTowards.

That depends on what those vectors represent.
A Vec2 can represent either a point in two-dimensional space or a direction with a defined magnitude.
In the first case, you can move a point (pointA) towards another (pointB) by finding the vector between them and adding it to the first point:
Vector2 l_DiffVector = pointB - pointA;
Vector2 result = pointA + l_DiffVector/2f;
Note that I divide l_DiffVector by 2, so the result is the point halfway between pointA and pointB.
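In Cocos Creator itself (TypeScript), the same computation is only a couple of lines. Note that Unity's Vector2.MoveTowards moves by at most a fixed distance per call rather than by a fraction, but for "move halfway" a plain interpolation is enough. A minimal sketch, assuming the Vec2 type from the cc module in Cocos Creator 3.x (I believe recent versions also ship a Vec2.lerp helper that does essentially this; check your version's API docs):
    import { Vec2 } from 'cc';

    // Returns the point `ratio` of the way from `from` to `to`
    // (ratio = 0.5 gives the halfway point asked about above).
    function moveToward(from: Vec2, to: Vec2, ratio: number): Vec2 {
        return new Vec2(
            from.x + (to.x - from.x) * ratio,
            from.y + (to.y - from.y) * ratio,
        );
    }

    const pointA = new Vec2(0, 0);
    const pointB = new Vec2(10, 4);
    const halfway = moveToward(pointA, pointB, 0.5); // (5, 2)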
In the second case, where a Vec2 represents a direction, it does not make sense to move one towards another, because directions do not represent positions in space.
I hope that is clear enough; it's hard to explain with text alone. If you have any doubts, please ask again!
Good luck!

Related

How to create a first-person "space flight" camera

I'm currently attempting to create a first-person space flight camera.
First, allow me to define what I mean by that.
Notice that I am currently using Row-Major matrices in my math library (meaning, the basis vectors in my 4x4 matrices are laid out in rows, and the affine translation part is in the fourth row). Hopefully this helps clarify the order in which I multiply my matrices.
What I have so Far
So far, I have successfully implemented a simple first-person camera view. The code for this is as follows:
fn fps_camera(&mut self) -> beagle_math::Mat4 {
    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);

    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());
    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));

    translate_matrix.mul(&view_matrix)
}
This works as expected. I am able to walk around and look around with the mouse.
What I am Attempting to do
However, an obvious limitation of this implementation is that since pitch and yaw are always defined relative to a global "up" direction, the moment I pitch more than 90 degrees, so that the world is essentially upside-down, my yaw movement is inverted.
What I would like to attempt to implement is what could be seen more as a first-person "space flight" camera. That is, no matter what your current orientation is, pitching up and down with the mouse will always translate into up and down in the game, relative to your current orientation. And yawing left and right with your mouse will always translate into a left and right direction, relative to your current orientation.
Unfortunately, this problem has got me stuck for days now. Bear with me, as I am new to the field of linear algebra and matrix transformations. So I must be misunderstanding or overlooking something fundamental. What I've implemented so far might thus look... stupid and naive :) Probably because it is.
What I've Tried so far
The way I always end up thinking about this problem is to basically redefine the world's orientation every frame. That is, in a given frame you translate, pitch, and yaw the world coordinate space using your view matrix, and then somehow redefine this orientation as the new default, or zero rotation. In the next frame you can then apply new pitch and yaw rotations based on this new default orientation. By my thinking, mouse movement would then always translate directly into up, down, left, and right, no matter how you are oriented, because you are always redefining the world coordinate space relative to your previous orientation, as opposed to the simple first-person camera, which always starts from the same initial coordinate space.
The latest code I have which attempts to implement my idea is as follows:
fn space_camera(&mut self) -> beagle_math::Mat4 {
    let previous_pitch_matrix = beagle_math::Mat4::rotate_x(self.previous_pitch);
    let previous_yaw_matrix = beagle_math::Mat4::rotate_y(self.previous_yaw);
    let previous_view_matrix = previous_yaw_matrix.get_transposed().mul(&previous_pitch_matrix.get_transposed());

    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);
    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());

    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));

    // SAVES
    self.previous_pitch += self.pitch_in_radians;
    self.previous_yaw += self.yaw_in_radians;

    // RESETS
    self.pitch_in_radians = 0.0;
    self.yaw_in_radians = 0.0;

    translate_matrix.mul(&(previous_view_matrix.mul(&view_matrix)))
}
This, however, does nothing to solve the issue. It actually gives the exact same result and problem as the fps camera.
My thinking behind this code is basically: always keep track of an accumulated pitch and yaw (in the code, previous_pitch and previous_yaw) based on the deltas each frame. The deltas are pitch_in_radians and yaw_in_radians, and they are reset every frame.
I then start off by constructing a view matrix that represents how the world was oriented previously, previous_view_matrix. I then construct a new view matrix based on this frame's deltas, view_matrix.
I then attempt to build a view matrix that does the following:
1. Translate the world in the opposite direction of the camera's current position. Nothing is different here from the FPS camera.
2. Orient that world according to what my orientation has been so far (using previous_view_matrix). What I want this to represent is the default starting point for the deltas of my current frame's movement.
3. Apply the deltas of the current frame using the current view matrix, view_matrix.
My hope was that in step 3, the previous orientation would be seen as a starting point for a new rotation: if the world was upside-down in the previous orientation, the view_matrix would apply yaw in terms of the camera's "up", which would avoid the problem of inverted controls.
I must surely be either attacking the problem from the wrong angle, or misunderstanding essential parts of matrix multiplication with rotations.
Can anyone help pin-point where I'm going wrong?
[EDIT] - Rolling even when you only pitch and yaw the camera
For anyone just stumbling upon this, I fixed it by a combination of the marked answer and Locke's answer (ultimately, in the example given in my question, I also messed up the matrix multiplication order).
Additionally, when you get your camera right, you may stumble upon the odd side-effect that holding the camera stationary, and just pitching and yawing it about (such as moving your mouse around in a circle), will result in your world slowly rolling as well.
This is not a mistake, this is how rotations work in 3D. Kevin added a comment in his answer that explains it, and additionally, I also found this GameDev Stack Exchange answer explaining it in further detail.
The problem is that two numbers, pitch and yaw, provide insufficient degrees of freedom to represent consistent free rotation behavior in space without any “horizon”. Two numbers can represent a look-direction vector but they cannot represent the third component of camera orientation, called roll (rotation about the “depth” axis of the screen). As a consequence, no matter how you implement the controls, you will find that in some orientations the camera rolls strangely, because the effect of trying to do the math with this information is that every frame the roll is picked/reconstructed based on the pitch and yaw.
The minimal solution to this is to add a roll component to your camera state. However, this approach (“Euler angles”) is both tricky to compute with and has numerical stability issues (“gimbal lock”).
Instead, you should represent your camera/player orientation as a quaternion, a mathematical structure that is good for representing arbitrary rotations. Quaternions are used somewhat like rotation matrices, but have fewer components; you'll multiply quaternions by quaternions to apply player input, and convert quaternions to matrices to render with.
It is very common for general purpose game engines to use quaternions for describing objects' rotations. I haven't personally written quaternion camera code (yet!) but I'm sure the internet contains many examples and longer explanations you can work from.
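For illustration, a minimal sketch of that accumulate-and-convert loop with a hand-rolled quaternion (plain TypeScript, written from scratch here, so treat it as a starting point; it assumes the v' = q v q* convention, and the operand order in the final multiply may need swapping under other conventions):
    // Minimal quaternion: w + xi + yj + zk
    type Quat = { w: number; x: number; y: number; z: number };

    // Rotation of `angle` radians about a unit-length axis (ax, ay, az).
    function quatFromAxisAngle(ax: number, ay: number, az: number, angle: number): Quat {
        const h = angle / 2;
        const s = Math.sin(h);
        return { w: Math.cos(h), x: ax * s, y: ay * s, z: az * s };
    }

    // Hamilton product a * b.
    function quatMul(a: Quat, b: Quat): Quat {
        return {
            w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
        };
    }

    // Re-normalize occasionally so floating-point drift doesn't distort the rotation.
    function quatNormalize(q: Quat): Quat {
        const n = Math.hypot(q.w, q.x, q.y, q.z);
        return { w: q.w / n, x: q.x / n, y: q.y / n, z: q.z / n };
    }

    // Accumulated camera orientation; starts as the identity rotation.
    let orientation: Quat = { w: 1, x: 0, y: 0, z: 0 };

    // Per frame: turn this frame's mouse deltas into small rotations about the
    // camera's *local* axes and fold them into the accumulated orientation.
    // Multiplying the delta on the right applies it in local (camera) space.
    function applyMouseDeltas(pitchDelta: number, yawDelta: number): void {
        const dPitch = quatFromAxisAngle(1, 0, 0, pitchDelta); // local X axis
        const dYaw = quatFromAxisAngle(0, 1, 0, yawDelta);     // local Y axis
        orientation = quatNormalize(quatMul(orientation, quatMul(dYaw, dPitch)));
    }
To render, convert the accumulated quaternion to a rotation matrix, transpose it (the inverse of a pure rotation), and combine it with the negated camera position, just as the fps_camera in the question already does with its pitch and yaw matrices.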
It looks like a lot of the difficulty you are having is due to trying to normalize the transformation to apply the new translation. It seems like this is probably a large part of what is tripping you up. I would suggest changing how you store your position and rotation. Instead, try letting your view matrix define your position.
/// Apply rotation based on the change in mouse position
pub fn on_mouse_move(&mut self, dx: f32, dy: f32) {
    // I think this is correct, but it might need tweaking
    let rotation_matrix = Mat4::rotate_xy(-dy, dx);
    self.apply_movement(&rotation_matrix, &Vec3::zero())
}

/// Append axis-aligned movement relative to the camera and rotation
pub fn apply_movement(&mut self, rotation: &Mat4<f32>, translation: &Vec3<f32>) {
    // Create transformation matrix for translation
    let translation = Mat4::translate(translation);

    // Append translation and rotation to existing view matrix
    self.view_matrix = self.view_matrix * translation * rotation;
}

/// You can get the position from the last column [x, y, z, w] of your view matrix.
pub fn translation(&self) -> Vec3<f32> {
    self.view_matrix.column(3).into()
}
I made a couple assumptions about the library:
Mat4 implements Mul<Self> so you do not need to call x.mul(y) explicitly and can instead use x * y. Same goes for Sub.
There exists a Mat4::rotate_xy function. If there isn't one, it would be equivalent to Mat4::rotate_xyz(delta_pitch, delta_yaw, 0.0) or Mat4::rotate_x(delta_pitch) * Mat4::rotate_y(delta_yaw).
I'm somewhat eyeballing the equations so hopefully this is correct. The main idea is to take the delta from the previous inputs and create matrices from that which can then be added on top of the previous view_matrix. If you attempt to take the difference after creating transformation matrices it will only be more work for you (and your processor).
As a side note I see you are using self.position.mul(-1.0). This tells me that your projection matrix is probably backwards. You likely want to adjust your projection matrix by scaling it by a factor of -1 in the z axis.

Determining the direction of face normals consistently?

I'm a newbie to computer graphics so I apologize if some of my language is inexact or the question misses something basic.
Is it possible to calculate face normals correctly, given a list of vertices, and a list of faces like this:
v1: x_1, y_1, z_1
v2: x_2, y_2, z_2
...
v_n: x_n, y_n, z_n
f1: v1,v2,v3
f2: v4,v2,v5
...
f_m: v_j, v_k, v_l
Each x_i, y_i, z_i specifies the vertex's position in 3D space (but isn't necessarily a vector).
Each f_i contains the indices of the three vertices specifying it.
I understand that you can use the cross product of two sides of a face to get a normal, but the direction of that normal depends on the order and choice of sides (from what I understand).
Given that this is the only data I have, is it possible to correctly determine the direction of the normals? Or at least to determine them consistently (i.e. possibly with all normals pointing in the wrong direction)?
In general there is no way to assign normals "consistently" over a set of 3D faces... consider, as an example, the famous Möbius strip.
You will notice that if you start walking on it, after one loop you get to the same point but on the opposite side. In other words, this strip doesn't have two faces, but only one. If you build such a shape with a strip of triangles, there is of course no way to assign normals in a consistent way, and you will necessarily end up with two adjacent triangles whose normals point in opposite directions.
That said, if your collection of triangles is indeed orientable (i.e. a consistent normal assignment actually exists), a solution is to start from one triangle and then propagate to its neighbors as in a flood-fill algorithm. For example, in Python it would look something like:
# Assumed helpers: neighbors(tri) yields triangles sharing an edge with tri,
# agree(a, b) tests whether the shared edge is wound consistently in a and b,
# and flip(tri) reverses tri's winding (and hence its normal).
active = [triangles[0]]
oriented = set([triangles[0]])
while active:
    next_active = []
    for tri in active:
        for other in neighbors(tri):
            if other not in oriented:
                if not agree(tri, other):
                    flip(other)
                oriented.add(other)
                next_active.append(other)
    active = next_active
In CG this is done with the polygon winding rule. That means all the faces are defined so that their points are in CW (or CCW) order when the face is viewed head-on. Then using the cross product will lead to consistent normals.
However, many meshes out there do not comply with the winding rule (some faces are CW, others CCW), and for those it is a problem. There are two approaches I know of:
For simple shapes (not too concave):
the sign of the dot product of your face_normal and face_center - cube_center (the object's center) will tell you whether the normal points into or out of the object:
if ( dot( face_normal , face_center-cube_center ) >= 0.0 ) normal_points_out
You can even use any point of the face instead of the face center. For more complex concave shapes, however, this will not work correctly.
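As a concrete sketch of this first test (plain TypeScript with throwaway vector helpers; `center` would be, say, the average of all mesh vertices), assuming triangle faces:
    type Vec3 = { x: number; y: number; z: number };

    const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
    const cross = (a: Vec3, b: Vec3): Vec3 => ({
        x: a.y * b.z - a.z * b.y,
        y: a.z * b.x - a.x * b.z,
        z: a.x * b.y - a.y * b.x,
    });
    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

    // Face normal for triangle (v0, v1, v2), flipped so it points away from `center`.
    // Only reliable for roughly convex meshes, as noted above.
    function outwardNormal(v0: Vec3, v1: Vec3, v2: Vec3, center: Vec3): Vec3 {
        let n = cross(sub(v1, v0), sub(v2, v0));
        const faceCenter: Vec3 = {
            x: (v0.x + v1.x + v2.x) / 3,
            y: (v0.y + v1.y + v2.y) / 3,
            z: (v0.z + v1.z + v2.z) / 3,
        };
        if (dot(n, sub(faceCenter, center)) < 0) {
            n = { x: -n.x, y: -n.y, z: -n.z }; // pointing inward, so flip it
        }
        return n;
    }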
Test whether a point just above the face is inside the mesh:
simply displace the face center by some small distance (not too big) in the normal direction, and then test whether that point is inside the polygonal mesh:
if ( !inside( face_center+0.001*face_normal ) ) normal_points_out
To check whether a point is inside or not, you can use a hit test.
However, if the normal is used only for lighting computations, then it usually appears inside a dot product, so we can use its absolute value instead; that solves the lighting problems regardless of which side the normal points to. For example:
output_color = face_color * abs(dot(face_normal,light_direction))
Some graphics APIs have this implemented already (look for double-sided materials or normals; turning them on usually uses the abs value). For example, in OpenGL:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Optimizations for Raycasting

I've been wanting to build a 3D engine starting from scratch as a coding challenge with the end objective of implementing it on a fantasy console.
The best (i.e. simplest?) way I found was raytracing/raycasting. I haven't found much by looking online for raycasting algorithms, only point-in-polygon problems (which only tell me whether a ray intersects a polygon or not; not quite my interest, since I wouldn't have info about the first intersection, nor would I have the intersection points).
The only solution I could think of is brute-forcing the ray by moving it in small intervals and checking every time whether that point is occupied by something or not (which would require filled shapes and wouldn't let me have 2D shapes, since they would never be rendered, although neither of those is a problem). Still, it looks way too complex performance-wise.
As far as I know, most of those problems are solved using linear algebra, but I'm not quite as competent as to build up a solution on my own. Does this problem have a practical solution?
EDIT: I just found an algebraic solution in 2D which could maybe be extended to 3D. The idea is:
For each edge, check whether one of the two vertices is in the field of view (i.e. if O is the origin of every ray and P is the vertex, you first check that the point is within the far limit of sight, and then that the angle with the forward vector is less than the angle of vision). If at least one of the two vertices is inside the field of view, add the edge to an array E.
If we have an array R of rays to shoot and an array of arrays I of info about hit points, we can loop for each ray in R and for each edge in E and then store f(ray, edge) in I, where f is a function that gives us info on whether the ray and the edge collided and where they did.
f uses basic linear algebra: both the ray and the edge are, for all purposes, segments, and segments are just parts of lines. Say the edge has vertices A and B (AB is the vector from A to B), and the ray goes from the origin O to the far point P (OP is the vector from O to P). We can create two lines, r and s, defined by A + ηAB and O + λOP. After checking whether r and s are parallel (e.g. whether the absolute value of the dot product of AB and OP equals the norm of AB times the norm of OP), it's trivial to solve for η and λ (see the sketch below).
Now, if η < 0 or η > 1 (or λ < 0 or λ > 1), the two segments do not collide.
After we've done this for every ray and every edge, we compare every element in each array i in I to see which one had the lowest λ. The lowest λ carries the first collision and hence the data to show on screen.
Everything here is linear algebra, though I fear that it might still be computationally heavy, since there's a lot going on, and it's still only 2D.
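A hedged sketch of the f(ray, edge) function described in the edit above (plain TypeScript, using the usual 2D cross-product formulation, which handles the parallel check and solves for both parameters at once):
    type Vec2 = { x: number; y: number };

    const sub = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x - b.x, y: a.y - b.y });
    // 2D cross product (the z component of the 3D cross product).
    const cross = (a: Vec2, b: Vec2): number => a.x * b.y - a.y * b.x;

    // Intersection of the ray segment O->P with the edge A->B.
    // Returns { lambda, point }, where lambda is the fraction along O->P,
    // or null if the segments are parallel or do not meet.
    function intersectRayEdge(O: Vec2, P: Vec2, A: Vec2, B: Vec2) {
        const d = sub(P, O);                         // ray direction (up to the far point)
        const e = sub(B, A);                         // edge direction
        const denom = cross(d, e);
        if (Math.abs(denom) < 1e-9) return null;     // parallel
        const lambda = cross(sub(A, O), e) / denom;  // position along the ray
        const eta = cross(sub(A, O), d) / denom;     // position along the edge
        if (lambda < 0 || lambda > 1 || eta < 0 || eta > 1) return null;
        return { lambda, point: { x: O.x + lambda * d.x, y: O.y + lambda * d.y } };
    }
For each ray you would then keep the hit with the smallest λ, exactly as described in the last step above.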

RayTracing: When to Normalize a vector?

I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I seem to have down pat the issue regarding normals and how you should multiply them by the inverse of the transpose of a transformation matrix.
What I'm confused about is when I should be normalizing my direction vectors?
I'm following a certain book and sometimes it'll explicitly state to Normalize my vector and other cases it doesn't and I find out that I needed to.
A normalized vector points in the same direction but has unit length 1, right? So I'm unclear on when normalization is necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that a coordinate system is defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as they are linearly independent. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really -- It adds a magnitude computation and a division. Numbers between 0..1 are no different than numbers between 0..x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
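If you do go the flag route, here is a minimal sketch of the idea (my own illustration, not a standard library type); any operation that changes the length has to clear the flag, which is the price of the trick:
    // A vector that remembers whether it is already unit length, so callers that
    // need a normalized copy can skip the sqrt and divide when the work was done once.
    class TrackedVec3 {
        constructor(
            public x: number,
            public y: number,
            public z: number,
            public isUnit: boolean = false,
        ) {}

        normalized(): TrackedVec3 {
            if (this.isUnit) return this; // already unit length: nothing to do
            const len = Math.hypot(this.x, this.y, this.z);
            return new TrackedVec3(this.x / len, this.y / len, this.z / len, true);
        }
    }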
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
A normalized vector points in the same direction but has unit length 1, right? So I'm unclear on when normalization is necessary.
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always strictly necessary, to use a normalized vector v in that representation.
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = v · u
Admittedly, this is a very uninteresting reflection model, but it captures the high points of the discussion. A spot on a surface is bright if the reflection points towards the light and dim if not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), this evaluates to exactly cos(theta), which is what we want. However, if v is not of unit length, say because you didn't bother to normalize it after reflecting the vector in the ray model above, your lighting model now has a problem: spots on the surface that use a longer v will be much brighter than spots that use a shorter one.
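To make that concrete, a small sketch of the shading step with the normalization made explicit (plain TypeScript, illustrative names only):
    type Vec3 = { x: number; y: number; z: number };

    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
    const normalize = (v: Vec3): Vec3 => {
        const len = Math.hypot(v.x, v.y, v.z);
        return { x: v.x / len, y: v.y / len, z: v.z / len };
    };

    // L(p) = dot of the two unit vectors, clamped at zero. If v were left
    // un-normalized (say, after a reflection), the result would be scaled by
    // ||v|| and the spot would shade too bright or too dark.
    function lambert(v: Vec3, u: Vec3): number {
        return Math.max(0, dot(normalize(v), normalize(u)));
    }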
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe they are already unit length.
For example, when using an interpolated normal on a triangle. Common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter, unless they incidentally all point in the same direction. Which means that you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
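A quick numeric check of that claim (a throwaway TypeScript snippet): averaging two unit normals that are 60 degrees apart gives a vector of length cos(30°) ≈ 0.866, so shading with it un-normalized comes out roughly 13% too dark.
    type Vec3 = { x: number; y: number; z: number };
    const length = (v: Vec3): number => Math.hypot(v.x, v.y, v.z);

    // Two unit normals 60 degrees apart, as they might be at two triangle vertices.
    const n1: Vec3 = { x: 0, y: 1, z: 0 };
    const n2: Vec3 = { x: Math.sin(Math.PI / 3), y: Math.cos(Math.PI / 3), z: 0 };

    // Midpoint interpolation, as done between the two vertices during shading.
    const mid: Vec3 = { x: (n1.x + n2.x) / 2, y: (n1.y + n2.y) / 2, z: (n1.z + n2.z) / 2 };
    console.log(length(mid)); // ~0.866, not 1 -- the interpolated "normal" is too short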
Another example where a vector might or might not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors - the dot product then gives you the cosine of the angle between the vectors, which is really useful. Some equations can still cope but become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it - for example, if you have a plane equation (with a unit normal) you can find the ray-plane intersection without normalizing the ray direction vector; the distance will then be in units of the ray direction vector's length. This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you want to compare with a different distance - calculated using a normalized ray direction - the distance values will not compare properly.
You might think about normalizing a direction vector AFTER doing some work that does not require it - maybe you have an acceleration structure that can be traversed without a normalized vector. But that isn't relevant either because eventually the ray will hit something and you're going to want to do a lighting/shading calculation with it. So you may as well normalize them from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
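For example, here is a sketch of the ray-plane case mentioned above (plain TypeScript): the returned parameter t is measured in multiples of the direction's length, so two t values are only comparable if both directions have the same, ideally unit, length.
    type Vec3 = { x: number; y: number; z: number };
    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

    // Plane n·x = k with a unit normal n; ray origin o and direction d (not necessarily unit).
    // Returns t such that o + t*d lies on the plane, or null if the ray is parallel to it.
    // Note: t is in multiples of |d|, so it is a world-space distance only if d is normalized.
    function rayPlane(o: Vec3, d: Vec3, n: Vec3, k: number): number | null {
        const denom = dot(n, d);
        if (Math.abs(denom) < 1e-9) return null; // ray parallel to the plane
        return (k - dot(n, o)) / denom;
    }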
Vectors are used to store two conceptually different elements: points in space and directions:
If you are storing a point in space (for example the position of the camera, the origin of the ray, or the vertices of triangles), you don't want to normalize, because you would be modifying the value of the vector and losing the specific position.
If you are storing a direction (for example the camera's up vector, the ray direction, or an object normal), you want to normalize, because in this case you are interested not in a specific position but in the direction it represents, so you don't need the magnitude. Normalization is useful here because it simplifies some operations, such as calculating the cosine of the angle between two vectors, which can be done with a dot product if both are normalized.

Decomposition to Convex Polygons

This question is a little involved. I wrote an algorithm for breaking up a simple polygon into convex subpolygons, but now I'm having trouble proving that it's not optimal (i.e. minimal number of convex polygons using Steiner points (added vertices)). My prof is adamant that it can't be done with a greedy algorithm such as this one, but I can't think of a counterexample.
So, if anyone can prove my algorithm is suboptimal (or optimal), I would appreciate it.
The easiest way to explain my algorithm is with pictures (these are from an older, suboptimal version).
What my algorithm does, is extends the line segments around the point i across until it hits a point on the opposite edge.
If there is no vertex within this range, it creates a new one (the red point) and connects to that:
If there is one or more vertices in the range, it connects to the closest one. This usually produces a decomposition with the fewest number of convex polygons:
However, in some cases it can fail -- in the following figure, if it happens to connect the middle green line first, this will create an extra unneeded polygon. To address this, I propose double-checking all the edges (diagonals) we've added and verifying that they are all still necessary; if not, remove them:
In some cases, however, this is not enough. See this figure:
Replacing a-b and c-d with a-c would yield a better solution. In this scenario, though, there are no edges to remove, so this poses a problem. In this case I suggest an order of preference: when deciding which vertex to connect a reflex vertex to, the algorithm should choose the vertex with the highest priority:
lowest) closest vertex
med) closest reflex vertex
highest) closest reflex that is also in range when working backwards (hard to explain) --
In this figure, we can see that the reflex vertex 9 chose to connect to 12 (because it was closest), when it would have been better to connect to 5. Both vertices 5 and 12 are in the range defined by the extended line segments 10-9 and 8-9, but vertex 5 should be given preference because 9 is within the range given by 4-5 and 6-5, but NOT in the range given by 13-12 and 11-12. I.e., the edge 9-12 eliminates the reflex vertex at 9 but does NOT eliminate the reflex vertex at 12, whereas connecting to 5 CAN also eliminate the reflex vertex at 5, so 5 should be given preference.
It is possible that the edge 5-12 will still exist with this modified version, but it can be removed during post-processing.
Are there any cases I've missed?
Pseudo-code (requested by John Feminella) -- this is missing the bits under Figures 3 and 5
assume vertices in `poly` are given in CCW order
let 'good reflex' (better term??) mean that if poly[i] is being compared with poly[j], then poly[i] is in the range given by the rays poly[j-1], poly[j] and poly[j+1], poly[j]

for each vertex poly[i]
    if poly[i] is reflex
        find the closest point of intersection given by the ray starting at poly[i-1] and extending in the direction of poly[i] (call this the lower bound)
        repeat for the ray given by poly[i+1], poly[i] (call this the upper bound)
        if there are no vertices along the boundary of the polygon in the range given by the upper and lower bounds
            create a new vertex exactly halfway between the lower and upper bound points (lower and upper will lie on the same edge)
            connect poly[i] to this new point
        else
            iterate along the vertices in the range given by the lower and upper bounds; for each vertex poly[j]
                if poly[j] is a 'good reflex'
                    if no other good reflexes have been found
                        save it (overwrite any other vertex found)
                    else
                        if it is closer than the other good reflex vertices, save it
                else
                    if no good reflexes have been found and it is closer than the other vertices found, save it
            connect poly[i] to the best candidate
        repeat entire algorithm for both halves of the polygon that was just split

// no reflex vertices found, then `poly` is convex
save poly
Turns out there is one more case I didn't anticipate: [Figure 5]
My algorithm will attempt to connect vertex 1 to 4, unless I add another check to make sure it can. So I propose stuffing everything "in the range" onto a priority queue using the priority scheme I mentioned above, then take the highest priority one, check if it can connect, if not, pop it off and use the next. I think this makes my algorithm O(r n log n) if I optimize it right.
I've put together a website that loosely describes my findings. I tend to move stuff around, so get it while it's hot.
I believe the regular five pointed star (e.g. with alternating points having collinear segments) is the counterexample you seek.
Edit in response to comments
In light of my revised understanding, a revised answer: try an acute five pointed star (e.g. one with arms sufficiently narrow that only the three points comprising the arm opposite the reflex point you are working on are within the range considered "good reflex points"). At least working through it on paper it appears to give more than the optimal. However, a final reading of your code has me wondering: what do you mean by "closest" (i.e. closest to what)?
Note
Even though my answer was accepted, it isn't the counterexample we initially thought. As @Mark points out in the comments, it goes from four to five at exactly the same time as the optimal does.
Flip-flop, flip flop
On further reflection, I think I was right after all. The optimal bound of four can be retained in an acute star by simply ensuring that one pair of arms has collinear edges. But the algorithm finds five, even with the patch-up.
I get this:
[image removed: dead ImageShack link]
When the optimal is this:
[image removed: dead ImageShack link]
I think your algorithm cannot be optimal because it makes no use of any measure of optimality. You use other metrics like 'closest' vertices, and checking for 'necessary' diagonals.
To drive a wedge between yours and an optimal algorithm, we need to exploit that gap by looking for shapes with close vertices which would decompose badly. For example (ignore the lines, I found this on the intertubenet):
concave polygon which forms a G or U shape http://avocado-cad.wiki.sourceforge.net/space/showimage/2007-03-19_-_convexize.png
You have no protection against the centre-most point being connected across the concave 'gap', which is external to the polygon.
Your algorithm is also quite complex, and may be overdoing it - just like complex code, you may find bugs in it because complex code makes complex assumptions.
Consider a more extensive initial stage to break the shape into more, simpler shapes - like triangles - and then an iterative or genetic algorithm to recombine them. You will need a stage like this to combine any unnecessary divisions between your convex polys anyway, and by then you may have limited your possible decompositions to only sub-optimal solutions.
At a guess, something like:
1. decompose into triangles
2. non-deterministically generate a number of recombinations
3. calculate a quality metric (number of polys)
4. select the best x% of the recombinations
5. partially decompose each using triangles, and generate a new set of recombinations
6. repeat from 4 until some measure of convergence is reached
but vertex 5 should be given preference because 9 is within the range given by 4-5 and 6-5
What would you do if 4-5 and 6-5 were even more convex so that 9 didn't lie within their range? Then by your rules the proper thing to do would be to connect 9 to 12 because 12 is the closest reflex vertex, which would be suboptimal.
Found it :( They're actually quite obvious.
[image removed: dead ImageShack link]
A four leaf clover will not be optimal if Steiner points are allowed... the red vertices could have been connected.
[image removed: dead ImageShack link]
It won't even be optimal without Steiner points... 5 could be connected to 14, removing the need for 3-14, 3-12 AND 5-12. This could have been two polygons better! Ouch!
