I am new to cocos2d-x and I want to get the sprite size. But there are two functions that do the work: `getContentSize` and `getBoundingBox`. What is the difference between these functions? Should I always use `getBoundingBox`?
contentSize refers to the size of the content (i.e. the texture size), whereas boundingBox also takes into account that the node may be rotated, scaled, or skewed.
The bounding box is axis-aligned, meaning it forms the rectangle that encloses all 4 corners of the node even when it is rotated, scaled, or skewed, and thus it may be larger than contentSize if any of these properties has been modified.
However, for collision detection of rotated, scaled, or skewed nodes, the bounding box only provides an "early out" test: if the bounding box rectangles do not intersect, there cannot be any collision at a more accurate level either. If the axis-aligned bounding box intersection test passes, you usually go on to perform, for example, an oriented bounding box intersection test, or a collision mask or polygon intersection test.
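To make the difference concrete, here is a minimal sketch (assuming the cocos2d-x v3 C++ API; the sprite file name is a placeholder):

    auto sprite = cocos2d::Sprite::create("sprite.png"); // placeholder asset
    sprite->setScale(2.0f);
    sprite->setRotation(45.0f);

    cocos2d::Size content = sprite->getContentSize(); // raw texture size: unaffected by scale/rotation
    cocos2d::Rect box     = sprite->getBoundingBox(); // axis-aligned rect in parent space: grows with both

    CCLOG("content size: %.0f x %.0f", content.width, content.height);
    CCLOG("bounding box: %.0f x %.0f", box.size.width, box.size.height);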
I am trying my hand at writing a 3d graphics engine, but I am having some trouble with drawing the shapes in the correct order.
When I translate the points of triangles into window space, i.e. the 2-dimensional space that directly correlates to position on the screen, in addition to an x and y position of each point, I also assign them a depth variable that stores how far away from the viewer that point was in 3d space.
At the moment, the only shapes I am rendering are triangles. My current render order algorithm sorts the triangles by the average depth of their 3 points. I knew when I started it that it would not be perfect, but I wanted a placeholder for testing.
For testing purposes, I constructed a square box with an open top, each side being a different color and made from 2 triangles, as shown below:
As you can see from the image above, the algorithm I am using works most of the time. However, at certain angles and positions, the triangles will be rendered in the wrong order, as shown below:
As you can see, one of the cyan triangles on the bottom of the box is being drawn before one of the yellow triangles on the side. Clearly, sorting the triangles by the average depth of their points is not satisfactory.
Is there a better method of ordering shapes so that they are rendered in the correct order?
The standard method to draw 3D in correct depth order is to use a Z-buffer.
Basically, the idea is that for each pixel you set in the color buffer, you also set its interpolated depth in the z (depth) buffer. Whenever you're about to paint the next pixel, you first check that z-buffer to see whether the new pixel is in front of the already painted one.
On top of that you can add various sorts of optimizations, such as sorting triangles in order to minimize the number of times you actually paint the color buffer.
On the other hand, it's sometimes required to do the exact opposite in order to properly handle transparency or other "advanced" effects.
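To illustrate the basic mechanism, here is a minimal sketch of a z-buffered render target (plain C++; the triangle rasteriser that produces x, y, an interpolated z, and a color for each pixel is assumed to exist elsewhere):

    #include <cstdint>
    #include <limits>
    #include <vector>

    struct ZBufferTarget
    {
        int width, height;
        std::vector<uint32_t> color;
        std::vector<float>    depth;

        ZBufferTarget(int w, int h)
            : width(w), height(h),
              color(w * h, 0),
              depth(w * h, std::numeric_limits<float>::infinity()) {}

        // Call for every pixel the triangle rasteriser produces; z is the
        // fragment's interpolated depth (smaller = nearer in this convention).
        void plot(int x, int y, float z, uint32_t rgba)
        {
            int i = y * width + x;
            if (z < depth[i])   // new fragment is in front of what's already there
            {
                depth[i] = z;
                color[i] = rgba;
            }
        }
    };

With this in place, triangles can be rasterised in any order; the per-pixel depth test resolves exactly the cases that the average-depth sort gets wrong.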
I was just wondering if someone knows of any papers or resources on generating synthetic images of growth rings in trees. I'm thinking 2D scalar fields or some other data representation which can then be used to render growth-ring-like images :)
Thanks!
I've never done or heard about this before...
If you need a real simulation then search biology/botany sites instead.
If you need just visually plausible results then I would:
make a polygon covering the cut (a circle/oval-like shape)
start with a circle and, once everything works, add some random distortion or use an ellipse
create a 1D texture with the density
it will be used to fill the polygon via a triangle fan. So first find an image of the tree type you want to generate, for example this:
Analyze the color and intensity as a function of radius, i.e. extract a pie-like slice (or a thin rectangle)
and plot a graph of the R,G,B values to see how the rings are shaped,
then create a function that approximates that (or use piecewise interpolation) and create your own texture as a function of tree age. You can interpolate both the color and the density of rings this way.
My example shows that for this tree the color stays the same and only its intensity changes, so in this case you do not need to approximate all 3 channels. The bumps are a bit noisy due to another texture layer (ignore this at first). You can use:
intensity=A*|cos(pi*t)| as a start
A is brightness
t is age in years/cycles (and also the x coordinate (scaled) in your 1D texture)
So take the base color R,G,B, multiply it by A for each t, and fill the texture pixel with this color. You can add some randomness to the ring period (pi*t), and the scale can also be matched more closely. This assumes linear growth, so you can use an exponential instead, or interpolate so the bump spacing changes with age (distance from t=0)...
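As a rough sketch of that texture-building step (C++; the struct, function name, and texel density are made-up example choices):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct RGB { uint8_t r, g, b; };

    // Build a 1D ring texture covering `years` growth cycles,
    // using intensity = A*|cos(pi*t)| with t = age in years.
    std::vector<RGB> makeRingTexture(int years, int texelsPerYear, RGB base)
    {
        const double pi = 3.14159265358979;
        std::vector<RGB> tex(years * texelsPerYear);
        for (std::size_t i = 0; i < tex.size(); ++i)
        {
            double t = double(i) / texelsPerYear;    // age at this texel
            double A = std::fabs(std::cos(pi * t));  // one light/dark cycle per year
            tex[i] = { uint8_t(base.r * A), uint8_t(base.g * A), uint8_t(base.b * A) };
        }
        return tex;
    }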
Now just render the polygon.
The midpoint is the t=0 coordinate in the texture; each vertex of the polygon is the t=full_age coordinate. So render the triangle fan with these texture coordinates. If you need a closer match (the rings are not the same thickness along the perimeter) then you can convert this to a 2D texture.
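For the 1D case, a minimal sketch in legacy immediate-mode OpenGL (assuming a GL 1.x context with the ring texture above already uploaded and bound; `cx`, `cy`, and `pts` are made-up names for the polygon midpoint and boundary vertices):

    glBegin(GL_TRIANGLE_FAN);
    glTexCoord1f(0.0f);                 // midpoint -> t = 0 (tree centre)
    glVertex2f(cx, cy);
    for (const auto& p : pts)
    {
        glTexCoord1f(1.0f);             // boundary -> t = full_age (end of texture)
        glVertex2f(p.x, p.y);
    }
    glTexCoord1f(1.0f);                 // repeat the first boundary vertex to close the fan
    glVertex2f(pts[0].x, pts[0].y);
    glEnd();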
[Notes]
You can also do this incrementally, one ring per iteration. The next ring polygon is the last one enlarged, i.e. scaled by scale > 1 with some added randomness, but this needs to be rendered as a QUAD STRIP. You can keep a static texture for a single ring and interpolate just the density and overall brightness:
radius(i)=radius(i-1)+ring_width=radius(i-1)*scale
so:
scale=(radius(i-1)+ring_width)/radius(i-1)
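For example, with radius(i-1) = 50 and ring_width = 5 that gives scale = (50 + 5)/50 = 1.1, i.e. that iteration's ring polygon is the previous one scaled up by 10% (plus whatever randomness you add); the scale shrinks towards 1 as the radius grows, which matches rings of constant width.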
In my Android mapping activity, I have a parallelogram-shaped area and I want to tell whether points (i.e. LatLng) are inside it. I've tried using the:
bounds = new LatLngBounds.Builder()
        .include(latlngNW)
        .include(latlngNE)
        .include(latlngSW)
        .include(latlngSE)
        .build();
and later
if (bounds.contains(currentLatLng)) {
.....
}
but it is not that accurate. Do I need to create equations for lines connecting the four corners?
Thanks in advance.
LatLngBounds appears to create an axis-aligned box from the included points. Given that the shape I'm trying to monitor is a parallelogram, you do need to create equations for each of the edges of the shape and use if statements to determine which side of each line a point lies on.
Not an easy solution!
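It amounts to four cross-product sign checks, though. A minimal sketch (C++, treating longitude as x and latitude as y, which is fine at the scale of a parallelogram on a map; the struct and function names are made up, and this ports directly to Java using LatLng's latitude/longitude fields):

    struct Pt { double x, y; }; // x = longitude, y = latitude

    // > 0 if c is left of the directed line a -> b, < 0 if right, 0 if on it.
    static double side(Pt a, Pt b, Pt c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // Corners a, b, c, d must be in order around the shape (either winding).
    static bool insideParallelogram(Pt p, Pt a, Pt b, Pt c, Pt d)
    {
        double s1 = side(a, b, p), s2 = side(b, c, p),
               s3 = side(c, d, p), s4 = side(d, a, p);
        bool allLeft  = s1 >= 0 && s2 >= 0 && s3 >= 0 && s4 >= 0;
        bool allRight = s1 <= 0 && s2 <= 0 && s3 <= 0 && s4 <= 0;
        return allLeft || allRight; // inside iff p is on the same side of all four edges
    }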
If you wish to build a parallelogram-shaped bounding "box" from a collection of points, and you know the desired angles of the parallelogram's sides, your best bet is probably to define a 2D linear shear transform which maps one of those angles to horizontal and the other to vertical. One may then feed the transformed points into normal "bounding box" routines, and feed the corners of the resulting box through the inverse of the above transform to get a bounding parallelogram.
Note that this approach is generally only suitable for parallelograms, not trapezoids. There are a few special cases where it could be used to find bounding trapezoids [e.g. if the top and bottom were horizontal, and the sides were supposed to converge at a known point (x0, y0), one could map x' = (x-x0)/(y-y0)], but for many kinds of trapezoids, the trapezoid formed by inverse-mapping the corners of a horizontal/vertical bounding rectangle may not properly bound the points that are supposed to be within it.
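A sketch of the parallelogram case (all names are made up): with the two side directions u and v known, express each point in the (u, v) basis via a 2x2 matrix inverse, take an ordinary min/max bounding box there, and map the box corners back:

    #include <algorithm>
    #include <vector>

    struct Vec2 { double x, y; };

    // Express p in the basis (u, v): solve p = a*u + b*v via the 2x2 inverse.
    Vec2 toBasis(Vec2 p, Vec2 u, Vec2 v)
    {
        double det = u.x * v.y - u.y * v.x;      // assumed non-zero (u, v not parallel)
        return { ( p.x * v.y - p.y * v.x) / det,
                 (-p.x * u.y + p.y * u.x) / det };
    }

    Vec2 fromBasis(Vec2 q, Vec2 u, Vec2 v)       // inverse map: a*u + b*v
    {
        return { q.x * u.x + q.y * v.x, q.x * u.y + q.y * v.y };
    }

    // Bounding parallelogram with sides along u and v; returns its 4 corners.
    std::vector<Vec2> boundingParallelogram(const std::vector<Vec2>& pts, Vec2 u, Vec2 v)
    {
        double amin = 1e300, amax = -1e300, bmin = 1e300, bmax = -1e300;
        for (Vec2 p : pts)
        {
            Vec2 q = toBasis(p, u, v);
            amin = std::min(amin, q.x); amax = std::max(amax, q.x);
            bmin = std::min(bmin, q.y); bmax = std::max(bmax, q.y);
        }
        return { fromBasis({amin, bmin}, u, v), fromBasis({amax, bmin}, u, v),
                 fromBasis({amax, bmax}, u, v), fromBasis({amin, bmax}, u, v) };
    }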
I'm trying to implement a graphics pipeline in software. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place? In eye coordinates, clip coordinates, or window coordinates? I initially did the culling in eye coordinates, thinking this would relieve the burden on the clipping stage since many back-facing vertices would already have been discarded. But later I realized that this way each vertex needs 2 matrix multiplications, namely: left-multiply by the model-view matrix --> cull --> left-multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct the triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words homogeneous coordinates, where each vertex is tested by comparing its x, y, z components with its w component to decide whether or not it should be discarded. So far so good, right? But after that I need to reconstruct those triangles which have had one or two vertices discarded. I googled and found that the Liang-Barsky algorithm would be helpful in this case, but in clip coordinates what clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any idea will be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On the 3dfx hardware, and probably on other rasterisation-only cards, it was implemented in window coordinates. As you say, that leaves you processing some vertices you never use, but you need to weigh that against your other costs.
You can also cull in world coordinates; you know the location of the camera, so you know a vector from the camera to the face (just use any of its vertices), and you can test the dot product of that vector against the face normal.
When I was implementing a software rasteriser for a z80-based micro I went a step beyond that and transformed the camera into model space. So you get the inverse of the model matrix (which was cheap in this case because they were guaranteed to be orthonormal, so the transpose would do), apply that to the camera and then cull from there. It's still a vector difference and a dot product but if you're using the surface normals only for culling then it saves having to transform each and every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from which faces are visible to determine which vertices are visible and transform only those to window coordinates.
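A minimal sketch of that dot-product test (C++; names made up; v is any vertex of the face and n its outward normal, both expressed in whichever space you cull in, world or model):

    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    // True if the face points away from the camera and can be skipped.
    bool backFacing(Vec3 camPos, Vec3 v, Vec3 n)
    {
        Vec3 toFace = sub(v, camPos);   // vector from the camera to the face
        return dot(toFace, n) >= 0.0;   // facing away (or edge-on)
    }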
(2)
A variant on Sutherland-Hodgman polygon clipping (not to be confused with Cohen-Sutherland line clipping) is the thing I remember seeing most often. You do a forward scan around the outside of the polygon, checking each edge in turn and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like:
for (Vn in input vertices)
{
    if (Vn is on the good side of the plane)
        add Vn to output vertices
    if (edge from Vn to Vn+1 intersects the plane)   // or from Vn to V0 if this is the last edge
    {
        find the point of intersection, I
        add I to output vertices
    }
}
And repeat for each plane. If you're worried about repeated costs then you either need to adopt a structure with an extra level of indirection between faces and edges or just keep a cache. You'd probably do something like dash round the vertices once marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection then store the result in the edge object.
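For reference, a runnable version of one clipping pass (C++ sketch; the plane is given as four coefficients such that the dot product with (x, y, z, w) being >= 0 is the "good" side; in clip space the six planes are w±x, w±y, w±z, e.g. {1, 0, 0, 1} for x >= -w):

    #include <vector>

    struct V4 { double x, y, z, w; };

    // Signed distance of v from the plane (positive = keep side).
    static double dist(const V4& v, const double plane[4])
    {
        return plane[0]*v.x + plane[1]*v.y + plane[2]*v.z + plane[3]*v.w;
    }

    static V4 lerp(const V4& a, const V4& b, double t)
    {
        return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
                 a.z + t*(b.z - a.z), a.w + t*(b.w - a.w) };
    }

    // One pass of polygon clipping against a single plane.
    std::vector<V4> clipAgainstPlane(const std::vector<V4>& in, const double plane[4])
    {
        std::vector<V4> out;
        for (std::size_t i = 0; i < in.size(); ++i)
        {
            const V4& a = in[i];
            const V4& b = in[(i + 1) % in.size()]; // wraps: last edge goes back to vertex 0
            double da = dist(a, plane), db = dist(b, plane);
            if (da >= 0) out.push_back(a);         // a is on the good side
            if ((da >= 0) != (db >= 0))            // edge crosses the plane
                out.push_back(lerp(a, b, da / (da - db)));
        }
        return out;
    }

Running the polygon through this once per plane implements the scan above; the wrap-around index replaces the "or from Vn to V0" special case.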
I need to create a (large) set of spatial polygons for test purposes. Is there an algorithm that will create a randomly shaped polygon staying within a bounding envelope? I'm using OGC Simple Features, so a routine to create the well-known text (WKT) is the most useful. Language of choice is C#, but it's not that important.
Here you can find two examples of how to generate random convex polygons. Both are in Java, but they should be easy to rewrite in C#:
Generate Polygon example from Sun
from JTS mailing list, post Minimum Area bounding box by Michael Bedward
Another possible approach is based on generating a set of random points and employing Delaunay tessellation.
In general, the problem of generating proper random polygons is not trivial.
Do they really need to be random, or would some real WKT do? Because if it would, just go to http://koordinates.com/ and download a few layers.
What shape is your bounding envelope? If it's a rectangle, then generate your random polygon as a list of points within [0,1]x[0,1] and scale it to the size of your rectangle.
If the envelope is not a rectangle things get a little more tricky. In this case you might get the best performance simply by generating points inside the unit square and rejecting any which lie in the part of the unit square that falls outside your bounding envelope once scaled.
Supplement
If you wanted only convex polygons you'd use one of the convex hull algorithms. Since you don't seem to want only convex polygons, your suggestion of a circular sweep would work.
But you might find it simpler to sweep along a line parallel to either the x- or y-axis. Assume the x-axis.
Sort the points into x-order.
Select the leftmost (i.e. first) point. At the y-coordinate of this point, draw an imaginary horizontal line across the unit square. Prepare to create a list of points along the boundary of the polygon above the imaginary line, and another list along the boundary below it.
Select the next point. Add it to the upper or lower boundary list as determined by its y-coordinate.
Continue until you're out of points.
This will generate convex and non-convex polygons, but the non-convexity will be of a fairly limited form. No inlets or twists and turns.
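A sketch of that sweep (C++; points are drawn uniformly in the unit square, and the function name is made up):

    #include <algorithm>
    #include <random>
    #include <vector>

    struct P2 { double x, y; };

    // Random simple (x-monotone) polygon with n vertices in the unit square.
    std::vector<P2> randomMonotonePolygon(int n, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<P2> pts(n);
        for (P2& p : pts) p = { u(rng), u(rng) };

        std::sort(pts.begin(), pts.end(),
                  [](const P2& a, const P2& b) { return a.x < b.x; });

        double y0 = pts.front().y;            // horizontal line through the leftmost point
        std::vector<P2> upper{ pts.front() }, lower;
        for (std::size_t i = 1; i < pts.size(); ++i)
            (pts[i].y >= y0 ? upper : lower).push_back(pts[i]);

        // Walk the upper chain left-to-right, then the lower chain right-to-left.
        std::vector<P2> poly = upper;
        poly.insert(poly.end(), lower.rbegin(), lower.rend());
        return poly;
    }

Because both chains are x-monotone and confined to opposite sides of the imaginary line, the resulting boundary cannot self-intersect.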
Another Thought
To avoid edge crossings, and to avoid a circular sweep after generating your random points inside the unit square, you could:
Generate random points inside the unit circle in polar coordinates, i.e. (r, theta).
Sort the points in theta order.
Transform to cartesian coordinates.
Scale the unit circle to a bounding ellipse of your choice.
Off the top of my head, that seems to work OK.
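In code, it's only a few lines (C++ sketch; sorting by theta guarantees the edges can't cross, because consecutive vertices are angularly adjacent around the centre):

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Pt2 { double x, y; };

    // Random star-shaped polygon inside the unit circle: random (r, theta),
    // sorted by theta, then converted to cartesian coordinates.
    std::vector<Pt2> randomStarPolygon(int n, std::mt19937& rng)
    {
        const double pi = 3.14159265358979;
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        std::vector<std::pair<double, double>> polar(n);   // (theta, r)
        for (auto& p : polar)
            p = { 2.0 * pi * u01(rng), std::sqrt(u01(rng)) }; // sqrt -> uniform by area

        std::sort(polar.begin(), polar.end());             // sort by theta

        std::vector<Pt2> poly(n);
        for (int i = 0; i < n; ++i)
            poly[i] = { polar[i].second * std::cos(polar[i].first),
                        polar[i].second * std::sin(polar[i].first) };
        return poly;
    }

Scaling the resulting points to your bounding ellipse is then just a per-coordinate multiply.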