Collision detection & resolution: circle in a playfield of other circles and polygons - geometry

I am working on a game that has a player sprite surrounded by a collision circle of a known radius. The player sprite can move about a playfield that consists of other sprites with their own collision circles, plus obstacles made up of polygons; specifically, the obstacles are rectangles rotated 45 degrees.
In addition, I want the player to adjust its movement when it does collide. I want the player to try to "push through" past the object instead of being stopped by it.
For example, if the player were to collide with another sprite's bounding circle, it would be stopped only if its movement vector were exactly perpendicular to the common tangent at the point of contact.
If not perfectly perpendicular, the player would be pushed along the tangent, slowly at first and then faster, until it can continue past the circle unimpeded.
This works similarly when encountering one of the 45 degree rectangles.
What I need help with is the following: I am trying to find an analytic solution that detects both other sprites and obstacles, adjusts the player's movement accordingly, and possibly stops it when it becomes wedged between two or more objects.
I can do the collision detection and deflection for one object type at a time, but am struggling to put everything together into a comprehensive algorithm. I am currently working on an iterative pairwise resolution approach that "tries" different locations to arrive at a best-guess solution, but I really want a mathematically analytic solution. I'm hoping for a function something like the one in this pseudocode.
x = [player's x location]
y = [player's y location]
r = [player's collision radius]
// Array of other sprites on the playfield,
spr = [other sprites array]
// each of which contains 3 parameters: x, y, r. E.g., spr[3].x or spr[3].r,
// for the x position or collision radius for the fourth sprite in the
// array.
// Array of 45 degree rectangles on the playfield,
rect = [array of rectangles]
// each of which contains 4 parameters: x1, y1, x2, y2, the two opposite corners
// of the rectangle. E.g., rect[0].x1, for the x position of the first
// point of the first rectangle.
// For simplicity, assume the above variables are all directly accessible
// in the function below.
// requestX and requestY are the position to which the player would
// like to move the player sprite.
define function collisionAdjustor(requestX, requestY) {
// Here I'd like to adjust the requested position if needed because
// of an intersection with one or more other sprites or rectangles.
// Finally return the location at which the player will actually be
// arriving.
return destinationX, destinationY
}
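For concreteness, here is a minimal sketch of the single circle-vs-circle adjustment I have in mind (C++-style; the Vec2/Circle types and the resolveCircle name are purely illustrative):

#include <cmath>

// Minimal illustrative sketch: push the requested position out of a single
// sprite's circle along the contact normal.
struct Vec2 { double x, y; };
struct Circle { double x, y, r; };

Vec2 resolveCircle(Vec2 request, double playerR, const Circle& c)
{
    double dx = request.x - c.x;
    double dy = request.y - c.y;
    double dist = std::sqrt(dx*dx + dy*dy);
    double minDist = playerR + c.r;
    if (dist >= minDist || dist == 0.0)
        return request;                      // no overlap (or exactly centered)
    double push = (minDist - dist) / dist;   // scale to land on the boundary
    request.x += dx * push;                  // move out along the contact normal;
    request.y += dy * push;                  // the tangential component is preserved
    return request;
}

Pushing the requested point back onto the circle's boundary like this keeps the tangential part of the motion, which gives the "slide along the tangent" behaviour; the part I cannot get right analytically is doing this against several circles and rotated rectangles at once.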
Any advice or suggestions would be much appreciated.
--Richard

Related

Why does the projection of an image over 3d points show this distortion?

I have a question regarding the projection of an image over a set of 3D points. The image is given to me as a JPG, together with position and attitude information of the camera relative to a cartesian coordinate system (Xc,Yc,Zc and yaw, pitch, roll), as well as the horizontal and vertical field of view (in degrees).
Points are given using solely their 3d position in the same coordinate system (Xp,Yp,Zp).
In my coordinate system, Z is up. To project the image onto the points, I
compute the vector from camera to each point
Vector3 c2p = (Xp,Yp,Zp)-(Xc,Yc,Zc);
rotate c2p according to my camera's attitude (quaternion):
Vector3 c2pCamFrame = getCamQuaternion().conjugate().rotate(c2p);
compute azimuth and elevation from the camera's "center ray" to the point:
float azimuth = atan2(c2pCamFrame.x(), c2pCamFrame.y());
float elevation = atan2(c2pCamFrame.z(),sqrt(pow(c2pCamFrame.x(),2)+pow(c2pCamFrame.y(),2)));
if azimuth and elevation are within the field of view, I assign the color of the corresponding pixel to the point.
This works almost perfectly, and the "almost" motivates my question. Let me show you:
I cannot figure out why the elevation of the projection is distorted. In the bottom right of the image, you can see that points outside the frustum (exceeding the elevation) actually become colored. This distortion is zero at an azimuth of 0 degrees and peaks at the left and right edges of the image, creating a pincushion distortion.
Why does this distortion appear? I'd love to understand this problem both in geometrical as well as mathematical terms. Thank you!
The field of view angles are only valid on the principal axes: the top and bottom edges of the image correspond to planes through the camera, whereas a constant-elevation test describes a cone, and the two only coincide at azimuth 0, which is why the error grows toward the left and right edges. But you can do it the other way around, i.e. calculate the x/y bounds on the image plane from the angles:
maxX = tan(horizontal_fov / 2)
maxY = tan(vertical_fov / 2)
And check
if(abs(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX
&& abs(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY)
Additionally, you might want to check if the points are in front of the camera:
... && c2pCamFrame.z() > 0
This assumes a left-handed coordinate system.
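Putting it together, a minimal sketch of the corrected test (this follows the question's convention that +y is the camera's forward axis, so the divisions use y; with a z-forward camera divide by z instead; all names here are illustrative):

#include <cmath>

struct Vec3 { double x, y, z; };

// Illustrative: map a camera-frame point to normalized image coordinates and
// test it against the tangent-based bounds instead of per-axis angles.
bool projectToPixel(const Vec3& c2pCamFrame,
                    double hfovRad, double vfovRad,
                    int imageW, int imageH,
                    int& px, int& py)
{
    if (c2pCamFrame.y <= 0.0) return false;     // behind the camera

    double maxX = std::tan(hfovRad * 0.5);      // half-width of the image plane at y = 1
    double maxY = std::tan(vfovRad * 0.5);      // half-height of the image plane at y = 1

    double u = c2pCamFrame.x / c2pCamFrame.y;   // normalized image coordinates
    double v = c2pCamFrame.z / c2pCamFrame.y;
    if (std::fabs(u) > maxX || std::fabs(v) > maxY) return false;

    // map [-maxX,maxX] x [-maxY,maxY] to pixel coordinates (origin at top-left)
    px = (int)((u / maxX * 0.5 + 0.5) * (imageW - 1));
    py = (int)((0.5 - v / maxY * 0.5) * (imageH - 1));
    return true;
}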

Graphics: Creating a 3D cylinder

I have a problem with creating 3D cylinders (without OpenGL). I understand that a mesh is used to create the cylinder surface and triangle fans are used to create the top and bottom caps. I have already implemented the mesh but not the planar triangle fans, so currently my 3D object looks like a cylinder without the bottom and top cap.
I believe this is what I need to do in order to create the bottom and top caps. First, find the center point of the cap. Second, find the mesh vertices along the cap's rim. Third, using the center point and two adjacent rim vertices, create a triangle. Fourth, repeat the steps until the planar circle is complete.
Are the above steps a sufficient way of creating the caps or is there a better way? And how do I find the vertices of the mesh so I can create the triangle fans?
First some notes:
you did not specify your platform, gfx interface, or language
there is not enough info about your cylinder either:
is it axis aligned?
what coordinate system (Cartesian/orthogonal/orthonormal)?
do you need additional dimensions like color or texture coordinates?
So I can only provide generic info.
Axis aligned cylinder
choose the granularity N
the number of points along your cap's circle
usually 20-36 is OK, but if you need higher precision you sometimes need 1000 points or more
it all depends on the purpose, zoom, view angle and distance ...
and on performance constraints
for now let N=32
you need a BR (boundary representation) model
you did not specify a gfx interface, but your text implies a BR model (surface polygons)
there is also no pivot point position, so I will choose the middle of the cylinder to be (0,0,0)
the z axis will be the cylinder's height axis
and the caps will be coplanar with the xy plane
so two rings of points (one per cap) are enough for the cylinder
the points can be defined in C++ like this:
#include <cmath>           // cos, sin, M_PI

const int N=32;            // mesh complexity (points per ring)
double p0[N][3],p1[N][3];  // top and bottom rings
double a,da,c,s,r,h2;      // some temp variables
int i;
r =50.0;                   // cylinder radius
h2=100.0*0.5;              // half height of cylinder
da=2.0*M_PI/double(N-1);   // angular step; the last point repeats the first
for (a=0.0,i=0;i<N;i++,a+=da)
    {
    c=r*cos(a);
    s=r*sin(a);
    p0[i][0]=c;            // top ring at z=+h2
    p0[i][1]=s;
    p0[i][2]=+h2;
    p1[i][0]=c;            // bottom ring at z=-h2
    p1[i][1]=s;
    p1[i][2]=-h2;
    }
the ring points form a closed loop (p0[0]==p0[N-1])
so you do not need additional handling for the wrap-around...
now how to draw
I can't write the code for an unknown API, but
'mesh' is something like QUAD_STRIP I assume
so just add points to it in this order:
QUAD_STRIP = { p0[0],p1[0],p0[1],p1[1],...p0[N-1],p1[N-1] };
if you have an inverted-normals problem then swap p0/p1
now for the fans
you do not need the middle point (unless you have interpolation aliasing issues)
so similar:
TRIANGLE_FAN0 = { p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { p1[0],p1[1],...p1[N-1] };
if you still want the middle point then:
TRIANGLE_FAN0 = { (0.0,0.0,+h2),p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { (0.0,0.0,-h2),p1[0],p1[1],...p1[N-1] };
if you have an inverted-normals problem then reverse the point order (the middle point stays where it is)
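If your renderer has no strip/fan primitives and wants plain triangles, here is an illustrative sketch of expanding the same p0/p1 arrays into an explicit triangle list (the Tri struct is just a placeholder):

#include <vector>

struct Tri { double a[3], b[3], c[3]; };
std::vector<Tri> tris;

// helper: copy three points into one triangle
auto add = [&](const double* A, const double* B, const double* C)
    {
    Tri t;
    for (int k=0;k<3;k++) { t.a[k]=A[k]; t.b[k]=B[k]; t.c[k]=C[k]; }
    tris.push_back(t);
    };

for (int i=0;i+1<N;i++)
    {
    // side wall: two triangles per quad between ring points i and i+1
    add(p0[i],p1[i],p0[i+1]);
    add(p0[i+1],p1[i],p1[i+1]);
    // caps: fan around p0[0] / p1[0] (reverse one cap's winding if the
    // normals come out inverted)
    if ((i>=1)&&(i+2<N))
        {
        add(p0[0],p0[i],p0[i+1]);   // top cap
        add(p1[0],p1[i+1],p1[i]);   // bottom cap, opposite winding
        }
    }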
Not axis aligned cylinder?
just use a transform matrix on your p0[],p1[] point lists to translate/rotate them to the desired position
the rest stays the same
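For example, a minimal sketch of applying a 3x3 rotation matrix m plus a translation o to the generated points (m and o are placeholders for whatever transform you need):

// illustrative: rotate and translate a ring of points in place
void transformRing(double p[][3], int n, const double m[3][3], const double o[3])
    {
    for (int i=0;i<n;i++)
        {
        double q[3];
        for (int r=0;r<3;r++)
            q[r]=m[r][0]*p[i][0]+m[r][1]*p[i][1]+m[r][2]*p[i][2]+o[r];
        for (int r=0;r<3;r++) p[i][r]=q[r];
        }
    }
// usage: transformRing(p0,N,m,o); transformRing(p1,N,m,o);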

How to get penetration vector of two rects?

Suppose I have two rects, each given by x, y, w, h; one is stationary and the other is moving at vx, vy. I have already determined that they overlap, and I know the overlap rect as well. What I am interested in finding is the red vector in the graph below:
This is different from a minimum adjustment vector because, as you can see, the minimum adjustment would just move rect A leftwards, whereas the red vector moves it leftwards and upwards. Is there an efficient way to calculate this?
The movement vector V0 and penetration vector V1 are anti-parallel
so you can exploit that:
where dx, dy are the width and height of the overlap rect, so:
if (|V0.x| >= |V0.y|)
    {
    V1.x = -sign(V0.x) * |dx|
    V1.y = -sign(V0.y) * |dx*V0.y/V0.x|
    }
if (|V0.x| < |V0.y|)
    {
    V1.y = -sign(V0.y) * |dy|
    V1.x = -sign(V0.x) * |dy*V0.x/V0.y|
    }
Hope I did not make some silly mistake, but the idea behind it should be straightforward. If not, write the parametric line equation of V0 ... You can also exploit the dot product for this, but that should lead to the same results ...
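For reference, a direct C++ transcription of the formula above (a sketch only; the Vec2 type is illustrative, and dx, dy are the overlap sizes you already computed):

#include <cmath>

struct Vec2 { double x, y; };

// Sketch of the formula above: V0 is the movement vector, (dx, dy) the
// width/height of the overlap rectangle.  Returns the penetration vector V1,
// anti-parallel to V0.
Vec2 penetrationVector(Vec2 V0, double dx, double dy)
{
    auto sign = [](double v) { return (v < 0.0) ? -1.0 : 1.0; };
    Vec2 V1;
    if (std::fabs(V0.x) >= std::fabs(V0.y))
    {
        V1.x = -sign(V0.x) * std::fabs(dx);
        V1.y = -sign(V0.y) * std::fabs(dx * V0.y / V0.x);
    }
    else
    {
        V1.y = -sign(V0.y) * std::fabs(dy);
        V1.x = -sign(V0.x) * std::fabs(dy * V0.x / V0.y);
    }
    return V1;
}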

Find all pixels a given radius from a point, confined within an arc

I'm working on an autonomous rover that navigates partially by ultrasound proximity sensors. Before we implement the hardware we want to do some testing of our algorithms with a simulator, which I am now writing.
One task that I'm having some trouble with is that the ultrasound sensor has a 60 degree field of view. If an object is detected, any point along that 60 degree arc at that radius may have an object, but all points below that radius are guaranteed not to have an object.
What I need to do is write a function that is given an (x,y) coordinate and a bearing (I'm restricting this to the 4 cardinals for now) and have it return to me a list of pixels within a radius and a list of pixels at that radius. With repeated scans from multiple locations and bearings all objects can be found.
My initial thought was to work iteratively: start at the row in front of the sensor and sweep back and forth in progressively wider scans (1,1,3,3,5,5,7,7, etc.). However, eventually the radii stop aligning with the rows. My new approach would be to figure out how to draw an arc with pixels, then step the radius up to the first collision.
This question asks a similar question, but is only interested in specific points so I believe it is a fundamentally different problem.
how to calculate all points(longitude,latitude) within a given radius from given point (longitude,latitude)?
You can use any Floodfill method to get all integer points in the sector.
Precalculate starting and ending angles as
S_Angle = Center_Bearing - Pi/6
E_Angle = Center_Bearing + Pi/6
Important values:
S_Cos = Cos(S_Angle)
S_Sin = Sin(S_Angle)
E_Cos = Cos(E_Angle)
E_Sin = Sin(E_Angle)
Border conditions for sector floodfill:
(x-x0)*S_Sin-(y-y0)*S_Cos >= 0 //point is left to starting ray
(x-x0)*E_Sin-(y-y0)*E_Cos <= 0 //point is right to ending ray
(x-x0)^2+(y-y0)^2 <= R^2 //point is in the range
(you may need to swap >= and <= in the first pair of inequalities, depending on your coordinate orientation)
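As an illustration, the same conditions can also be used in a brute-force scan over the sector's bounding box rather than a flood fill, collecting the pixels strictly inside the radius and those at the radius separately (a sketch; the names and the half-pixel ring tolerance are my own choices):

#include <cmath>
#include <vector>
#include <utility>

// Sketch: classify pixels of a 60-degree sector around `bearing` (radians)
// centered at (x0, y0).  `inside` collects pixels strictly closer than R
// (guaranteed free), `ring` collects pixels at distance ~R (possible obstacle).
void scanSector(int x0, int y0, double R, double bearing,
                std::vector<std::pair<int,int>>& inside,
                std::vector<std::pair<int,int>>& ring)
{
    const double sA = bearing - M_PI / 6.0, eA = bearing + M_PI / 6.0;
    const double sSin = std::sin(sA), sCos = std::cos(sA);
    const double eSin = std::sin(eA), eCos = std::cos(eA);
    const int r = (int)std::ceil(R);

    for (int y = y0 - r; y <= y0 + r; y++)
        for (int x = x0 - r; x <= x0 + r; x++)
        {
            double dx = x - x0, dy = y - y0;
            // sector bounds (swap the comparison signs if needed, see above)
            if (dx * sSin - dy * sCos < 0.0) continue;
            if (dx * eSin - dy * eCos > 0.0) continue;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d > R + 0.5) continue;                   // outside the range
            if (d < R - 0.5) inside.push_back({x, y});
            else             ring.push_back({x, y});
        }
}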

shade border of 2D polygon differently

We are programming a 2D game in XNA. We have polygons which define our level elements; they are triangulated so that we can easily render them. Now I would like to write a shader which renders the polygons as outlined textures: in the middle of the polygon one would see the texture, and toward the border it should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works, but looks strange at tight corners where the textured quads are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing these to the shader would interpolate the normals across the edges of the triangulation, and I could use the interpolated "normal" as a shading value. I have not been able to test it yet, but would that work? A special property of the triangulation is that all vertices are on the border, so there are no vertices inside the polygon.
Do you guys have a better idea for what I want to achieve?
Here is a picture of what it looks like right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. Not ideal, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer, you can render your glow components into a render target, align a full-screen quad to your viewport, and apply a full-screen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour and blur. With additional output values, such as the RGB values from the object render pass, you can even have different advanced glows.
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new border vertex list (easy to fill, e.g. with a triangle strip against the original vertices). If you use a constant border width and a convex polygon, it is just:
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
If not, it can get more complicated; here is my old but fairly universal solution:
//Helper function. To work correctly, v1 must come before v2 in the vertex list and the vertices must be ordered (counter?)clockwise!
float vectorAngle(Vector2 v1, Vector2 v2){
    float alfa;
    if (!v1.isNormalised())
        v1.normalise();
    if (!v2.isNormalised())
        v2.normalise();
    alfa = v1.dotProduct(v2);
    float help = v1.x;   // rotate v1 by -90 degrees
    v1.x = v1.y;
    v1.y = -help;
    float angle = Math::ACos(alfa);
    if (v1.dotProduct(v2) < 0){
        angle = -angle;
    }
    return angle;
}
//Normally do not use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); //Actually the cosine of the angle between the two (normalised) vectors (remember your math lessons)
    float csc = 1.0f / Math::sqrt(1.0f-cos*cos); //The cosecant of the angle. This returns NaN if the angle is 180!!!
    //And the rest of the magic
    Vector2 difrence = (vec1 * csc * width2) + (vec2 * csc * width1);
    //If you use only convex polygons (all angles < 180; exactly 180 not allowed in this case) just return the value; if not, you need some more magic.
    //Both of the next things need ordered vertex lists!
    //The output vector always points to the inside of the angle, so:
    if (Math::vectorAngle(vec1, vec2) > 180.0f) //Note that this function can only know the angle is over 180 if you use ordered vertices (all vertices always go (counter?)clockwise!)
        difrence = -difrence;
    //Ok, and if the angle was 180...
    //Note that this can fix your situation ONLY if you use ordered vertices (all vertices always go (counter?)clockwise!)
    if (difrence.isNaN()){
        float width = (width1 + width2) / 2.0f; //If the angle is 180 and the border widths are different, you cannot get a perfect answer ;)
        difrence = vec1 * width;
        //Just turn the vector -90 degrees
        float swapHelp = difrence.y;
        difrence.y = -difrence.x;
        difrence.x = swapHelp;
    }
    //If you don't want the output to be inside the old polygon but outside, just "return -difrence;"
    return difrence;
}
//Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
    return B + calculateBorderPoint(A-B, C-B, widthA, widthB);
}
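For illustration, a sketch of how the wrapper above could be used to build the inner border ring for a closed, ordered polygon (this assumes the same Vector2 type; the buildInnerRing name and the wrap-around indexing are my own):

#include <vector>

// Illustrative usage: offset every vertex of a closed, ordered polygon
// inward by `width` using the calculateBorderPoint() wrapper above.
std::vector<Vector2> buildInnerRing(const std::vector<Vector2>& poly, float width)
{
    std::vector<Vector2> inner;
    const int n = (int)poly.size();
    for (int i = 0; i < n; i++)
    {
        const Vector2& A = poly[(i + n - 1) % n];  // previous vertex
        const Vector2& B = poly[i];                // current vertex
        const Vector2& C = poly[(i + 1) % n];      // next vertex
        inner.push_back(calculateBorderPoint(A, B, C, width, width));
    }
    return inner;
}

Interleaving poly[i] and inner[i] then gives a vertex order suitable for filling the border band with a triangle strip.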
Your second approach should be possible...
Mark the outer (border) vertices with 1 and the inner vertices with 0.
In the pixel shader you can then highlight those fragments whose interpolated value is greater than, say, 0.9 or 0.8.
It should work.
