I have read dozens of questions here on SO (and elsewhere) regarding Arkanoid collision detection, namely a moving circle colliding with a stationary rectangle, but all of them ask how to detect a collision or how to detect which side of the rectangle the ball hits. My question is a bit different: it concerns calculating the new velocity direction when the ball hits a corner of the rectangle.
For simplicity's sake let's assume that Vx >= 0 and Vy <= 0, that is, the ball is coming from below and to the left, moving up and to the right, and suppose I know it is going to hit the lower side of the rectangle. The green arrow shows the approximate direction of the ball and the blue dot shows the first point on the line containing the lower side of the rectangle that the ball hits. If that point lies strictly within the lower side of the rectangle then everything is trivial: just change Vy to -Vy. However, when that point lies outside the lower side, the first point of the rectangle that the ball touches is its lower-left corner, and in that case I don't think that changing Vy to -Vy is correct. I think the new velocity angle must depend on the distance of the blue point from the corner, and that not only Vy but also Vx must change (preserving, presumably, the length of the V vector).
So, how do we calculate the new Vx and Vy when the ball hits a corner? If you know any good links that address this question I'd be delighted to see them. Also note that I am more interested in the exact physical model than in easy-to-code optimized approximations. You can assume there is no rotation involved. Thank you very much in advance.
You know how to bounce off a horizontal wall. Do you know how to bounce off a wall that is at some other angle?
When the circle hits the wall, it makes contact at a single point. That single point is all the circle "knows" about the wall; the location of that point gives you enough information to calculate the new V. When the circle hits the corner, it makes contact at a single point (i.e. the corner) so it bounces just as if it hit a wall at that point.
Is that enough to go on, or would you like some math? (And if so, how comfortable are you with vector algebra?)
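In case it helps, here is a minimal sketch of that reflection in Python (the function name is illustrative; the key fact is that the collision normal points from the contact point to the circle's center, and V is reflected about it, which preserves |V|):

import math

def bounce_off_corner(vx, vy, cx, cy, px, py):
    # (cx, cy) is the circle's center, (px, py) the corner it touches.
    # The collision normal points from the contact point to the center.
    nx, ny = cx - px, cy - py
    length = math.hypot(nx, ny)
    nx, ny = nx / length, ny / length
    # Reflect V about the normal: V' = V - 2 (V . n) n.
    dot = vx * nx + vy * ny
    return vx - 2 * dot * nx, vy - 2 * dot * ny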
A simplification that is often made when modelling rock blocks in rock mechanics is to assume that the corner of the rectangle has a small radius. The calculation is then one of two curved surfaces in contact. This approach tends to give more consistent behaviour than modelling corners as right angles, and so is preferred where consistent results are needed.
I 'program' simple hyper-casual mobile games in my free time using a pseudo-programming-language software called Construct 3, as I am still learning actual languages and can't yet use them well enough to make games.
Essentially I am writing my own super simple bouncing ball physics engine. I have up to 3 balls in this little pinball game of mine at any time. I have given each ball an x velocity and y velocity instance variable.
Here is my question: how do the x and y velocities change when the ball bounces off a surface at an arbitrary angle? I know that if the floor is flat and the ball hits it, x stays the same and y flips its polarity, and the opposite happens when hitting a wall. But I have no idea how to calculate any other angle besides the 4 main axes. I'm sure it is a simple trig function. Oh, and dumb your answer down to the simplest pseudo-code response you can make.
For any collision of an object against a flat surface at an angle alpha, your object will bounce back at an angle -alpha. You also have what's called conservation of momentum, which means that if your surface doesn't move and doesn't absorb anything, the speed of your object will not change either.
That being said, "all you need to do" is to parameterize both the angle of your surface to the horizontal and the angle at which your object approaches the surface, so you can easily obtain the angle alpha. This way you will get an angle of -alpha between your object and the surface after the collision, in the frame of the surface, and you then go back to the "horizontal frame" by simply adding back the angle of your surface.
As far as your implementation should go, this is what I suggest:
Start with a function horizontalToAngularFrame that takes one or more parameters, depending on whether you're in 2D or 3D, so you can define the angle
Code another function AngularFrameToHorizontal with the same number of parameters
When an object enters a collision, just treat it as you would treat an object in the horizontal frame, and use the two previously coded functions to bring the angles back to your horizontal frame
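A minimal sketch of that recipe, in Python for concreteness (alpha is the surface's angle to the horizontal, in radians; the function names follow the suggestion above):

import math

def horizontalToAngularFrame(vx, vy, alpha):
    # Rotate the velocity by -alpha so the surface becomes horizontal.
    c, s = math.cos(alpha), math.sin(alpha)
    return vx * c + vy * s, -vx * s + vy * c

def AngularFrameToHorizontal(vx, vy, alpha):
    # Rotate back by +alpha to return to the world frame.
    c, s = math.cos(alpha), math.sin(alpha)
    return vx * c - vy * s, vx * s + vy * c

def bounce(vx, vy, alpha):
    # In the surface's frame the bounce is the flat-floor case: flip y.
    ux, uy = horizontalToAngularFrame(vx, vy, alpha)
    return AngularFrameToHorizontal(ux, -uy, alpha)

With alpha = 0 this reduces to the flat floor (y flips polarity), and with alpha = pi/2 it reduces to the wall case (x flips).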
My app captures the shape of a room by having the user point a camera at floor corners, and then doing a bunch of math, eventually ending up with a polygon.
The assumption is that the walls are straight (not curved). The majority of the corners are formed by walls at right angles to each other, but in some cases might not be.
Depending on how accurately the user points the camera, the (x,y) coordinates I derive for the corner might be beyond the actual corner, or in front of the actual corner, or, less likely, to its left or right. Obviously, in this case, when I connect the dots, I get weird parallelogram or rhomboid shapes. See example.
I am looking for a program or algorithm to normalize or regularize these shapes, provided we know which corners are supposed to be right angles.
My initial attempt involved finding segments whose angles were "close" to each other, adjusting them all to the same angle, and then recalculating the vertices. However, this algorithm proved to be unstable.
My current thinking is to find the angles which are most obtuse (as would be caused by a point mistakenly placed beyond the actual corner) or most acute (as would be caused by a point mistakenly placed in front of the actual corner), and find the corner point which would make each a right angle. The problem, however, is that such an adjustment could have side effects on other corners, such as pushing them even further from right angles. I sense I need some kind of algorithm which takes all the information and optimizes/solves it at once (is this a kind of linear programming problem?), but I am stuck.
There is not a unique solution.
For example, take the perpendicular from the middle point of an edge to the two neighboring edges. This will give you two new corners (see the sketch after these alternatives).
Or take the perpendicular from the end point of an edge to other edges.
Or compute the average of angles in the end points of an edge. Use this average and the middle point of the edge to compute new corners.
Or...
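For illustration, a minimal sketch of the first of these variants (the helper names are hypothetical; the polygon is a list of (x, y) tuples):

def foot_of_perpendicular(px, py, ax, ay, bx, by):
    # Orthogonal projection of point P onto the line through A and B.
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
    return ax + t * abx, ay + t * aby

def new_corners_for_edge(p0, p1, p2, p3):
    # The edge is p1-p2; its neighbors are p0-p1 and p2-p3.
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    c1 = foot_of_perpendicular(mx, my, p0[0], p0[1], p1[0], p1[1])
    c2 = foot_of_perpendicular(mx, my, p2[0], p2[1], p3[0], p3[1])
    return c1, c2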
To reconstruct the shape most faithfully, capture (or calculate) the distances from each corner to the other three. Build triangles from those distances. Then use the average of the coordinates you compute for a corner from the 2 or 3 triangles that contain it.
Resulting angles will not be exactly 90 degrees, but the polygon will represent the room fairly.
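A sketch of the triangle-placement step under those assumptions: given two corners that are already fixed and the measured distances from each to a third corner, the third corner is a circle-circle intersection (locate_from_two is a hypothetical name; both mirror solutions are returned, and the polygon's winding decides which to keep):

import math

def locate_from_two(ax, ay, bx, by, da, db):
    # Intersect the circle of radius da around A with the circle
    # of radius db around B.
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)
    a = (da * da - db * db + d * d) / (2 * d)  # A to chord midpoint
    h = math.sqrt(max(da * da - a * a, 0.0))   # half chord length
    mx, my = ax + a * dx / d, ay + a * dy / d
    return (mx + h * dy / d, my - h * dx / d), (mx - h * dy / d, my + h * dx / d)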
I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I have found this blog post indicating that the range of angles is 360°. That tells me that the angle measurement would go along the lines of the unit circle. I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is in reference to the unit circle rotated clockwise by 90° centered on the image coordinate starting point in the top left corner. It would look like this.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and that seems to be an atan2 implementation.
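You can probe the convention yourself with a few known vectors (a minimal check using OpenCV's Python bindings):

import numpy as np
import cv2

x = np.array([1.0, 0.0, -1.0, 0.0], dtype=np.float32)
y = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)
mag, ang = cv2.cartToPolar(x, y, angleInDegrees=True)
# Because the image y-axis grows downward, this prints roughly
# [0, 90, 180, 270]: right, down, left, up.
print(ang.ravel())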
Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
perspective
location <0,0,-15> // move this backwards
sky y
up y
angle 30 // make this smaller
right (image_width/image_height)*x
look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
orthographic
location <0,0,-15> // Move backwards, no matter how far
sky y
up y * h // where h = height you want to cover
right x * w // where w = width you want to cover
look_at <0,0,0>
}
The other extreme is the fish-eye lens.
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into your camera's render output, and so the parallel lines will converge faster. So in your first example with a cube, the camera will focus more on the cube and the areas immediately around it than on the whole environment.
The other option is to bring your far plane much closer to your near plane, so you don't see things that are far off. In your first image example you would then only see the first four or five rows of the grid.
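A small numeric sketch of the trade-off, using a plain pinhole model (the names and numbers are illustrative): two parallel lines at x = -1 and x = +1 are projected at increasing depth; with a narrow field of view and the camera pulled back, their apparent widths shrink much more slowly.

import math

def project_x(x, z, fov_degrees):
    # Pinhole projection: screen_x = x / (z * tan(fov / 2)).
    return x / (z * math.tan(math.radians(fov_degrees) / 2))

for fov, cam_back in ((90, 2.0), (30, 10.0)):
    widths = [2 * project_x(1.0, z + cam_back, fov) for z in (1, 5, 10)]
    print(fov, ["%.3f" % w for w in widths])
# 90 degrees: widths fall from 0.667 to 0.167 (steep convergence).
# 30 degrees with the camera pulled back: 0.679 to 0.373 (much gentler).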
In my Direct3D application, the camera can be moved using the mouse or arrow keys. But if I hard code (0,1,0) as the up direction vector in LookAtLH, the frame goes blank at some orientations of the camera.
I just learned the hard way that when looking along the Y-axis, (0,1,0) no longer works as the Up direction (seems obvious?). I am thinking of switching my up direction to something else for each of these special cases. Is there a more graceful way to handle this?
Assume you can calculate a vector pointing forward (what you are looking at minus your position) and a vector pointing right (always on the XZ-plane unless you can roll). Normalize both of these vectors; then up is forward x right (where x is the cross product).
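A minimal sketch of that construction in plain Python (no D3D types; forward and right are assumed already computed as described):

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_up(forward, right):
    # up = forward x right, e.g. (0,0,1) x (1,0,0) = (0,1,0).
    return normalize(cross(normalize(forward), normalize(right)))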
In general, you can plug in your yaw, pitch and roll into a rotation matrix and rotate the axis vectors to get right, up and forward, but I guess that's what you are using LookAtLH to avoid.
See http://en.wikipedia.org/wiki/Rotation_matrix#The_3-dimensional_rotation_matricies
The graceful way to handle this is to use unit quaternions. A quaternion is a vector of 4 values that encodes an orientation in 3D space (not a rotation, as some articles assert), and a unit quaternion is one where the vector length sqrt(x^2+y^2+z^2+w^2) is 1.0. There is a set of mathematical operations for working with quaternions that is analogous to using matrices to encode rotations, with the added bonus that quaternions can never represent a degenerate orientation. You can freely convert quaternions to a 3x3 or 4x4 matrix when you need to feed the result to a GPU.
Your problem is that, while you are moving your camera, you introduce a little twist into the camera's up direction. By forcing the camera to re-center itself on the (0,1,0) vector every iteration, you are in effect rotating the camera and then clamping the camera's orientation to remain on the surface of a sphere; but when your camera hits the pole of this sphere there is no good direction to call "up", so your matrix goes singular and gives you zero-sized polygons (hence the black screen). Quaternions have the ability to interpolate through these poles and come out the other side just fine, leaving you with a valid matrix at all times. All you have to do is control the "twist".
To measure this twist you should read Ken Shoemake's article "Fiber Bundle Twist Reduction" in the book Graphics Gems 4. He shows a good way to measure this accumulated twist and how to remove it when it is offensive.
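For concreteness, a minimal unit-quaternion sketch (Hamilton convention, (w, x, y, z) order; an illustration rather than any particular D3DX API):

import math

def quat_from_axis_angle(axis, angle):
    # Unit quaternion for a rotation of `angle` radians about a unit `axis`.
    half = angle / 2
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    # Hamilton product; composes rotation b followed by a.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_to_matrix(q):
    # 3x3 rotation matrix (pad to 4x4 before handing it to the GPU).
    w, x, y, z = q
    return [[1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
            [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]]

Each frame you multiply the camera's orientation quaternion by a small per-frame rotation and renormalize occasionally; the resulting matrix stays well-formed even when the view passes through the poles.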