Arcade physics rotation and angularVelocity (Phaser)

So ... what exactly are the parameters of body.rotation and body.angularVelocity in Phaser arcade physics?
The documentation for body.rotation just says "the amount the Body is rotated", without specifying the units (radians or degrees), the zero direction (the x axis?), or which direction is positive.
The docs for body.angle say "angle in radians" ... but again don't say which axis is the zero-rotation vector, nor which direction is positive.
The documentation for angularVelocity says "angular velocity in pixels per second squared", which doesn't make ANY SENSE AT ALL. You can't measure rotation in pixels.
I'm trying to sync up a phaser front-end with a server-based physics model that has its own coordinate system, so some clarity on the documentation would really make my life easier!

As far as I know, body.rotation is given in radians; if you want to work in degrees, use body.angle instead.
As for the rotation direction, a higher value rotates the sprite clockwise. If the angle is 0 and the sprite is pointing up, it will point to the right after you set body.angle = 90.
angularVelocity is not for setting your sprite's rotation directly. The name says "angularVELOCITY", so what it sets is a rotation speed: how fast the body turns over time. It's mainly used when you want the sprite to move in the direction it's facing.
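For what it's worth, the clockwise-positive convention falls out of screen coordinates, where +y points down. Here is a minimal Python sketch of the "move in the direction you're facing" math, assuming that convention and that 0 degrees points along +x (the names and values are illustrative, not Phaser API):

```python
import math

def velocity_from_angle(angle_deg, speed):
    # In a y-down screen coordinate system, 0 degrees points along +x
    # and positive angles rotate clockwise on screen.
    rad = math.radians(angle_deg)
    return speed * math.cos(rad), speed * math.sin(rad)

print(velocity_from_angle(0, 100))   # (100.0, 0.0)  -> moving right
print(velocity_from_angle(90, 100))  # (~0.0, 100.0) -> +y, i.e. down the screen
```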

Related

How to change x and y velocities in response to a wall bounce

I 'program' simple hyper-casual mobile games in my free time using a pseudo-programming-language tool called Construct 3, as I am still learning actual languages and can't yet use them well enough to make games.
Essentially I am writing my own super simple bouncing ball physics engine. I have up to 3 balls in this little pinball game of mine at any time. I have given each ball an x velocity and y velocity instance variable.
Here is my question: how do the x and y velocities change when the ball bounces off a surface at an arbitrary angle? I know that if the floor is flat and the ball hits it, x stays the same and y flips its polarity. I know the opposite happens when hitting a wall. But I have no idea how to calculate any angle besides the 4 main axes. I'm sure it is a simple trig function. Oh, and dumb your answer down to the most simple pseudo-code response you can make.
For any collision of an object against a flat surface at an angle alpha, your object will bounce back at an angle -alpha. You also have what's called conservation of momentum, which means that if your surface doesn't move and doesn't absorb anything, the total speed of your object will not change either.
That being said, "all you need to do" is parameterize both the angle of your surface to the horizontal and the angle at which your object hits the surface, so you can easily register an angle alpha. This way you will get a -alpha angle between your object and the surface after the collision in the frame of your surface, and you then go back to the "horizontal frame" by simply adding the angle of your surface.
As far as your implementation should go, this is what I suggest:
Start with a function horizontalToAngularFrame that takes one or more parameters, depending on whether you're in 2D or 3D, so you can define the angle.
Code another function AngularFrameToHorizontal with the same number of parameters.
When an object enters a collision, treat it as you would treat an object in the horizontal frame, and use the two previously coded functions to bring the angles back to your horizontal frame; see the sketch below.
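The rotate-reflect-rotate-back recipe above is equivalent to reflecting the velocity about the surface normal, which is compact to write down. A minimal Python sketch of that equivalent form, assuming a perfectly elastic bounce (function and variable names are hypothetical):

```python
import math

def bounce(vx, vy, surface_angle_deg):
    # Reflect a velocity off a flat surface tilted surface_angle_deg from
    # the horizontal. Speed is conserved, as noted in the answer above.
    a = math.radians(surface_angle_deg)
    nx, ny = -math.sin(a), math.cos(a)           # unit normal of the surface
    dot = vx * nx + vy * ny                      # velocity component along the normal
    return vx - 2 * dot * nx, vy - 2 * dot * ny

# Flat floor (angle 0): x stays the same, y flips polarity.
print(bounce(3, -4, 0))    # (3.0, 4.0)
# Vertical wall (angle 90): the opposite, x flips and y stays.
print(bounce(3, -4, 90))   # (-3.0, -4.0)
```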

Triangulate camera position and orientation in regards to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, I now thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in X between them. Their tilt is also only in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a lot of work to determine accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem, though my calculations didn't give me any simple equations with that possibility.
I also read about homography in OpenCV, but it seems to apply to 2D space only, or not?
Any help is appreciated!
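For what it's worth, recovering a camera's rotation and translation from a few known 3D points and their pixel positions is the classic perspective-n-point (PnP) problem, and OpenCV's solvePnP handles it. Below is a minimal Python sketch for the setup described in the question; the pattern size, pixel coordinates, and intrinsics are made-up placeholders:

```python
import numpy as np
import cv2

# Hypothetical 100 mm square test pattern lying in the Z=0 plane.
object_points = np.array([[0, 0, 0], [100, 0, 0],
                          [100, 100, 0], [0, 100, 0]], dtype=np.float64)
# Pixel coordinates of those corners as seen by one camera (made-up values).
image_points = np.array([[320, 240], [420, 245],
                         [415, 340], [318, 335]], dtype=np.float64)
# Intrinsics: fx, fy encode the view angle. If unknown, estimate them with
# cv2.calibrateCamera and a checkerboard.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)
dist = np.zeros(5)  # assume no lens distortion for this sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation matrix
camera_pos = (-R.T @ tvec).ravel()    # camera position in pattern coordinates
print(camera_pos)
```

Running this once per camera gives each camera's pose relative to the same pattern, from which the inter-camera distance follows; the view angle is encoded in the focal lengths fx and fy of the camera matrix.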

What is the reference point for measuring angles in OpenCV?

I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I have found this blog post indicating that the range of angles is 360°. That suggests the angle measurement follows the unit circle, but I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is measured against the unit circle rotated clockwise by 90°, centered on the image coordinate origin in the top left corner. It would look like this.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and that seems to be an atan2 implementation.
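A quick way to check the convention directly is to feed unit vectors into cartToPolar() and read the angles back; a small Python probe (assuming OpenCV's Python bindings):

```python
import numpy as np
import cv2

# Probe cartToPolar with the four axis directions. Remember that in image
# coordinates +y points down the screen.
x = np.array([1.0, 0.0, -1.0, 0.0], dtype=np.float32)
y = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)
mag, ang = cv2.cartToPolar(x, y, angleInDegrees=True)
print(ang.ravel())  # [  0.  90. 180. 270.]
```

So 0° points along +x and 90° along +y; since +y points down on an image, the angles grow clockwise when viewed on screen, consistent with the observation above.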

Game Programming - Billboard issue in front of 3D object

I'm starting to develop a PoC with the main features of a turn-based RPG similar to Breath of Fire 4: a mixture of a 3D environment with characters and items rendered as billboards.
I'm using an orthographic camera angled 30 degrees on the X axis. I made my sprite act as a billboard with its pivot in the center; the problem occurs when the sprite gets near a 3D object such as a wall.
Check out the image:
I tried leaving the rotation matrix of the billboard "upright", which worked well, but of course, depending on the height and angle of the camera toward the billboard, it gets kind of flattened. I also changed the pivot to the bottom of the sprite, but the problem appears with objects in front of the sprite too. I was thinking the solution would be a fragment shader that relies on the depth texture from some previous pass, but I could not figure out how to do it with shaders. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve in this video.
You've got the right approach. Use the upright matrix, and scale up the Z of the billboards to compensate for the flattening your camera introduces. The Z scale should be about 1.1547, which is 1 / cos(30°); it makes billboards look their original size from a camera angled at 30 degrees. It seems like a trick, but the developers of BoF4 in the video may well have used the same solution.
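The factor is just the secant of the camera's tilt, so it generalizes to other camera angles. A one-liner to compute it, in Python for illustration:

```python
import math

def billboard_z_scale(camera_angle_deg):
    # Scale that restores an upright billboard's apparent height when the
    # camera is tilted camera_angle_deg down from the horizontal.
    return 1.0 / math.cos(math.radians(camera_angle_deg))

print(billboard_z_scale(30))  # ~1.1547, the value quoted above
```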

How do I find the world coordinates of a pixel on the image plane?

A bit of background
I am writing a simple ray tracer in C++. I have most of the core complete but don't understand how to retrieve the world coordinates of a pixel on the image plane. I need this location so that I can cast the ray into the world.
Currently I have a Camera with a position (a.k.a. my perspective reference point) and a direction vector, which is not normalized. The direction's length gives the distance to the center of the image plane, and the vector itself gives which way the camera is facing.
There are other values associated with the camera but they should not be relevant.
My image coordinates will range from -1 to 1, and the perspective (focal length) will change based on the length of the camera's direction vector.
What I need help with
I need to go from pixel coordinates (say [0, 256] in an image 256 pixels on each side) to my world coordinates.
I will also want to program this so that no matter where the camera is placed and which way it is pointed, I can find the pixel's world coordinates. (Currently the camera will almost always be centered at the origin, looking down the negative z axis, but I would like to program this with future changes in mind.) It is also important to know whether this code should be pushed down into my threaded code, or whether it should be calculated by the main thread with the resulting ray then used in the threaded code.
I did not make this image and it is only there to give an idea of what I need.
Please leave comments if you need any additional info. Otherwise I would like a simple theory/code example of what to do.
Basically you have to do the inverse of the process that transforms a point into unit-cube (normalized device) coordinates, i.e. the viewport and model-view-projection transforms: map the pixel back to NDC, then apply the inverses of the projection and view matrices. This is exactly what gluUnProject does. Look at the following URLs for programming help:
http://nehe.gamedev.net/article/using_gluunproject/16013/
https://sites.google.com/site/vamsikrishnav/gluunproject
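For the specific camera model in the question (position plus unnormalized direction whose length is the focal distance, image plane spanning [-1, 1]), the unprojection collapses to a few vector operations. A Python sketch of the idea; the world_up vector and the half-pixel centering are my own assumptions:

```python
import numpy as np

def pixel_to_world(px, py, width, height, cam_pos, cam_dir,
                   world_up=(0.0, 1.0, 0.0)):
    # Image plane is centered at cam_pos + cam_dir (|cam_dir| = focal
    # distance) and spans [-1, 1] in camera space, as described above.
    cam_pos = np.asarray(cam_pos, dtype=float)
    cam_dir = np.asarray(cam_dir, dtype=float)
    forward = cam_dir / np.linalg.norm(cam_dir)
    right = np.cross(forward, world_up)   # assumes cam_dir not parallel to world_up
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    # Pixel -> [-1, 1] image coordinates (sampling pixel centers; y is
    # flipped so +v points up in the world).
    u = 2.0 * (px + 0.5) / width - 1.0
    v = 1.0 - 2.0 * (py + 0.5) / height
    point = cam_pos + cam_dir + u * right + v * up  # world point on the image plane
    ray_dir = point - cam_pos
    return point, ray_dir / np.linalg.norm(ray_dir)

# Camera at the origin looking down the negative z axis, as in the question.
point, direction = pixel_to_world(127.5, 127.5, 256, 256, (0, 0, 0), (0, 0, -1))
print(point, direction)  # exact image center -> (0, 0, -1), ray straight down -z
```

Since this is only a handful of vector operations per ray, it is cheap enough to compute inside the threaded per-pixel code rather than precomputing it on the main thread.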
