Update running animation in Phaser 3 - phaser-framework

I am running a spinning animation after an action on a button. But to know where the animation needs to stop (the final angle), I need to make a call to the backend. So I start the animation, and when I receive the response from the backend I update the animation. Here is the code:
const anim = scene.tweens.add({
    targets: [targetContainer],
    angle: angle,
    duration: WHEEL_ROTATION_DURATION,
    ease: 'Cubic.easeOut',
})
And the update:
anim.data[0].end += newAngle
It works properly, but the moment the angle is updated the animation produces a glitch/jump that is not nice to see.
Any idea how to make it smooth?

Answer from Antriel on the Phaser forums:
Best way to make it smooth would be to manually update the angle in
update, using constant speed (so something like container.angle +=
angleSpeed * dt). Then afterwards, when you get the backend response,
just start the easeOut tween with the correct end position, and
specify speed rather than the duration.
Works very well.
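For concreteness, here is a minimal sketch of that approach in a Phaser 3 scene. The angleSpeed value and the requestFinalAngleFromBackend() helper are illustrative placeholders (they are not in the original code), and the factor of 3 on the duration is just one way to keep the hand-off speed continuous with Cubic.easeOut:

class WheelScene extends Phaser.Scene {
    create() {
        // The wheel; stands in for targetContainer from the question.
        this.targetContainer = this.add.container(400, 300);

        this.angleSpeed = 360;  // constant spin speed in degrees per second (tune to taste)
        this.spinning = true;   // spin manually until the backend answers

        // requestFinalAngleFromBackend() is a hypothetical stand-in for the real backend call.
        requestFinalAngleFromBackend().then((finalAngle) => {
            this.spinning = false;

            // Remaining distance to the final angle, always spinning forward,
            // plus two extra full turns so the ease-out has room to slow down.
            const current = this.targetContainer.angle;
            const remaining = ((finalAngle - current) % 360 + 360) % 360 + 720;

            this.tweens.add({
                targets: this.targetContainer,
                angle: current + remaining,
                // "Specify speed rather than the duration": derive the duration from
                // the distance. The factor 3 matches Cubic.easeOut's initial slope, so
                // the tween starts at roughly the same speed as the manual spin.
                duration: (3 * remaining / this.angleSpeed) * 1000,
                ease: 'Cubic.easeOut',
            });
        });
    }

    update(time, delta) {
        if (this.spinning) {
            // delta is in milliseconds.
            this.targetContainer.angle += this.angleSpeed * (delta / 1000);
        }
    }
}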

Related

Why doesn't my RayCast2D detect walls from my tilemap?

I am currently trying to make a small top-down RPG with grid movement.
To simplify things, when I need to make something move one way, I use a RayCast2D node and check whether it collides, to know if I can move said thing.
However, it does not seem to detect the walls I've placed until I am inside them.
I've already checked the collision layers, etc., and they seem to be set up correctly.
What have I done wrong?
More info below:
Here's the code for the raycast check:
func is_path_obstructed_by_obstacle(x, y):
    $Environment_Raycast.set_cast_to(Vector2(GameVariables.case_width * x * rayrange, GameVariables.case_height * y * rayrange))
    return $Environment_Raycast.is_colliding()
My walls are from a TileMap, with collisions set up. Everything that uses collisions is on the default layer for now.
Also, here's the function that makes my character move:
func move():
    var direction_vec = Vector2(0, 0)
    if Input.is_action_just_pressed("ui_right"):
        direction_vec = Vector2(1, 0)
    if Input.is_action_just_pressed("ui_left"):
        direction_vec = Vector2(-1, 0)
    if Input.is_action_just_pressed("ui_up"):
        direction_vec = Vector2(0, -1)
    if Input.is_action_just_pressed("ui_down"):
        direction_vec = Vector2(0, 1)
    if not is_path_obstructed(direction_vec.x, direction_vec.y):
        position += Vector2(GameVariables.case_width * direction_vec.x, GameVariables.case_height * direction_vec.y)
        grid_position += direction_vec
    return
With ray casts, always make sure the node is enabled.
Just in case, I'll also mention that the ray cast's cast_to is in the ray cast's local coordinates.
Of course collision layers apply, and the ray cast has exclude_parent enabled by default, but I doubt that is the problem.
Finally, remember that the ray cast updates on the physics frame, so if you are using it from _process it might be giving you outdated results. You can call force_update_transform and force_raycast_update on it to solve that, which is also what you would do if you need to move it and check multiple times in the same frame.
For debugging, you can turn on collision shapes in the debug menu, which will let you see them when running the game and check whether the ray cast is positioned correctly. If the ray cast is not enabled it will not appear. Also, by default, the shapes turn red when they collide with something.

OSG/OSGEarth How to move a Camera

I'm making a drone simulator with osgEarth. I have never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a composite viewer. But I don't understand how to move a particular camera.
Currently I have one with the EarthManipulator which works fine.
class NovaManipulator : public osgGA::CameraManipulator {
    osg::Matrixd getMatrix() const
    {
        // drone_ is only a struct which contains updated information
        float roll = drone_->roll(),
              pitch = drone_->pitch(),
              yaw = drone_->yaw();
        auto position = drone_->position();
        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);
        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);
        // I found this code:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);
        // I don't really understand this either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator on a Viewer, and when I simulate the world, the camera is in the right place (lat/lon/height).
But the orientation is completely wrong, and I cannot find where I need to "correct" the axes.
My drone is in France, but the "up" vector is wrong: it still points north instead of being vertical relative to the ground.
See what I'm getting on the right-hand camera.
I need the yaw to be relative to north (0 => north), and when my roll and pitch are set to zero I need to be parallel to the ground.
Is my approach (making a Manipulator) the best way to do that?
Can I put the camera object inside the scene graph (behind an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick involving using an ObjectLocator object (to get the world position and plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it into a camera orientation matrix rather than an object placement matrix, but it works out ok.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations - matrix and quaternion multiplications are order dependent. I would suggest:
Try swapping order of pitch and roll in your MakeRotate call.
If that doesn't work, set all but 1 rotation to 0° at a time, making sure each is what you expect, then you can play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h,p, and r, individually, and multiply them yourself to control the order. This is what the overload of MakeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.

Player positions server side or client side?

I just made a multiplayer browser implementation of the game Pong using socket.io and have a question regarding logistics of real time. Basically the player's paddle is just a colored-in div that moves up or down depending on which button they're pressing. I noticed when testing my program with two different computers using AWS that the movement was nearly perfectly synchronized but sometimes not exact. For the player that controls the paddle, the movement of their paddle is done locally, but for the person they're playing against the server continuously sends them data of whether their opponent moved up or down.
My question is: should I be doing all the movement server-side? For example, a user presses up, which sends the server a request, and the server emits to both players that the paddle should move. Or is my way, where the movement of your own paddle is done locally, fine?
My code right now looks like this:
Client side, checking whether the up or down button is pressed and emitting a move request:
paddleMove = 0; // Keep track of which direction to move
speed = 5;
if (paddleL.position().top > arena.position().top) { // If left paddle not at top
    if (keysPressed.up) paddleMove -= speed;
}
if (paddleL.position().top + paddleL.height() < arena.position().top + arena.height() - 15) { // If left paddle not at bottom
    if (keysPressed.down) paddleMove += speed;
}
paddleL.css({ // Move paddle locally
    top: paddleL.cssNumber('top') + paddleMove + 'px'
});
socket.emit("moveReq", paddleMove); // Send to server
The above code is in an interval that runs every fraction of a second.
Then the server side looks like this:
socket.on('moveReq', function (data) { // Send to opponent that other paddle moved
    socket.broadcast.emit("movePaddle", data);
});
This in turn alerts another portion of the client-side code to move the other paddle:
socket.on("movePaddle", function(data){
var paddleMove = 0;
paddleMove += data; // Data is speed (direction) of movement
paddleR.css({ // Move right paddle
top: paddleR.cssNumber('top') + paddleMove + 'px'
});
As I said, the movement right now is pretty good but not perfect. Should I make none of the movement local and make it all on a server emit?
I am currently working on a multiplayer game using WebSockets as well.
If you want real-time player positions, it will take a lot of bandwidth.
What I did was prediction and lerping.
Suppose there are two connected players, A and B.
Let's say player A starts at x=0 (t=0), so on B's screen A is also at x=0.
Now we start emitting A's x position every second (depending on your game; for a fast-paced game such as an FPS you would lower that interval).
After one second (t=1), A's position is x=2 (2 px, in your units).
B receives A's position after 1.2 s (late because of network delays). Now we have to lerp the position from x=0 towards the new position, predicting for the elapsed time. (All of this can be done in script.)
Basic formula (this is done in an update function):
currentXPosition += (newXPosition - currentXPosition) * deltaTime;
You will definitely have to tune the formula above. deltaTime is recalculated every time we receive a new position, so the lerp and the prediction are combined through deltaTime.
Lerping smooths the player's movement, and deltaTime, acting as the prediction, scales the lerp according to when the position was received.
Refer to this blog for more on this, and this for the lerping formula.
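As an illustration only, the receive-and-lerp idea might look roughly like this on the Pong client from the question (reusing socket, paddleR and the cssNumber helper); the assumption that the server sends an absolute top position instead of a delta, and the LERP_FACTOR constant, are mine rather than the answer's:

var targetTop = null;   // latest paddle position reported by the server
var LERP_FACTOR = 10;   // how aggressively we chase the target, per second

socket.on("movePaddle", function (data) {
    targetTop = data.top;   // assumes the server now sends an absolute position
});

var lastTime = performance.now();
function render(now) {
    var dt = (now - lastTime) / 1000;
    lastTime = now;

    if (targetTop !== null) {
        var currentTop = paddleR.cssNumber('top');
        // Move a fraction of the remaining distance each frame; clamp the factor
        // so a long frame never overshoots the target.
        var t = Math.min(1, LERP_FACTOR * dt);
        paddleR.css({ top: currentTop + (targetTop - currentTop) * t + 'px' });
    }
    requestAnimationFrame(render);
}
requestAnimationFrame(render);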
Update the position immediately on the client side. Then send the movement message to the server.
When you get a message back from the server sync the position to the server's value.
This way the client movement should still seem smooth on flaky or high-latency connections. However, in some extreme cases the client may be so far out of sync that the paddle appears to be in a position it's not (the ball may appear to go through the paddle), though either way a high ping is going to cause problems.
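Sketched out under the same assumptions as before (names like paddlePos and the clamping bounds are illustrative, and the server is treated as authoritative), that flow could look like this:

// Client: apply the move immediately, then tell the server.
function moveLocalPaddle(delta) {
    paddleL.css({ top: paddleL.cssNumber('top') + delta + 'px' });
    socket.emit("moveReq", delta);
}

// Client: whenever the server reports the authoritative position, snap to it.
socket.on("paddlePos", function (data) {
    paddleL.css({ top: data.top + 'px' });
});

// Server: keep the authoritative position, clamp it, and echo it back.
io.on("connection", function (socket) {
    var top = 100;   // starting position (arbitrary)
    socket.on("moveReq", function (delta) {
        top = Math.max(0, Math.min(385, top + delta));      // 385 = arena height minus paddle height (assumed)
        socket.emit("paddlePos", { top: top });             // reconcile the player who moved
        socket.broadcast.emit("movePaddle", { top: top });  // update the opponent
    });
});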

Fabricjs canvas objects not rendering properly

I need help with Fabric.js: objects are not rendering properly when I clone and shrink them onto another canvas.
If I do it with only a text object there is no problem; kindly look at the fiddle below (after saving):
FIDDLE
If I load an image, the objects do not align properly:
FIDDLE
Kindly help me resolve this.
Thank you.
The reason the image does not respect the zoom is that the image loads asynchronously. You load from JSON, you do the zoom logic, and a moment later the image finishes loading and sets its own width and height.
Other than that, you have to use zoom the right way.
Everyone scales manually, changing left and top by dividing or multiplying by the scale factor.
The better way is to use
canvas.setZoom(scale);
This will save you a lot of headaches.
$(function () {
    scale = 0.5;
    canvas1 = new fabric.Canvas('c1');
    canvas1.setDimensions({
        "width": canvas.getWidth() * scale,   // `canvas` is the original, full-size canvas
        "height": canvas.getHeight() * scale
    });
    var json = localStorage.getItem('save');
    canvas1.loadFromJSON(json, function () {
        canvas1.renderAll();
    });
    canvas1.setZoom(scale);
    canvas1.renderAll();
});
Check the updated fiddle:
https://jsfiddle.net/asturur/Lep9w01L/11/

iPhone help with animations CGAffineTransform resetting?

Hi, I am totally confused by CGAffineTransform animations. All I want to do is move a sprite from a position on the right to a position on the left. When it has stopped, I want to "reset" it, i.e. move it back to where it started. If the app exits (with multitasking), I want to reset the position again on launch and repeat the animation.
This is what I am using to run the animation:
[UIImageView animateWithDuration:1.5
                            delay:0.0
                          options:(UIViewAnimationOptionAllowUserInteraction |
                                   UIViewAnimationOptionCurveLinear)
                       animations:^(void){
                           ufo.transform = CGAffineTransformTranslate(ufo.transform, -270, 100);
                       }
                       completion:^(BOOL finished){
                           if (finished) {
                               NSLog(@"ufo finished");
                               [self ufoAnimationDidStop];
                           }
                       }];
As I understand it, a CGAffineTransform just visually makes the sprite look like it has moved but doesn't actually move it. Therefore when I try to "reset" the position using
ufo.center = CGPointMake(355, 70);
it doesn't do anything.
I do have something working: if I call
ufo.transform = CGAffineTransformTranslate(ufo.transform, 270, -100);
it resets. The problem is that if I exit the app halfway through the animation, then when it restarts it doesn't necessarily start from the beginning and it doesn't go to the right place; it basically goes crazy!
Is there a way to just remove any transforms applied to it? I'm considering just using a timer, but that seems silly when this method should work. I've been struggling with this for some time, so any help would be much appreciated.
Thanks
Applying a transform to a view doesn't actually change the center or the bounds of the view; it just changes the way the view is drawn on screen. You want to set the transform back to CGAffineTransformIdentity to make it look "normal" again. You can reset it to the identity before you start your animation and then set the transform you want to animate to inside the animation block.
