I just made a multiplayer browser implementation of Pong using socket.io and have a question about real-time synchronization. Each player's paddle is just a colored-in div that moves up or down depending on which key they're pressing. When testing with two different computers through AWS, I noticed the movement was nearly perfectly synchronized, but sometimes not exact. The player who controls a paddle moves it locally, while the server continuously sends their opponent data about whether that paddle moved up or down.
My question: should I be doing all the movement server-side? That is, a key press sends the server a request, and the server emits to both players that the paddle should move. Or is my approach, where your own paddle's movement is handled locally, fine?
My code right now looks like this:
Client-side code that checks whether the up or down key is pressed and emits a move request:
paddleMove = 0; // Keep track of which direction to move
speed = 5;

if (paddleL.position().top > arena.position().top) { // If left paddle not at top
    if (keysPressed.up) paddleMove -= speed;
}
if (paddleL.position().top + paddleL.height() < arena.position().top + arena.height() - 15) { // If left paddle not at bottom
    if (keysPressed.down) paddleMove += speed;
}

paddleL.css({ // Move paddle locally
    top: paddleL.cssNumber('top') + paddleMove + 'px'
});

socket.emit("moveReq", paddleMove); // Send to server
The above code is in an interval that runs every fraction of a second.
Then the server side looks like this:
socket.on('moveReq', function(data){ // Send to opponent that other paddle moved
    socket.broadcast.emit("movePaddle", data);
});
This in turn triggers another part of the client-side code to move the opponent's paddle:
socket.on("movePaddle", function(data){
var paddleMove = 0;
paddleMove += data; // Data is speed (direction) of movement
paddleR.css({ // Move right paddle
top: paddleR.cssNumber('top') + paddleMove + 'px'
});
As I said, the movement right now is pretty good but not perfect. Should I do none of the movement locally and drive it all from server emits instead?
I am currently also working on a multiplayer game using WebSockets.
If you send real-time player positions, it will take a lot of bandwidth.
So far, what I did was prediction and lerping.
Suppose there are two players connected, named A and B.
Let's say player A starts at x=0 (t=0), so on B's screen A is also at x=0.
Now we start emitting A's x-position every 1 s (tune this to your game; for an FPS, lower the interval).
After 1 s (t=1), A's position is x=2 (2 px, in your terms).
B receives A's position after 1.2 s (late due to network issues). Now we have to lerp the displayed position from x=0 toward x=2, predicting the timing. (All of this can be done in script.)
Basic formula (this runs in an update function):
CurrentXPosition += (NewXPosition - CurrentXPosition) * deltaTime;
You will definitely have to tune the formula above. deltaTime is recalculated every time we receive a new position, so we lerp and predict in one step using deltaTime.
Lerping smooths the player's movement, and deltaTime, acting as the prediction factor, controls the timing and smoothness of the lerp so it lines up with the received position.
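A minimal sketch of that idea in plain JavaScript (the 'playerPosition' event name, the drawRemotePaddleAt helper and the smoothing constant are placeholders for illustration, not anything from the question's code):

var targetX = 0; // last x received from the server for the remote player
var remoteX = 0; // x we actually draw for the remote player

socket.on('playerPosition', function (x) { // server pushes a fresh position
    targetX = x;
});

function update(dt) { // called every frame; dt = seconds since last frame
    var lerpFactor = Math.min(1, dt * 10); // tune the constant 10 to taste
    remoteX += (targetX - remoteX) * lerpFactor; // move part of the way there
    drawRemotePaddleAt(remoteX);
}

With this, the remote paddle drifts smoothly toward each received position instead of teleporting once per network update.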
Update the position immediately on the client side. Then send the movement message to the server.
When you get a message back from the server, sync the position to the server's value.
This way the client's own movement should still feel smooth on flaky or high-latency connections. However, in some extreme cases the client may be so far out of sync that the paddle appears to be somewhere it isn't (the ball may seem to go through the paddle), though either way a high ping is going to cause problems.
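As a rough sketch of what that reconciliation could look like on top of the code in the question (the "paddleSync" event and the server-side paddleTop bookkeeping are assumptions, not part of the original setup):

// Client: move locally right away (as you already do), then reconcile
socket.emit("moveReq", paddleMove);
socket.on("paddleSync", function (serverTop) {
    // snap (or lerp) the local paddle to the server's authoritative position
    paddleL.css({ top: serverTop + 'px' });
});

// Server: keep the authoritative position and echo it back to both players
// (only the left paddle shown for brevity)
var paddleTop = { left: 0, right: 0 };
socket.on('moveReq', function (delta) {
    paddleTop.left += delta;                    // apply the move server-side
    socket.emit("paddleSync", paddleTop.left);  // back to the paddle's owner
    socket.broadcast.emit("movePaddle", delta); // to the opponent, as before
});

The client stays responsive because it never waits for the round trip, and any drift is corrected whenever a paddleSync arrives.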
I am having trouble figuring out how the normal works.
I am using Godot 4 RC1.
I created a StaticBody3D and want to place a 3D object standing upright on it, the way we stand on Earth.
This is my code for the StaticBody3D:
func _on_input_event(camera, event, position, normal, shape_idx):
    player.global_position = position
    var q = Quaternion.from_euler(normal)
    player.basis = q
Basically, I use this code to capture the mouse-over position on the StaticBody3D, place my player (a Mesh3D) at that position, and align the player's basis to the normal.
If the mouse is at the top of the planet, it turns out OK.
But anywhere else, it just goes haywire.
How can I resolve this?
I am running a spinning animation after an action on a button. But to know where the animation needs to stop (the final angle), I need to make a call to the backend. So I start the animation, and when I receive the response from the backend I update it. Here is the code:
const anim = scene.tweens.add({
    targets: [targetContainer],
    angle: angle,
    duration: WHEEL_ROTATION_DURATION,
    ease: 'Cubic.easeOut',
})
And the update:
anim.data[0].end += newAngle
It works properly, but the moment the angle is updated the animation produces a glitch/jump that is not nice to see.
Any idea on how to make it smooth?
Answer from Antriel on the Phaser forums:
Best way to make it smooth would be to manually update the angle in update, using constant speed (so something like container.angle += angleSpeed * dt). Then afterwards, when you get the backend response, just start the easeOut tween with the correct end position, and specify speed rather than the duration.
Works very well.
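For reference, here is a sketch of that approach in Phaser 3 (ROTATION_SPEED and the way the remaining duration is derived are my own choices; tune them so the hand-off speed matches the constant-speed phase):

var ROTATION_SPEED = 360;      // degrees per second while waiting for the backend
var waitingForBackend = true;

function update(time, delta) { // scene update loop: spin at constant speed
    if (waitingForBackend) {
        targetContainer.angle += ROTATION_SPEED * (delta / 1000);
    }
}

function onBackendResponse(finalAngle) { // backend answered: hand over to a tween
    waitingForBackend = false;
    var remaining = Math.abs(finalAngle - targetContainer.angle);
    scene.tweens.add({
        targets: [targetContainer],
        angle: finalAngle,
        duration: (remaining / ROTATION_SPEED) * 1000, // duration derived from speed, not a constant
        ease: 'Cubic.easeOut',
    });
}

Because the tween's end value is only set once, after the backend response arrives, there is no mid-animation retargeting and therefore no visible jump.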
I'm making a drone simulator with osgEarth. I have never used OSG before, so I have some trouble understanding cameras.
I'm trying to move a camera to a specific location (lat/lon) with a specific roll, pitch and yaw in osgEarth.
I need to have multiple cameras, so I use a composite viewer. But I don't understand how to move a particular camera.
Currently I have one camera with the EarthManipulator, which works fine.
class NovaManipulator : public osgGA::CameraManipulator {

    osg::Matrixd NovaManipulator::getMatrix() const
    {
        // drone_ is only a struct which contains updated information
        float roll = drone_->roll(),
              pitch = drone_->pitch(),
              yaw = drone_->yaw();
        auto position = drone_->position();

        osg::Vec3d world_position;
        position.toWorld(world_position);
        cam_view_->setPosition(world_position);

        const osg::Vec3d rollAxis(0.0, 1.0, 0.0);
        const osg::Vec3d pitchAxis(1.0, 0.0, 0.0);
        const osg::Vec3d yawAxis(0.0, 0.0, 1.0);

        // I found this code here:
        // https://stackoverflow.com/questions/32595198/controling-openscenegraph-camera-with-cartesian-coordinates-and-euler-angles
        osg::Quat rotationQuat;
        rotationQuat.makeRotate(
            osg::DegreesToRadians(pitch + 90), pitchAxis,
            osg::DegreesToRadians(roll), rollAxis,
            osg::DegreesToRadians(yaw), yawAxis);
        cam_view_->setAttitude(rotationQuat);

        // I don't really understand this part either
        auto nodePathList = cam_view_->getParentalNodePaths();
        return osg::computeLocalToWorld(nodePathList[0]);
    }

    osg::Matrixd NovaManipulator::getInverseMatrix() const
    {
        // Don't know why this needs to be inverted
        return osg::Matrix::inverse(getMatrix());
    }
};
Then I install the manipulator on a Viewer. When I simulate the world, the camera is in the right place (lat/lon/height).
But the orientation is completely wrong and I cannot find where I need to "correct" the axes.
My drone is in France, but the "up" vector is wrong: it still points north instead of being "vertical" relative to the ground.
See what I'm getting on the right-hand camera.
I need yaw to be relative to north (0 ==> north), and when my roll and pitch are set to zero I need to be "parallel" to the ground.
Is my approach (writing a Manipulator) the best way to do that?
Can I put the camera object inside the scene graph (under an osgEarth::GeoTransform, which works for my model)?
Thanks :)
In the past, I have done a cute trick involving using an ObjectLocator object (to get the world position and plane-tangent-to-surface orientation), combined with a matrix to apply the HPR. There's an invert in there to make it into a camera orientation matrix rather than an object placement matrix, but it works out ok.
http://forum.osgearth.org/void-ObjectLocator-setOrientation-default-orientation-td7496768.html#a7496844
It's a little tricky to see what's going on in your code snippet as the types of many of the variables aren't obvious.
AlphaPixel does lots of telemetry / osgEarth type stuff, so shout if you need help.
It is probably just the order of your rotations - matrix and quaternion multiplications are order dependent. I would suggest:
Try swapping the order of pitch and roll in your makeRotate call.
If that doesn't work, set all but one rotation to 0° at a time, making sure each is what you expect; then you can play with the orders (there are only 6 possible orders).
You could also make individual quaternions q1, q2, q3, where each represents h, p and r individually, and multiply them yourself to control the order. This is what the overload of makeRotate you're using does under the hood.
Normally you want to do your yaw first, then your pitch, then your roll (or skip roll altogether if you like), but I don't recall off-hand whether osg::Quat concatenates in a pre-mult or post-mult fashion, so it could be p-r-y order.
I have been very disappointed to find that, now that I am ready to migrate my HTML5 app onto phones using PhoneGap, the ability to play audio is like perpetual motion: it is almost there, but not quite.
The answer for Android (pre-Ice Cream Sandwich, at least) is the PhoneGap Media API. According to everyone.
The problem is that I REALLY care about what the current time is while playing back my audio. On my Mac running Chrome I timed how long it takes to do:
pos=audio.currentTime
and it is less than a millisecond.
With the PhoneGap Media API I must call an asynchronous function, WAIT until it calls back, and only then do I get the current media position. This takes anywhere from 5 to over 150 milliseconds. It is not consistent, and it is distressingly slow.
It also raises the question: what time is it? If it takes 150 ms to get the time, did I get the time at the beginning of those 150 ms or at the end? I really can't be sure that the time I got is all that close to the current time.
It's a good thing I am not still writing submarine navigation software, because with such a horrible source of time I would never be able to determine where I am.
I am now using the system clock to keep track of time, and using getCurrentPosition() to re-sync from time to time. However, I can still be off by over a hundred ms.
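For what it's worth, the workaround currently looks roughly like this (a sketch; the file name, the 2-second resync interval and the lack of drift compensation are all placeholder choices):

var media = new Media("myfile.mp3");  // Cordova Media object (file name is just an example)
var basePos = 0;                      // last position reported by the plugin, in seconds
var baseTime = 0;                     // system time (ms) when that report arrived

function estimatedPosition() {
    // cheap, synchronous estimate based on the system clock
    return basePos + (Date.now() - baseTime) / 1000;
}

function resync() {
    media.getCurrentPosition(function (pos) {
        if (pos >= 0) {
            basePos = pos;         // the plugin reports seconds, or -1 if not playing
            baseTime = Date.now(); // no way to know how stale this value already is
        }
    });
}

media.play();
setInterval(resync, 2000); // re-sync against the plugin every couple of seconds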
My question: Anyone got a better approach?
EDIT:
Looking at the code for Cordova:
/**
 * Get current position of playback.
 *
 * @return position in msec or -1 if not playing
 */
public long getCurrentPosition() {
    if ((this.state == STATE.MEDIA_RUNNING) || (this.state == STATE.MEDIA_PAUSED)) {
        int curPos = this.player.getCurrentPosition();
        this.handler.webView.sendJavascript("cordova.require('cordova/plugin/Media').onStatus('" + this.id + "', " + MEDIA_POSITION + ", " + curPos / 1000.0f + ");");
        return curPos;
    }
    else {
        return -1;
    }
}
It looks to me like this should execute pretty fast...except for that call to sendJavascript(). I wonder if that is where the mysterious delay is originating?
For my purposes I don't need an onStatus event: I need to get the current position as quickly as I can. So I wonder if I can remove that line of code... I guess that would necessitate a change to the Cordova source itself. :(
OK, I hope I don't mess this up; I have had a look for some answers but can't find anything. I am trying to make a simple sampler in openFrameworks using the FMOD sound player in 3D mode. I can make a single instance work fine (recording a new file using libsndfilerecorder and then playing it back and moving it around in surround).
However, I want to have 8 layers of looping audio that I can record and replace one layer at a time in a live show. I run into a lot of problems as soon as I have more than one layer.
The first part of my question relates to the FMOD 3D modes. It is listener-relative, so I have to define the position of my listener for every sound (I would prefer head-relative mode, but I cannot make that work at all). Again, this works fine when I am using a single player, but with multiple players only the last listener I update actually works.
The main problem I have is that when I use multiple players I get distortion, and often a mix of the other currently playing sounds (even when the microphone cannot hear them) in my new recordings. Is there an incompatibility between libsndfilerecorder and FMOD?
Here I initialise the players:
for (int i = 0; i < CHANNEL_COUNT; i++) {
    lvelocity[i].set(1, 1, 1);
    lup[i].set(0, 1, 0);
    lforward[i].set(0, 0, 1);
    lposition[i].set(0, 0, 0);
    sposition[i].set(3, 3, 2);
    svelocity[i].set(1, 1, 1);
    //player[1].initializeFmod();
    //player[i].loadSound( "1.wav" );
    player[i].setVolume(0.75);
    player[i].setMultiPlay(true);
    player[i].play();
    setupHold[i] = false;
    recording[i] = false;
    channelHasFile[i] = false;
    settingOsc[i] = false;
}
When I am recording, I unload the file and make sure the positions of the player whose sound is not loaded are not being updated.
void fmodApp::recordingStart( int recordingId ){
    if (recording[recordingId] == false) {
        setupHold[recordingId] = true; // this stops the position updating
        cout << "Start recording Channel " + ofToString(recordingId+1) + " setup hold is true \n";
        pt = getDateName() + ".wav";
        player[recordingId].stop();
        player[recordingId].unloadSound();
        audioRecorder.setup(pt);
        audioRecorder.setFormat(SF_FORMAT_WAV | SF_FORMAT_PCM_16);
        recording[recordingId] = true; // this starts the libSndFileRecorder
    }
    else {
        cout << "Channel " + ofToString(recordingId+1) + " is already recording \n";
    }
}
And I stop the recording like this.
void fmodApp::recordingEnd( int recordingId ){
    if (recording[recordingId] == true) {
        recording[recordingId] = false;
        cout << "Stop recording " + ofToString(recordingId+1) + " \n";
        audioRecorder.finalize();
        audioRecorder.close();
        player[recordingId].loadSound(pt);
        setupHold[recordingId] = false;
        channelHasFile[recordingId] = true;
        cout << "File recorded channel " + ofToString(recordingId+1) + " file is called " + pt + "\n";
    }
    else {
        cout << "Sorry track " + ofToString(recordingId+1) + " is not recording";
    }
}
I am careful not to interrupt the updating process but I cannot see where I am going wrong.
Many Thanks
To deal with the distortion: I think you will need to lower the volume of each channel on playback; try setting each volume to 1/8 of the maximum. There isn't any protection against clipping, so if the sum of the sounds goes above 1.0f you will clip and it will sound bad.
To deal with the crosstalk when recording: I guess you have some sort of feedback going on with the output, i.e. the output sound is being fed back into the input channel, probably by the operating system. If you run another app that makes sound, do you also get that in your recording? If so, then that is probably your problem.
If it works with one channel, try it with just two instead of jumping straight up to eight channels.
In general I would try to abstract the playback/record logic and the sound player/recorder into a separate class. You have a couple of booleans there, and it's really easy to make mistakes with more than one boolean. Is there any way you can replace the booleans with an enum or an integer state variable?
EDIT: I didn't see the date on your question :D I suppose you have managed to do it by now. Maybe it helps somebody else.
I'm not sure if I can answer everything in your question, but I can share how I've worked with 3D sound in FMOD. I haven't worked with recording, though.
For my own application a user can place sounds in 3D space around himself. For this I only have one Listener and multiple Sounds. In your code you're making a listener for every sound; are you sure that is necessary? I would imagine this causes the multiple listeners to pick up multiple sounds and output them all to your sound card, so from the second sound+listener onward, both listeners pick up both sounds? I'm not 100% sure, but it sounds plausible to me.
I made a class to create sound objects (and one listener). Then I use a vector to store the objects and loop through them to render them.
My class SoundBox basically holds all the necessary things for FMOD.
Making a "SoundBox" object and adding it to my soundboxes vector:
SoundBox * box = new SoundBox(box_loc, box_rotation, box_color);
box->loadVideo(ofToDataPath(video_files[soundboxes.size()]));
box->loadSound(ofToDataPath(sound_files[soundboxes.size()]));
box->setVolume(1);
box->setMultiPlay(true);
box->updateSound(box_loc, box_vel);"
box->play();
soundboxes.push_back(box);
Constructor for the SoundBox. I use a similar constructor in the same class for the listener, but since the listener will always be at the origin for me, it doesn't take any arguments and just sets all the listener locations to 0. The constructor for the listener only gets called once, while the one for the Sound gets called whenever I want to make a new one. (Don't mind box_color; I'm drawing physical boxes in this case.)
SoundBox::SoundBox(ofVec3f box_location, ofVec3f box_rotation, ofColor box_color) {
    _box_location = box_location;
    _box_rotation = box_rotation;
    _box_color = box_color;

    sound_position.x = _box_location.x;
    sound_position.y = _box_location.y;
    sound_position.z = _box_location.z;

    sound_velocity.x = 0;
    sound_velocity.y = 0;
    sound_velocity.z = 0;
}
Then I just use a for loop to go through them and play them if they're not already playing. I also have some similar code to select them and move them around.
for(auto box = soundboxes.begin(); box != soundboxes.end(); box++){
    if(!(*box)->getIsPlaying())
        (*box)->play();
}
I really hope this helped. I'm not a very experienced programmer, but this is how I got FMOD with multiple sounds to work in openFrameworks, and I hope you can use some of it. I just dumped as much of my code as I could :D
My main suggestion is to use one listener instead of several. Having a class for creating the sounds is also useful if you, for instance, want to relocate the sounds after the initial placement.
Hope it helps and good luck :)