I have a question regarding server-client data transmission. The data is sent by the client after completing a simple protocol, but I found there is a delay on the server side. The client and the server are tested on the same PC, which has an i5 core with an SSD and 8 GB of RAM.
The way I measured the delay: after the client says "Sending," both sides record the current system time in milliseconds. The data itself is the current system time sent by the client, and the server checks how far behind it is. The delay starts at 0 ms, climbs to about 90 ms, and then stabilizes around 40 ms. I wonder whether this delay is normal.
Here is the code of the server (multi-threaded):
....
while (!ScriptWillAcessHere) {
    inputLine = in.readLine();
    // Greetings
    if (i == 0)
    {
        outputLine = SIMONSAYS.processInput(inputLine);
        out.println(outputLine);
    }
    if (inputLine.equals("Sending")) {
        i = 1;
    }
    if (i >= 1) { // Javascript will access this block
        if (i == 1) {
            StartTime = System.currentTimeMillis();
            System.out.println(StartTime);
            i++;
        }
        Differences = System.currentTimeMillis() - Double.parseDouble(inputLine);
        saveSvr.write(Double.toString(Differences) + "\n");
        ...
        // Checking elapsed time below:
    }
}
Here is the code of the client (single-threaded):
....
if (Client.equals("Sending"))
{
    while (bTimer)
    {
        ins++;
        local_time = System.currentTimeMillis();
        out.println(local_time);
        if (ins >= 100000)
        {
            out.println("End of Message");
            break;
        }
    }
}
Thanks,
This line must be removed from the while() loop; it causes massive CPU load and delay on the server side:
Differences = System.currentTimeMillis() - Double.parseDouble(inputLine);
Instead, if anyone needs to compare the server's local time with the client's local time, have them ping each other first, then record each side's local time at the beginning of the transmission and save both values on the server.
If there is no delay in the hub, the ping will indicate at most 1 ms of delay and both local times should be practically identical.
Of course, the client's local time must be adjusted according to the server time; that's why we need to save their local time at the beginning of the transmission to find an offset.
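A minimal sketch of that offset idea, with hypothetical names (the ping measurement and the start timestamps are assumed to be captured as described above):
// Hypothetical helpers: estimate the client-server clock offset from the
// timestamps exchanged at the start of the transmission plus the measured ping.
static long estimateOffsetMs(long serverStartMs, long clientStartMs, long pingRoundTripMs) {
    // Assume the one-way trip takes roughly half the measured round-trip time.
    return serverStartMs - clientStartMs - pingRoundTripMs / 2;
}

// For each received client timestamp, the server-side delay then becomes:
static long delayMs(long clientTimestampMs, long offsetMs) {
    return System.currentTimeMillis() - (clientTimestampMs + offsetMs);
}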
Also, if the server is doing other tasks at the same time, there will be some extra delay, around 10-15 ms. If the transmission itself doesn't add any delay, the maximum delay of this operation should equal the server's internal delay. I found the server was indeed running other tasks at the same time, and they caused up to 15 ms of delay. So, the total delay on the server is:
Total delay = server internal delay from other tasks + server internal delay on the transmission thread + transmission delay.
I have a server-side function to check if a player is idle:
socket.idle = 0;
socket.cached = {};
socket.checkIdle = function(){
    socket.idle++;
    console.log(socket.idle);
    if(socket.cached.x != players[id].x || socket.cached.y != players[id].y){
        socket.idle = 0;
    }
    socket.cached = players[id];
    if(socket.idle > 12){
        socket.disconnect();
    }
}
socket.interval = setInterval(socket.checkIdle, 1000);
I've noticed that even after the player gets booted/disconnected for being idle too long, the server still console logs socket.idle for it.
Am I going about this the wrong way? Also, should I then clear the interval when the player disconnects?
socket.on('disconnect', function(){
clearInterval(socket.interval);
});
You certainly shouldn't leave a setInterval() running for a player that is no longer connected. Your server will just have more and more of these running, wasting CPU cycles and possibly impacting your server's responsiveness or scalability.
I've noticed that even after the player gets booted/disconnected for being idle too long, the server still console logs socket.idle for it.
Yeah, that's because the interval is still running and, in fact, it even keeps the socket object from getting garbage collected. All of that is bad.
Also, should I then clear the interval when the player disconnects?
Yes, when a socket disconnects, you must clear the interval timer associated with that socket.
Am I going about this the wrong way?
If you keep the architecture of a polling interval timer for each separate socket, then this is the right way to clear the timer when the socket disconnects.
But I would think that you could come up with another design that doesn't need to regularly "poll" for idleness at all. It appears you want a 12-second timeout such that if the player hasn't moved within 12 seconds, you disconnect them. There's really no reason to check for this every second. You could just set a single timer with setTimeout() for 12 seconds from when the user connects, and then each time you get notified of player movement (which your server must already be notified about, since you're referencing players[id].x and players[id].y), you clear the old timer and set a new one. When the timer fires, you must have gone 12 seconds without motion and can then disconnect. This is more typically how a timeout-type timer would work; see the sketch below.
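A minimal sketch of that idea (the io server object and the 'playerMove' event name are placeholders for wherever your server already handles movement):
const IDLE_LIMIT_MS = 12 * 1000;

function resetIdleTimer(socket) {
    // Restart the 12-second countdown whenever the player does something.
    clearTimeout(socket.idleTimer);
    socket.idleTimer = setTimeout(() => socket.disconnect(), IDLE_LIMIT_MS);
}

io.on('connection', (socket) => {
    resetIdleTimer(socket);

    // Wherever your server already processes movement, reset the timer.
    socket.on('playerMove', () => resetIdleTimer(socket));

    // No timer is left running once the socket goes away.
    socket.on('disconnect', () => clearTimeout(socket.idleTimer));
});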
Before everybody marks this as a dup, let me state that I know my fair share of network programming, and this question is my attempt to understand something that still puzzles me even after finding the "solution".
The setup
I've spent the last few weeks writing some glue code to incorporate a big industrial system into our current setup. The system is controlled by a Windows XP computer (PC A), which in turn is controlled from an Ubuntu 14.04 system (PC B) by sending a steady stream of UDP packets at 2000 Hz. It responds with UDP packets containing the current state of the system.
Care was taken to ensure that the 2000 Hz rate was held, because there is a 3 ms timeout after which the system faults and returns to a safe state. This involves measuring and accounting for inaccuracies in std::this_thread::sleep_for. Measurements show that there is only a 0.1% deviation from the target rate.
The observation
Problems started when I began receiving the state response from the system. The controlling side on PC B looks roughly like this:
forever at 2000Hz {
    send current command;
    if (socket.available() >= 0) {
        receive response;
    }
}
edit 2: Or in real code:
auto cmd_buf = ...
auto rsp_buf = ...
while (true) {
    // prepare and send command buffer
    cmd_buf = ...
    socket.send(cmd_buf, endpoint);
    if (socket.available() >= 0) {
        socket.receive(rsp_buf);
        // the results are then parsed and stored, nothing fancy
    }
    // time keeping
}
The problem is that whenever the receiving portion of the code was present on PC B, PC A started to run out of memory within seconds while trying to allocate receive buffers. Additionally, it raised errors stating that the timeout was missed, which was probably due to packets not reaching the control software.
Just to highlight the strangeness: PC A is the PC sending the UDP packets in this case.
Edit in response to EJP: this is the (now) working setup. It started out as:
forever at 2000Hz {
    send current command;
    receive response;
}
But by the time the (blocking) receive returned, the deadline was already missed. Hence the availability check.
Another thing that was tried was to receive in a separate thread:
// thread A
forever at 2000Hz {
    send current command;
}

// thread B
forever {
    receive response;
}
This displays the same behavior as the first version.
The solution
The solution was to set the socket on PC B to non-blocking mode. One line, and all problems were gone.
I am pretty sure that even in blocking mode the deadline was met. There should be no performance difference between blocking and non-blocking mode when there is just one socket involved. Even if checking the socket for available data takes a few microseconds more than in non-blocking mode, it shouldn't make a difference when the overall deadline is met accurately.
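For reference, a minimal sketch of that one-line change, assuming a Boost.Asio UDP socket (the post does not name the actual library, so treat the API details as an illustration only):
// Hedged sketch, assuming Boost.Asio: put the socket into non-blocking mode so
// receive() returns immediately when no datagram is queued, instead of
// stalling the 2000 Hz send loop.
#include <boost/asio.hpp>
#include <array>

void poll_response(boost::asio::ip::udp::socket& socket)
{
    socket.non_blocking(true); // the "one line" fix (normally set once after opening)

    std::array<char, 1500> rsp_buf{};
    boost::system::error_code ec;
    std::size_t n = socket.receive(boost::asio::buffer(rsp_buf), 0, ec);

    if (ec == boost::asio::error::would_block) {
        // Nothing queued right now; carry on with the control loop.
    } else if (!ec) {
        // Parse the n bytes of response here.
        (void)n;
    }
}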
Now ... what is happening here?
If I read your code correctly, and referring to this code:
forever at 2000Hz {
    send current command;
    receive response;
}
Examine the difference between the blocking and the non-blocking socket. With a blocking socket you send the current command and then you are stuck waiting for the response. By that time, I would guess, you have already missed the 2 kHz goal.
With a non-blocking socket you send the current command and try to receive whatever is in the receive buffers, but if there is nothing there you return immediately and continue your tight 2 kHz sending loop. This explains to me why your industrial control system works fine with the non-blocking code.
I'm making a simple online game and I'm suffering from out-of-sync issues. The server is implemented with IOCP, and since the game will almost always be played on a LAN, the latency is relatively small.
The core of the networking algorithm can be described as follows (there are 4 clients in a single game):
Clients send their actions and the elapsed time since the last frame to the server every frame, then wait until they get a response from the server.
The server collects all four clients' messages, concatenates them together, then sends the result to all four clients.
On receiving the response, clients update their game with the messages provided in the response.
Now, I can see that after some time the four games go out of sync. It can be observed that the game I'm controlling is different from the other three (which means the other three agree with each other), and just walking around makes the problem happen.
Below is the code, in case it helps:
First, the server. Every message will be handled in a separate thread.
while(game_host[now_roomnum].Ready(now_playernum)) // wait for the last message to be taken away
{
    Sleep(1);
}
game_host[now_roomnum].SetMessage(now_playernum, recv_msg->msg);
game_host[now_roomnum].SetReady(now_playernum, true);
game_host[now_roomnum].SetUsed(now_playernum, false);
while(!game_host[now_roomnum].AllReady()) // wait for all clients' messages
{
    Sleep(1);
}
string all_msg = game_host[now_roomnum].GetAllMessage();
game_host[now_roomnum].SetUsed(now_playernum, true);
while(!game_host[now_roomnum].AllUsed()) // wait until all four responses are ready to send
{
    Sleep(1);
}
game_host[now_roomnum].SetReady(now_playernum, false); // ready to receive the next message
strcpy_s(ret.msg, all_msg.c_str());
And the clients' CGame::Update(float game_time) method:
CMessage msg = MakeMessage(game_time); // make a message with the actions taken in this frame (pushed into a queue) and the elapsed time between the two frames
CMessage recv = p_res_manager->m_Client._SendMessage(msg); // send the message and wait for the server's response
stringstream input(recv.msg);
int i;
rest_time -= game_time;
float game_times[MAX_PLAYER+1] = {0};
// analyze received operations
for(i = 1; i <= MAX_PLAYER; i++)
{
    int n;
    input >> n;
    input >> game_times[i]; // read the number of actions n, and player[i]'s elapsed game time game_times[i]
    for(int oper_i = 1; oper_i <= n; oper_i++)
    {
        int now_event;
        UINT nchar;
        input >> now_event >> nchar;
        if(now_event == int(Event::KEY_UP))
            HandleKeyUpInUpdate(i, nchar);
        else // if(now_event == int(Event::KEY_DOWN))
            HandleKeyDownInUpdate(i, nchar);
    }
}
// update players
for(i = 1; i <= MAX_PLAYER; i++)
{
    player[i].Update(game_times[i]); // something like s[i] = v[i] * game_time[i]
}
Thank you very much. I'll provide more detail if necessary.
Your general design is wrong; that's why you go out of sync at some point. The server should never deal with the FPS of the clients; that is a fundamental design issue. In general, the server calculates everything based on the inputs the clients send to it, and the clients just request the current state of their surroundings from the server. That way you are FPS-independent on the server side, which means you can update the scene on the server as fast as possible while the clients just retrieve the current state.
If you update the entities on the server FPS-dependently per user, you have to keep a local copy of every entity for every client; otherwise it is impossible to handle the different delta times. A sketch of the server-authoritative approach follows below.
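A minimal sketch of that server-authoritative idea with a fixed timestep (the helper functions here are hypothetical placeholders, not part of the original code):
// Hypothetical sketch: the server advances the world with one fixed dt for
// every player; clients only submit inputs and receive state snapshots.
#include <chrono>
#include <thread>

// Placeholders assumed to exist elsewhere in the game server.
void apply_pending_inputs() { /* apply all queued client inputs */ }
void update_world(float dt_seconds) { /* e.g. pos += velocity * dt_seconds */ }
void broadcast_state() { /* send the same snapshot to all clients */ }

void server_loop()
{
    using clock = std::chrono::steady_clock;
    const auto dt = std::chrono::milliseconds(16); // fixed step, roughly 60 Hz
    auto next = clock::now();

    while (true) {
        apply_pending_inputs();
        update_world(std::chrono::duration<float>(dt).count());
        broadcast_state();

        next += dt;
        std::this_thread::sleep_until(next);
    }
}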
The other possible design is that your server just syncs the clients: every client calculates the scene on its own and then sends its state to the server. The server then distributes that state to the other clients, and the clients decide what to do with this information.
Any other design will lead to major problems; I strongly advise against using one.
If you have any further questions, feel free to contact me.
Can someone offer some more guidance on the use of the Azure Service Bus OnMessageOptions.AutoRenewTimeout
http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.onmessageoptions.autorenewtimeout.aspx
as I haven't found much documentation on this option, and I would like to know if this is the correct way to renew a message lock.
My use case:
1) The message processing queue has a lock duration of 5 minutes (the maximum allowed).
2) The message processor uses the OnMessageAsync message pump to read from the queue (with ReceiveMode.PeekLock). The long-running processing may take up to 10 minutes before manually calling msg.CompleteAsync.
3) I want the message processor to automatically renew its lock up until the time it's expected to complete processing (~10 minutes). If it hasn't completed after that period, the lock should be released automatically.
Thanks
-- UPDATE
I never did end up getting any more guidance on AutoRenewTimeout. I ended up using a custom MessageLock class that auto renews the Message Lock based on a timer.
See the gist -
https://gist.github.com/Soopster/dd0fbd754a65fc5edfa9
To handle long message processing you should set AutoRenewTimeout to 10 minutes (in your case). That means the lock will be renewed during those 10 minutes, each time the LockDuration expires.
So if, for example, your LockDuration is 3 minutes and AutoRenewTimeout is 10 minutes, the lock will be renewed automatically every 3 minutes (after 3, 6, and 9 minutes), and it will be released automatically 12 minutes after the message was consumed.
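A minimal sketch of that setup with the classic Microsoft.ServiceBus.Messaging client (the queue name and connection string are placeholders):
using System;
using Microsoft.ServiceBus.Messaging;

class LongProcessingListener
{
    static void Main()
    {
        // Placeholders: substitute your own connection string and queue name.
        var connectionString = "<service-bus-connection-string>";
        var client = QueueClient.CreateFromConnectionString(connectionString, "my-queue");

        var options = new OnMessageOptions
        {
            AutoComplete = false,                        // we call Complete() ourselves
            AutoRenewTimeout = TimeSpan.FromMinutes(10), // keep renewing the PeekLock for up to 10 minutes
            MaxConcurrentCalls = 1
        };

        client.OnMessage(message =>
        {
            // ... long-running processing, up to ~10 minutes ...
            message.Complete();
        }, options);

        Console.ReadLine(); // keep the message pump alive
    }
}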
For my personal taste, OnMessageOptions.AutoRenewTimeout is a bit too rough a lease-renewal option. If one sets it to 10 minutes and, for whatever reason, the message is completed only after 10 minutes and 5 seconds, the message will show up again in the queue, will be consumed by the next stand-by worker, and the entire processing will execute again. That is wasteful and also keeps the workers from executing other unprocessed requests.
To work around this:
Change your worker process to verify whether the item it just received from the queue has already been processed, by looking for a success/failure result stored somewhere. If it has already been processed, call BrokeredMessage.Complete() and move on to wait for the next item.
Periodically call BrokeredMessage.RenewLock() - BEFORE the lock expires, e.g. every 10 seconds - and set OnMessageOptions.AutoRenewTimeout to TimeSpan.Zero. That way, if the worker processing an item crashes, the message will return to the queue sooner and will be picked up by the next stand-by worker.
I had the very same problem with my workers. Even though the message was being processed successfully, due to the long processing time Service Bus removed the lock applied to it and the message became available for receiving again. Another available worker then took the message and started processing it again. Please correct me if I'm wrong, but in your case OnMessageAsync will be called many times with the same message and you will end up with several tasks processing it simultaneously. At the end of the process a MessageLockLost exception will be thrown because the message no longer has a lock applied.
I solved this with the following code.
_requestQueueClient.OnMessage(
    requestMessage =>
    {
        RenewMessageLock(requestMessage);

        var messageLockTimer = new System.Timers.Timer(TimeSpan.FromSeconds(290).TotalMilliseconds);
        messageLockTimer.Elapsed += (source, e) =>
        {
            RenewMessageLock(requestMessage);
        };
        messageLockTimer.AutoReset = false; // by default it is true
        messageLockTimer.Start();

        /* ----- handle requestMessage ----- */

        requestMessage.Complete();
        messageLockTimer.Stop();
    });

private void RenewMessageLock(BrokeredMessage requestMessage)
{
    try
    {
        requestMessage.RenewLock();
    }
    catch (Exception exception)
    {
        // Ignore renewal failures (e.g. the lock has already been lost).
    }
}
It has been a few months since your post and maybe you have solved this by now; if so, could you share your solution?
While working on a timing-sensitive project, I used the code below to test the granularity of the timing events available, first on my desktop machine in Firefox, then as Node.js code on my Linux server. The Firefox run produced predictable results, averaging 200 fps with a 1 ms timeout and indicating that I had timing events with 5 ms granularity.
Now, I know that if I used a timeout value of 0, the Chrome V8 engine Node.js is built on would not actually delegate the timeout to an event but process it immediately. As expected, the numbers averaged 60,000 fps, clearly processing constantly at CPU capacity (verified with top). But with a 1 ms timeout the numbers were still around 3,500-4,000 cycle() calls per second, meaning Node.js cannot possibly be respecting the 1 ms timeout, which would give a theoretical maximum of 1,000 cycle() calls per second.
Playing with a range of numbers, I get:
2ms: ~100 fps (true timeout, indicating 10ms granularity of timing events on Linux)
1.5: same
1.0001: same
1.0: 3,500 - 4,500 fps
0.99: 2,800 - 3,600 fps
0.5: 1,100 - 2,800 fps
0.0001: 1,800 - 3,300 fps
0.0: ~60,000 fps
The behavior of setTimeout(func, 0) seems excusable, because the ECMAScript specification presumably makes no promise of setTimeout delegating the call to an actual OS-level interrupt. But the result for anything 0 < x <= 1.0 is clearly ridiculous. I gave an explicit amount of time to delay, and the theoretical minimum time for n calls with delay x should be (n-1)*x. What the heck is V8/Node.js doing?
var timer, counter = 0, time = new Date().getTime();

function cycle() {
    counter++;
    var curT = new Date().getTime();
    if(curT - time > 1000) {
        console.log(counter + " fps");
        time += 1000;
        counter = 0;
    }
    timer = setTimeout(cycle, 1);
}

function stop() {
    clearTimeout(timer);
}

setTimeout(stop, 10000);
cycle();
From the node.js api docs for setTimeout(cb, ms) (emphasis mine):
It is important to note that your callback will probably not be called in exactly delay milliseconds - Node.js makes no guarantees about the exact timing of when the callback will fire, nor of the ordering things will fire in. The callback will be called as close as possible to the time specified.
I suppose that "as close as possible" means something different to the implementation team than to you.
[Edit] Incidentally, it appears that the setTimeout() function isn't mandated by any specification (although it is apparently part of the HTML5 draft). Moreover, there appears to be a 4-10 ms de facto minimum level of granularity, so this appears to be "just how it is".
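For instance, a quick way to see what a nominal 1 ms timeout actually gives you on your own machine (just a measurement snippet):
// Measure the real delay delivered for a requested 1 ms timeout.
var requested = 1;
var start = Date.now();
setTimeout(function () {
    console.log("requested " + requested + " ms, got " + (Date.now() - start) + " ms");
}, requested);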
The great thing about open source software is that you can contribute a patch to include a higher resolution per your needs!
For completeness I would like to point to the Node.js implementation:
https://github.com/nodejs/node-v0.x-archive/blob/master/lib/timers.js#L214
Which is:
// Timeout values > TIMEOUT_MAX are set to 1.
var TIMEOUT_MAX = 2147483647; // 2^31-1
...
exports.setTimeout = function(callback, after) {
  var timer;
  after *= 1; // coalesce to number or NaN
  if (!(after >= 1 && after <= TIMEOUT_MAX)) {
    after = 1; // schedule on next tick, follows browser behaviour
  }
  timer = new Timeout(after);
  ...
}
Remember this statement:
IDLE TIMEOUTS
Because often many sockets will have the same idle timeout we will not use one timeout watcher per item. It is too much overhead.
Instead we'll use a single watcher for all sockets with the same timeout value and a linked list.
This technique is described in the libev manual:
http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#Be_smart_about_timeouts
And we pass the same timeout value (1) here.
The implementation for Timer is here:
https://github.com/nodejs/node-v0.x-archive/blob/master/src/timer_wrap.cc