I am currently building a chat app with PubNub. The problem is how the app/frontend should get the time (server time). If every message went through my own server, I could attach the server time there. But with a third-party service like PubNub, how do I manage this, since the app sends messages to PubNub rather than to my server? I don't want to rely on local time, as users might have inaccurate clocks.
The simplest solution I thought of is: when the app starts up, get the server time and record the difference between local time and server time (diff = Date.now() - serverTime). When sending a message, its timestamp will be Date.now() - diff. Is this correct so far?
I guess this solution assumes zero transmission (latency) time? Is there a more correct or recommended way to implement this?
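For reference, here is a rough sketch of what I mean (assuming a hypothetical /time endpoint on my own server that returns the server's epoch milliseconds; the latency compensation is just half the round trip):

var clockOffset = 0; // local time minus server time, in ms

function syncClock() {
    var requestStart = Date.now();
    fetch('/time') // hypothetical endpoint returning { serverTime: <epoch ms> }
        .then(function (resp) { return resp.json(); })
        .then(function (body) {
            var latency = (Date.now() - requestStart) / 2; // rough one-way latency
            var serverNow = body.serverTime + latency;
            clockOffset = Date.now() - serverNow;
        });
}

function serverTimeNow() {
    return Date.now() - clockOffset; // estimated server time for a message timestamp
}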
Your use case is probably the reason why pubnub.time() exists.
In fact, they even have a code example describing your drift calculation.
https://github.com/pubnub/javascript/blob/1fa0b48227625f92de9460338c222152c853abda/examples/time-drift-detla-detection/drift-delta-detection.html
// Drift Functions
function now() { return +new Date; }

function clock_drift(cb) {
    clock_drift.start = now();
    PUBNUB.time(function (timetoken) {
        // The timetoken is in 100-nanosecond units; dividing by 10000 gives milliseconds.
        var latency     = (now() - clock_drift.start) / 2 // assume a symmetric round trip
          , server_time = (timetoken / 10000) + latency
          , local_time  = now()
          , drift       = local_time - server_time;
        cb(drift);
    });
    // Repeat the check once per second (only set the interval once).
    if (clock_drift.ival) return;
    clock_drift.ival = setInterval(function () { clock_drift(cb); }, 1000);
}
// This is how you use the code
// Periodically get the clock drift in milliseconds
clock_drift(function (drift) {
    var out = PUBNUB.$('latency');
    out.innerHTML = "Clock Drift Delta: " + drift + "ms";
    // Flash the element red if the drift is large, green otherwise
    PUBNUB.css(out, { background: drift > 2000 ? '#f32' : '#5b5' });
    setTimeout(function () {
        PUBNUB.css(out, { background: '#444' });
    }, 300);
});
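Once you have the drift, you can apply it when publishing: the estimated server time is just Date.now() - drift. A sketch (the exact publish signature depends on your PubNub SDK version):

var lastDrift = 0;
clock_drift(function (drift) { lastDrift = drift; });

function sendMessage(pubnub, channel, text) {
    pubnub.publish({
        channel: channel,
        message: {
            text: text,
            sentAt: Date.now() - lastDrift // estimated server time in ms
        }
    });
}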
Related
I'd like to monitor how long each run of the event loop in Node.js takes. However, I'm not sure about the best way to measure this. The best approach I could come up with looks like this:
var intervalMs = 500;  // how often to sample
var blockDelta = 10;   // example threshold: report if the loop lags by more than this many ms

var interval = setInterval(function() {
  var last = Date.now();
  setImmediate(function() {
    var delta = Date.now() - last;
    if (delta > blockDelta) {
      report("node.eventloop_blocked", delta); // report() is our metrics helper
    }
  });
}, intervalMs);
I basically infer the event loop run time by looking at the delay of a setInterval callback. I've seen the same approach in the blocked node module, but it feels inaccurate and heavy. Is there a better way to get at this information?
Update: Changed the code to use setImmediate as done by hapi.js.
"Is there a better way to get this information?"
I don't have a better way to test the event loop than checking the time delay of setImmediate, but you can get better precision by using Node's high-resolution timer instead of Date.now():
var intervalMs = 500;  // how often to sample
var blockDelta = 10;   // example threshold in ms

var interval = setInterval(function() {
  var last = process.hrtime(); // replace Date.now()
  setImmediate(function() {
    var delta = process.hrtime(last); // with process.hrtime()
    // delta is a [seconds, nanoseconds] tuple; convert it to milliseconds
    var deltaMs = delta[0] * 1e3 + delta[1] / 1e6;
    if (deltaMs > blockDelta) {
      report("node.eventloop_blocked", deltaMs);
    }
  });
}, intervalMs);
NOTE: delta will be a tuple Array [seconds, nanoseconds].
For more details on process.hrtime():
https://nodejs.org/api/all.html#all_process_hrtime
"The primary use is for measuring performance between intervals."
Check out this module: https://github.com/tj/node-blocked. I'm using it now and it seems to do what you want.
let blocked = require("blocked");
blocked(ms => {
    console.log("EVENT LOOP Blocked", ms);
});
This will print out how long, in ms, the event loop is blocked for.
Code
This code measures, in nanoseconds, how long it takes the event loop to come around again: it measures the time between the current tick and the next tick.
var time = process.hrtime();
process.nextTick(function() {
    var diff = process.hrtime(time);
    console.log('benchmark took %d nanoseconds', diff[0] * 1e9 + diff[1]);
    // benchmark took 1000000527 nanoseconds
});
EDIT: added an explanation of the two APIs used:
process.hrtime([time])
Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array. time is an optional parameter that must be the result of a previous process.hrtime() call (and therefore, a real time in a [seconds, nanoseconds] tuple Array containing a previous time) to diff with the current time. These times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals.
process.nextTick(callback[, arg][, ...])
Once the current event loop turn runs to completion, call the callback function.
This is not a simple alias to setTimeout(fn, 0), it's much more efficient. It runs before any additional I/O events (including timers) fire in subsequent ticks of the event loop.
You may also want to look at the profiling built into node and io.js. See for example this article http://www.brendangregg.com/flamegraphs.html
And this related SO answer How to debug Node.js applications
I am working on a Node.js Express application which uses Azure Cache. I have deployed the service to Azure and I notice a latency of 50ms or so for get and put requests.
The methods I am using are:
var time1, time2;
var start = Date.now();
var cacheObject = this.cache;

cacheObject.put('test1', { first: 'Jane', last: 'Doe' }, function (error) {
    if (error) throw error;
    time1 = Date.now() - start;     // time taken by put
    start = Date.now();
    cacheObject.get('test1', function (error, data) {
        if (error) throw error;
        console.log('Data from cache: ' + data);
        time2 = Date.now() - start; // time taken by get
        res.send({ t1: time1, t2: time2 });
    });
});
time1 is the latency of the put and time2 the latency of the get.
From reading other posts on the internet, I understood that the latency should be on the order of a couple of ms, but 50ms seems a bit high. Am I using the methods properly? Are there any special settings I need to set up on the management portal? Or is 50ms latency expected?
A few obvious things to check first:
Is the client code running in the same region as the cache? The minimum possible latency is the network round trip time, which may be around 50ms between regions.
Is Node.js's Date.now() precise enough to measure a small number of milliseconds? I'm not familiar with Node.js, but in .NET you should use the Stopwatch class for timing rather than DateTime.Now.
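In Node, the closest analogue is process.hrtime(); a sketch of the same measurement using it (same cacheObject and res as in your code):

function elapsedMs(start) {
    var diff = process.hrtime(start); // [seconds, nanoseconds]
    return diff[0] * 1e3 + diff[1] / 1e6; // convert to milliseconds
}

var start = process.hrtime();
cacheObject.put('test1', { first: 'Jane', last: 'Doe' }, function (error) {
    if (error) throw error;
    var putMs = elapsedMs(start);
    start = process.hrtime();
    cacheObject.get('test1', function (error, data) {
        if (error) throw error;
        res.send({ t1: putMs, t2: elapsedMs(start) });
    });
});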
I have a simple Node.js server that is started automatically.
It uses Express to host the endpoint, started with a simple app.listen(port); call.
Since it starts automatically, I'd like to shut the server down after an idle period, say 3 minutes.
I've coded this manually using the function below, which is called on each app.post:
// Idle timer
var timer;
function resetIdleTimer() {
    if (timer != null) clearTimeout(timer);
    timer = setTimeout(function () {
        logger.info('idle shutdown');
        process.exit();
    }, 3 * 60 * 1000);
}
This seems a little crude though, so I wondered if there is a neater way (some sort of timer within Express, maybe).
Looking in the Express docs, I didn't see an easy way to configure this.
Is there a neater way to have this idle shutdown implemented?
app.listen() returns a wrapped HTTP server (as can be seen here in the source), on which you can then call the .close() method:
var app = express();
var server = app.listen(port);
setTimeout(function() {
    server.close();
}, 3 * 60 * 1000);
This will prevent the server from accepting new connections. Once it has finished serving existing connections it will stop gracefully, which in turn lets the Node.js process exit (provided nothing else keeps the event loop alive).
Edit: You might also find this GitHub issue relevant.
Take a look at forever. You can require it as a module in your application, and it provides functions that can help you achieve what you are looking for (such as forever.stop(index), which terminates the Node process running at that index). Before terminating the process, you could retrieve the list of processes and parse the strings to get the uptime. Then I would monitor the time that passes between server calls: if there is a gap of 3 minutes between requests, call forever.stop() to terminate the process.
I don't think it's "crude" to use your timer solution; I would just take a slightly different tack:
app.timeOutDate = new Date().valueOf() + 1000 * 60 * 3; // 3 minutes from now, in ms

function quitIfTimedout(req, res, next) {
    if (new Date().valueOf() > app.timeOutDate) {
        logger.info('idle shutdown');
        process.exit();
    } else {
        app.timeOutDate = new Date().valueOf() + 1000 * 60 * 3; // reset
        next();
    }
}

app.all('*', quitIfTimedout);
However, this won't actually quit after 3 minutes of idleness; it will quit on the first request that arrives after the 3 minutes have passed, so it might not solve your problem. A variant that quits on an actual timer is sketched below.
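A sketch that combines the two ideas, resetting a timer on every request and calling server.close() when the idle period elapses (the names here are illustrative, not anything built into Express):

var express = require('express');
var app = express();
var port = process.env.PORT || 3000;

var IDLE_MS = 3 * 60 * 1000; // 3 minutes
var idleTimer;

function resetIdleTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(function () {
        // Stop accepting new connections; the process exits once
        // existing connections have finished (and nothing else keeps it alive).
        server.close();
    }, IDLE_MS);
}

// Reset the idle timer on every incoming request.
app.use(function (req, res, next) {
    resetIdleTimer();
    next();
});

var server = app.listen(port);
resetIdleTimer(); // start counting from server startup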
Is there a way to find out the CPU usage (as a percentage) for a Node.js process from code, so that when the application is running on the server and detects that CPU usage exceeds a certain percentage, it can raise an alert or print console output?
On *nix systems you can get process stats by reading the /proc/[pid]/stat virtual file.
For example, this will check the CPU usage every ten seconds and print to the console if it's over 20%. It works by reading the number of CPU ticks used by the process and comparing the value to a second measurement made one second later. The difference is the number of ticks used by the process during that second. On Linux the tick rate (USER_HZ) is typically 100 per second per processor (check with getconf CLK_TCK), so over a one-second window the tick delta divided by the tick rate, times 100, gives a percentage.
var fs = require('fs');

// Typical value of USER_HZ on Linux; confirm with `getconf CLK_TCK`.
var TICKS_PER_SECOND = 100;

// Read utime + stime (in clock ticks) for this process from /proc/[pid]/stat.
var getUsage = function (cb) {
    fs.readFile("/proc/" + process.pid + "/stat", function (err, data) {
        if (err) throw err;
        var elems = data.toString().split(' ');
        var utime = parseInt(elems[13], 10); // user-mode ticks
        var stime = parseInt(elems[14], 10); // kernel-mode ticks
        cb(utime + stime);
    });
};

// Every ten seconds, sample the tick count twice, one second apart.
setInterval(function () {
    getUsage(function (startTicks) {
        setTimeout(function () {
            getUsage(function (endTicks) {
                var delta = endTicks - startTicks; // ticks used during that second
                var percentage = 100 * (delta / TICKS_PER_SECOND);
                if (percentage > 20) {
                    console.log("CPU Usage Over 20%!");
                }
            });
        }, 1000);
    });
}, 10000);
Try looking at this code: https://github.com/last/healthjs
Network service for getting CPU of remote system and receiving CPU usage alerts...
Health.js serves 2 primary modes: "streaming mode" and "event mode". Streaming mode allows a client to connect and receive streaming CPU usage data. Event mode enables Health.js to notify a remote server when CPU usage hits a certain threshold. Both modes can be run simultaneously...
You can use the os module now.
var os = require('os');
var loads = os.loadavg();
This gives you the load averages for the last 1, 5, and 15 minutes.
It doesn't give you the CPU usage as a percentage, though; a rough way to derive one from os.cpus() is sketched below.
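Diffing the per-core counters from os.cpus() over an interval gives an approximate system-wide percentage (this measures the whole machine, not just your process; the threshold and interval are examples):

var os = require('os');

// Sum idle and total CPU times (in ms) across all cores.
function cpuTimes() {
    var idle = 0, total = 0;
    os.cpus().forEach(function (cpu) {
        for (var type in cpu.times) total += cpu.times[type];
        idle += cpu.times.idle;
    });
    return { idle: idle, total: total };
}

var last = cpuTimes();
setInterval(function () {
    var now = cpuTimes();
    var idleDelta = now.idle - last.idle;
    var totalDelta = now.total - last.total;
    var usagePercent = 100 * (1 - idleDelta / totalDelta);
    if (usagePercent > 80) {
        console.log('System CPU usage over 80%: ' + usagePercent.toFixed(1) + '%');
    }
    last = now;
}, 10000);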
Use the Node process.cpuUsage() function (introduced in Node v6.1.0).
It reports the CPU time spent on your Node process. Example taken from the docs:
const previousUsage = process.cpuUsage();
// { user: 38579, system: 6986 }
// spin the CPU for 500 milliseconds
const startDate = Date.now();
while (Date.now() - startDate < 500);
// At this moment you can expect result 100%
// Time is *1000 because cpuUsage is in us (microseconds)
const usage = process.cpuUsage(previousUsage);
const result = 100 * (usage.user + usage.system) / ((Date.now() - startDate) * 1000)
console.log(result);
// set a 2-second "non-busy" timeout
setTimeout(function() {
    console.log(process.cpuUsage(previousUsage));
    // { user: 514883, system: 11226 } ~ 0.5 sec of CPU time
    // Here you can expect a result of about 20% (0.5s busy out of ~2.5s total
    // runtime, relative to previousUsage, which was taken about 2.5s earlier).
}, 2000);
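To get the alerting behaviour asked for in the question, you could wrap process.cpuUsage() in a periodic check along these lines (the 80% threshold and 5-second interval are arbitrary):

const CHECK_INTERVAL_MS = 5000;
const THRESHOLD_PERCENT = 80;

let lastUsage = process.cpuUsage();
let lastCheck = Date.now();

setInterval(() => {
    const usage = process.cpuUsage(lastUsage); // microseconds spent since lastUsage
    const elapsedMs = Date.now() - lastCheck;
    const percent = 100 * (usage.user + usage.system) / (elapsedMs * 1000);

    if (percent > THRESHOLD_PERCENT) {
        console.log(`CPU usage over ${THRESHOLD_PERCENT}%: ${percent.toFixed(1)}%`);
    }

    lastUsage = process.cpuUsage();
    lastCheck = Date.now();
}, CHECK_INTERVAL_MS);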
See node-usage for tracking per-process CPU and memory usage (not the whole system's).
Another option is to use the node-red-contrib-os package.
I'm trying to implement a long-polling strategy with Node.js.
What I want is: when a request is made to Node.js, it will wait a maximum of 30 seconds for some data to become available. If there is data, it will output it and finish the response; if there is no data, it will just wait out the 30 seconds and then finish.
Here is the basic code logic I came up with:
var http = require('http');

var poll_function = function (req, res, counter) {
    if (counter > 30) {
        res.writeHead(200, { 'Content-Type': 'text/html;charset=utf8' });
        res.end('Output after 30 seconds!');
        return; // stop polling; the response is finished
    }

    var rand = Math.random();
    if (rand > 0.85) {
        res.writeHead(200, { 'Content-Type': 'text/html;charset=utf8' });
        res.end('Output done because rand: ' + rand + '! in counter: ' + counter);
        return; // stop polling; the response is finished
    }

    // No data yet: check again in one second.
    setTimeout(function () {
        poll_function(req, res, counter + 1);
    }, 1000);
};

http.createServer(function (req, res) {
    poll_function(req, res, 1);
}).listen(8088);
What I figure is: when a request is made, poll_function is called, and it calls itself after 1 second via a setTimeout. So it should remain asynchronous, meaning it will not block other requests and will provide its output when it's done.
I have used Math.random() here to simulate data becoming available at various intervals.
Now, what concerns me is:
1) Will there be any problem with it? I simply don't want to deploy it without being sure it won't come back to bite me.
2) Is it efficient? If not, any suggestions on how I can improve it?
Thanks,
Anjan
All Node.js code is non-blocking as long as you don't get stuck in a tight CPU loop (like while(true)) or use a library that does blocking I/O. Putting a setTimeout at the end of a function doesn't make it any more parallel; it just defers some CPU work until a later event.
Here is a simple demo chat server that randomly emits "Hello World" every 0 to 60 seconds to any and all connected clients.
// A simple chat server using long-poll and timeout
var Http = require('http');

// Array of open callbacks listening for a result
var listeners = [];

Http.createServer(function (req, res) {
    function onData(data) {
        clearTimeout(timeout); // the request is being answered, cancel its timeout
        res.end(data);
    }
    listeners.push(onData);

    // Set a timeout of 30 seconds
    var timeout = setTimeout(function () {
        // Remove our callback from the listeners array
        listeners.splice(listeners.indexOf(onData), 1);
        res.end("Timeout!");
    }, 30000);
}).listen(8080);
console.log("Server listening on 8080");

// Send data to every waiting client, then clear the list
function emitEvent(data) {
    for (var i = 0, l = listeners.length; i < l; i++) {
        listeners[i](data);
    }
    listeners.length = 0;
}

// Simulate random events
function randomEvents() {
    emitEvent("Hello World");
    setTimeout(randomEvents, Math.random() * 60000);
}
setTimeout(randomEvents, Math.random() * 60000);
This will be quite fast. The only dangerous part is the splice: splice can be slow if the array gets very large. It could be made more efficient by, instead of timing out each connection 30 seconds after it started, closing all pending handlers at once every 30 seconds (or 30 seconds after the last event). But again, this is unlikely to be the bottleneck, since each of those array items is backed by a real client connection, which is probably more expensive. A rough sketch of that batch-flush variant follows.
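A rough sketch of the batch-flush variant (illustrative only; with this approach clients may wait anywhere from 0 to 30 seconds before timing out): instead of a timeout per request, one shared interval answers every pending listener at once.

var Http = require('http');
var listeners = [];

Http.createServer(function (req, res) {
    listeners.push(function (data) { res.end(data); });
}).listen(8080);

// Answer everyone still waiting and clear the list in one go (no splice needed).
function flushAll(data) {
    for (var i = 0, l = listeners.length; i < l; i++) {
        listeners[i](data);
    }
    listeners.length = 0;
}

// Every 30 seconds, time out whoever is still waiting.
setInterval(function () {
    flushAll("Timeout!");
}, 30000);

// Real events still answer everyone immediately.
function emitEvent(data) {
    flushAll(data);
}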