I would like to see in the console the time it takes for an HTTP request to receive a response, kind of like Express.js does:
GET api/myurl/ 210ms 200
I run sails debug but this doesn't show much.
I have node-inspector running, but it seems to only let me inspect JavaScript objects at runtime, not measure this particular thing.
Is there a configuration in Sails I can enable, or an npm module I can install, to find out this time between request and response?
If you want to measure total response time (including view generation and pushing data to the socket), you can use req._startTime, defined in the startRequestTimer middleware, or use the response-time middleware, which gives much more accurate results.
This middleware adds an X-Response-Time header to all HTTP responses, so you can check it on both the client and server side.
// config/http.js
module.exports.http = {
  middleware: {
    order: [
      'responseTimeLogger',
      // ...
    ],

    // NOTE: keep only one of the two responseTimeLogger definitions below.

    // Option 1: using the built-in req._startTime set by Sails
    responseTimeLogger: function (req, res, next) {
      req.on('end', function () {
        sails.log.info('response time: ' + (new Date() - req._startTime) + 'ms');
      });
      next();
    },

    // Option 2: using the response-time middleware
    responseTimeLogger: function (req, res, next) {
      req.on('end', function () {
        sails.log.info('response time: ' + res.getHeader('X-Response-Time'));
      });
      require('response-time')()(req, res, next);
    }
  }
};
@abeja's answer is great, but a little out of date. In case anyone is still looking for the answer, I've put an updated version below:
// Using response-time middleware
responseTimeLogger: function (req, res, next) {
  res.on('finish', function () {
    sails.log.info('Requested :: ', req.method, req.url, res.get('X-Response-Time'));
  });
  require('response-time')()(req, res, next);
}
A little explanation:
Listen to the finish event of the res object, because when req emits its end event the response headers are not set yet.
Use res.get to read the header; that is the Express accessor (res.getHeader is the underlying Node API).
By the way, my reputation is 0, so I can't post a comment on @abeja's answer :)
In addition to @abeja's answer (a very nice answer, by the way), if you are looking for production application monitoring, you might be interested in New Relic.
The tool helps you to measure (among other things) response time, throughput, error rates, etc.
It also measures database performance (in my case it is MongoDB) and helps you easily investigate bottlenecks.
On top of that, it allows you to measure application performance from the browser (including connection time, DOM rendering time, etc.).
New Relic setup is very easy: a few steps and your stats are running (more here: Installing and maintaining Node.js). For development purposes you can also get a free-of-charge account with 24-hour data retention.
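For reference, here is a minimal sketch of what the Node agent wiring looks like, assuming you have already run npm install newrelic and created a newrelic.js config with your license key (app.js is just whatever your entry point happens to be):

// app.js -- load the agent before Sails/Express and anything else you want instrumented
require('newrelic');

// ...the rest of your bootstrap code follows as usual.

The only hard requirement is that the require comes first, so the agent can hook the modules loaded afterwards.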
I hope that will also help.
For a more generic statistics solution (which could be used in your Sails.js project as well) there's a great series of tutorials on digitalocean on using StatsD with Graphite to measure anything (and everything) on your server: https://www.digitalocean.com/community/tutorials/an-introduction-to-tracking-statistics-with-graphite-statsd-and-collectd
There's also a nice StatsD node.js client called Lynx: https://github.com/dscape/lynx along with an express middleware for Lynx to measure counts and response times for express routes: https://github.com/rosskukulinski/lynx-express (Sails is fully compatible with Express/Connect middleware)
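For a rough idea of what that looks like, here is a minimal hand-rolled middleware using the plain lynx client rather than lynx-express (the StatsD host/port and metric names are assumptions):

var Lynx = require('lynx');
// assumes a StatsD daemon listening on localhost:8125
var metrics = new Lynx('localhost', 8125);

module.exports = function statsdTimer(req, res, next) {
  var start = Date.now();
  res.on('finish', function () {
    // one counter and one timer per request; adjust the naming to taste
    metrics.increment('http.requests');
    metrics.timing('http.response_time', Date.now() - start);
  });
  next();
};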
I'm planning to use this on my Sails project soon, before going to production. I'll update this with more details when I do; let me know if you get to it before I do.
I've created a Sails Hook that eases the whole process. Just install it using npm install sails-hook-responsetime --save and it will add X-Response-Time to both HTTP and Socket requests.
https://www.npmjs.com/package/sails-hook-responsetime
Related
I have a 4-year-old Express project running in production, using Express 4.14. Various developers have kept adding new features, but some old code also remains. Is there a way to find unused code, i.e. code which is not getting used in a production application?
I would like to start by identifying routes that are not being called. We do use logs, and the logs are ingested into Kibana. We also use APM with Kibana.
Since you log data, you can create a simple middleware to log every request to your application. After a while (days or weeks, depending on how sure you want to be about this process), you can collect and parse the logs to get all requested routes. Then compare the requested routes with all routes available in your application and delete the unused ones.
The middleware can be as simple as:
// runs for every request
app.use('/', function (req, res, next) {
  console.log(`Received request: ${req.method} ${req.path}`);
  next();
});
To get all routes registered in your application, use this development-only code (inspired by this answer):
app._router.stack.forEach(middleware => {
  if (middleware.route) {
    // routes registered directly on the app
    console.log(`${middleware.route.stack[0].method} ${middleware.route.path}`);
  } else if (middleware.name === 'router') {
    // routes registered on a router
    middleware.handle.stack.forEach(route => {
      if (route.route) {
        console.log(`${route.route.stack[0].method} ${route.route.path}`);
      }
    });
  }
});
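To close the loop on the comparison step, here is a minimal sketch that diffs the two lists, assuming you dumped the logged requests and the registered routes to two plain-text files with one "METHOD /path" entry per line (the file names are assumptions, and logged concrete paths like /users/42 would need to be normalized back to their route patterns such as /users/:id before this naive diff is meaningful):

const fs = require('fs');

// requested-routes.txt: normalized routes extracted from the request log
const requested = new Set(
  fs.readFileSync('requested-routes.txt', 'utf8').split('\n').filter(Boolean)
);

// registered-routes.txt: output of the route-listing snippet above
const registered = fs
  .readFileSync('registered-routes.txt', 'utf8')
  .split('\n')
  .filter(Boolean);

// routes that exist in the app but were never requested during the observation window
const unused = registered.filter(route => !requested.has(route));
console.log('Possibly unused routes:\n' + unused.join('\n'));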
How in the world is one supposed to install AWS XRAY with Sails?
I'm attempting to translate the installation instructions to Sails' preferred ways of using Express middleware, but I'm falling flat on my face.
Most people will instantly start with "use config/http.js" to configure middleware. Well, that doesn't work in my case, because my API is consumed exclusively with Sails.io (sockets), so the http middleware config is never used.
So now, the logical step is to use policies. Well, if you've read the XRAY instructions, you know that they are trying to capture ALL requests to the app, which requires "start" and "stop" function calls, before and after routes have been configured. So, policies don't work.
So, my next step was to attempt it in app.js and config/bootstrap.js, to no avail, probably because I can't easily get the Express instance Sails is using. So, is it even possible with Sails' current config options? Anyone have any clue how to accomplish this?
To anyone who stumbles upon this while attempting to integrate AWS X-Ray into Sails.js:
I finally got it working by building a project hook for it. If someone is ambitious enough, they are more than welcome to make it an installable hook.
IMPORTANT NOTES
The hook is designed to only run when the environment variable AWS_XRAY === 'yes'. This is a safety trap, to prevent local and CI machines from running XRAY.
The hook taps into the "before" part of the route setup. What this means is: "before routes are instantiated, use this middleware".
This code is set up to ignore the route "/_ping" (for X-Ray, it'll let the request complete as normal), which is used for ELB health checks. These do not need to be logged on X-Ray; they are just a waste of money. I HIGHLY recommend you read through this code and adjust as needed, especially the req.headers.host and req.connection "fixes". This was the only way I could get X-Ray to work without changing the repo's code (I still can't find the GitHub repo for it).
The req.connection.encrypted injection is just to have X-Ray report the URL as https. It's not important, unless you want your traces to reflect the correct URL.
Because we use CloudFlare, there are additional catches to collect the end-user's IP address for requests. This should have no effect if you don't use CF, and should not require any modification. But I have to ask: why aren't you using CF?
This has only gotten me so far, and I can only see basic data about requests in the X-Ray console. I can't yet see database queries or other services that are in use.
RESULTS MAY VARY
Don't forget!
npm i aws-xray-sdk --save.
To install and run the X-Ray Daemon
This is the code I put together, in api/hooks/setup-aws-xray.js:
var AWSXRay = require('aws-xray-sdk');

module.exports = function setupAwsXray(sails) {
  var setupXray = false;

  function injectXrayIfRequested(req, res, next) {
    if (
      setupXray
      && !req.segment
      && req.path !== '/_ping'
    ) {
      req.headers.host = (sails.config.environment === 'production')
        ? 'myapp.com'
        : 'dev.myapp.com';

      req.connection = {
        remoteAddress: req.headers['http_cf_connecting_ip']
          || req.headers['HTTP_CF_CONNECTING_IP']
          || req.headers['X-Real-IP']
          || req.ip,
        encrypted: true
      };

      AWSXRay.express.openSegment()(req, res, next); // not a mistake
    } else {
      next();
    }
  }

  // This just allows us to get a handle on req.segment.
  // This is important if you want to add annotations / metadata.
  // Despite AWS's documentation, you DO NOT need to close segments
  // when using manual mode and express.openSegment; it will
  // do this for you automatically.
  AWSXRay.enableManualMode();

  return {
    configure: function () {
      if (process.env.AWS_XRAY && process.env.AWS_XRAY === 'yes') {
        setupXray = true;
        AWSXRay.setDefaultName('myapp_' + sails.config.environment);
      }
    },

    routes: {
      before: {
        '/*': injectXrayIfRequested
      }
    }
  };
};
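As a follow-up to the comment about req.segment above, here is a hedged sketch of adding an annotation and some metadata from a hypothetical controller action (the controller name and keys are purely illustrative; addAnnotation and addMetadata are the segment methods from aws-xray-sdk):

// api/controllers/UserController.js (hypothetical)
module.exports = {
  find: function (req, res) {
    // req.segment is only present when the hook actually opened a segment
    if (req.segment) {
      req.segment.addAnnotation('controller', 'user/find');
      req.segment.addMetadata('query', req.query);
    }
    return res.ok([]);
  }
};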
I'm struggling to pass response-time data at the application level to every route in my Express app. I've Googled and Googled, but everything comes back suggesting that one throw a new Date() into API calls, which is gross, or use the Express response-time package (including many results from here on SO). While I'm sure it's great, I don't understand what purpose it serves other than adding a header, seeing as how I can already get response times in my browser's dev tools.
What I want to do is make the server's response-time data available to the view, on every route.
Express already provides some request data, but not all of it. I just can't seem to access all responses. My terminal looks like this when I load up a basic page, even though I have 6 images on the page besides the one CSS file.
GET / 200 64.439 ms - 1663
GET /styles/index.css 200 36.582 ms - 2035
If I use the response-time package, I can't seem to access the 'X-Response-time' header. req.headers only seems to return a subset of all headers, similar to the Express traffic output mentioned above.
Maybe I'm just dense, but even the docs of the response-time package mention how to configure it with Express, but I still don't understand what it's supposed to be adding or how I would access it outside of my console.
Create a new middleware that records the response time of a request and makes this available to your own function fn. The fn argument will be invoked as fn(req, res, time), where time is a number in milliseconds.
Couldn't you just do this:
var responseTime = require('response-time');

app.use(responseTime(function (req, res, time) {
  res.header('X-Response-Time', time);
}));
Now every route below this will have a response time header on it.
The response-time package will add the X-Response-Time header at the same time as you send the response:
app.use(responseTime());

app.get('/', function (req, res) {
  console.log(res.get('X-Response-Time')); // undefined
  res.send('Hello');
  console.log(res.get('X-Response-Time')); // 3.720ms
});
It does this by listening out for when headers are about to be written (i.e. when a response is about to be sent), appending the response time just before (using the on-headers package on npm).
It sounds like what you want to do is record the response time, then set it into res.locals so you can render it in your view.
I've played around for a while trying to use the on-headers package to set the response time inside res.locals, but I think it must be too late by then to write to the response... Maybe this is the right path? Sorry I don't have more time right now to play around :(
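Building on that idea, one pattern that avoids on-headers entirely is to record a start time per request and compute the elapsed time just before the view is rendered, stashing it in res.locals. A minimal sketch (wrapping res.render like this is my own assumption, not something the response-time package does):

app.use(function (req, res, next) {
  var start = process.hrtime();
  var originalRender = res.render;

  res.render = function () {
    // time spent in routing and handlers up to the render call, in ms
    var diff = process.hrtime(start);
    res.locals.responseTime = (diff[0] * 1e3 + diff[1] / 1e6).toFixed(3);
    return originalRender.apply(this, arguments);
  };

  next();
});

Any template can then print responseTime, with the caveat that it only covers the work done before rendering, not the rendering itself or the network transfer.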
I need to monitor my application's uptime via New Relic. I'm using the Restify framework which is largely based on Express.
New Relic wants to make HEAD requests to my application, but I'm not sure how to set up a HEAD route correctly to satisfy New Relic. Currently, my Restify app returns a 405 error for "Method Not Allowed", which causes New Relic to have fits and send me non-stop emails about how my application is down, and I can't find any documentation from New Relic that shows how to set up a simple ping URL to satisfy them.
Is there anything I need to do other than this:
server.head('/ping', function(error, req, res) {
  res.send("hello");
});
EDIT:
The parameters are mislabeled so the res.send() is actually trying to call next().send() which would be undefined. Removing the error parameter and shifting everything over fixed the code as discovered by the OP.
As per the restify documentation, you need to call return next() in your callback function:
http://mcavage.me/node-restify/#Routing
server.head('/ping', function (req, res, next) {
  res.send('hello');
  return next();
});
If you would like to respond immediately and not continue down the chain, you can pass false as a parameter in your call to next()
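For example, a variant of the handler above that responds and stops the chain (a sketch based on that note in the restify docs):

server.head('/ping', function (req, res, next) {
  res.send('hello');
  return next(false); // respond immediately, skip the rest of the handler chain
});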
To learn Node.js I'm creating a small app that gets some RSS feeds stored in MongoDB, processes them, and creates a single feed (ordered by date) from them.
It parses a list of ~50 RSS feeds with ~1000 blog items, so parsing everything takes quite a while. That's why I added req.connection.setTimeout(60*1000); to get a long enough timeout to fetch and parse all the feeds.
Everything runs quite fine, but the request is called twice. (I checked with Wireshark; I don't think it's about the favicon here.)
I really don't get it.
You can test yourself here : http://mighty-springs-9162.herokuapp.com/feed/mde/20 (it should create a rss feed with the last 20 articles about "mde").
The code is here: https://github.com/xseignard/rss-unify
And if we focus on the interesting bits:
I have a route defined like this : app.get('/feed/:name/:size?', topics.getFeed);
And topics.getFeed looks like this:
function getFeed(req, res) {
  // 1 minute timeout to get enough time for the request to be processed
  req.connection.setTimeout(60 * 1000);

  var name = req.params.name;

  var callback = function (err, topic) {
    // if the topic has been found
    if (topic) {
      // aggregate the corresponding feeds
      rssAggregator.aggregate(topic, function (err, rssFeed) {
        if (err) {
          res.status(500).send({error: 'Error while creating feed'});
        }
        else {
          res.send(rssFeed);
        }
      },
      req);
    }
    else {
      res.status(404).send({error: 'Topic not found'});
    }
  };

  // look for the topic in the db
  findTopicByName(name, callback);
}
So nothing fancy, but still, this getFeed function is called twice.
What's wrong there? Any idea?
This annoyed me for a long time. It's most likely the Firebug extension which is sending a duplicate of each GET request in the background. Try turning off Firebug to make sure that's not the issue.
I faced the same issue while using the Google Cloud Functions Framework (which uses Express to handle requests) on my local machine. Each fetch request (in the browser console and within the web page) resulted in two requests to the server. The issue was related to CORS (because I was using different ports): Chrome made an OPTIONS request before the actual call. Since the OPTIONS method was not necessary in my code, I used an if-statement to return an empty response.
if (req.method == "OPTIONS") {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Headers', 'Content-Type');
  res.status(204).send('');
}
Spent nearly 3hrs banging my head. Thanks to user105279's answer for hinting this.
If you have a favicon on your site, remove it and try again. If that resolves your problem, fix your favicon URL.
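One common follow-up, if removing the favicon confirms it is the culprit, is to answer favicon requests explicitly before your other routes so they never reach your handlers (a minimal sketch, not part of the original answer):

// respond to favicon requests with an empty 204 so they never hit your routes
app.get('/favicon.ico', function (req, res) {
  res.status(204).end();
});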
I'm doing more or less the same thing now, and noticed the same thing.
I'm testing my server by entering the api address in chrome like this:
http://127.0.0.1:1337/links/1
my Node.js server is then responding with a json object depending on the id.
I set up a console log in the GET handler and noticed that when I change the id in Chrome's address bar, it already sends a request (before I hit Enter to actually send it), and the server receives another request after I actually hit Enter. This happens with and without the Chrome dev console open.
IE 11 doesn't seem to work the same way, but I don't have Firefox installed right now.
Hope that helps someone even if this was a kind of old thread :)
/J
I managed to fix this with listen.setTimeout and axios.defaults.timeout = 36000000.
Node.js
var cors = require('cors');
var timeout = require('connect-timeout'); // express v4

// in the CORS options: send 200 for the OPTIONS preflight and set preflightContinue to false
app.use(cors({ preflightContinue: false, optionsSuccessStatus: 200 }));

// put this middleware at the end of the middleware chain
app.use(timeout(36000000)); // 36,000,000 ms = 10 hours
app.use((req, res, next) => {
  if (!req.timedout) next();
});

var listen = app.listen(3333, () => console.log('running'));
listen.setTimeout(36000000); // 36,000,000 ms = 10 hours
React
import axios from 'axios';

axios.defaults.timeout = 36000000; // 36,000,000 ms = 10 hours
After 2 days of trying.
You might have to increase the timeout even more. I haven't seen the Express source, but it sounds like it retries on timeout.
Make sure you call res.send(). The axios call expects a response from the server and, without one, sends the request again after 120 seconds.
I had the same issue doing this with Express 4. I believe it has to do with how it resolves request params. The solution is to ensure your params have resolved by, for example, checking them in an if block:
app.get('/:conversation', (req, res) => {
  let url = req.params.conversation;
  // only handle the request when params have resolved
  if (url) {
    res.redirect(301, 'http://' + url + '.com');
  }
});
In my case, my Axios POST requests were received twice by Express: the first one without a body, the second one with the correct payload. The same request sent from Postman was only received once, correctly. It turned out that Express was running on a different port, so my requests were cross-origin. This caused Chrome to send a preflight OPTIONS request to the same URL (the POST url), and my app.all routing in Express processed that one too.
app.all('/api/:cmd', require('./api.js'));
Separating POST from OPTIONS solved the issue:
app.post('/api/:cmd', require('./api.js'));
app.options('/', (req, res) => res.send());
I ran into the same problem. I tried adding a bare return, which didn't work, but it worked when I changed it to return res.redirect('/path');
I had the same problem. Then I opened the Chrome dev tools and found out that favicon.ico was being requested from my Express.js application. I needed to fix the way I registered the middleware.
Screenshot of Chrome dev tools
I also had double requests. In my case it was the forwarding from the http to the https protocol. You can check if that's the case by looking at
req.headers['x-forwarded-proto']
It will either be 'http' or 'https'.
I could fix my issue simply by adjusting the order in which my middlewares trigger.
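For reference, a minimal sketch of the kind of ordering fix described above: do the protocol check (and any redirect) once, in a single middleware registered before the rest, so a later middleware doesn't trigger a second full request (the host handling here is an assumption):

// trust the proxy so req.headers['x-forwarded-proto'] reflects the original protocol
app.enable('trust proxy');

app.use(function (req, res, next) {
  if (req.headers['x-forwarded-proto'] === 'http') {
    // one permanent redirect to https instead of a second request later in the chain
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  next();
});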