node.js itself or nginx frontend for serving static files?

Is there any benchmark or comparison showing which is faster: placing nginx in front of node and letting it serve static files directly, or using node alone and serving static files with it?
The nginx solution seems more manageable to me. Any thoughts?

I'll have to disagree with the answers here. While Node will do fine, nginx will most definitely be faster when configured correctly. nginx is implemented efficiently in C, following a similar pattern (returning to a connection only when needed) with a tiny memory footprint. Moreover, it supports the sendfile syscall to serve those files, which is as fast as you can possibly get at serving files, since it's the OS kernel itself doing the job.
By now nginx has become the de facto standard as the frontend server. You can use it for its performance in serving static files, gzip, SSL, and even load-balancing later on.
P.S.: This assumes that files are really "static" as in at rest on disk at the time of the request.
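For reference, a minimal sketch of the kind of nginx server block being described (the domain, paths and ports are placeholders, not taken from the question):

server {
    listen 80;
    server_name example.com;

    # Static assets served straight from disk; sendfile lets the kernel do the copy
    location /static/ {
        root /var/www/myapp;   # assumed layout: /var/www/myapp/static/...
        sendfile on;
        tcp_nopush on;
        expires 7d;
        gzip on;
    }

    # Everything else is proxied to the Node app
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}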

I did a quick ab -n 10000 -c 100 for serving a static 1406 byte favicon.ico, comparing nginx, Express.js (static middleware) and clustered Express.js. Hope this helps:
Unfortunately I can't test 1000 or even 10000 concurrent requests as nginx, on my machine, will start throwing errors.
EDIT: as suggested by artvolk, here are the results of cluster + static middleware (slower):
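For context, the clustered Express setup being benchmarked is roughly of this shape (a minimal sketch using express.static and the built-in cluster module; the exact test code isn't reproduced here):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // fork one worker per CPU core
    os.cpus().forEach(function () { cluster.fork(); });
} else {
    var express = require('express');
    var app = express();
    app.use(express.static(__dirname + '/public')); // favicon.ico lives in ./public
    app.listen(3000);
}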

Either way, I'd set up Nginx to cache the static files... you'll see a HUGE difference there. Then, whether you serve them from node or not, you're basically getting the same performance and the same load relief on your node app (a config sketch follows below).
I personally don't like the idea of my Nginx frontend serving static assets from disk in most cases, in that:
1) The project now has to be on the same machine, or has to be split into assets (on the nginx machine) and web app (on multiple machines for scaling).
2) The Nginx config now has to maintain path locations for static assets and be reloaded when they change.
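A hedged sketch of the "let nginx cache instead of serving from disk" approach (directive values and paths are illustrative):

proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=staticcache:10m max_size=1g inactive=7d use_temp_path=off;

server {
    listen 80;

    location /assets/ {
        proxy_cache staticcache;
        proxy_cache_valid 200 7d;           # keep successful responses for a week
        proxy_pass http://127.0.0.1:3000;   # Node still owns the files and their paths
        add_header X-Cache-Status $upstream_cache_status;
    }
}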

I have a different interpretation of #gremo's charts. It looks to me like both node and nginx scale to the same number of requests (between 9-10k). Sure, the latency of the nginx response is lower by a constant 20ms, but I don't think users will necessarily perceive that difference (if your app is built well).
Given a fixed number of machines, it would take quite a significant amount of load before I would convert a node machine to nginx considering that node is where most of the load will occur in the first place.
The one counterpoint to this is if you are already dedicating a machine to nginx for load balancing. If that is the case then you might as well have it serve your static content as well.

FWIW, I did a test on a rather large file download (~60 MB) on an AWS EC2 t2.medium instance, to compare the two approaches.
Download time was roughly the same (~15s), memory usage was negligible in both cases (<= 0.2%), but I got a huge difference in CPU load during the download:
Using Node + express.static(): 3.0 ~ 5.0% (single node process)
Using nginx: 0.3 ~ 0.7% (nginx process)

That's a tricky question to answer. If you wrote a really lightweight node server to just serve static files, it would most likely perform better than nginx, but it's not that simple. (Here's a "benchmark" comparing a nodejs file server and lighttpd, which is similar in performance to nginx when serving static files.)
Performance in regard to serving static files often comes down to more than just the web-server doing the work. If you want the highest performance possible, you'll be using a CDN to serve your files to reduce latency for end-users, and benefit from edge-caching.
If you're not worried about that, node can serve static files just fine in most situations. Node lends itself to asynchronous code, which it also relies on, since it's single-threaded and any blocking I/O can block the whole process and degrade your application's performance. More than likely you're writing your code in a non-blocking fashion, but if you are doing anything synchronously you may cause blocking, which would degrade how fast other clients can get their static files served. The easy solution is to not write blocking code, but sometimes that's not a possibility, or you can't always enforce it.
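To illustrate the blocking concern, a hypothetical sketch of a non-blocking static file handler (file layout and port are assumptions, and there is no path-traversal protection here):

var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    var filePath = path.join(__dirname, 'public', path.normalize(req.url));

    // Non-blocking: stream the file, so the event loop stays free for other clients
    var stream = fs.createReadStream(filePath);
    stream.on('error', function () { res.statusCode = 404; res.end('Not found'); });
    stream.pipe(res);

    // By contrast, fs.readFileSync(filePath) here would block the whole process
    // for the duration of the read, stalling every other pending request
}).listen(3000);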

Use Nginx to cache static files served by Node.js. The Nginx server is deployed in front of the Node.js server(s) to perform:
SSL Termination: terminate HTTPS traffic from clients, relieving your upstream web and application servers of the computational load of SSL/TLS encryption.
Configuring Basic Load Balancing with NGINX: set up NGINX Open Source or NGINX Plus as a load balancer in front of two Node.js servers.
Content Caching: caching responses from your Node.js app servers can both improve response time to clients and reduce load on the servers, because eligible responses are served immediately from the cache instead of being generated again on the server.
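Pulling those pieces together, a skeleton nginx configuration for that role might look roughly like this (hostnames, ports and certificate paths are placeholders):

upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;   # second Node.js instance
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://node_app;   # requests are balanced across the upstream servers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}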

I am certain that pure node.js can outperform nginx in a lot of respects.
All that said, I have to say nginx has a built-in cache, whereas node.js doesn't come with one factory-installed (you have to build your own file cache).
A custom file cache does outperform nginx and any other server on the market, as it is super simple.
Also, nginx runs on multiple cores. To use the full potential of node you have to cluster your node servers. If you are interested in knowing how, please PM me.
You need to dig deeper to achieve performance nirvana with node; that is the only problem. Once done, hell yeah... it beats nginx.
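To make the "build your own file cache" idea concrete, here is a hypothetical minimal sketch (no eviction, invalidation, or path sanitisation, all of which a real cache would need):

var http = require('http');
var fs = require('fs');
var path = require('path');

var cache = {};   // url -> Buffer, loaded once and then served from RAM

function getFile(url, callback) {
    if (cache[url]) return callback(null, cache[url]);
    fs.readFile(path.join(__dirname, 'public', path.normalize(url)), function (err, buf) {
        if (!err) cache[url] = buf;   // naive: never invalidated, never evicted
        callback(err, buf);
    });
}

http.createServer(function (req, res) {
    getFile(req.url, function (err, buf) {
        if (err) { res.statusCode = 404; return res.end('Not found'); }
        res.end(buf);
    });
}).listen(3000);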

Related

Benchmarking Nginx against Express

I have Nginx set up as a reverse proxy in front of my express application.
So every request that comes to Nginx is proxied to express running on 4 ports. Both Nginx and express run on the same host.
After reading that all static content should be served by Nginx and Express should be left for dynamic requests only, I gave it a shot and set up the Nginx config. It works perfectly. So now all JS/CSS and HTML assets are served by Nginx itself.
Now how do I prove that this is a better setup in terms of numbers? Should I use some tool to simulate requests (to both the older and the newer setup) and compare the average load times of assets?
Open your browser => Dev tools => Network tab.
Here you can see the wait time and download time for every request, so you can open your webpage and compare it under both configs.
This can be helpful on a local env so latency has minimal effect on testing.
Other than that you can do a load test. Google load testing tools!
In a word, "benchmark." You have two configurations. You need to understand the efficiencies under each model. To do so you need to instrument the hosts to collect data on the finite resources (CPU, DISK, MEMORY, NETWORK and related sub statistics) as well as response times.
Any performance testing tool which exercises the HTTP interface and allows for the collection and aggregation of your monitoring data while under test should do the trick. You should be able to collect information on the most common paths through your site, the number of users on your system for any given slice of time, the average session duration (iteration interval) all from an examination of the logs. The most common traversals then become the basis for the business processes you will need to replicate with your performance testing tool.
If you have not engaged in performance testing efforts before, then this would be a good time to tag someone in your organization who does this work on an ongoing basis. The learning curve is steep and (if you haven't done this before and have no training or mentor) fairly long. You can burn a lot of cycles on poor tests/benchmark executions before you get it "right" and can genuinely compare the performance of configuration A to configuration B.

Is it slower to employ "name-based" hosting of multiple websites on one VPS?

Assuming traffic/server load is not a factor ...
(Taken further, we could even assume that there are zero visitors, and I just happen to visit one of my websites in a "vacuum")
... Would there theoretically be any difference in the loading time if I were to host only a single site on my VPS vs. hosting multiple sites using the "name-based" method?
(Even if it is minuscule, I would still like to know—and why, ideally!)
So there are tons of different ways to look at this; the most important factor is what type of applications are running.
What I mean by this is that if you're running a static webpage for each site and using simple domain-based routing (nginx or apache), you will see no difference, other than the added disk space.
On the other hand, you could be running more advanced web applications. In most cases (provided traffic is not a factor), when a request is made the web server processes it and returns the response, only using processing time during the request. This will also show no difference.
But! When an application requires additional programs and background processing you will see a performance difference; it's minuscule, but as you add more "domains" you will see a greater performance hit.
Static Pages: No difference (besides disk space)
Web Applications: Difference based on non-request based processing
You are asking about what is at the root of shared hosting, which is great for static and basic programs but not so good when you scale it up to larger applications.
Sidenote: This assumes the applications do not have different run-times and requirements; running a Python + MySQL stack and a node.js + MongoDB stack at the same time on a weak server would take a performance hit, as those services are always running.

Hapijs: Performance tuning for lots of concurrent requests

Are there any special tuning tips for strengthening an API built on top of the hapijs framework?
Especially if you have lots of concurrent requests (10000+/sec) that are accessing the DB?
I'm using PM2 to start my process in "cluster mode" to be able to load-balance to different cores on the server
I don't need to serve static content, so there's no apache/nginx proxy
update 17:11
Running tests with 1000 requests/sec (with loader.io) results in this curve, which is OK so far, but I'm wondering if there is still room for improvement.
(hardware: 64gb / 20 core digital ocean droplet)
In the end I just used a combination of node's http module and the body-parser module to achieve what I needed.
But I think this was only viable because my application had just two endpoints (one GET, one POST).
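A minimal sketch of what that plain http + body-parser combination can look like (the /items routes are hypothetical; body-parser's json() is a connect-style middleware, so it can be invoked by hand without Express):

var http = require('http');
var bodyParser = require('body-parser');

var jsonParser = bodyParser.json();

http.createServer(function (req, res) {
    if (req.method === 'GET' && req.url === '/items') {
        res.setHeader('Content-Type', 'application/json');
        return res.end(JSON.stringify({ items: [] }));   // placeholder payload
    }
    if (req.method === 'POST' && req.url === '/items') {
        return jsonParser(req, res, function (err) {
            if (err) { res.statusCode = 400; return res.end(); }
            // req.body now holds the parsed JSON; hand it to the DB layer here
            res.statusCode = 201;
            res.end();
        });
    }
    res.statusCode = 404;
    res.end();
}).listen(3000);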
If your application logic is rather complicated and you want to stick with hapi, think about using a load-balancer and dividing the load across multiple VMs.
Loadtest results (new setup on an even smaller DO droplet):

How to make a distributed node.js application?

Creating a node.js application is simple enough.
var app = require('express')();
app.get('/', function(req, res){
    res.send("Hello world!");
});
app.listen(3000); // listen() so the example actually accepts connections
But suppose people became obsessed with your Hello World! application and exhausted your resources. How could this example be scaled up in practice? I don't understand it, because yes, you could start several node.js instances on different computers, but when someone accesses http://your_site.com/ it hits that specific machine, that specific port, that specific node process directly. So how?
There are many many ways to deal with this, but it boils down to 2 things:
being able to use more cores per server
being able to scale beyond more than one server.
node-cluster
For the first option, you can use node-cluster or the same solution as for the second option. node-cluster (http://nodejs.org/api/cluster.html) is essentially a built-in way to fork the node process into one master and multiple workers. Typically, you'd want 1 master and n-1 to n workers (n being your number of available cores).
load balancers
The second option is to use a load balancer that distributes the requests amongst multiple workers (on the same server, or across servers).
Here you have multiple options as well. Here are a few:
a node based option: Load balancing with node.js using http-proxy (a rough sketch of this approach follows this list)
nginx: Node.js + Nginx - What now? (using more than one upstream server)
apache: (no clearly helpful link I could use, but a valid option)
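As promised, a rough sketch of the node-based option using the http-proxy module (targets and ports are placeholders; a real balancer would add health checks):

var http = require('http');
var httpProxy = require('http-proxy');

var targets = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002'];
var proxy = httpProxy.createProxyServer({});
var i = 0;

proxy.on('error', function (err, req, res) { res.writeHead(502); res.end('Bad gateway'); });

http.createServer(function (req, res) {
    var target = targets[i++ % targets.length];   // naive round-robin over the backends
    proxy.web(req, res, { target: target });
}).listen(8080);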
One more thing: once you start having multiple processes serving requests, you can no longer use process memory to store state; you need an additional service to store shared state. Redis (http://redis.io) is a popular choice, but by no means the only one.
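For example, a counter kept in Redis instead of in process memory is visible to every worker (a small sketch using the node-redis v4 promise API; the key name is arbitrary):

const { createClient } = require('redis');

async function main() {
    const client = createClient({ url: 'redis://127.0.0.1:6379' });
    await client.connect();

    // Every process incrementing this key sees the same value,
    // unlike a counter kept in a local variable of one worker.
    const visits = await client.incr('page:visits');
    console.log('total visits across all processes:', visits);

    await client.quit();
}

main().catch(console.error);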
If you use services such as cloudfoundry, heroku, and others, they set it up for you so you only have to worry about your app's logic (and using a service to deal with shared state)
I've been working with node for quite some time, but recently got the opportunity to try scaling my node apps. I have been researching the same topic for some time now and have come across the following prerequisites for scaling:
My app needs to be available on a distributed system each running multiple instances of node
Each system should have a load balancer that helps distribute traffic across the node instances.
There should be a master load balancer that should distribute traffic across the node instances on distributed systems.
The master balancer should always be running OR should have a dependable restart mechanism to keep the app stable.
For the above requisites I've come across the following:
Use modules like cluster to start multiple instances of node in a system.
Use nginx always. It's one of the simplest mechanisms for creating a load balancer I've come across so far.
Use HAProxy to act as a master load balancer. A few pointers on how to use it and keep it forever running.
Useful resources:
Horizontal scaling node.js and websockets.
Using cluster to take advantage of multiple cores.
I'll keep updating this answer as I progress.
The basic way to use multiple machines is to put them behind a load balancer and point all your traffic to the load balancer. That way, someone going to http://my_domain.com is pointed at the load balancer machine. The sole purpose (for this example anyway; in theory more could be done) of the load balancer is to delegate the traffic to a given machine running your application. This means that you can have x number of machines running your application, while an external machine (in this case a browser) can go to the load balancer address and get to one of them. The client doesn't (and doesn't have to) know what machine is actually handling its request. If you are using AWS, it's pretty easy to set up and manage this. Note that Pascal's answer has more detail about your options here.
With Node specifically, you may want to look at the Node Cluster module. I don't really have a lot of experience with this module, but it should allow you to spawn multiple processes of your application on one machine, all sharing the same port. Also note that it's still experimental, and I'm not sure how reliable it will be.
I'd recommend taking a look at http://senecajs.org, a microservices toolkit for Node.js. It is a good starting point for beginners and for starting to think in "services" instead of monolithic applications.
Having said that, building distributed applications is hard, takes time to learn, and takes a LOT of time to master, and you will usually face a lot of trade-offs between performance, reliability, maintenance, etc.

Nginx or LVS for Node.js load balance?

Our project needs to do TCP packet load balancing to node.js.
The proposal is: (Nginx or LVS) + Keepalived + Node Cluster
The questions:
The highly concurrent client connections to the TCP server need to be long-lived. Which one is more suitable, Nginx or LVS?
We need to allocate different priority levels for the node master on the Master server (the priority of the localhost server will be higher than that of the remote servers). Which one can do this, Nginx or LVS?
Which one has lower CPU utilization and higher throughput, Nginx or LVS?
Any recommended documents for performance benchmarking/function comparison between Nginx and LVS?
Finally, we wonder whether our proposal is reasonable. Are there any better proposals or components to choose?
I'm assuming you do not need nginx to serve static assets, otherwise LVS would not be an option.
1) nginx only supports TCP via a 3rd-party module, https://github.com/yaoweibin/nginx_tcp_proxy_module. If you don't need a webserver, I'd say LVS is more suitable, but see my additional comment at the end of the numbered answers.
2) LVS supports priority, nginx does not.
3) Probably LVS: nginx runs in userland, LVS in the kernel.
4) Lies, damned lies and benchmarks. You have to simulate your load on your own equipment: write a node client script and pound your setup.
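For point 4, a hypothetical starting point for such a client script (it assumes a TCP server on port 9000 that replies to each message; host, port, payload and connection count are placeholders):

var net = require('net');

var HOST = '127.0.0.1', PORT = 9000, CONNECTIONS = 1000;

for (var i = 0; i < CONNECTIONS; i++) {
    (function (id) {
        var socket = net.connect(PORT, HOST, function () {
            var sentAt = Date.now();
            socket.write('ping');
            socket.on('data', function () {
                if (id === 0) console.log('connection 0 round trip: ' + (Date.now() - sentAt) + ' ms');
                sentAt = Date.now();
                socket.write('ping');   // keep each connection long-lived and busy
            });
        });
        socket.on('error', function () { socket.destroy(); });
    })(i);
}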
We are looking at going all node from front to back with up (https://github.com/LearnBoost/up). It's not in production yet, but we are pursuing this route for the following reasons:
1) We also have priority requirements, but they are custom and change dynamically. We are adjusting priority at runtime and it took us less than an hour to program node to do it.
2) We deploy a lot of code updates and up allows us to do it without interrupting existing clients. Because you can code it to do anything you want, we can spin up brand new processes to handle new connections and let the old ones die when existing connections are all gone.
3) We can see everything because we push any metric we want to see into a redis server.
I'm sure it's not the most performant per process/server, but the advantage of having so much programmatic control is worth it, and scaling out has the advantage of more redundancy, so we are not looking at squeezing the last bit of performance out of the stack.
I just checked real quick to see if I could copy/paste a bunch of code, but we are rapidly coding it and it has a lot of references to stuff that would not be suitable for public consumption.

Resources