As far as I know, the Nginx server is best known for:
Serving static content
Proxying and load balancing
I am using Amazon's Auto Scaling with ELB, and my server is a pure REST server without any static files.
Please let me know if there are any other reasons for using Nginx that might be beneficial for my server.
I have an ExpressJS backend and I want to run it over HTTPS on AWS (so I don't get a 'mixed content' error when connecting from my frontend, which runs over HTTPS). It runs great over HTTP, but it doesn't work over HTTPS.
I asked this question before and got answers like 'use nginx' or 'use a load balancer'. Unfortunately, I don't know much about this stuff, as I'm not very experienced with all the AWS variations and options. Are there any tutorials I can follow step by step? Or any easy way to serve my backend over HTTPS without complexity?
any easy way to serve my backend over https without complexity?
The easiest way (not to be confused with the cheapest way) is to change your EB environment to a load-balanced one. You can do this in the EB console's configuration settings.
This change will create an Application Load Balancer (ALB) for your app and place it in front of your instance. Once the ALB is running, you can follow this AWS guide:
How can I configure HTTPS for my Elastic Beanstalk environment?
In the above, only the section Terminate HTTPS on the load balancer is relevant.
Depending on the nature of your application (is it fully dynamic, or more on the static side?), you could also consider Using Elastic Beanstalk with Amazon CloudFront instead of using an ALB. CloudFront can also easily be set up to use HTTPS between clients and CloudFront, but the issue is that traffic between CloudFront and your EB instance would go over the internet unencrypted (HTTP). Obviously, you could make that leg HTTPS as well, but this requires further changes and configuration, which does not fall into the category of "easy ways".
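For comparison, if you went the 'use nginx' route that some earlier answers suggested, terminating HTTPS at nginx in front of the Express app would look roughly like the sketch below. The domain name, certificate paths, and Express port (3000) are placeholders, not something from your setup.

```
# Minimal sketch, not a drop-in config: nginx terminates HTTPS and forwards
# plain HTTP to the Express app. Domain, cert paths, and port are assumptions.
server {
    listen 443 ssl;
    server_name api.example.com;

    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;              # Express stays on plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With something like this in place, the browser talks HTTPS to nginx and the 'mixed content' error goes away, while Express itself keeps listening on plain HTTP.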
I'm new to Varnish. Is it possible to configure Varnish on a dedicated Varnish server? I have separate nginx load balancers in front of a Kubernetes cluster. My goal is to cache a lot of static files like .js, .css, and images, or even static pages, so that every cacheable request for services in the Kubernetes cluster is served from the Varnish server. Is it possible to do that? I have attached my Varnish configuration, please check it.
10.10.10.27: nginx-lb-01, 10.10.10.28: nginx-lb-02, 10.10.10.29: Varnish
I have already tried to configure it, but I think it failed, because when I check with varnishstat there are no traffic statistics. In every nginx vhost I have already configured the site on port 8080, with a redirect to 443.
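Roughly, this is the flow I am trying to achieve, in nginx terms. It is only a sketch: the domain, certificate paths, the Varnish port (6081, the default), and the ingress address are placeholders, not my actual values.

```
# Sketch of one nginx LB vhost (e.g. on 10.10.10.27): HTTPS is terminated here
# and requests are passed through Varnish on 10.10.10.29.
server {
    listen 443 ssl;
    server_name app.example.com;                 # placeholder domain

    ssl_certificate     /etc/nginx/ssl/app.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/app.key;

    location / {
        proxy_pass http://10.10.10.29:6081;      # Varnish (default listen port assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

# Vhost on port 8080 that Varnish fetches from; cache misses are proxied on
# to the Kubernetes services from here.
server {
    listen 8080;
    server_name app.example.com;

    location / {
        proxy_pass http://10.10.10.30:80;        # placeholder: Kubernetes ingress
        proxy_set_header Host $host;
    }
}
```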
How can I solve this?
Thank you
[Screenshot: Varnish configuration]
I was going through the Uber Engineering website when I came across this paragraph, and it confused me a lot. If anyone can make it clear for me, I would be thankful:
The Edge: The frontline API for our mobile apps consists of over 600 stateless endpoints that join together multiple services. It routes incoming requests from our mobile clients to other APIs or services. It’s all written in Node.js, except at the edge, where our NGINX front end does SSL termination and some authentication. The NGINX front end also proxies to our frontline API through an HAProxy load balancer.
This is the link.
NGINX is already a reverse proxy and load balancer, so where does the HAProxy load balancer come into the picture, and where exactly does it fit? What is "the edge" they are talking about? Either the author used confusing wording, or my English is failing me.
Please help.
It seems like they're using HAProxy strictly as a load balancer, and NGINX strictly to terminate SSL and handle authentication. In most cases it isn't necessary to use HAProxy along with NGINX; as you mentioned, NGINX has load-balancing capabilities, but being Uber, they probably ran into some unique problems that required the use of both. According to the information I've read, such as http://www.loadbalancer.org/blog/nginx-vs-haproxy/ and https://thehftguy.com/2016/10/03/haproxy-vs-nginx-why-you-should-never-use-nginx-for-load-balancing/, NGINX works extremely well as a web server, including the use case where it serves as a reverse proxy for a Node application, but its load-balancing capabilities are basic and not nearly as performant as HAProxy's. Additionally, HAProxy exposes many more metrics for monitoring and has more advanced routing capabilities.
Load balancing is not the core feature of NGINX. In the context of a Node.js application, what you would usually see NGINX used for is acting as a reverse proxy, meaning that NGINX is the web server and HTTP requests come through it. Then, based on the hostname and other rules, it forwards the HTTP request on to whatever port your Node.js application is running on. As part of this flow, NGINX will often handle SSL termination, so that this computationally intensive task is not handled by Node.js. Additionally, NGINX is often used to serve static assets for Node.js apps, as it is more efficient at this, especially when compressing assets.
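To make that flow concrete, here is a minimal sketch of that kind of reverse-proxy setup. The domain, certificate paths, static directory, and Node port (3000) are placeholders, not anything from Uber's setup.

```
# Sketch only: NGINX terminates SSL, serves static assets, compresses
# responses, and forwards everything else to the Node.js app.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/ssl/app.crt;
    ssl_certificate_key /etc/nginx/ssl/app.key;

    gzip on;                                       # compression handled at the proxy
    gzip_types text/css application/javascript application/json;

    # Static assets served directly, without touching Node.js
    location /static/ {
        root /var/www/app;
        expires 7d;
    }

    # Everything else goes to the Node.js application
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```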
Just a quick question.
What would be more beneficial: serving my Angular application via Node with a reverse proxy from nginx, or just serving it directly from nginx?
I would think it would be faster to serve it directly from nginx.
If there is a clean separation of your client-side code and your server-side code (i.e. anything the client needs to run is either pre-built into static files or served via your REST API), then it's far better to serve the client-side files either directly from NGINX or from a CDN. Performance and scaling are better, there is less work for you to do in code on the server to manage caching, etc., and you can later scale the API independently.
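As a rough illustration of that split, the sketch below serves the built Angular bundle as static files and proxies only the REST API to Node. The paths, the /api prefix, and the Node port (3000) are assumptions.

```
# Sketch only: static Angular build served by nginx, API proxied to Node.
server {
    listen 80;
    server_name app.example.com;

    root /var/www/app/dist;                 # output of the Angular production build

    # Client-side routing: unknown paths fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # REST API handled by the Node process
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```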
nginx (as a reverse proxy) + Node.js is the best choice.
You will get many more benefits if you choose nginx as a frontend for Node.js (SSL, HTTP/2, configuration, load balancing, etc.).
As for static files (JS, HTML, images), it is much easier to cache them in one place (the nginx host config), although Node also serves static files quite well.
I think the Node.js server should do only one thing: the business logic of the application.
It depends on your load requirements. You can set up multiple instances (runtimes) using nginx + Node. If you have a high-load JS application, I would suggest going for this solution. Otherwise, it does not matter much.
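If you do run multiple Node instances, balancing across them from nginx looks roughly like the sketch below. The number of instances and their ports (3000-3002) are assumptions.

```
# Sketch only: round-robin load balancing across several Node processes.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://node_app;         # requests distributed across the upstream
        proxy_set_header Host $host;
    }
}
```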
I have read that an Nginx server can be set up as a proxy for a Node.js application to listen behind, but I am unsure exactly what additional purpose and advantages this serves compared to listening with the http module provided by Node.js.
For one, you can serve multiple Node applications on one server, with host-based virtual servers managed by nginx, so that requests to the same port but with a different Host: HTTP header reach different Node applications.
Also, nginx can be set up to serve static assets without hitting your Node app, and to do some caching if you need it.
Those are two things that you can achieve by adding nginx to the mix, but you may not need them in your case. Also, you can run a reverse proxy with Node and without nginx if that's what you prefer.
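For the host-based virtual server point above, a minimal sketch of what that looks like: two Node apps behind one nginx on port 80, selected purely by the Host header. The hostnames and ports are placeholders.

```
# Sketch only: two server blocks on the same port, routed by Host header.
server {
    listen 80;
    server_name app-one.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;    # first Node application
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app-two.example.com;

    location / {
        proxy_pass http://127.0.0.1:4000;    # second Node application
        proxy_set_header Host $host;
    }
}
```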