I'm building a MEAN stack application, and I just found out that it's a best practice to let Nginx serve static files (currently my Node server is serving them) and act as a reverse proxy. I've managed to serve static files and set up the reverse proxy in Nginx. My question is: is there a way to secure access to the static files?
This is my Nginx configuration:
server {
    listen 80;

    location /static {
        alias /var/www/project/public;
        autoindex off;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Under the public folder I have style.css.
So when I go to the URL localhost/static/style.css, I can see my code. I imagine that once I deploy my website publicly under its own domain name, users could access my static files just by going to www.domainname.com/static/style.css.
Is this normal? Or is there a way to limit access so that only the Node.js server can reach the static files? Or am I getting this wrong?
Thanks! Sorry, I'm new to the web development world, but I'm learning.
You can limit access using nginx by adding the following to your location definition:
#This would be the IP of the server you want to have access to your protected file
allow 123.123.123.123/32;
deny all;
But in this case, you don't want to restrict access to your static files. The user loading the web page needs access to the CSS files to display it correctly. If you were to watch the network traffic when you load a web page, you would see that your browser downloads all the client-side CSS, JS, and HTML files it needs to run properly. So it is completely normal for people to be able to look at CSS files that are hosted statically. Usually a backend Node.js server has no use for CSS files.
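For completeness, this is roughly how those directives sit inside a location block. The /internal path, its alias, and the IP address are placeholders for something you actually want to keep private; your public assets under /static should stay open:
server {
    listen 80;

    # Public assets: anyone loading the page needs these
    location /static {
        alias /var/www/project/public;
        autoindex off;
    }

    # Hypothetical private path, e.g. internal reports
    location /internal {
        alias /var/www/project/private;
        allow 123.123.123.123/32;  # only this address may fetch these files
        deny all;                  # everyone else gets 403
    }
}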
Related
I have a backend that is composed of multiple microservices. One of them is the media microservice, whose job is to receive a payload through RabbitMQ and save the received files on the filesystem. The main backend then receives the response from the broker and saves the path and filename in a database. The frontend clients then receive the path and display the image found on the media microservice.
This all works fine when developing locally, but I'm missing a crucial part in production: I haven't quite figured out how to configure nginx to allow access to the files/images. The main backend lives on a certain port and the microservices each on a different port (the plan is to later dockerize the microservices and deploy them each on separate VPSs). The media microservice does not have any functionality to serve images; it just handles saving the files to the filesystem, so all I need is a way to access the files on said filesystem. Any hints on how I can configure something of the sort in nginx?
So it turns out I still needed Express, or at least Express makes it much easier to serve the actual files:
app.use("/images", express.static(path.join(__dirname + "/media/images")));
With the following nginx configuration
location /images/ {
    proxy_pass http://localhost:****;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
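Put together, the media service only needs a very small Express app around that one line. A minimal sketch, assuming the service listens on port 4001 (the real port is masked above, so this number is purely illustrative) and that the images live under media/images next to the script:
const express = require("express");
const path = require("path");

const app = express();

// Serve everything under ./media/images at /images/<filename>.
// express.static takes care of content types, ETags and range requests.
app.use("/images", express.static(path.join(__dirname, "media", "images")));

// Hypothetical port; use whatever your nginx proxy_pass points at.
app.listen(4001, "127.0.0.1", () => {
  console.log("media service listening on 127.0.0.1:4001");
});
An alternative would be to point an nginx alias or root block straight at that directory, but then nginx needs read access to wherever the microservice writes the files.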
I want to run a child process with Node.js from one server on another server. I have a process that is too heavy and is slowing down my main server, so I want to run the heavy work (like data modifications) on another server and have it return a buffer of that data, but I could not find anything similar to this.
For example, I have server A that runs my website, and users share their content through it. When user traffic spikes, my server slows down because of work like image and video uploads, generating PDF reports from those images and videos, and serving the site content. I want to perform these tasks on server B, so that server A only handles data serving and traffic management.
At this point you probably need to split your webserver's frontend routes across different worker servers.
Let's suppose you're using Nginx as the website frontend. If you're not, then your first step would be to set up an nginx web front.
1 - If you haven't done so already, serve all public static content (PDF files, videos, images, etc.) directly from nginx, using different rules for static content and Node server routes.
Something as basic as this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000;
    }

    location /static {
        root /dir/to/my/static/files; # here you have your videos, images, etc.
    }
}
2 - Now, if you need to split your Node server into 2 services, you can just create 2 (or more) nginx proxy rules:
server {
    listen 80;
    server_name example.com;

    location /api {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.2:5000; # server 2
    }

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:5000; # server 1
    }

    location /static {
        root /dir/to/my/static/files;
    }
}
That way, example.com/api/* routes will go to your secondary Node server (on IP 127.0.0.2), example.com/static will be served directly by Nginx at blazing speed, while the non-mapped routes will be served by the default main Node server on 127.0.0.1.
There are many ways to set up proxies and optimize Nginx so that it can, for instance, go through a pool of Node servers in round-robin fashion, and you can also compress data and use protocols like HTTP/2 to take load off the slower Node-based webserver (i.e. Express).
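As a sketch of that last point: a pool of identical Node workers behind one route is just an upstream block, which nginx cycles through in round-robin order by default. The three ports below are only examples:
upstream node_pool {
    # nginx distributes requests across these in round-robin order by default
    server 127.0.0.1:5000;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://node_pool;
    }
}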
I am writing a website with node.js, and, until now, I've always separated the client and server parts in two different node.js instances (and processes):
one for the server part (APIs, interaction with databases, etc.)
one for the client part (js code is executed in the browser)
Is this the correct way of doing it? Or is there a way to collapse client and server into one node.js instance?
Thanks.
You do not need node.js to provide clients with static files.
Nginx (or any other reverse proxy) can do it more efficiently, conserving your server's resources and allowing higher loads.
I suggest you use nginx to serve the static files and forward API requests to the node.js service.
Here is an example of how you could do it:
server {
    listen 80 default_server;
    root /client-code;

    location / {
        try_files $uri $uri/ @node;
    }

    location @node {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }
}
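With that setup the Node process never touches static files at all; it only has to answer the routes nginx falls back to. A minimal sketch of such a service on port 8000, where the /api/hello route is made up purely for illustration:
const express = require("express");

const app = express();

// Only dynamic/API routes live here; nginx has already served anything
// that matched a real file under /client-code.
app.get("/api/hello", (req, res) => {
  res.json({ message: "hello from the API" });
});

app.listen(8000, "127.0.0.1");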
I already use nginx as a reverse proxy in front of my node.js web apps (3000 <-> 80, for example). Currently I serve my assets from the node app, using the express.static middleware.
I have read again and again that nginx is extremely efficient at serving static files.
The question is, what is best? Serving assets as I already do, or configuring nginx to serve the static files directly itself?
Or is it almost the same?
The best way is to use the nginx server to serve your static files and let your node.js server handle the dynamic content.
It is usually the most optimized solution, reducing the number of requests hitting your node.js server, which is slower at serving static files than nginx.
The configuration to achieve that is very easy if you have already set up a reverse proxy for your nodejs app.
And the nginx configuration could be:
server {
    listen 80;
    root /home/myapp;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location /public/ {
        alias /home/myapp/public/;
    }

    location / {
        proxy_pass http://IPADRESSOFNODEJSSERVER:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        #try_files $uri $uri/ =404;
    }
}
Every request whose URL starts with /public/ will be handled by nginx, and every other request will be proxied to your nodejs app at IPADRESSOFNODEJSSERVER:NODEJSPORT (usually IPADRESSOFNODEJSSERVER is localhost).
The Express docs say the same at http://expressjs.com/en/advanced/best-practice-performance.html#proxy:
An even better option is to use a reverse proxy to serve static files;
see Use a reverse proxy for more information.
Moreover, nginx lets you easily define caching rules for static assets that don't change, which can speed up your app further with one extra line:
location /public/ {
    expires 10d;
    alias /home/myapp/public/;
}
You can find a lot of articles comparing both methods online, for example:
http://blog.modulus.io/supercharge-your-nodejs-applications-with-nginx
I am running Varnish on EC2 in front of nginx, which routes to node.js.
What I would like is to serve specific static HTML pages from certain routes (like, / for index.html) via nginx, but have all other routes be handled by node.js.
As an example, / would be sent by nginx in the form of a static HTML page, while anything not matching, say /dynamic_stuff or /dynamic_stuff2, would be processed by node.js.
In other threads online, other people were putting node.js in a separate dir entirely, like /node/dynamic_stuff but I didn't want to have a separate dir for my routing.
Right now I have / served up by node.js like everything else, but if I'm just testing my node.js server and I take it down, I'd like / to fall back to an nginx version of index.html. As it stands, if my node.js server is taken down, I get a 502 Bad Gateway.
I'm not too worried about performance from serving up files via nginx vs. node.js, I just figure that I want to have nginx handling basic pages if node.js goes down for whatever reason.
Relevant script:
location = / {
    index index.html;
    root /path/to/public;
    try_files $uri $uri/ index.html;
}

location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://node_js;
}
If I use the above code, all requests still get sent to node.js, including /.
I think the simplest thing to do, if it's just the index.html, is to set:
index index.html;
root /path/to/public;
All files in your public directory should now be served by nginx.
Now put this index.html in the public directory of your node app. The rest will be proxied from nginx to the node instance.
Of course you can simply put other static html in subdirectories if you want:
public/about/index.html
public/faq/index.html
...
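Pulled together, one way to spell this out, if you also want every unmatched route to keep going to Node, is a try_files fallback to a named location. A sketch, assuming the node_js upstream referenced in the question is defined elsewhere and the paths are the ones above:
server {
    listen 80;
    root /path/to/public;
    index index.html;

    location / {
        # Serve a real file or directory (e.g. / -> index.html,
        # /faq/ -> faq/index.html) if it exists on disk,
        # otherwise hand the request over to the Node.js app.
        try_files $uri $uri/ @node;
    }

    location @node {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://node_js;  # upstream from the question's config
    }
}
With this, / keeps being served from disk even while the Node process is down, which avoids the 502 described above.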