Prevent Nginx from caching Node.js responses

I am using Nginx to proxy requests to different backends based on the directory the user wants to access.
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;

    location / {
        proxy_pass http://****.***/;
    }

    location /app/ {
        proxy_no_cache '1';
        proxy_cache_bypass '1';
        proxy_buffering off;
        include proxy_params;
        proxy_pass http://localhost:3000/;
    }
}
This is the Nginx configuration. A Node app is running on port 3000.
The problem I am facing is:
1. The user accesses "/app".
2. The server sends login.html from the Node app.
3. The user logs in from that page.
4. Node sends home.html after a successful login. (The problem lies here.)
Although Node is sending home.html, Nginx responds to the request with a 304 code and the browser shows the same login page again.
Example of the Node app:
.....
app.get("/", function(req, res){
    // sends login page or home page based on session
});
app.get("/processLogin", function(req, res){
    // redirects to / after setting session
});
.....

In proxy mode Nginx uses the Expires header to reduce load on the backend server...
So simply set expires off; in the proxy location block and the caching should be gone.
In case the caching occurs in the browser, you'll need to set the Cache-Control header to no-cache:
add_header Cache-Control no-cache;
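Put together with the /app/ block from the question, a minimal sketch of that (the backend address is the one from the question) might look like:
location /app/ {
    # don't add an Expires header and skip the proxy cache for these responses
    expires off;
    proxy_no_cache '1';
    proxy_cache_bypass '1';
    proxy_buffering off;

    # ask the browser not to cache the responses either
    add_header Cache-Control no-cache;

    include proxy_params;
    proxy_pass http://localhost:3000/;
}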

Adding no-cache headers in Node.js helped solve the problem.
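For anyone looking for a concrete example, here is a minimal sketch of doing that in the Express app above, assuming a middleware registered before the routes (the exact header values are an assumption, not the poster's code):
// register before the routes so every response carries no-cache headers
app.use(function (req, res, next) {
    res.set("Cache-Control", "no-store, no-cache, must-revalidate");
    res.set("Pragma", "no-cache");
    res.set("Expires", "0");
    next();
});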

One question: do you use pm2 for your Node application? Many people use Nginx + pm2.
During development with pm2, you need to pass the --watch flag when launching your app. pm2 loads all the Node.js code into memory and doesn't check for file changes on the hard disk, so you get a caching-like effect.
So, during the development phase, instead of
pm2 start MyApp.js
do
pm2 start MyApp.js --watch
Honestly, I don't see how a browser cache or Nginx could cache variable responses generated by Node.js programs. In my case, it had to be pm2.

Related

Setting up an Nginx reverse proxy

I have a Node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is the Nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I try it with curl localhost/nginx_status, it returns:
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it shows a 404 error. For example, if I open the IP address 123.456.789.098 in a browser, it shows the message above, but 123.456.789.098/nginx_status returns a 404 error. Even curl ip_address/nginx_status returns a 404 error.
My question is: how can I access the Node application running on port 5000 from the outside world?
Unfortunately I only see part of your config. Is there another server block that listens on port 80?
You don't use "default_server" on the listen directive either, and without a "server_name" I find it difficult to tell the server blocks apart. So maybe another config with a server block on port 80 marked as default_server is taking effect. Check in your /etc/nginx/ folder which server {..} blocks exist.
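As a sketch of what that could look like, assuming the server block from the question is meant to be the one answering on the public IP (the catch-all server_name "_" is an assumption, not part of the original config):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # catch-all name; replace with your domain or public IP if you have one
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}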
The proxy_pass looks correct if the Node.js server is really listening there; check again whether it really uses the http or https scheme, so that proxy_pass forwards to the correct protocol.
But you should then add access control for stub_status, since this is information you don't want to entrust to everyone. In my case only one application has internal access to it, on a separate listen address that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
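With that in place the status page is only reachable locally, e.g. with curl http://127.0.0.1:10081/flyingfish_status from the server itself.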
I'm curious what you find out! :)

Cloudflare - No further requests possible during download

My site allows users to download big .zip files. A problem I'm dealing with right now is that whenever the user is currently downloading such a file, all other requests to the site wait until the download is finished or cancelled, making the site practically unusable. In the Chrome network tab, the request shows as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through NGINX and then through Cloudflare. When I connect to the Express server or the NGINX proxy directly, this problem doesn't come up, only when it's routed through Cloudflare from what I have observed.
This is my NGINX config, in case it's of any help:
server {
    listen 80;
    listen [::]:80;

    server_name marbleland.vani.ga;
    client_max_body_size 20m;

    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?

NodeJs Auto start in business server with systemd as different user

I have my business app running in a Development environment, and inside it, two folders named Client and Backend.
Client (ReactJS) runs on port 5000.
Backend (Node.js) runs on port 6000.
The server is Nginx.
So in the Nginx default.conf file I listen on 80 and proxy_pass to http://localhost:5000.
It's working fine in Development.
Please note, some redirections are configured like ${host}:3000/xxx in the backend and client scripts.
But while doing the production deployment, I ran into difficulties.
I have the static client build and have placed it in the Nginx root folder.
Below is the .conf file:
server {
    listen 80;
    listen 5000;
    server_name xx.xxx.xxx.xxx;

    location / {
        root /usr/share/nginx/html/client/build;
        index index.html index.htm;
        try_files $uri $uri/ #backend;
    }

    location ~ ^/([A-Za-z0-9]+) {
        proxy_pass http://localhost:6000;
    }
}
I also have SSO enabled; when I navigate to the address, it sends the index.html file, which is the login page.
When I press login, it first navigates to "/login/abc/", which is routed in the "backend" script.
But it responds with a 404 error.
What am I doing wrong?

Nginx + node.js configuration

I need the right Nginx configuration for my problem.
Suppose the Nginx and Node.js server programs are running on the same Debian machine.
The domain name for my website is, for simplicity, just webserver.com (with www.webserver.com as an alias).
Now, when someone browses to "webserver.com/", Nginx should pass the request to the Node.js application, which runs on a specific port such as 3000. But the images and CSS files should be served by Nginx as static files, and the file structure should look like webserver.com/images or webserver.com/css; images and CSS should be served by Nginx like a static server.
Now it gets tricky:
When someone browses to webserver.com/staticsite001 or webserver.com/staticsite002, it should be served by Nginx only; no need for Node.js then.
And for the Node.js side, I am just setting up my Node.js application on port 3000, for example, to receive the pass-through from Nginx for webserver.com/.
To put it in more understandable language: when someone browses to webserver.com/staticsite001, Nginx should NOT pass the request to the Node application. It should only pass requests to the Node application if they are inside the top-level webserver.com/ directory that outsiders can see. webserver.com/staticsite001 should only be served by Nginx.
So, how do I do that? And what should the http and server blocks of the Nginx configuration look like?
I am familiar with Node.js, but I am new to Nginx and new to reverse proxying.
Thanks.
The file structure on the Debian hard drive looks like:
/home/wwwexample/staticsite001 (for www.webserver.com/staticsite001/), only handled by Nginx
/home/wwwexample/staticsite002 (for www.webserver.com/staticsite002/), only handled by Nginx
/home/wwwexample/images
/home/wwwexample/css
and in
/home/nodeapplication is my Node.js application.
This server block should work:
server {
    listen 80;
    server_name webserver.com www.webserver.com;
    root /home/wwwexample;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }

    location /staticsite001 {
    }

    location /staticsite002 {
    }

    location /images {
    }

    location /css {
    }
}
The first location makes Nginx proxy everything to localhost:3000. The following empty locations instruct Nginx to use its default behavior, that is, to serve static files from the root directory.
Put this code into the file /etc/nginx/sites-available/my-server and create a symlink to it in /etc/nginx/sites-enabled. There is a default config which you can use as a reference.
After that you can use the command sudo /usr/sbin/nginx -t to check the configuration. If everything is OK, use /etc/init.d/nginx reload to apply the new configuration, as shown below.
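Step by step, those instructions would look something like this (my-server is just the example file name used above):
# enable the site by symlinking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/my-server /etc/nginx/sites-enabled/my-server

# check the configuration for syntax errors
sudo /usr/sbin/nginx -t

# apply the new configuration
/etc/init.d/nginx reload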

socket.io slow response after using nginx

I used my local setup without Nginx to serve my Node.js application; I was using socket.io and the performance was quite good.
Now I am using Nginx to proxy my requests, and I see that socket.io has a huge response time, which means my page is rendered fast, but the data rendered by socket.io is an order of magnitude slower than before.
I am using NGINX 1.1.16 and here is the conf:
gzip on;

server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:9999;
        root html;
        index index.html index.htm;
    }
Even though everything is working, I have two issues:
1. The socket.io response is slower than before. With NGINX, the response time is around 12-15 seconds; without it, it's hardly 300 ms. I tried this with ApacheBench.
2. I see this message in the console, which was not there before using NGINX:
[2012-03-08 09:50:58.889] [INFO] console - warn - 'websocket connection invalid'
You could try adding:
proxy_buffering off;
See the docs for info, but I've seen some chatter on various forums that buffering increases the response time in some cases.
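Dropped into the location block from the config above, that suggestion would look roughly like:
location / {
    proxy_pass http://localhost:9999;
    # stream responses to the client instead of buffering them first
    proxy_buffering off;
}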
Is the console message from NGINX or SocketIO?
The NGINX proxy does not talk HTTP 1.1 to the backend, which may be why the WebSocket connection is not working.
Update:
Found a blog post about it: http://www.letseehere.com/reverse-proxy-web-sockets
A proposed solution:
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
Nginx only supports WebSocket proxying starting from version 1.3.13. It should be straightforward to set up; check the link below:
http://nginx.org/en/docs/http/websocket.html
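For reference, the configuration that page describes boils down to proxying the Upgrade handshake over HTTP/1.1; a minimal sketch for the setup above (port 9999 taken from the question's config) would be:
location / {
    proxy_pass http://localhost:9999;

    # the WebSocket handshake needs HTTP/1.1 plus the Upgrade and Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}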
