socket.io slow response after using nginx

On my local setup I served my Node.js application directly, without nginx; I was using socket.io and performance was quite good.
Now that I proxy requests through nginx, socket.io has a huge response time: the page itself renders fast, but the data rendered by socket.io arrives an order of magnitude slower than before.
I am using nginx 1.1.16, and here is the conf:
gzip on;

server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:9999;
        root html;
        index index.html index.htm;
    }
}
Even though everything is working, I have two issues:

1. The socket.io response is slower than before. With nginx, the response time is around 12-15 s; without it, it's hardly 300 ms. I tested this with ApacheBench.
2. I see this message in the console, which was not there before using nginx:

[2012-03-08 09:50:58.889] [INFO] console - warn - 'websocket connection invalid'

You could try adding:
proxy_buffering off;
See the docs for details, but I've seen chatter on various forums that proxy buffering increases the response time in some cases.
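In the context of the asker's config, that suggestion would look roughly like this (a sketch; the port is taken from the question above):

```nginx
location / {
    proxy_pass http://localhost:9999;
    proxy_buffering off;   # stream responses to the client as they arrive
}
```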

Is the console message from nginx or Socket.IO?
The nginx proxy does not talk HTTP/1.1 to upstreams, which may be why the WebSocket connection is not working.
Update:
Found a blog post about it: http://www.letseehere.com/reverse-proxy-web-sockets
A proposed solution:
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/

nginx only supports WebSocket proxying starting from version 1.3.13. It should be straightforward to set up; check the link below:
http://nginx.org/en/docs/http/websocket.html
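Following that documentation, the essential directives are the ones below (a sketch; the upstream address is taken from the question above):

```nginx
location / {
    proxy_pass http://localhost:9999;
    proxy_http_version 1.1;                     # WebSocket requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;     # forward the Upgrade header
    proxy_set_header Connection "upgrade";      # ask the upstream to switch protocols
}
```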

Related

Setting up an nginx reverse proxy

I have a Node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is the nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;

    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I curl it locally with curl localhost/nginx_status, it returns:
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it returns a 404 error. For example, if the IP address were 123.456.789.098, the browser shows the welcome page above, but 123.456.789.098/nginx_status returns a 404. curl ip_address/nginx_status also returns a 404.
My question is: how can I access the Node application running on port 5000 from the outside world?
Unfortunately I only see part of your config; is there another server that listens on port 80?
You don't use default_server on the listen directive either, and without a server_name it is difficult to distinguish between servers. So perhaps another config with a server on port 80 marked default_server is taking effect. Check in your /etc/nginx/ folder which server {..} blocks exist.
The proxy_pass looks correct, if the Node.js server is really listening there; also check again whether it really uses the http or https scheme, so proxy_pass passes the correct protocol.
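What the default_server suggestion looks like in practice (a sketch with hypothetical names; adjust to your setup):

```nginx
server {
    listen 80 default_server;      # catches requests matching no other server_name
    server_name _;
    return 444;                    # or serve a default site here
}

server {
    listen 80;
    server_name api.example.com;   # hypothetical name for your API host

    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
```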
You should also add access control for stub_status, since it is information you don't want to entrust to everyone. In my setup, only one internal application has access to it, on a separate listen address that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
I'm curious what you find out! :)

Cloudflare - No further requests possible during download

My site allows users to download big .zip files. A problem I'm dealing with right now is that whenever a user is downloading such a file, all other requests to the site wait until the download finishes or is cancelled, making the site practically unusable. In the Chrome network tab, the request shows as pending. Why could this be?
The server itself is implemented in Node.js using Express and is proxied through nginx and then through Cloudflare. When I connect to the Express server or the nginx proxy directly, this problem doesn't come up; from what I have observed, it only occurs when routed through Cloudflare.
This is my NGINX config, if of any help:
server {
    listen 80;
    listen [::]:80;
    server_name marbleland.vani.ga;
    client_max_body_size 20m;

    location / {
        proxy_pass "http://localhost:20020/";
    }
}
Am I missing something obvious?

Phusion Passenger: Simple / easier / faster OR Too complicated?

Simple / Easier / Faster are the slogans I read everywhere Passenger is written about.
But it's really complicated to deploy.
In early 2019 I deployed for the first time; it took about a week of searching to figure out the configuration.
Now, on my second deployment, I have spent three days with no luck, and the post that helped me configure it last time is gone.
The official tutorials are very general and only help in easy cases. Most online tutorials are about PM2 and the like; few cover Passenger, and not in much depth.
I have Frontend & Backend placed in same root folder with structure like this:
Root app folder
|___client (Frontend - React)
|.......|___node_modules
|.......|___public
|.......|.......|___index.html
|.......|.......|___favicon.ico
|.......|___src
|.......|.....|___index.js
|.......|.....|___App.js
|.......|.....|___actions
|.......|.....|___components
|
| (Backend - nodeJS using express)
|___node_modules
|___middleware
|___models
|___routes
|___server.js
I'm using nginx + Passenger + Node.js.
Config for the backend:
server {
    listen 8080;
    server_name x.x.x.x www.x.x.x.x;
    root /var/www/tnapp/code/public;
    passenger_app_root /var/www/tnapp/code;
    passenger_app_type node;
    passenger_startup_file server.js;
    passenger_enabled on;
}
The server doesn't respond; I just get this error:
This site can't be reached. x.x.x.x took too long to respond.
Frontend config:
server {
    listen 80;
    server_name x.x.x.x www.x.x.x.x;
    root /var/www/tnapp/code/client/public;
    passenger_app_root /var/www/tnapp/code/client/src;
    passenger_app_type node;
    passenger_startup_file server.js;
    passenger_enabled on;
}
With this frontend config I get the HTML part of the response, but the Node.js part doesn't seem to be working.
I hope someone can help me configure this; I'm exhausted.

Use Reverse Proxy from Https client to Http server running locally on my machine

I have a published site that uses HTTPS. The site needs to communicate with an HTTP Node Express API, which runs on my local machine. Everything worked fine until I switched the client application to HTTPS; now I receive mixed-content warnings. I have been reading about reverse proxies and wonder if this could be the solution to my problem. Is it possible to proxy a request to my localhost? Or will localhost point to the server the proxy is on?
I have been looking at using nginx as the reverse proxy server, but I have zero experience with proxies and am not positive how to go about it.
I am mainly wondering whether it is possible before I dig any deeper.
Yes, this is a pretty standard use case for nginx (or any other reverse proxy). You would configure the location prefixes, etc. that need to go to your backend application and proxy to them via the proxy_pass directive. Any static content can be served directly by nginx, and all of it can sit behind nginx's HTTPS termination.
Assuming your application never issues absolute URLs using "http://", this should resolve your mixed-content warnings.
You will probably want to read some tutorials, but the basics of your configuration would be:
server {
    listen 443 ssl; # you can also add http2
    server_name hostnames that you listen for;
    ssl_certificate_key /path/to/cert.key;
    ssl_certificate /path/to/cert.pem;
    root /var/www/sites/foo.com;

    location /path/handled/by/application {
        proxy_pass http://localhost:8000; # or whatever the port is
    }
}
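Not shown in the answer above, but commonly added in this setup: forwarding the original host and scheme to the backend so the Express app can tell the request arrived over HTTPS. A sketch using standard nginx directives and variables:

```nginx
location /path/handled/by/application {
    proxy_pass http://localhost:8000;
    proxy_set_header Host $host;                  # original Host header
    proxy_set_header X-Forwarded-Proto $scheme;   # "https" at this listener
    proxy_set_header X-Real-IP $remote_addr;      # client address
}
```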

Prevent Nginx from caching nodejs responses

I am using nginx to proxy requests to different servers based on the directory the user wants to access:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name localhost;

    location / {
        proxy_pass http://****.***/;
    }

    location /app/ {
        proxy_no_cache '1';
        proxy_cache_bypass '1';
        proxy_buffering off;
        include proxy_params;
        proxy_pass http://localhost:3000/;
    }
}
This is the nginx configuration. A Node app is running on port 3000.
The problem I am facing:

1. The user accesses /app.
2. The server sends login.html from the Node app.
3. The user logs in from the page.
4. Node sends home.html after a successful login. (The problem lies here.)

Although Node is sending home.html, nginx responds to the request with a 304 code and the browser shows the same login page again.
Example of the Node app:
.....
app.get("/", function(req, res){
    ***Sends login page or home page based on session***
});
app.get("/processLogin", function(req, res){
    ***redirects to / after setting session***
});
.....
In proxy mode, nginx uses the Expires header to reduce load on the backend server, so simply set expires off; in the proxy location block and the caching should be gone.
In case the caching occurs in the browser, you'll also need to set the cache-control header to no-cache:
add_header Cache-Control no-cache;
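Applied to the asker's config, those two directives would go in the /app/ block, roughly like this (a sketch; the upstream port is taken from the question):

```nginx
location /app/ {
    proxy_pass http://localhost:3000/;
    expires off;                        # don't emit an Expires header
    add_header Cache-Control no-cache;  # tell browsers to revalidate each time
}
```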
Adding no-cache headers in Node.js helped solve the problem.
One question: do you use pm2 for your Node application? Many people use nginx + pm2.
During development with pm2, you need to pass the --watch flag when launching your app. pm2 loads all the Node.js code into memory and doesn't check for file changes on disk, so you get a caching-like phenomenon.
So, during the development phase, instead of
pm2 start MyApp.js
do
pm2 start MyApp.js --watch
Honestly, I don't see how a browser cache or nginx could cache variable responses produced by Node.js programs; it had to be pm2, in my case.
