I have nginx set up as proxy to node.js for long polling like this:
location /node1 {
    access_log on;
    log_not_found on;
    proxy_pass http://127.0.0.1:3001/node;
    proxy_buffering off;
    proxy_read_timeout 60;
    break;
}
Unfortunately, about half of the long-poll requests return with an error and an empty response. My nginx version is the one DreamHost offers, v0.8.53, and a long-poll request should be queued on the server for about 30 seconds.
The situation is this: querying node.js directly, like this:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://127.0.0.1:3001/node/poll/2/1373730895/0/0
works fine, but going through nginx:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://www.mydomain.com/node1/poll/2/1373730895/0/0
fails in about half the cases. The failed requests do not appear in the nginx access_log (the successful ones are there), and curl returns:
curl: (52) Empty reply from server
It might also be connected with higher traffic volume, as I don't see this yet on another site that has lower traffic and essentially the same settings.
I would be very grateful for any help with this issue, or hints on how to debug it further.
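One way to gather more data, as a sketch: raise the error-log verbosity for this server block and reproduce the failure, since requests that die before a response is written often show up in the error log rather than the access log (the log path and filename here are assumptions):

# inside the existing server block; "debug" needs a --with-debug build,
# so "info" is a safe level that still records upstream errors and timeouts
error_log /var/log/nginx/node1-error.log info;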
Related
I want to deploy a node.js app with pm2 and Express into a Compute Engine instance. It works fine on port 8080, but when I change the port to 8081, it returns "500 Internal Server Error".
I also have a firewall rule for that port.
/etc/nginx/sites-available/default:
server {
    listen 8081;
    server_name **.***.***.***;

    location / {
        proxy_pass "http://127.0.0.1:8081";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    server_name **.***.***.***;
    root /var/www/html/;
}
server {
listen 80;
server_name **.***.***.***;
root /var/www/html/;
}
The file /home/myuser/.pm2/logs/index-error.log says: "ADDRESS ALREADY IN USE"
File: /var/log/nginx/error.log:
1260 768 worker_connections are not enough while connecting to upstream
I've tried the following command:
sudo netstat -tulpn
And the only process that uses this port is the firewall rule that I created.
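Side note: as posted, the first server block listens on 8081 and also proxies to http://127.0.0.1:8081, i.e. back to nginx itself. That alone would explain both symptoms: node cannot bind 8081 because nginx already holds it ("ADDRESS ALREADY IN USE"), and each proxied request loops back into nginx until connections run out ("worker_connections are not enough while connecting to upstream"). A minimal sketch of the intended split, assuming the node app is kept on an internal port such as 8080 (that port choice is an assumption):

server {
    listen 8081;
    server_name **.***.***.***;

    location / {
        proxy_pass http://127.0.0.1:8080;   # node app on a port nginx does not listen on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}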
Try the possible solutions below:
1) Set the maximum number of simultaneous connections that can be opened by a worker process. See the worker_connections documentation for more information, and check the full example configuration.
The formula for connections is worker_processes * worker_connections, which here should be 12 * 768 = 9216. But your logs say 1768…
events {
    worker_connections 10000;
}
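Keep in mind that worker_connections counts every connection a worker holds, not just client connections, so a single proxied request consumes at least two slots (the client side plus the upstream side).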
Try this on your app.yml:
## Any custom commands to run after building
run:
  - exec: echo "Beginning of custom commands"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_connections 768"
      to: "worker_connections 2000"
  - replace:
      filename: "/etc/nginx/nginx.conf"
      from: "worker_processes auto"
      to: "worker_processes 10"
Be aware that your block on post 2 is acting on the wrong file!
Another way to increase the limit is to set worker_rlimit_nofile 10000. I have had no issues with that; you can safely increase it, since the chance of running out of file descriptors is minuscule.
Package bbb-config now sets worker_rlimit_nofile 10000; and worker_connections 4000; in /etc/nginx/nginx.conf #11347
Note to CentOS/Fedora users: if you have SELinux enabled, you will need to run setsebool -P httpd_setrlimit 1 so that nginx has permission to set its rlimit.
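As a minimal sketch, those two directives go at the top level and inside the events block of /etc/nginx/nginx.conf (the values are the ones quoted above):

worker_processes auto;
worker_rlimit_nofile 10000;    # raises the per-worker limit on open file descriptors

events {
    worker_connections 4000;   # keep this below worker_rlimit_nofile
}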
2) Check whether you need a body parser to populate req.body: github.com/expressjs/body-parser
3) Check whether the problem is a Linux kernel limit on open files; see easyengine.io/tutorials/linux/increase-open-files-limit
Please see the similar SO question for more information.
I am deploying a web application. When executing the app URL, it shows an error, and the logs display: "Waiting for response to warmup request for container". Please help me with this.
I am building a Python-based app and tried using a Docker container in the Azure Portal.
Try to increase the timeout on your web server (e.g. nginx) and/or at the Python level, to allow your page to execute for longer.
You will receive a 504 error when the web server waits too long for a page to be generated: the web server closes the HTTP request because it has reached its timeout.
Here are some nginx configurations to increase the timeouts:
server {
    server_name _;
    root /var/www/html/public/;

    ###
    # increase timeouts to avoid 504 gateway timeouts; values are in seconds
    # (client_max_body_size 0 also allows big file uploads)
    ###
    client_max_body_size 0;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    fastcgi_send_timeout 3600;
    fastcgi_read_timeout 3600;

    # ... your config
}
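After changing these values, reload nginx so they take effect, for example with nginx -s reload or systemctl reload nginx.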
I'm not a Python developer, but you may find some timeout configuration at the Python level too; for example, if the app is served by gunicorn, its --timeout option (30 seconds by default) plays a similar role to PHP's max_execution_time.
I have a node application running on an EC2 instance. Node is running on port 5000. I want to access the API remotely.
This is the nginx configuration file:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    client_max_body_size 20M;
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:5000;
    }

    location /nginx_status {
        # Turn on stats
        stub_status on;
        access_log off;
    }
}
When I try curl localhost/nginx_status, it returns:
Active connections: 1
server accepts handled requests
11 11 12
Reading: 0 Writing: 1 Waiting: 0
Also, when I try to access the IP in a browser, it shows:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
But if I try to access ip_address/nginx_status, it shows a 404 error. For example, with the IP address 123.456.789.098, the browser shows the welcome message above, while 123.456.789.098/nginx_status returns a 404 error. curl ip_address/nginx_status returns a 404 as well.
My question is: how can I access the node application running on port 5000 from the outside world?
Unfortunately, I only see part of your config. Is there another server block that listens on port 80?
You don't use default_server in the listen directive either, and without a server_name it is difficult to distinguish between server blocks. So maybe another config whose server block is the default_server for port 80 is taking effect. Check which server {..} blocks exist in your /etc/nginx/ folder.
The proxy_pass looks correct if the node.js server is really listening there; check again whether it really speaks the http or https scheme, so that proxy_pass uses the correct protocol.
You should also add access control for stub_status, since that is information you don't want to entrust to everyone. In my case, only one internal application has access to it, on a separate listener that is not exposed to the internet:
server {
    listen 127.0.0.1:10081 default_server;

    location /flyingfish_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
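Beyond that, a minimal sketch of what the public-facing default server could look like (assuming the node app really listens on 127.0.0.1:5000, as in your snippet):

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;              # pass the original Host header on
        proxy_set_header X-Real-IP $remote_addr;  # client address for the node app's logs
    }
}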
I'm curious what you find out! :)
I have an Ubuntu 18.04 server running in a Droplet (DigitalOcean), secured with SSL and using an nginx reverse proxy. Jenkins is also running on the server (not in any Docker container) and is configured to be accessed under the domain I created for it: jenkins.testdomain.com (all these steps following the DO docs).
The goal is to manage the deployment of a Node.js/React application to my testdomain.com later; for now, I just want the dist folder generated after 'npm build' to be created within /var/lib/jenkins/workspace/, just that.
Right now I'm able to access my jenkins.testdomain.com site fine, trigger the pipeline after pushing to my repo, and start running the stages; but nginx starts to fail when the pipeline reaches the Deliver phase (read: the 'npm build' phase), sometimes already in the Build phase ('npm install').
It's at this point, reading the Jenkins console output, that I see it get stuck and eventually show a 502 Bad Gateway error. I then need to run systemctl restart jenkins on the server console to get access again. After restarting, the pipeline resumes the work and seems to get the job done :/
In /var/log/nginx/error.log I can read:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /job/Basic%20NodeJS-React%20app/8/console HTTP/1.1", upstream: "https://127.0.0.1:8080/job/Basic%20NodeJS-React%20app/8/console", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/"
*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 85.146.85.194, server: jenkins.testdomain.com, request: "GET /favicon.ico HTTP/1.1", upstream: "https://127.0.0.1:8080/favicon.ico", host: "jenkins.testdomain.com", referrer: "https://jenkins.testdomain.com/job/Basic%20NodeJS-React%20app/8/console" ...
In the Jenkinsfile of my node-js-react app (from jenkins repo), the agent looks like this:
pipeline {
    agent {
        docker {
            image 'node:6-alpine'
            args '-p 3000:80'
        }
    }
    environment {
        CI = 'true'
    }
    stages {
        // Build, Test, and Deliver stages
    }
}
And my jenkins.testdomain.com configuration (/etc/nginx/sites-available/jenkins.testdomain.com) looks like this (it passes nginx -t):
server {
    listen 80;
    root /var/www/jenkins.testdomain.com/html;
    server_name jenkins.testdomain.com www.jenkins.testdomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:8080;

        # High timeouts for testing
        proxy_connect_timeout 1200s;
        proxy_send_timeout 1200s;
        proxy_read_timeout 1200s;

        proxy_redirect http://localhost:8080 https://jenkins.testdomain.com;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;

        # Required for HTTP-based CLI to work over SSL
        proxy_buffering off;
    }

    # Certbot auto-generated lines...
}
Any help would be very welcome; I've spent 3 days struggling with this and playing around with the different proxy_* directives in nginx and so on.
Thanks in advance!
OK, just adding an update: some days after my latest post, I realized that the main and only reason the server was going down was a lack of resources in the Droplet.
I was using a Droplet with 1 GB of RAM, 25 GB of disk, etc. (the most basic one), so I chose to upgrade it to at least 2 GB of RAM, and indeed, that made it work as I was expecting. Everything has worked fine since, and the issue didn't happen again.
I hope this helps if someone experiences the same issue.
I used my local setup without nginx to serve my node.js application; I was using socket.io and the performance was quite good.
Now I am using nginx to proxy my requests, and I see that socket.io has a huge response time: the page itself still renders fast, but the data rendered by socket.io arrives an order of magnitude slower than before.
I am using nginx 1.1.16, and here is the conf:
gzip on;

server {
    listen 80;
    server_name localhost;

    #charset koi8-r;
    access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:9999;
        root html;
        index index.html index.htm;
    }
}
Even though everything is working, I have two issues:
1) The socket.io response is slower than before. With nginx, the response time is around 12-15 seconds; without it, it's barely 300 ms (tried this with ApacheBench).
2) I see this message in the console, which was not there before using nginx:
[2012-03-08 09:50:58.889] [INFO] console - warn - 'websocket connection invalid'
You could try adding:
proxy_buffering off;
See the docs for info, but I've seen some chatter on various forums that buffering increases the response time in some cases.
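In context, that would be a sketch like this (reusing the backend on port 9999 from your config):

location / {
    proxy_pass http://localhost:9999;
    proxy_buffering off;    # relay the response to the client as it arrives
}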
Is the console message from nginx or Socket.IO?
nginx's proxy module does not talk HTTP/1.1 to the upstream, which may be why the WebSocket connection is not working.
Update:
Found a blog post about it: http://www.letseehere.com/reverse-proxy-web-sockets
A proposed solution:
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
nginx only supports WebSocket proxying starting from version 1.3.13. It should be straightforward to set up; check the link below:
http://nginx.org/en/docs/http/websocket.html
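Based on that documentation, a minimal sketch for the Socket.IO endpoint (assuming the same backend on port 9999 as in the config above):

location /socket.io/ {
    proxy_pass http://localhost:9999;
    proxy_http_version 1.1;                     # WebSocket needs HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;     # forward the client's Upgrade header
    proxy_set_header Connection "upgrade";
}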