I have a load balancer and I'm getting errors like these:
2017/09/12 11:18:38 [crit] 22348#22348: accept4() failed (24: Too many open files)
2017/09/12 11:18:38 [alert] 22348#22348: *4288962 socket() failed (24: Too many open files) while connecting to upstream, client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
2017/09/12 11:18:38 [crit] 22348#22348: *4288962 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
nginx version: nginx/1.10.1
OS: Debian GNU/Linux 8 (jessie)
The interesting thing is that I don't always get the errors. Usually I get 30-50 lines of errors, then nothing for 5-10 minutes, and then the errors start coming again...
Here is my nginx.conf:
user www-data;
pid /usr/local/nginx/nginx.pid;
worker_processes auto;
error_log /var/log/nginx/error.log;
events {
worker_connections 30000;
}
http {
include mime.types;
default_type application/octet-stream;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_max_body_size 500m;
rewrite_log on;
log_format main '$remote_addr - "$proxy_add_x_forwarded_for" - [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$backend" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
geoip_country /etc/nginx/geodb/GeoIP.dat;
geoip_city /etc/nginx/geodb/GeoLiteCity.dat;
include /etc/nginx/loadbalancer/loadbalancer.conf;
}
Some additional info:
$ ulimit -Hn
65536
$ ulimit -Sn
65536
$ sysctl fs.file-nr
fs.file-nr = 2848 0 70000
I don't know if it's relevant, but this load balancer is behind Cloudflare.
I've added the following line to nginx.conf:
worker_rlimit_nofile 20000;
Now it works; I haven't had any errors since the modification.
I hope this helps anyone who has the same problem.
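In case it helps anyone placing the directive: worker_rlimit_nofile goes in the main (top-level) context of nginx.conf, not inside the events or http blocks. A minimal sketch of the relevant part of the config above with the new line added:
user www-data;
pid /usr/local/nginx/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 20000;   # raises each worker's open-file limit; main context, not inside events/http
error_log /var/log/nginx/error.log;
events {
    worker_connections 30000;
}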
Related
I'm facing a problem with an nginx proxy server on Linux. I want the proxy to serve pages from its cache when the origin server is down. I set the proxy connect timeout to 5s, but loading the main page involves roughly 132 requests. The proxy sends on average 6 requests per timeout period and waits for the answers, so the page takes about 132/6*5 = 110 seconds to open from cache. I tried a request zone, but I think that only limits the requests clients make to the nginx proxy cache server.
Is there a parameter that can handle this?
default.conf:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=90g inactive=10d;
server {
listen 443 ssl;
server_name blabla.com;
location / {
#limit_req zone=mylimit;
#limit_req_dry_run on;
proxy_cache my_cache;
proxy_cache_methods POST GET HEAD;
proxy_connect_timeout 5s;
proxy_cache_key "$host$request_uri|$request_body";
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_ignore_headers X-Accel-Expires Expires Set-Cookie Cache-Control;
proxy_set_header Host $host;
proxy_pass https://blabla.com;  # proxy_pass requires a scheme (http:// or https://)
proxy_buffering on;
proxy_cache_valid 200 304 301 10m;
proxy_cache_lock on;
}
}
nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 100;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=150r/s;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
}
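A note on the commented-out limit_req lines above: limit_req throttles requests from clients arriving at nginx, not the requests nginx itself sends to the origin, so it would not pace the cache-fill traffic described here. A minimal sketch of how the mylimit zone from nginx.conf would normally be applied (the burst value is purely illustrative):
location / {
    # caps how fast clients may hit this location; requests beyond the
    # burst are rejected (503 by default), upstream fetches are unaffected
    limit_req zone=mylimit burst=200 nodelay;
    proxy_cache my_cache;
    proxy_pass https://blabla.com;
}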
I am getting the following error in nginx's error.log file.
I am trying to run an nginx server on top of a Node.js server.
Error : no live upstreams while connecting to upstream, client:
127.0.0.1, server: www.XYZ.com, request: "GET / HTTP/1.0", upstream: "http://localhost/", host: "localhost"
The Node.js server is running on port 3000 and nginx on port 80.
My nginx conf file is:
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 128;
#gzip on;
server_names_hash_bucket_size 64;
client_header_timeout 3000;
client_body_timeout 3000;
fastcgi_read_timeout 3000;
client_max_body_size 32m;
fastcgi_buffers 8 128k;
fastcgi_buffer_size 128k;
server {
listen 80 ;
server_name XYZ.com;
location / {
# Base url of the Data Science Studio installation
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
}
and the Node.js server is listening on port 3000:
app.listen(3000);
It looks like nginx is trying to connect to the Node.js server on the default port 80 and not 3000.
Please help me with this issue. I have spent a lot of time on it but no luck.
Thanks
For me, I fixed it by moving proxy_pass to the end of the config block, after the proxy_set_header directives.
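Applied to the location block from the question, that ordering would look roughly like this; it is a sketch of the suggestion above, not a verified fix:
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    # proxy_pass moved to the end, after the proxy_set_header lines
    proxy_pass http://localhost:3000;
}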
I have a Node.js app that sends > 800 MongoDB documents on client startup (executed only when a client accesses my app for the first time).
Nginx sits in front of the Node server as a reverse proxy.
App server spec
Digital Ocean
CentOS 7.2
2GB RAM
2CPU
MongoDB server spec
Digital Ocean
Ubuntu 14.04
512MB RAM
1 CPU
nginx -v // nginx version: nginx/1.8.1
nginx config
user nginx;
worker_processes 2;
worker_rlimit_nofile 100480;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
index index.html index.htm;
server {
server_name 128.199.139.xxx;
root /var/www/myapp/bundle/public;
module_app_type node;
module_startup_file main.js;
module_env_var MONGO_URL mongodb://{username}:{password}@128.199.139.xxx:27017/;
module_env_var ROOT_URL http://128.199.139.xxx;
location / {
proxy_pass http://128.199.139.xxx;
proxy_http_version 1.1;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header X-Real-IP $remote_addr;
# pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
#proxy_set_header Host $host;
# WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
proxy_set_header Upgrade "upgrade";
proxy_set_header Connection $http_upgrade;
#add_header 'Access-Control-Allow-Origin' '*';
}
}
server {
listen 80 default_server;
server_name localhost;
root /usr/share/nginx/html;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
Error Log
Below is the error log:
2016/03/17 09:46:00 [crit] 10295#0: accept4() failed (24: Too many open files)
2016/03/17 09:46:01 [crit] 10295#0: accept4() failed (24: Too many open files)
2016/03/17 09:46:01 [crit] 10295#0: accept4() failed (24: Too many open files)
.....many duplicate errors like the above
2016/03/17 09:47:35 [error] 10295#0: *4064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 128.199.139.160, server: 128.199.139.1$
2016/03/17 09:47:35 [alert] 10295#0: accept4() failed (9: Bad file descriptor)
[ 2016-03-17 09:53:47.8144 10403/7f2833c9a700 age/Ust/UstRouterMain.cpp:422 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ 2016-03-17 09:53:47.8145 10403/7f2839500880 age/Ust/UstRouterMain.cpp:492 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ 2016-03-17 09:53:47.8146 10403/7f2833c9a700 Ser/Server.h:464 ]: [UstRouter] Shutdown finished
2016/03/17 09:54:11 [alert] 10549#0: 1024 worker_connections are not enough
2016/03/17 09:54:11 [error] 10549#0: *1021 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 128.199.139.160, server: 128.199.139.1$
2016/03/17 09:54:12 [alert] 10549#0: 1024 worker_connections are not enough
2016/03/17 09:54:12 [error] 10549#0: *2043 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 128.199.139.160, server: 128.199.139.1$
2016/03/17 11:43:20 [alert] 10549#0: 1024 worker_connections are not enough
2016/03/17 11:43:20 [error] 10549#0: *3069 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 128.199.139.160, server: 128.199.139.1$
2016/03/17 13:49:54 [error] 10549#0: *3071 open() "/usr/share/nginx/html/robots.txt" failed (2: No such file or directory), client: 180.97.106.xx, server: localhost, request: "GET
Any help or clue would be appreciated.
You should change the limit of open files for worker processes.
100480 is too high for your server.
I always use the following rule:
worker_rlimit_nofile = 2 * worker_connections
So in this case, try this:
worker_rlimit_nofile 2048;
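For placement, the directive lives in the main (top-level) context of nginx.conf, next to the events block. A minimal sketch using the values from this answer:
worker_rlimit_nofile 2048;   # roughly 2 x worker_connections, per the rule above
events {
    worker_connections 1024;
}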
After successfully completing this tutorial:
ELK on Cent OS
I'm now working on an ELK stack consisting of:
Server A: Kibana / Elasticsearch
Server B: Elasticsearch / Logstash
(After A and B work, scaling)
Server N: Elasticsearch / Logstash
So far, I've been able to install ES on servers A and B, with successful curls to each server's ES instance via IP (curl -XGET "server A and B's IP:9200" returns a 200 / status message). The only changes to each ES's elasticsearch.yml file are as follows:
Server A:
host: "[server A ip]"
elasticsearch_url: "[server a ip:9200]"
Server B:
network.host: "[server b ip]"
I can also curl Kibana on server A via [server a ip]:5601
Unfortunately, when I try to open Kibana in a browser, I get a 502 Bad Gateway.
Help?
nginx config from server A (which I can't really change much due to project requirements):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
kibana.conf (in conf.d):
server {
listen 80;
server_name kibana.redacted.com;
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
nginx error log:
2015/10/15 14:41:09 [error] 3416#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: [my vm "centOS", no clue why it's in here], server: kibana.redacted.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "kibana.redacted.com"
When I loaded in test data (one index, one doc), things magically worked. In Kibana 3 you could still get a dashboard and useful errors even if it couldn't connect, but that is not how Kibana 4 behaves.
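One detail worth noting from the log and the configs together: the proxy_pass points at http://localhost:5601 (where the connection is refused), while the Kibana settings quoted earlier bind it to the server's external IP. A hedged sketch of pointing the proxy at that address instead; this is an assumption drawn from the thread, not a confirmed fix:
location / {
    # assumption: proxy to the address Kibana is actually bound to
    # (the question shows host: "[server A ip]" rather than localhost);
    # [server a ip] is the same placeholder used elsewhere in this post
    proxy_pass http://[server a ip]:5601;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}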
I'm currently creating an API for my website using Node.js and nginx. I've set up reverse proxies for each Node.js app I'll have running (API, main site, other stuff).
However, when I try my API, every second request takes a very long time, and sometimes it times out.
NGINX.CONF
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 24;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
worker_connections 19000;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#SSL performance tuning
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 10s;
add_header Strict-Transport-Security "max-age=31536000";
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 10;
gzip on;
gzip_disable "msie6";
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/xml application/javascript text/css application/x-javascript;
#for multiple domains, www.codewolf.red, codewolf.red
server_names_hash_bucket_size 64;
# Load config files from the /etc/nginx/conf.d directory
# The default server is in conf.d/default.conf
include /etc/nginx/conf.d/*.conf;
}
ERROR.LOG
2014/10/27 14:26:46 [error] 6968#8992: *15 WSARecv() failed (10054: FormatMessage() error:(317)) while reading response header from upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://127.0.0.1:3000/ffd/users", host: "localhost"
2014/10/27 14:27:46 [error] 6968#8992: *15 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:39:31 [error] 6968#8992: *20 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:40:09 [notice] 5300#1352: signal process started
Any idea what's wrong?
It's been like this for a while, and it's killing me :(
Please help, it's eating into my time for developing apps :/
Adding this because this is what worked for me:
https://forum.nginx.org/read.php?15,239760,239760 seems to indicate that you can proxy_pass to 127.0.0.1 instead of localhost and the request goes through fine.
macbresch wrote:
This is a year old, but I wanted to point out that there is a workaround for that issue. As Cruz Fernandez wrote, you can set 127.0.0.1 instead of localhost in the proxy_pass directive. This prevents the 60s delay on every second request. I'm using Windows 8.1 and nginx 1.9.5.
Cruz Fernandez wrote:
you can use 127.0.0.1 (instead of localhost) in the proxy_pass directive:
location /nodejsServer/ {
proxy_pass http://127.0.0.1:3000/;
}
I was able to fix this by using an upstream block, e.g.:
upstream nodejs_server {
server 192.168.0.67:8080; #ip to nodejs server
}
#server config
server {
location /nodejsServer/ { #http://localhost/nodejsServer/
proxy_pass http://nodejs_server;
}
}
reference: http://nginx.org/en/docs/http/ngx_http_upstream_module.html