nginx slow on every second request - node.js

I'm currently creating an API for my website using Node.js and nginx. I've set up a reverse proxy for each Node.js app I'll have running (api, mainsite, other stuff..).
However, when I try my API, every second request takes a very long time and sometimes times out..
NGINX.CONF
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 24;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
    worker_connections 19000;
    multi_accept on;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    # SSL performance tuning
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 10s;
    add_header Strict-Transport-Security "max-age=31536000";

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;

    gzip on;
    gzip_disable "msie6";
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml application/javascript text/css application/x-javascript;

    # for multiple domains, www.codewolf.red, codewolf.red
    server_names_hash_bucket_size 64;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    include /etc/nginx/conf.d/*.conf;
}
ERROR.LOG
2014/10/27 14:26:46 [error] 6968#8992: *15 WSARecv() failed (10054: FormatMessage() error:(317)) while reading response header from upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://127.0.0.1:3000/ffd/users", host: "localhost"
2014/10/27 14:27:46 [error] 6968#8992: *15 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:39:31 [error] 6968#8992: *20 upstream timed out (10060: FormatMessage() error:(317)) while connecting to upstream, client: ::1, server: localhost, request: "GET /api/ffd/users HTTP/1.1", upstream: "http://[::1]:3000/ffd/users", host: "localhost"
2014/10/27 14:40:09 [notice] 5300#1352: signal process started
Any idea what's wrong?
It's been like this for a while, and it's killing me :(
Please help, it's ruining my time for developing apps :/

Adding this because it is what worked for me:
https://forum.nginx.org/read.php?15,239760,239760 seems to indicate that you can proxy_pass to 127.0.0.1 instead of localhost and the request goes through fine. (The error log above hints at why: localhost resolves to both 127.0.0.1 and [::1], and the [::1]:3000 connection attempts time out because the Node.js app only listens on IPv4, which would explain the delay on every second request.)
macbresch
One year old, but I wanted to point out that there is a workaround for that issue. Like Cruz Fernandez wrote, you can set 127.0.0.1 instead of localhost on the proxy_pass directive. This prevents the 60s delay on every second request. I'm using Windows 8.1 and nginx 1.9.5.
Cruz Fernandez Wrote:
you can use 127.0.0.1 (instead of localhost on the proxy_pass directive)
location /nodejsServer/ {
    proxy_pass http://127.0.0.1:3000/;
}

I was able to fix this by using an upstream block, e.g.:
upstream nodejs_server {
    server 192.168.0.67:8080; # ip to nodejs server
}

# server config
server {
    location /nodejsServer/ { # http://localhost/nodejsServer/
        proxy_pass http://nodejs_server;
    }
}
reference: http://nginx.org/en/docs/http/ngx_http_upstream_module.html
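The two fixes above can also be combined. The sketch below is my own, not from the original answers (the upstream name, port, and location prefix are placeholders): it pins the backend to an explicit IPv4 address, so nginx never tries the [::1]:3000 address that the error log shows timing out, and keeps a small pool of idle connections open to the Node.js app.
upstream api_backend {
    server 127.0.0.1:3000;   # explicit IPv4 avoids the [::1] connect timeout
    keepalive 16;            # reuse connections to the Node.js app
}

server {
    listen 80;
    server_name localhost;

    location /api/ {
        proxy_pass http://api_backend/;
        proxy_http_version 1.1;          # needed for upstream keepalive
        proxy_set_header Connection "";  # clear Connection: close from the client
    }
}
The keepalive pool is optional; the important part is that the upstream address is an IP literal rather than a hostname, so nginx never has to resolve localhost per request.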

Related

Nginx: Reverse proxy setup gives a 404 error

I have set up a simple reverse proxy on my nginx server:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    # include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
and the only file in /etc/nginx/sites-enabled is:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    index index.html index.htm index.nginx-debian.html;

    location /portainer/ {
        proxy_pass http://127.0.0.1:9445;
    }
}
On trying to access the server at http://192.168.29.118/portainer/ I get a 404 page not found response, although I'm able to access http://192.168.29.118:9445 and curl http://127.0.0.1:9445.
My access.log looks like this, and there is nothing in my error.log:
192.168.29.67 - - [21/Oct/2022:13:20:43 +0000] "GET /portainer/ HTTP/1.1" 404 43 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:106.0) Gecko/20100101 Firefox/106.0"
I have tried looking at answers for similar questions but haven't found anything solid to make my config work. I appreciate any help!
nginx simple proxy_pass to localhost not working
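No accepted answer is shown here, but one likely culprit (an assumption on my part, based only on the config above): with proxy_pass http://127.0.0.1:9445; and no URI part, nginx forwards the original /portainer/... path to Portainer, which has no such route and returns its own 404. Adding a trailing slash makes nginx strip the matched /portainer/ prefix before proxying, roughly like this:
location /portainer/ {
    # With a URI part ("/") on proxy_pass, /portainer/foo is forwarded as /foo
    proxy_pass http://127.0.0.1:9445/;

    # Portainer's web UI uses websockets; these headers are commonly needed
    # as well (assumption, not part of the original question)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}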

How determine request number on nginx per timeout when origin server is down?

I'm facing a problem with an nginx proxy server on Linux. I want to create a caching proxy server that serves content when the origin server is down. I set the proxy connect timeout to 5s, but loading the main page takes approximately 132 requests. The proxy server sends on average 6 requests per timeout period and waits for the answers, so the page takes about 132/6*5 seconds to open from the cache. I tried a request zone, but I guess that applies to the requests hitting the nginx proxy cache server itself.
Are there any parameters that can solve this?
default.conf:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=90g inactive=10d;
server {
    listen 443 ssl;
    server_name blabla.com;

    location / {
        #limit_req zone=mylimit;
        #limit_req_dry_run on;
        proxy_cache my_cache;
        proxy_cache_methods POST GET HEAD;
        proxy_connect_timeout 5s;
        proxy_cache_key "$host$request_uri|$request_body";
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_ignore_headers X-Accel-Expires Expires Set-Cookie Cache-Control;
        proxy_set_header Host $host;
        proxy_pass blabla.com;
        proxy_buffering on;
        proxy_cache_valid 200 304 301 10m;
        proxy_cache_lock on;
    }
}
nginx.conf:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 100;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=150r/s;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/conf.d/*.conf;
}
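There is no accepted answer shown for this one, but a common approach is sketched below (an untested assumption for this setup, not a verified fix): move the origin into an upstream group with max_fails/fail_timeout, so that after the first failed connect nginx marks the origin as down for a while and later requests fall straight through to proxy_cache_use_stale instead of each waiting out the 5s connect timeout. Note that nginx ignores max_fails in a group with a single server, which is why the origin is listed twice; the http:// scheme to the origin is also an assumption.
upstream origin {
    # max_fails is ignored for single-server groups, so the origin is listed
    # twice; after a failed connect each entry is marked down for fail_timeout
    # and requests then fail fast, letting proxy_cache_use_stale answer from cache.
    server blabla.com max_fails=1 fail_timeout=60s;
    server blabla.com max_fails=1 fail_timeout=60s;
}

server {
    listen 443 ssl;
    server_name blabla.com;

    location / {
        proxy_cache my_cache;
        proxy_connect_timeout 5s;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_set_header Host $host;
        proxy_pass http://origin;
    }
}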

Internal server error when many request, nginx + node.js + pm2 [duplicate]

I have a load balancer and I get these kinds of errors:
2017/09/12 11:18:38 [crit] 22348#22348: accept4() failed (24: Too many open files)
2017/09/12 11:18:38 [alert] 22348#22348: *4288962 socket() failed (24: Too many open files) while connecting to upstream, client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
2017/09/12 11:18:38 [crit] 22348#22348: *4288962 open() "/usr/local/nginx/html/50x.html" failed (24: Too many open files), client: x.x.x.x, server: example.com, request: "GET /xxx.jpg HTTP/1.1", upstream: "http://y.y.y.y:80/xxx.jpg", host: "example.com", referrer: "https://example.com/some-page"
nginx version: nginx/1.10.1
Os: Debian GNU/Linux 8 (jessie)
The interesting thing is that I don't always get errors. Mostly I get 30-50 lines of errors, then nothing for 5-10 minutes, and then the errors start coming again...
Here is my nginx.conf:
user www-data;
pid /usr/local/nginx/nginx.pid;
worker_processes auto;
error_log /var/log/nginx/error.log;
events {
    worker_connections 30000;
}

http {
    include mime.types;
    default_type application/octet-stream;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    client_max_body_size 500m;
    rewrite_log on;
    log_format main '$remote_addr - "$proxy_add_x_forwarded_for" - [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$backend" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    geoip_country /etc/nginx/geodb/GeoIP.dat;
    geoip_city /etc/nginx/geodb/GeoLiteCity.dat;
    include /etc/nginx/loadbalancer/loadbalancer.conf;
}
And also some info:
$ ulimit -Hn
65536
$ ulimit -Sn
65536
$ sysctl fs.file-nr
fs.file-nr = 2848 0 70000
I don't know if it is worth mentioning, but this load balancer is behind Cloudflare.
I've added the following line to the nginx.conf:
worker_rlimit_nofile 20000;
Now it works; I haven't gotten any errors since the modification.
I hope it will help someone who has the same problem.
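One caveat about the value chosen (my note, not part of the original post): worker_rlimit_nofile 20000 is still below worker_connections 30000, and a proxying worker can need roughly two descriptors per connection (one to the client, one to the upstream). A safer pairing keeps the per-worker descriptor limit comfortably above the connection limit, for example:
worker_processes auto;

# Per-worker file descriptor limit; keep it well above worker_connections,
# since each proxied connection can use two descriptors (client + upstream).
worker_rlimit_nofile 65536;

events {
    worker_connections 30000;
}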

NGINX SSL redirects too often

I am struggling with NGINX and setting up my vhosts. I'm trying to set up a vhost that redirects HTTP requests to HTTPS and then passes them to my application (when it is on 443).
My OS is Ubuntu 16.04 and I am using NGINX 1.10.3.
The nginx.conf looks like this (it's mostly the default):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
My server blocks / vhosts look like this:
server {
    listen 443 ssl;
    server_name xxx.com;

    # Prevent MITM
    add_header Strict-Transport-Security "max-age=31536000";

    ssl_certificate "/etc/nginx/ssl/xxx.com.pem";
    ssl_certificate_key "/etc/nginx/ssl/xxx.com.key";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://localhost:2237;
    }
}

server {
    listen 80;
    server_name xxx.com;
    return 301 https://$server_name$request_uri;
}
Now the problem is that whether I use HTTP or HTTPS, it tries to redirect me to HTTPS, so I am stuck in an endless loop of redirects.
I have absolutely no idea where my mistake is.
Every vhost is in a single file. The application on port 2237 is a Node.js Express server. I am also using Cloudflare (I got my SSL certificate from them).
Edit:
Output from curl -I is:
$ curl -I https://example.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 06 Oct 2017 19:42:19 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=d827df762e20a4e321b92b34bd15546621507318939; expires=Sat, 06-Oct-18 19:42:19 GMT; path=/; domain=.example.com; HttpOnly
Location: https://example.com/
Server: cloudflare-nginx
CF-RAY: 3a9b1a6a4e4564d5-FRA
You need to use the config below:
server {
    listen 80;
    server_name example.com;
    add_header Strict-Transport-Security "max-age=31536000";

    location / {
        proxy_pass http://localhost:2237;
        proxy_redirect http://localhost:2237/ https://$host/;
    }
}
You are using Cloudflare SSL and terminating SSL at Cloudflare, so you should just be listening on port 80. Your earlier config was redirecting port 80 back to HTTPS and sending the request to Cloudflare, which then sent it to your nginx on port 80 again, creating an infinite loop.
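If you would rather keep an HTTP-to-HTTPS redirect, an alternative sketch (my own, assuming Cloudflare's "Flexible" SSL mode and its X-Forwarded-Proto header) redirects only when the visitor's original request was plain HTTP, so the loop cannot occur; switching Cloudflare to "Full" SSL and keeping the original two server blocks also avoids it:
server {
    listen 80;
    server_name example.com;

    # Cloudflare terminates TLS and reports the visitor's scheme in
    # X-Forwarded-Proto; only redirect when the visitor used plain HTTP.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://localhost:2237;
    }
}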

Kibana4 can't connect to Elasticsearch by IP, only localhost

After successfully completing this tutorial:
ELK on Cent OS
I'm now working on an ELK stack consisting of:
Server A: Kibana / Elasticsearch
Server B: Elasticsearch / Logstash
(After A and B work, scaling)
Server N: Elasticsearch / Logstash
So far, I've been able to install ES on servers A and B, with successful curls to each server's ES instance via IP (curl -XGET "server A and B's IP:9200" returns a 200 / status message). The only changes to each ES's elasticsearch.yml file are as follows:
Server A:
host: "[server A ip]"
elasticsearch_url: "[server a ip:9200]"
Server B:
network.host: "[server b ip]"
I can also curl Kibana on server A via [server a ip]:5601
Unfortunately, when I try to open kibana in a browser, I get 502 bad gateway.
Help?
nginx config from server A (which I can't really change much due to project requirements):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
kibana.conf (in conf.d):
server {
    listen 80;
    server_name kibana.redacted.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nginx error log:
2015/10/15 14:41:09 [error] 3416#0: *7 connect() failed (111: Connection refused) while connecting to upstream, client: [my vm "centOS", no clue why it's in here], server: kibana.redacted.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "kibana.redacted.com"
When I loaded in test data (one index, one doc), things magically worked. In Kibana 3, you could still get a dashboard and useful errors even if it couldn't connect.
But that is not how Kibana 4 behaves.
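On the 502 itself (my reading of the logs, not part of the original post): the error log shows connection refused on http://127.0.0.1:5601/, and Kibana's host was set to server A's IP, so it is not listening on loopback at all. Pointing the proxy at the address Kibana actually binds to, or re-binding Kibana to localhost, should clear the 502. A minimal sketch, where 192.0.2.10 is a placeholder for "[server a ip]":
location / {
    proxy_pass http://192.0.2.10:5601;   # placeholder for "[server a ip]:5601"
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}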
