socket.io is slow only when the server emits a message to another client, while it is very fast back to the client that sent the message - node.js

The problem:
When a client sends a message to the server, there is a delay before the message is emitted to the other client, while the same message is sent back almost instantly to the client that sent it.
i.e.
Client A --> Server --> Client A: super fast
Client A --> Server --> Client B: a bit slower
I am using socket.io with Node.js/Express behind an nginx web server (tested with Apache, same issue).
I noticed this in Safari and also in mobile Chrome.
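The server-side emit pattern is roughly the following (a sketch with illustrative event names, not the exact application code):
io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    // echo back to the sender (Client A) - this arrives almost instantly
    socket.emit('chat message', msg);
    // forward to everyone else (Client B) - this is the slow path
    socket.broadcast.emit('chat message', msg);
  });
});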
Edit: below is the nginx config:
worker_processes 1;
error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

error_log /usr/local/apps/nginx/var/log/error_log debug;

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nodelay on;
    keepalive_timeout 65;
    client_max_body_size 200M;
    server_names_hash_bucket_size 64;
    server_tokens off;

    include /usr/local/apps/nginx/etc/conf.d/*.conf;
}

server {
    listen *:443 ssl;
    server_name wsapidev.sample.com www.wsapidev.sample.com;

    error_log /usr/local/apps/nginx/var/log/wsapidev.sample.com.err;
    access_log /usr/local/apps/nginx/var/log/wsapidev.sample.com.log main;

    ssl on;
    ssl_certificate /etc/ssl/cert/wsapidev.sample.com.crt;
    ssl_certificate_key /etc/ssl/private/wsapidev.sample.com.key;
    ssl_dhparam /etc/ssl/private/dhparam.pem;

    location /socket {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:44451/socket;
    }
}

Related

recv() failed (104: Connection reset by peer) while reading response header from upstream

I get an error in the nginx error log when trying to stream video through nginx from a Node.js fs stream. I previously had an error about too many open files and fixed it by increasing worker_connections. But now I often get this error:
recv() failed (104: Connection reset by peer) while reading response header from upstream
and sometimes:
upstream prematurely closed connection while reading response header from upstream
We use nginx on an Ubuntu 18.04 server with 2x 2 GHz dual-core CPUs and 24 GB RAM.
nginx config:
user www-data;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$host" sn="$server_name" '
                        'rt=$request_time '
                        'ua="$upstream_addr" us="$upstream_status" '
                        'ut="$upstream_response_time" ul="$upstream_response_length" '
                        'cs=$upstream_cache_status';

    access_log /var/log/nginx/access.log main_ext;
    error_log /var/log/nginx/error.log warn;

    gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
nginx site config:
upstream http_backend {
    server 127.0.0.1:8087;
    keepalive 32;
}

server {
    listen 80;
    listen [::]:80;
    server_name cdn.amjilt.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 7070;
    listen [::]:7070;
    server_name cdn.amjilt.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name cdn.amjilt.com;

    ssl on;
    ssl_certificate /etc/nginx/cert/media/media.crt;
    ssl_certificate_key /etc/nginx/cert/media/media.key;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    client_max_body_size 500M;
    client_body_buffer_size 500M;
    proxy_buffer_size 16M;
    proxy_buffers 24 8M;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    location /images {
        root /home/ubuntu/projects/amjilt_media/static;
    }

    location /tmp {
        root /home/ubuntu/projects/amjilt_media/static;
    }

    location /images/uploads {
        root /home/ubuntu/projects/amjilt_media/static;
    }

    location /images/avatar {
        root /home/ubuntu/projects/amjilt_media/static;
    }

    location /api/video/show {
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location /api/video/mobile {
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location /api/pdf/show {
        expires off;
        proxy_buffering off;
        chunked_transfer_encoding on;
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    location / {
        proxy_pass http://localhost:8087;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
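One commonly reported cause of these two errors with a keep-alive upstream pool (an assumption, not a confirmed diagnosis for this setup) is the Node.js server closing idle keep-alive sockets that nginx still wants to reuse. A minimal Node-side sketch of that mitigation:
// Sketch only: on Node 8+, the HTTP server closes idle keep-alive sockets
// after 5 s by default; when nginx reuses a pooled upstream connection that
// was just closed, it logs "Connection reset by peer" or "upstream
// prematurely closed connection". Raising the timeout is one mitigation.
const http = require('http');
const app = require('./app'); // hypothetical Express app that serves the streams

const server = http.createServer(app);
server.keepAliveTimeout = 75 * 1000; // well above the 5 s default
server.headersTimeout = 80 * 1000;   // keep this above keepAliveTimeout (Node 11.3+)
server.listen(8087);                 // same port as the http_backend upstream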

modsecurity does not work if no required SSL certificate was sent

I have a lot of rules in modsecurity, but none of them work if the host is a numeric IP over SSL (https://SERVER_IP); I get this response:
400 Bad Request No required SSL certificate was sent
My SSL certificate is only valid for my domain name, but shouldn't modsecurity work anyway? I assumed every request passes through modsecurity before reaching the application.
Questions:
1 - How can I fix it?
2 - Why does modsecurity not work, and am I vulnerable if I don't fix it?
This is my nginx.conf:
load_module modules/ngx_http_modsecurity_module.so;

user nobody;
worker_processes 1;
error_log /var/log/nginx/error.log error;
pid /var/run/nginx.pid;

events {
    worker_connections 5000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    client_header_timeout 20s;
    client_body_timeout 20s;
    client_max_body_size 20m;
    client_header_buffer_size 6k;
    client_body_buffer_size 128k;
    large_client_header_buffers 2 2k;
    send_timeout 10s;
    keepalive_timeout 30 30;
    reset_timedout_connection on;
    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 1024;
    ignore_invalid_headers on;
    connection_pool_size 256;
    request_pool_size 4k;
    output_buffers 4 32k;
    postpone_output 1460;

    include mime.types;
    default_type application/octet-stream;

    # SSL Settings
    ssl_certificate /etc/nginx/ssl/cf_cert.pem;
    ssl_certificate_key /etc/nginx/ssl/cf_key.pem;
    ssl_client_certificate /etc/nginx/ssl/origin-pull-ca.pem;
    ssl_verify_client on;
    ssl_verify_depth 5;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA!RC4:EECDH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS";
    ssl_session_tickets on;
    ssl_session_ticket_key /etc/nginx/ssl/ticket.key;
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_ecdh_curve secp384r1;
    ssl_buffer_size 4k;

    # Logs
    log_format main '$remote_addr - $remote_user [$time_local] $request '
                    '"$status" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format bytes '$body_bytes_sent';
    access_log off;

    # Cache bypass
    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    etag off;
    server_tokens off;

    # Headers
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Frame-Options deny always;

    server {
        listen 443 ssl http2;
        server_name domain.com;
        root /home/user/public_html;
        index index.php index.html;

        access_log /var/log/domain/domain.com.bytes bytes;
        access_log /var/log/domain/domain.com.log combined;
        error_log /var/log/domain/domain.com.error.log warn;

        location / {
            location ~ .*\.(jpeg|jpg|png|gif|bmp|ico|svg|css|js)$ {
                expires max;
            }

            location ~ [^/]\.php(/|$) {
                try_files $uri =404;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/opt/alt/php-fpm73/usr/var/sockets/user.sock;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
            }
        }
    }
}
In short: This is unrelated to modsecurity.
Your server configuration requires the client to send a client certificate. The TLS handshake will fail if the client does not send such a certificate, and this is the error you see.
modsecurity only analyzes application data at the HTTP level. With HTTPS, the TLS handshake must complete successfully before any application data is exchanged. Since in this case the handshake fails because the client sends no certificate, the connection is closed before any HTTP data is exchanged and thus before modsecurity is ever used.
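If the goal is for such requests to still reach modsecurity, one possible approach (a sketch using nginx's standard client-certificate directives, not something prescribed by the answer above) is to make verification optional and reject unverified clients at the HTTP level instead:
# Sketch only: let the TLS handshake complete even without a client
# certificate, then reject unverified clients once the request has reached
# the HTTP layer (where the modsecurity rules run).
ssl_client_certificate /etc/nginx/ssl/origin-pull-ca.pem;
ssl_verify_client optional;

server {
    listen 443 ssl http2;
    server_name domain.com;

    # $ssl_client_verify is "SUCCESS" only when a valid certificate was presented
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
}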

Can't hide location's port with nginx

I'm trying to set up a domain for my Node project with nginx (v1.5.11). I have successfully redirected the domain to the site, but I need to use port 3000, so my web location now looks like http://www.myweb.com:3000/ and, of course, I want to keep only the "www.myweb.com" part, like this: http://www.myweb.com/
I have searched and tried many configurations but none seems to work for me, and I don't know why. This is my local nginx.conf file; I want to change the http://localhost:8000/ text to http://myName/. Remember that the redirect is working; I only want to "hide" the port in the location.
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 8000;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            proxy_pass http://localhost:8000/;
            proxy_redirect http://localhost:8000/ http://myName/;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
P.S. I'm trying to fix it on my local Windows 8 machine, but if another OS is required, my remote server runs Ubuntu 12.04 LTS.
Thank you all.
Add this to your server block:
port_in_redirect off;
E.g.
server {
    listen 80;
    server_name localhost;
    port_in_redirect off;
}
Documentation reference.
You should also change server_name to myName. server_name should be your domain name.
You should also be listening on port 80, and then use proxy_pass to forward requests to whatever is listening on port 8000.
The finished result should look like this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name www.myweb.com;

        location / {
            proxy_pass http://localhost:8000/;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Comments were removed for clarity.
Hiding the port during proxying needs these two lines in the server block:
server_name_in_redirect off;
proxy_set_header Host $host:$server_port;
The config looks like this:
server {
    listen 80;
    server_name example.com;

    server_name_in_redirect off;
    proxy_set_header Host $host:$server_port;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }

    access_log off;
}

Nginx configuration

In the configuration below I'm proxying through nginx to http://127.0.0.1:3000/app1/namelist/name=xyz. When I hit http://127.0.0.1:80/, it throws the error "Cannot GET /". How can I resolve this problem?
If I directly hit 127.0.0.1:3000/app1/namelist/name=xyz, it should be reachable via nginx as well. Is it possible to configure this in nginx?
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    upstream node_entry {
        server 127.0.0.1:3000;
    }

    server {
        listen 80;
        server_name 127.0.0.1;

        location / {
            #root html;
            #index index.html index.htm;
            #return 503;
            proxy_pass http://node_entry/;
        }
    }
}
Well, as far as nginx is concerned, I would remove the trailing slash from your proxy_pass line so it looks like this:
proxy_pass http://node_entry;
If that doesn't work, the response must be coming from your upstream.
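For background (an addition, not part of the answer): proxy_pass behaves differently depending on whether it carries a URI part. A sketch using a hypothetical /app1/ prefix:
# With a URI on proxy_pass, the part of the request matching the location
# prefix is replaced:
# GET /app1/namelist  ->  the upstream receives /namelist
location /app1/ {
    proxy_pass http://node_entry/;
}

# Without a URI, the request URI is passed to the upstream unchanged:
# GET /app1/namelist  ->  the upstream receives /app1/namelist
location /app1/ {
    proxy_pass http://node_entry;
}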

My node.js app randomly gets an Unhandled 'error' event on writes after I put it behind nginx

I am running node.js (0.8.20 and 0.9.10) on Windows Server 2012. I have run it for weeks with absolutely no problems - that was without nginx (1.2.6) in front. With nginx in front, configured like this:
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    upstream dem2 {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name dem2.cz dem2;
        access_log /nginx-1.2.6/logs/dem2.log;

        location / {
            #proxy_pass http://127.0.0.1:8080/;
            proxy_pass http://dem2/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_redirect off;
        }
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }
    }
}
On requests, I randomly get this error in the Node.js app:
events.js:69
throw arguments[1]; // Unhandled 'error' event
^
Error: socket hang up
    at createHangUpError (http.js:1383:15)
    at ServerResponse.OutgoingMessage._writeRaw (http.js:499:26)
    at ServerResponse.OutgoingMessage._send (http.js:466:15)
    at ServerResponse.OutgoingMessage.end (http.js:911:18)
    at SendStream.notModified (C:\dem2cz_node_app\node_modules\express\node_modules\send\lib\send.js:223:7)
    at SendStream.send (C:\dem2cz_node_app\node_modules\express\node_modules\send\lib\send.js:353:17)
    at SendStream.pipe (C:\dem2cz_node_app\node_modules\express\node_modules\send\lib\send.js:322:10)
    at Object.oncomplete (fs.js:93:15)
Process finished with exit code 1
I suspect I have something misconfigured in nginx, but I sure as hell don't know what it could be. Could anyone please advise?
I can post the node.js code too if you want, but it is nothing fancy - just a 100-line Express app that serves AngularJS static files, with the option to serve HTML generated in PhantomJS when the client is a bot.
There was a bugfix in 0.8.20 which significantly raised the number of "http hang-up" errors you get; they talk about it in the release notes. As #robertkelp said, it's nothing to worry about, but you should catch the error events emitted by the HTTP server to avoid crashing the server.
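A minimal sketch of that advice (illustrative only; the app here stands in for the Express app from the question):
// Sketch only: forward client-socket errors to a handler so an aborted
// connection is logged instead of crashing the process.
var http = require('http');
var express = require('express');

var app = express();
app.use(express.static(__dirname + '/public')); // placeholder static dir

var server = http.createServer(app);

// 'clientError' fires when a client connection emits an 'error' event
server.on('clientError', function (err) {
  console.error('client connection error:', err.message);
});

server.listen(8080); // same port the nginx upstream "dem2" points at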
