iRedMail Roundcube Not Receiving Email - ubuntu-server

I have my Odoo application running on Ubuntu Server 16.04 LTS with the Nginx web server.
I have also installed iRedMail, and I can access my email account at domain_name/mail, i.e. mgbcomputers.com/mail, from where I can send email to another account on my local domain and to external ones like Gmail and Yahoo.
However, when I send an email from Yahoo or Gmail to my Roundcube address, I get the following error:
This message was created automatically by mail delivery software.
A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:
obabawale@mgbcomputers.com
retry time not reached for any host after a long failure period.
My A/MX records are correct because I can access my website using the domain address.
Below is my nginx configuration file:
upstream backend-odoo{
server 127.0.0.1:8069;
}
server {
server_name mgbcomputers.com;
listen 80;
add_header Strict-Transport-Security max-age=2592000;
rewrite ^/.*$ https://$host$request_uri? permanent;
}
server {
listen 443 default;
#ssl settings
ssl on;
ssl_certificate /etc/nginx/ssl/server.crt;
# ssl_certificate /etc/ssl/certs/iRedMail.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
keepalive_timeout 60;
root /var/www/html; #added from iredmail file
index index.php index.html; #added from the iredmail file
# proxy header and settings
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
# odoo log files
access_log /var/log/nginx/odoo-access.log;
error_log /var/log/nginx/odoo-error.log;
# increase proxy buffer size
proxy_buffers 16 64k;
proxy_buffer_size 128k;
# force timeouts if the backend dies
proxy_next_upstream error timeout invalid_header http_500
http_502 http_503;
# enable data compression
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain application/x-javascript text/xml text/css;
gzip_vary on;
location / {
proxy_pass http://backend-odoo;
}
location ~* /web/static/ {
# cache static data
proxy_cache_valid 200 60m;
proxy_buffering on;
expires 864000;
proxy_pass http://backend-odoo;
}
location /longpolling { proxy_pass http://backend-odoo-im;}
location /mail/ { root /var/vmail/vmail1; } # Added by Lekan for iredmail
# Web applications. Added from iredmail file
#include /etc/nginx/templates/adminer.tmpl; #Added from iredmail file
include /etc/nginx/templates/roundcube.tmpl; #Added from iredmail file
include /etc/nginx/templates/sogo.tmpl; #Added from iredmail file
include /etc/nginx/templates/iredadmin.tmpl; #Added from iredmail file
include /etc/nginx/templates/awstats.tmpl; #Added from iredmail file
# PHP applications. WARNING: php-catchall.tmpl should be loaded after
# other php web applications.
include /etc/nginx/templates/php-catchall.tmpl; #Added from iredmail file
include /etc/nginx/templates/misc.tmpl; #Added from iredmail file
}
upstream backend-odoo-im { server 127.0.0.1:8072; }
What am I not getting here?

If you can access your website, it means that your A record is correct, but it does not guarantee that your MX record is also correct. You can validate your MX record on this site: https://mxtoolbox.com.
I've checked your domain (mgbcomputers.com) on that site and it's valid, with one blacklist record, but I don't think that's a problem either.
My suggestion: try disabling your firewall (ufw), and it would be better to set up your mail server on a separate machine dedicated to mail service.
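If you prefer the command line to mxtoolbox, the same MX check can be done from the server itself. A small sketch (the example hostnames are placeholders; assumes `dig` from dnsutils is installed):

```shell
#!/bin/sh
# `dig +short MX example.com` prints lines like "10 mail.example.com." --
# the preference number first, then the mail host. The lowest preference
# number is the host remote servers will try first.
best_mx() {
    sort -n | head -n 1 | awk '{print $2}' | sed 's/\.$//'
}

# Live usage (requires DNS access):
#   dig +short MX mgbcomputers.com | best_mx
# If this prints nothing, the MX record is missing, and inbound mail will
# bounce even though the A record (the website) resolves fine.
```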

Related

Can Nginx randomly stop working because of certain requests?

I'm currently having issues with my website. Sometimes, after a fresh restart of the nginx service, the URL of my website works just fine in the browser: it redirects successfully to the .NET Core web app running on Kestrel. If I type the IP of my VPS, it also works just fine. But suddenly and randomly, nginx stops serving the website and the browser just shows err_connection_closed.
Some technical information:
Kestrel is running on localhost:5000; Nginx TCP ports are managed by ufw and opened for 80 and 443.
I'm using Ubuntu 16.04, nginx, and a .NET Core 3.1 web app. I followed the steps in the guide Host and Deploy using Linux and Kestrel.
Something I have noticed in the syslog file is that some IPs are blocked by ufw, though I'm not sure why; they come from China, Mongolia, or even Poland, while the initial marketing campaign currently targets only Mexico.
Another log file I searched was /var/log/nginx/access.log. Here, some IPs request random URLs on my website, like GET /Telerik.Web.UI.WebResource.axd?type=rau HTTP/1.1" 404 0 "-" or "GET /phpmyadmin/ HTTP/1.1" 301 178 "-", which is definitely not me, because I'm using PostgreSQL. I have also noticed that nginx seems to stop working after these requests are made, but I'm not 100% sure that's accurate; as the title says, this is very random.
Some config files for nginx:
/etc/nginx/sites-available/default
# Default server configuration
#
server {
listen 80;
server_name keecheeapp.com *.keecheeapp.com;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
/etc/nginx/nginx.conf
#other directives
events {
worker_connections 768;
# multi_accept on;
}
http {
include /etc/nginx/proxy.conf;
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
server_tokens off;
sendfile on;
keepalive_timeout 29; # Adjust to the lowest possible value that makes sense for your use case.
client_body_timeout 10; client_header_timeout 10; send_timeout 10;
upstream keecheeapp{
server localhost:5000;
}
server {
listen *:80;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
server {
listen *:443 ssl;
server_name keecheeapp.com;
ssl_certificate /etc/ssl/certs/keecheeapp.com-concat-certs.crt;
ssl_certificate_key /etc/ssl/certs/private_new.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
#Redirects all traffic
location / {
proxy_pass http://www.keecheeapp.com;
limit_req zone=one burst=10 nodelay;
}
}
}
There are several issues with your Nginx configuration:
In the file /etc/nginx/nginx.conf
The combination of limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s; and limit_req zone=one burst=10 nodelay; limits the request processing rate to 5 requests/second per client. If a client sends too many requests per second, Nginx returns error responses. If you want to keep the rate-limit feature, try increasing the values, for example to rate=50r/s and burst=100. If you want to disable the feature, delete or comment out those lines. You can learn more about this feature here.
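For example, with the relaxed values suggested above, the two directives would look like this (a sketch; the zone name and size are the ones from your existing config):

```nginx
# in the http block: allow up to 50 requests/second per client IP
limit_req_zone $binary_remote_addr zone=one:10m rate=50r/s;

# in the location block: absorb bursts of up to 100 requests without delay
limit_req zone=one burst=100 nodelay;
```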
The value http://www.keecheeapp.com for the proxy_pass directive is wrong. The correct value is keecheeapp, as defined by the upstream keecheeapp {...} block. So change proxy_pass http://www.keecheeapp.com; to proxy_pass http://keecheeapp;
The server block in the file /etc/nginx/sites-available/default instructs Nginx to serve your website using HTTP.
The following server block in the file /etc/nginx/nginx.conf instructs Nginx to serve your website using HTTPS.
server {
listen *:443 ssl;
server_name keecheeapp.com;
...
}
So your website is accessible over both HTTP and HTTPS, which is not a good idea. You should redirect all HTTP requests to HTTPS as follows:
Delete or comment out the server block in the file /etc/nginx/sites-available/default
Modify the following server block in the file /etc/nginx/nginx.conf
server {
listen *:80;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
To:
server {
listen *:80;
server_name keecheeapp.com *.keecheeapp.com;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
With your given configuration, Nginx passes all requests to Kestrel, including static file requests (images, JS, CSS, etc.). This is inefficient: let Nginx handle static files and Kestrel handle dynamic requests. Please change the following configuration block:
#Redirects all traffic
location / {
proxy_pass http://www.keecheeapp.com;
limit_req zone=one burst=10 nodelay;
}
To:
root /path/to/your/static/folder;
# Serve static file requests
location / {
try_files $uri $uri/ @kestrel;
}
# Pass dynamic requests to Kestrel
location @kestrel {
proxy_pass http://keecheeapp;
limit_req zone=one burst=10 nodelay;
}
Change /path/to/your/static/folder to the actual folder on your server.
After editing, don't forget to test Nginx configuration with sudo nginx -t, then reload it with sudo systemctl reload nginx.service.

Can't serve static files using NGINX and DigitalOcean

/etc/nginx/sites-available/default conf.
Hello, I am using the DigitalOcean NodeJS one-click app setup for my app. NGINX is serving my HTML files, but it's not serving my CSS or JavaScript files. I have tried adding location blocks for the public folder, which is where my stylesheets, images, and JavaScript files are. I don't know NGINX very well, so any help would be appreciated.
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /Portfolio;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name 157.230.203.182;
location ^~ /assets/ {
gzip_static on;
expires 12h;
add_header Cache-Control public;
}
location / {
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8080;
}
}
I had to add
location ~* \.(css|gif|html|ico|jpeg|jpg|mp4|js|jsx|pdf|php|png|scss|svg|txt|zip)$ {
add_header Cache-Control public;
add_header Cache-Control must-revalidate;
}
It works now, but I had to add the above code.
To be honest, I'm not sure what this code does. If someone could explain, that would be helpful.
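Roughly, here is what that block does (shown with the apparent typos in the regex fixed: jepg, and the empty alternative created by ||, which would otherwise make the location match any URI containing a dot):

```nginx
# A case-insensitive regex location. Regex locations take precedence over
# the plain prefix location "/", so matching requests are no longer
# proxied to the Node app on :8080 -- nginx serves them itself from the
# server's root (/Portfolio in this config).
location ~* \.(css|gif|html|ico|jpeg|jpg|mp4|js|jsx|pdf|php|png|scss|svg|txt|zip)$ {
    add_header Cache-Control public;
    add_header Cache-Control must-revalidate;
}
```

In other words, it is likely not the Cache-Control headers that fixed anything: the location itself stops requests for those extensions from being proxied to Node, so nginx serves the files directly from disk.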

Configure NGINX (Engintron) HTTPS to HTTP for Nodejs on specific port

I'm really new on webserver matters and trying to find a working configuration now for weeks, so any comment is greatly appreciated!
I have a CentOS machine running cPanel (EasyApache on ports 8080 and 8443) and Nginx in front on ports 80 and 443. Finally, I have a Node.js app running on port 8002.
My Node app is integrated into a Joomla website homepage, so I really need it to run on a different port (though I'm not sure 8002 was the best pick).
All was working great until I installed SSL Let's Encrypt certificates, which I did using Let's Encrypt for cPanel.
I've also read that the standard is to let Nginx deal with HTTPS and pass already-decrypted traffic to Node.js, so my Node.js application is expecting plain HTTP traffic.
With my current Nginx configuration, if I access the site using https:// the Joomla website works fine, but my application breaks with an xhr poll error.
I can see from console that it is trying to access socket.io through https, which will not work:
Request URL:https://xxx.xx.xxx.xx:8002/socket.io/?userid=0&EIO=3&transport=polling&t=M086vNB
Meanwhile, accessing https://xxx.xx.xxx.xx:8002 directly gives me "Secure Connection Failed".
How do I configure Nginx to correctly use my app in this scenario?
Current configuration added on default.conf after block for port 80:
server {
listen 80 default_server;
server_name localhost;
# Initialize important variables
set $CACHE_BYPASS_FOR_DYNAMIC 0;
set $CACHE_BYPASS_FOR_STATIC 0;
set $PROXY_DOMAIN_OR_IP $host;
set $PROXY_TO_PORT 8080;
set $SITE_URI "$host$request_uri";
# Generic query string to request a page bypassing Nginx's caching entirely for both dynamic & static content
if ($query_string ~* "nocache") {
set $CACHE_BYPASS_FOR_DYNAMIC 1;
set $CACHE_BYPASS_FOR_STATIC 1;
}
# Proxy requests to "localhost"
if ($host ~* "localhost") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
}
# Proxy cPanel specific subdomains
if ($host ~* "^webmail\.") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
set $PROXY_TO_PORT 2095;
}
if ($host ~* "^cpanel\.") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
set $PROXY_TO_PORT 2082;
}
if ($host ~* "^whm\.") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
set $PROXY_TO_PORT 2086;
}
if ($host ~* "^webdisk\.") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
set $PROXY_TO_PORT 2077;
}
if ($host ~* "^(cpcalendars|cpcontacts)\.") {
set $PROXY_DOMAIN_OR_IP "127.0.0.1";
set $PROXY_TO_PORT 2079;
}
# Set custom rules like domain/IP exclusions or redirects here
include custom_rules;
location / {
try_files $uri $uri/ @backend;
}
location @backend {
include proxy_params_common;
# === MICRO CACHING ===
# Comment the following line to disable 1 second micro-caching for dynamic HTML content
include proxy_params_dynamic;
}
# Enable browser cache for static content files (TTL is 1 hour)
location ~* \.(?:json|xml|rss|atom)$ {
include proxy_params_common;
include proxy_params_static;
expires 1h;
}
# Enable browser cache for CSS / JS (TTL is 30 days)
location ~* \.(?:css|js)$ {
include proxy_params_common;
include proxy_params_static;
expires 30d;
}
# Enable browser cache for images (TTL is 60 days)
location ~* \.(?:ico|jpg|jpeg|gif|png|webp)$ {
include proxy_params_common;
include proxy_params_static;
expires 60d;
}
# Enable browser cache for archives, documents & media files (TTL is 60 days)
location ~* \.(?:3gp|7z|avi|bmp|bz2|csv|divx|doc|docx|eot|exe|flac|flv|gz|less|mid|midi|mka|mkv|mov|mp3|mp4|mpeg|mpg|odp|ods|odt|ogg|ogm|ogv|opus|pdf|ppt|pptx|rar|rtf|swf|tar|tbz|tgz|tiff|txz|wav|webm|wma|wmv|xls|xlsx|xz|zip)$ {
set $CACHE_BYPASS_FOR_STATIC 1;
include proxy_params_common;
include proxy_params_static;
expires 60d;
}
# Enable browser cache for fonts & fix @font-face cross-domain restriction (TTL is 60 days)
location ~* \.(eot|ttf|otf|woff|woff2|svg|svgz)$ {
include proxy_params_common;
include proxy_params_static;
expires 60d;
add_header Access-Control-Allow-Origin *;
}
# Prevent logging of favicon and robot request errors
location = /favicon.ico {
include proxy_params_common;
include proxy_params_static;
expires 60d;
log_not_found off;
}
location = /robots.txt {
include proxy_params_common;
include proxy_params_static;
expires 1d;
log_not_found off;
}
location = /nginx_status {
stub_status;
access_log off;
log_not_found off;
# Uncomment the following 2 lines to make the Nginx status page private.
# If you do this and you have Munin installed, graphs for Nginx will stop working.
#allow 127.0.0.1;
#deny all;
}
location = /whm-server-status {
proxy_pass http://127.0.0.1:8080;
# Comment the following 2 lines to make the Apache status page public
allow 127.0.0.1;
deny all;
}
# Deny access to files like .htaccess or .htpasswd
location ~ /\.ht {
deny all;
}
}
#------- Custom added code
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name 127.0.0.1;
ssl_certificate /home/project/ssl/certs/example_com_d1d73_8dd49_1519411667_866136c129b5999aa4fbd9773c3ec6c1.crt;
ssl_certificate_key /home/project/ssl/keys/d1d73_8dd49_56cd172fe5a41ee5b923ad66210daecc.key;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
location / {
proxy_pass http://127.0.0.1:8002;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /socket.io/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass "http://127.0.0.1:8002/socket.io/";
}
}
I think you're using the wrong syntax for the reverse proxy. You have to point try_files at a named location (@) backed by a server or WSGI instance, or nginx thinks it's a directory. Here's my setup; extrapolate it to yours.
location / {
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://app_server;
}
upstream app_server {
server unix:/opt/workTracker/run/gunicorn.sock fail_timeout=0;
}
The other thing I thought of is that a setting enabling end-to-end encryption might be on by default. This is also called upstream SSL, and you want it turned off if you're serving the content to the backend over plain HTTP. Based on this Server Fault post https://serverfault.com/questions/583374/configure-nginx-as-reverse-proxy-with-upstream-ssl, I would say you may need to add this:
proxy_ssl_session_reuse on;. The original post was from someone trying to do the opposite, re-encrypting traffic to the backend servers, which is what yours is doing right now. Some people like that setup; it adds a little latency, but the advantage is that the packets remain encrypted on the internal network.

Logs not coming through Nginx Reverse Proxy (Nginx config issue?)

We have Node.js applications sending logs to a URL which points to my Nginx Reverse Proxy server.
I have the nginx reverse proxy server setup in a docker container and then have a set of containers for Fluentd, ElasticSearch and Kibana which are meant to receive, collect and display these logs.
The only ports kept open on the server running these containers (including the nginx reverse proxy) are 8080 (HTTP) and 443 (HTTPS).
The logs get generated properly from the application as I have tested and confirmed that. Also, if I do the entire setup without the nginx reverse proxy in the docker container, then it all runs fine.
The same nginx reverse proxy is also being used to proxy other servers and they all are functioning fine.
The only problem seems to be the nginx reverse proxy setting which isn't able to receive the Node.js application logs which are in JSON format.
However, HTTP and HTTPS requests are going through.
I am using LetsEncrypt to generate SSL certificates automatically and auto generating this nginx config accordingly.
I have attached my nginx config file here:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header X-Forwarded-Host $host;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
upstream <hid_the_name> {
## Can be connect with "reverse-proxy" network
# fluentd
server 172.21.0.9:24224;
}
server {
server_name <hid_the_name>;
listen 80 so_keepalive=1m::10;
access_log /var/log/nginx/access.log vhost;
return 301 https://$host$request_uri;
}
server {
server_name <hid_the_name>;
listen 443 ssl so_keepalive=1m::10 http2 ;
access_log /var/log/nginx/access.log vhost;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_certificate /etc/nginx/certs/<hid_the_name>.crt;
ssl_certificate_key /etc/nginx/certs/<hid_the_name>.key;
ssl_dhparam /etc/nginx/certs/<hid_the_name>.dhparam.pem;
add_header Strict-Transport-Security "max-age=31536000";
include /etc/nginx/vhost.d/default;
location / {
proxy_pass http://<hid_the_name>;
}
}
This config file was being included from another nginx config file, inside the http block, and nginx can't accept raw TCP input in that block. So I just had to create a separate stream block, put the necessary details for the TCP connection inside it, and it is all good now.
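For reference, the fix described above looks roughly like this (a sketch; the fluentd address is the one from the upstream block in the config, and the listen port is an assumption):

```nginx
# In nginx.conf, at the top level -- a sibling of the http { } block,
# not inside it. fluentd's forward protocol is raw TCP, not HTTP, so it
# must be proxied at the stream layer.
stream {
    upstream fluentd_backend {
        server 172.21.0.9:24224;
    }

    server {
        listen 24224;
        proxy_pass fluentd_backend;
    }
}
```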

Nginx 502 Bad Gateway when uploading files

I get the following error when I try to upload files to my node.js based web app:
2014/05/20 04:30:20 [error] 31070#0: *5 upstream prematurely closed connection while reading response header from upstream, client: ... [clipped]
I'm using a front-end proxy here:
upstream app_mywebsite {
server 127.0.0.1:3000;
}
server {
listen 0.0.0.0:80;
server_name {{ MY IP}} mywebsite;
access_log /var/log/nginx/mywebsite.log;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://app_mywebsite;
proxy_redirect off;
# web socket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
This is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 20;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
# default_type application/octet-stream;
default_type text/html;
charset UTF-8;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_min_length 256;
gzip_comp_level 5;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# nginx-naxsi config
##
# Uncomment it if you installed nginx-naxsi
##
#include /etc/nginx/naxsi_core.rules;
##
# nginx-passenger config
##
# Uncomment it if you installed nginx-passenger
##
#passenger_root /usr;
#passenger_ruby /usr/bin/ruby;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Any idea on how to better debug this? The things I've found haven't really worked (e.g. removing the trailing slash from my proxy_pass).
Try adding the following to your server{} block; I was able to solve an Nginx reverse proxy issue by defining these proxy buffer directives:
# define buffers, necessary for proper communication to prevent 502s
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
The issue may be caused by PM2. If you have enabled watching, the app will restart on every single file change (including new uploads). The solution could be disabling watching completely, or adding the uploads folder to the ignore list.
More: https://pm2.keymetrics.io/docs/usage/watch-and-restart/
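For example, with pm2's --ignore-watch flag (the uploads folder name is a placeholder for wherever your app stores uploaded files):

```shell
# Watch source files, but don't restart when files land in uploads/
pm2 start app.js --watch --ignore-watch="uploads node_modules"
```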
So in the end I ended up changing my keepalive_timeout from 20 to 64, and it seems to handle large files fine now. The bummer about it is that I re-wrote from scratch the image upload library I was using (node-imager), but at least I learned something from it.
server {
keepalive_timeout 64;
}
Try adding the following below to the http section of your /etc/nginx/nginx.conf:
fastcgi_read_timeout 400s;
and restart nginx. (Note that fastcgi_read_timeout applies only to FastCGI backends; for a proxy_pass setup like this one, the analogous directive is proxy_read_timeout.)
Further reading: nginx docs
Try this:
client_max_body_size - Maximum uploadable file size
http {
send_timeout 10m;
client_header_timeout 10m;
client_body_timeout 10m;
client_max_body_size 100m;
large_client_header_buffers 8 32k;
}
and server section:
server {
location / {
proxy_buffer_size 32k;
}
}
large_client_header_buffers 8 32k and proxy_buffer_size 32k are enough for most scripts, but you can try 64k, 128k, 256k...
(Sorry, I'm not a native English speaker.) =)
