I am trying to set up an nginx server. I can access the content at 127.0.0.1:80 and localhost:80, but not via my public IP (xxxx.xxxx.xxxx.xxxx). Here are my configs:
/etc/nginx.conf:
user rud;
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;
daemon off;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
types_hash_max_size 4096;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/sites-enabled/default:
server {
listen *:80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ =403;
}
}
I have applied several tips found on the web, such as adding my public IP (xxxx.xxxx.xxxx.xxxx) to the server_name directive in the default file, but it still doesn't work.
The answers to the duplicate questions https://superuser.com/q/841255/733877 and https://serverfault.com/q/361499/476613 didn't work either.
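For example, one tip was to make the listen address explicit and add the public IP to server_name, so I changed the default server block to roughly the following (a sketch of what I tried, with xxxx.xxxx.xxxx.xxxx standing in for my real public IP), but it made no difference:
server {
    listen 0.0.0.0:80;      # listen on all IPv4 interfaces explicitly
    listen [::]:80;         # and on all IPv6 interfaces
    server_name _ xxxx.xxxx.xxxx.xxxx;   # catch-all name plus the public IP, as suggested
    root /usr/share/nginx/html;
    location / {
        try_files $uri $uri/ =403;
    }
}
(Since it works on localhost but not on the public address, I also wonder whether a firewall or a cloud provider security group might be blocking port 80 from outside, but I haven't been able to confirm that.)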
I am running a Node.js app behind an nginx reverse proxy. When I POST large requests, I get an HTTP 413 Request Entity Too Large error. I've tried setting client_max_body_size to 1000M at every level of my nginx config, but I'm still getting the error. Yes, I restarted nginx several times and tried setting the max size in several locations, but it didn't help.
I only have two nginx configs, the main one and the virtual host one, both of which are included below.
Here is the error I receive:
{'message': 'request entity too large', 'error': {'message': 'request entity too large', 'expected': 107707, 'length': 107707, 'limit': 102400, 'type': 'entity.too.large'}, 'title': 'Error'}
Here is my main config:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
client_max_body_size 1000M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:1m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
# Cloudflare OCSP DNS resolvers
resolver 1.1.1.1 1.0.0.1;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Here is my virtual host config:
server {
listen 443 ssl http2;
server_name example.com;
client_max_body_size 1000M;
# ssl certificates
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
# Strict Transport Security (HSTS)
add_header Strict-Transport-Security "max-age=63072000" always;
location / {
root /var/www/example;
try_files $uri $uri/ /index.html;
}
location /api {
client_max_body_size 1000M;
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
It turns out this was actually related to the Node.js Express settings, not nginx: the 102400-byte limit in the error message is Express's default body limit of 100 kb. I updated the following lines in my app.js file to include a larger limit, and this fixed the issue:
app.use(express.json({ limit: "1000mb", extended: true }));
app.use(express.urlencoded({ limit: "1000mb", extended: true }));
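In other words, client_max_body_size only raises nginx's own limit; the proxied Express app still enforces its own body limit, so both sides have to be raised. For reference, this is the /api location from the vhost above, with a comment on how the two limits interact (a sketch, not a required change):
location /api {
    # nginx now accepts bodies up to 1000M here, but the upstream
    # Express app still applies its own (default 100 kb) limit,
    # so the app-level limit also has to be raised.
    client_max_body_size 1000M;
    proxy_pass http://localhost:3000;
}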
First, I'm sorry about my poor English, and I should warn you that I'm a newbie still learning the technologies I'm going to talk about.
So, I work at a company and they needed a few simple web apps. I chose to use React.js with a Node.js API running on Express. (Sorry if I get some terms wrong; I'm not a native English speaker and still a student.)
I've built my two React apps and my API, and they are working correctly. I have to deploy them on CentOS, so I've "daemonized" the two React apps and the API: the first React app on port 8080, the second on port 3000, and the API on port 8081.
Then I installed nginx with a simple config, and it worked well. After that I switched it to HTTPS, and now I'm facing a problem.
When I try to reach one of my apps, I get a blank page with these console messages:
GET https://domain_name/src/index.js net::ERR_ABORTED 404 (index):19
GET https://domain_name/static/js/2.3d1c602b.chunk.js net::ERR_ABORTED 404 (index):20
GET https://domain_name/static/js/main.95db8d0e.chunk.js net::ERR_ABORTED 404 manifest.json:1
GET https://domain_name/manifest.json 404 manifest.json:1
Manifest: Line: 1, column: 1, Syntax error.
And if I try to reach one of my API routes, I get this:
Cannot GET /api/oneThing
and :
GET https://patt_www_ppd/api/ 404 patt_www_ppd/:1
I couldn't figure out the problem from searching the web. I've found some possible solutions, but I either didn't understand them or they didn't work. Can somebody help me?
Here is my nginx.conf:
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;
events {
multi_accept on;
worker_connections 65535;
}
http {
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
client_max_body_size 16M;
# MIME
include mime.types;
default_type application/octet-stream;
# logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log warn;
# SSL
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Diffie-Hellman parameter for DHE ciphersuites
ssl_dhparam /etc/nginx/dhparam.pem;
# Mozilla Intermediate configuration
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
resolver_timeout 2s;
# load configs
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And here is my domain_name.conf in the /sites-available/ directory:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name domain_name;
# SSL
ssl_certificate /etc/certifs/domain_name.pem;
ssl_certificate_key /etc/certifs/domain_name.key;
# security
include nginxconfig.io/security.conf;
# logging
access_log /var/log/nginx/domain_name.access.log;
error_log /var/log/nginx/domain_name.error.log warn;
# reverse proxy
location /inventaire/ {
proxy_pass http://127.0.0.1:8080;
include nginxconfig.io/proxy.conf;
}
location /api/ {
proxy_pass http://127.0.0.1:8081;
include nginxconfig.io/proxy.conf;
}
location /ticket/ {
proxy_pass http://127.0.0.1:3000;
include nginxconfig.io/proxy.conf;
}
# additional config
include nginxconfig.io/general.conf;
}
# subdomains redirect
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name *.domain_name;
# SSL
ssl_certificate /etc/certifs/domain_name.pem;
ssl_certificate_key /etc/certifs/domain_name.key;
return 301 https://domain_name$request_uri;
}
# HTTP redirect
server {
listen 80;
listen [::]:80;
server_name .domain_name;
return 301 https://domain_name$request_uri;
}
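One thing I'm not sure about is how proxy_pass handles the request path; as far as I understand it (and I may be wrong), these two variants behave differently:
# Without a URI part after the address, nginx forwards the original
# request URI unchanged, so the backend on 8081 receives "/api/oneThing":
location /api/ {
    proxy_pass http://127.0.0.1:8081;
}
# With a trailing slash (a URI part), the matched "/api/" prefix is
# replaced by that URI, so the backend receives "/oneThing" instead:
location /api/ {
    proxy_pass http://127.0.0.1:8081/;
}
I don't know which form my API actually expects, so maybe that is related to the "Cannot GET /api/oneThing" error.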
Many thanks to anyone who can bring me some answers... And again, sorry for my English and my limited abilities in this area; I'm still learning.
I have a Laravel 5.8 project located at /var/www/html/got/.
If I run php artisan serve, it works fine, but I'm trying to deploy the site via nginx instead.
sites-available/default
server {
listen [::]:80;
listen 80;
root /var/www/html/got/public;
index index.php index.html index.htm;
location / {
try_files $uri/ $uri /index.php?$query_string;
}
location ~ \.php?$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_read_timeout 300;
fastcgi_intercept_errors on;
fastcgi_split_path_info ^(.+\.php)(.*)$;
#Prevent version info leakage
fastcgi_hide_header X-Powered-By;
proxy_read_timeout 300;
}
}
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
# include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-available/*;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
I kept getting an error (not shown here). What did I do wrong?
The nginx.conf should be something like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
The files in the /etc/nginx/sites-available directory should be symlinked into /etc/nginx/sites-enabled, never included directly.
Copy /etc/nginx/sites-available/default to a file named after the domain, something like /etc/nginx/sites-available/example.com.conf, and then modify the server {} part along these lines:
listen 80;
listen 443 ssl http2;
server_name .example.com;
root "/var/www/html/got/public";
I am struggling with nginx and setting up my virtual hosts. I'm trying to set up a vhost that redirects HTTP requests to HTTPS and then proxies to my application (on port 443).
My OS is Ubuntu 16.04 and I am using NGINX 1.10.3.
The nginx.conf looks like this (it's mostly the default):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server_tokens off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My server blocks / vhosts look like this:
server {
listen 443 ssl;
server_name xxx.com;
# Prevent MITM
add_header Strict-Transport-Security "max-age=31536000";
ssl_certificate "/etc/nginx/ssl/xxx.com.pem";
ssl_certificate_key "/etc/nginx/ssl/xxx.com.key";
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_pass http://localhost:2237;
}
}
server {
listen 80;
server_name xxx.com;
return 301 https://$server_name$request_uri;
}
Now the problem is that whether I use HTTP or HTTPS, it tries to redirect me to HTTPS, so I am stuck in an endless loop of redirects.
I have absolutely no idea where my mistake is.
Every vhost is in its own file. The application on port 2237 is a Node.js Express server. I am also using Cloudflare (I got my SSL certificate from them).
Edit:
Output from curl -I is:
$ curl -I https://example.com
HTTP/1.1 301 Moved Permanently
Date: Fri, 06 Oct 2017 19:42:19 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=d827df762e20a4e321b92b34bd15546621507318939; expires=Sat, 06-Oct-18 19:42:19 GMT; path=/; domain=.example.com; HttpOnly
Location: https://example.com/
Server: cloudflare-nginx
CF-RAY: 3a9b1a6a4e4564d5-FRA
You need to use the config below:
server {
listen 80;
server_name example.com;
add_header Strict-Transport-Security "max-age=31536000";
location / {
proxy_pass http://localhost:2237;
proxy_redirect http://localhost:2237/ https://$host/;
}
}
You are using Cloudflare SSL and terminating SSL at Cloudflare, so your origin should just listen on port 80. Your earlier config redirected port 80 back to HTTPS, which sent the request to Cloudflare again, which then forwarded it to your nginx on port 80, creating an infinite loop.
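If you still want plain-HTTP visitors to end up on HTTPS, one possible variation (a sketch, assuming Cloudflare's X-Forwarded-Proto header reaches your origin unchanged) is to redirect only when the original visitor used HTTP:
server {
    listen 80;
    server_name example.com;
    add_header Strict-Transport-Security "max-age=31536000";

    # Redirect only requests that hit Cloudflare over plain HTTP;
    # requests that were already HTTPS at the edge are proxied normally.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass http://localhost:2237;
    }
}
Another option (not shown) is to switch Cloudflare's SSL mode from Flexible to Full, so that Cloudflare connects to the origin over HTTPS and the original redirect setup no longer loops.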
I use Vue.js with webpack for development. The dev server runs smoothly, but unfortunately after npm run build (served with nginx), nginx can't load the assets. Look at this:
x GET http://localhost/static/css/app.335db141d4c13fd545c8362771dbe30a.css
x GET http://localhost/static/js/manifest.a8a366914bb58ec98264.js
x GET http://localhost/static/js/vendor.538766e755f95e4f1561.js
x GET http://localhost/static/js/app.23582232aa46a8daf39d.js
x GET http://localhost/static/js/manifest.a8a366914bb58ec98264.js
x GET http://localhost/static/js/vendor.538766e755f95e4f1561.js
x GET http://localhost/static/js/app.23582232aa46a8daf39d.js
This is my nginx.conf:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
This is my default conf:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /usr/share/nginx/html;
# Add index.php to the list if you are using PHP
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php;
}
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html
{
root /usr/share/nginx/html;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# include snippets/fastcgi-php.conf;
#
# # With php7.0-cgi alone:
# fastcgi_pass 127.0.0.1:9000;
# # With php7.0-fpm:
# fastcgi_pass unix:/run/php/php7.0-fpm.sock;
#}
location ~ \.php$
{
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $uri =404;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
This error is displayed (screenshot not included here).
Please give me a solution, or ask me for more details.
Try to update your nginx.conf:
sendfile off;
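For context, sendfile lives in the http block of nginx.conf, so the change would go roughly here (a sketch keeping the other settings from the question; disabling sendfile is a common workaround when nginx serves stale or truncated static files, for example from VirtualBox/Vagrant shared folders):
http {
    sendfile off;    # read files through the normal I/O path instead of sendfile()
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
After changing it, test the config with nginx -t and reload nginx.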