I have an application on a VPS; the backend is in Node.js and the frontend is in Angular.
I restarted nginx and problems started. My API no longer works over HTTPS, only HTTP (before, I could make requests over HTTPS).
When I open my application's URL in the browser I get a response from my backend, as if I were making a GET request to that route, but before I restarted nginx the same URL showed my frontend's login page...
My Angular dist files are in public_html and my Node app is in /nodeapp.
This is my nginx conf:
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log;
    error_log error.log warn;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        listen [::]:80 ipv6only=on;
        server_name knowhowexpressapp.com;

        location / {
            proxy_pass http://189.90.138.98:3333;
            proxy_http_version 1.1;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
I tried a few things, such as:
pm2 restart server
nginx -s reload
service nginx restart
but my frontend is still not showing when I try to access the page.
As we were able to deduce together, the nginx configuration was proxying every request straight to the backend, which is why the browser was hitting the API instead of the frontend.
Our solution was to not use nginx and instead expose the port we needed on the server so that the Angular application could reach the backend directly.
Of course nginx could also be used for this, proxying only one path to a specific port and serving the frontend for everything else; a sketch of that alternative follows below.
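For reference, here is a minimal sketch of that second option. It assumes the Angular build in public_html sits at /home/user/public_html (adjust to the real absolute path), that the Node app still listens on port 3333, and that the API routes share a common prefix such as /api; all of these are assumptions about this particular setup, not something taken from the original configuration:

server {
    listen 80;
    server_name knowhowexpressapp.com;

    # serve the Angular dist files (adjust to the real absolute path of public_html)
    root /home/user/public_html;
    index index.html;

    # proxy only API calls to the Node app
    # note: the full /api/... URI is passed through unchanged, so this assumes
    # the Node routes are prefixed with /api; adjust if they are not
    location /api/ {
        proxy_pass http://127.0.0.1:3333;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # everything else falls back to index.html so Angular's client-side routing works
    location / {
        try_files $uri $uri/ /index.html;
    }
}

With a layout like this the browser gets the login page from nginx, while requests under the API prefix are still answered by the Node backend.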
My current configuration is as follows:
Bare-metal cluster; SELinux is off.
An nginx deployment
A Node.js deployment
I am serving static content through the nginx service and dynamic content through the node service. Below is my nginx configuration.
worker_processes 4;
#error_log logs/error.log info;
error_log /dev/stdout info;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    #keepalive_timeout 5;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    server {
        listen 1080;
        server_name localhost $hostname;
        root /usr/share/nginx/static/;

        # static content
        location ~ some-regex {
            alias /usr/share/nginx/static/;
            # handle cors see 'NGINX-Cookbook' for production quality
            add_header 'Access-Control-Allow-Origin' '*';
        }

        # forward request to node-service
        location / {
            client_max_body_size 128M;
            proxy_buffer_size 256k;
            proxy_buffers 4 512k;
            proxy_busy_buffers_size 512k;
            proxy_http_version 1.1;
            # proxy_set_header Connection "";
            # proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $http_host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # proxy_socket_keepalive on;
            proxy_pass http://nodeserver:3000;
        }
    }

    include servers/*;
}
In my Ingress I am hitting the nginx service. Requests are forwarded correctly to the node service some of the time and I get the proper response, but around 50% of the time the request fails with a 502 Bad Gateway error and I see this error in the nginx pod logs:
[error] 20#20: *187 connect() failed (111: Connection refused) while
connecting to upstream, client: 10.44.0.2, server: localhost, request:
"GET /path HTTP/1.1", upstream:
"http://node-service-clusterip:3000/path", host:
"my-nginx-node.example.com"
I have tried multiple directives from the nginx documentation, but to no avail. Any help would be much appreciated.
There was a mistake in my Kubernetes label selectors. I was using the same selector for multiple deployments, so the Service sometimes routed requests to pods from the other deployment, which were not running the node app; that is what caused the intermittent routing failures.
I'm using nginx as a proxy for a Node server that's rate-limiting requests. The rate is one request every 30 seconds; most requests return a response fine, but if a request is kept open for an extended period of time, I get this:
upstream prematurely closed connection while reading response header from upstream
I cannot figure out what might be causing this. Below is my nginx configuration:
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
# include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /srv/www/main/htdocs;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location /vcheck {
            proxy_pass http://127.0.0.1:8080$is_args$query_string;
            # proxy_buffer_size 128k;
            # proxy_buffers 4 256k;
            # proxy_busy_buffers_size 256k;
            # proxy_http_version 1.1;
            # proxy_set_header Upgrade $http_upgrade;
            # proxy_set_header Connection 'upgrade';
            # proxy_set_header Host $host;
            # proxy_cache_bypass $http_upgrade;
            # proxy_redirect off;
            proxy_read_timeout 600s;
        }

        location ~ \.php$ {
            include fastcgi.conf;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index routes.php$is_args$query_string;
        }

        location / {
            if (-f $request_filename) {
                expires max;
                break;
            }
            if ($request_filename !~ "\.(js|htc|ico|gif|jpg|png|css)$") {
                rewrite ^(.*) /routes.php last;
            }
        }
    }
}
Is there a reason why Node could be closing the connection early?
EDIT: I'm using Node's built-in HTTP server.
It seems you need to extend the response timeout of the Node.js application.
If it's an Express app, you can try connect-timeout:
install: npm i --save connect-timeout
use:
var timeout = require('connect-timeout');
app.use(timeout('60s'));
Since you mention the built-in HTTP server, note that it also applies its own socket timeout (server.timeout / server.setTimeout, historically two minutes by default), which will cut off a response held open longer than that.
But I recommend not keeping the connection waiting: fix the issue in the Node.js app and find out why it halts for so long.
It looks like the Node.js app has an issue that prevents it from responding, so the request gets lost and nginx is left waiting.
So, I am having a really hard time right now. I have two Node.js applications: one running on port 8080 and one on port 8081, both on the same IP address. I have two domains, domain1.com and domain2.com. I am using nginx as a reverse proxy to route domain1.com to port 8080 and domain2.com to port 8081. My problem at the moment is that domain1.com is the only one that works; I can only access the other node app by going to domain1.com:8081 or domain2.com:8081.
My nginx file structure:
domain1.com.conf:
server {
    listen 80;
    server_name domain1.com www.domain1.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
domain2.com.conf:
server {
    listen 80;
    server_name domain2.com www.domain2.com;
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
Any help would be greatly appreciated. I have been racking my brain over this for so long, and I cannot find many relevant answers online.
BTW, I am running all of this on CentOS 6.3.
UPDATE: after troubleshooting some more, I discovered that my problem might not be an nginx problem, because I completely shut the nginx service down and my node app was still being served. It is weird, because nothing is running on port 80; I even used netstat to check whether anything was listening on port 80. I am so confused right now. If anyone has any idea how to fix this or how to troubleshoot further, please let me know.
I'm not an nginx expert but this kind of setup works for me:
upstream www.domain1.com {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name domain1.com www.domain1.com;
    location / {
        proxy_pass http://www.domain1.com;
    }
}
# same for domain2
I don't know how or why this worked, but restarting my server seemed to solve the issue. I still haven't the slightest clue what was causing it, but the power flickered at my house, my server restarted, and everything worked fine with the configuration I started with. Thanks to anyone who tried to help. I am marking ShanShan's answer as the correct one, since their configuration is valid and works fine.
I'm working on a CentOS 6.7 machine and I'm trying to configure nginx to serve a Node.js application. I feel like I'm really close, but I'm missing something. Here's my nginx.conf and, below it, the server.conf that's in my sites-enabled directory.
When I go to the public IP address I get a 502 Bad Gateway error, but if I curl the private IP with the correct port on the CentOS machine, I can see the node application running. What am I missing here? Is it a firewall issue or maybe something else?
nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
sites-enabled/server.conf
server {
    listen 80;
    #server_name localhost;
    location / {
        proxy_pass http://192.xxx.x.xx:8000; # private IP
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
UPDATE:
I figured it out! Here's the server block that worked for me:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    #server_name _;
    root /usr/share/nginx/html;

    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
I wanted to write a comment, but Stack Overflow does not let me.
I am 99% sure that a Node.js website does NOT need nginx or Apache to work.
If the script is set up correctly, the Node.js application listens on its port by itself.
Since you did not say much about your setup, I suggest you simply try accessing the public IP with the Node.js port.
I have a pretty simple server setup that isn't working for some reason.
I have two apps running locally, one on port 1999 and the other on port 8000.
I have disabled Apache, and have nginx installed. Here's my nginx.conf in /etc/nginx/:
user nginx;
worker_processes 8;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    server_names_hash_bucket_size 64;

    include /etc/nginx/conf.d/*.conf;
}
And here's my default.conf in /etc/nginx/conf.d/ :
server {
    listen 80;
    server_name attendahh.com;
    location / {
        proxy_pass http://localhost:1999;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    server_name threadfinder.net;
    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Navigating to http://my-ip-address:1999 and http://my-ip-address:8000 works just fine, but going to my domains does not.
I get the feeling that NGINX just plain isn't working, and maybe something in my hosts file/something else is messing things up. Any ideas on what steps I can take to work this out? This is a fresh install of NGINX.
EDIT: Also, when I try to access threadfinder.net and attendahh.com nothing appears in my access logs at /var/log/nginx/access.log
Well, do you have a problem with DNS? Since hitting the IP address with a port works but nothing shows up in the access log when you use the domains, the requests are probably never reaching nginx at all; check that the A records for attendahh.com and threadfinder.net actually point to your server's IP address.