I want to default to a PHP application if the Node applications are not currently responding to requests (i.e. they are down).
What do I do?
Below is my nginx config
upstream my_servers {
least_conn;
#ip_hash; # ensures persistence of session id across servers
server 127.0.0.1:3001; # httpServer2 listens on port 3001
server 127.0.0.1:3000; # httpServer1 listens on port 3000
server 127.0.0.1:80; # <---- THIS ONE IS THE PHP APP
#this could also be entirely a different host server
#Ex. server 113.333.123.190:8000;
}
server {
listen 80;
server_name weburl.com;
root /home/vince/Documents/weburl/static_php_player;
index index.php index.html index.htm;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php;
}
location ~ \.php$ {
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
include snippets/fastcgi-php.conf;
}
}
Add the backup option to the PHP server; otherwise the requests will be distributed across all three servers:
upstream my_servers {
least_conn;
#ip_hash; # ensures persistence of session id across servers
server 127.0.0.1:3001; # httpServer2 listens on port 3001
server 127.0.0.1:3000; # httpServer1 listens on port 3000
server 127.0.0.1:80 backup; # <---- THIS ONE IS THE PHP APP
}
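With backup, nginx only sends traffic to 127.0.0.1:80 once both Node servers are considered unavailable (by default a server is marked failed after one failed attempt and retried after 10 seconds, per max_fails/fail_timeout). For the upstream to be consulted at all, some location has to proxy to it; here is a minimal sketch of such a location, where the URI and the retry conditions are assumptions rather than part of the original config:
location /app/ {
    # Requests go to the Node servers first; the PHP app on 127.0.0.1:80
    # is only tried once both Node servers are marked as down.
    proxy_pass http://my_servers;
    proxy_next_upstream error timeout http_502 http_503;
}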
I have a nodejs app that functions as a webserver listening on port 5050.
I've created certificates and configured NGINX, which works for normal https calls to the standard port (https://x.x/).
If I make a plain http://x.x:5050 call it also works, but an https://x.x:5050/conf call gives: "This site can’t provide a secure connection".
Below the NGINX config file:
(The names of the website are changed)
server {
root /var/www/x.x/html;
index index.html index.htm index.nginx-debian.html;
server_name x.x www.x.x;
location / {
try_files $uri $uri/ =404;
}
location /conf {
proxy_pass http://localhost:5050;
try_files $uri $uri/ =404;
}
location /wh {
proxy_pass http://localhost:5050;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/x.x/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/x.x/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
What am I doing wrong here?
You configured nginx to serve as a reverse proxy, forwarding incoming requests from https://example.com/whatever to http://localhost:5050/whatever. You said you did that correctly and it works. Good. (Getting that working is a notorious pain in the neck.)
You did not configure nginx to listen on port 5050. Nor should you; that's the port it uses to pass requests along to your nodejs program. You cannot forward requests from port 5050 to port 5050. If you try to have nodejs and nginx both listen on port 5050, one of them will get an EADDRINUSE error when you start your servers.
Your nodejs program listens for http requests, not https requests, on port 5050. You can't easily make it listen for both http and https on the same port. Your nodejs program, when behind nginx, should not contain any https server, only http. (You're making nginx do the hard crypto work to handle https, and letting nodejs handle your requests.)
Nor do you want your nodejs program to listen directly for http-only requests from outside your server. Because cybercreeps.
If you can block access to port 5050 from anywhere except localhost, you can declare victory on your task of configuring your server. You can do this by using
server.listen({
host: 'localhost',
port: 5050, ...
});
in your nodejs program. Or you can configure your server's firewall to block incoming requests on any ports except https (and ssh, so you can manage it). Digital Ocean has a useful tutorial on this point.
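As a fuller illustration, here is a minimal sketch of a Node HTTP server bound only to localhost; the port matches the question, but the response body is just a placeholder:
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from behind nginx\n');
});

// Binding to localhost means only processes on the same machine
// (i.e. nginx) can reach port 5050; outside clients cannot.
server.listen({ host: 'localhost', port: 5050 }, () => {
  console.log('listening on http://localhost:5050');
});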
I've got an NGINX reverse proxy on my server handling requests for http://apcompdoc.com. It listens on port 80 and can successfully return the Vue dist. However, I also have a backend Node API running on port 8081 and another Node process running on port 8082. The user never directly requests anything on 8082; rather, the process on 8081 sometimes requests the process on 8082, so I'm assuming I never have to expose it to NGINX at all, but I'm not too sure.
The main problem, I believe, is that the API is never reached. I have it set up so that when you hit the endpoint http://apcompdoc.com/api/* it should proxy over to the Node process. I am using PM2 to keep the process alive and monitor it, and am assured it's running. This is my NGINX apcompdoc.com config file:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name apcompdoc.com www.apcompdoc.com;
charset utf-8;
root /var/www/apcompdoc/dist;
index index.html index.htm;
# Always serve index.html for any request;
location / {
root /var/www/apcompdoc/dist;
try_files $uri /index.html;
}
location /api/ {
proxy_pass http://localhost:8081;
}
error_log /var/log/nginx/vue-app-error.log;
access_log /var/log/nginx/vue-app-access.log;
}
I am trying to get all requests to my API at /api/* redirected to the API at localhost:8081 and then returned to the user. I saw something about redirecting the proxy back; do I have to do that? I also don't know if I have to use /api/* in the NGINX config file.
I'm really new to NGINX but I just want the requests to http://apcompdoc.com/api/* to be redirected to the node process on port 8081.
Bad or good practice, I'm not sure, but I always define my backend as an upstream.
For example, your file would look like this:
upstream nodeprocess {
server localhost:8081;
}
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name apcompdoc.com www.apcompdoc.com;
charset utf-8;
root /var/www/apcompdoc/dist;
index index.html index.htm;
# Always serve index.html for any request;
location / {
root /var/www/apcompdoc/dist;
try_files $uri /index.html;
}
location ^~ /api {
proxy_pass http://nodeprocess;
}
error_log /var/log/nginx/vue-app-error.log;
access_log /var/log/nginx/vue-app-access.log;
}
Please note that I added ^~ to the api location and removed its trailing /.
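With proxy_pass pointing at the bare upstream name (no URI part), the /api prefix is passed through unchanged, which is what you want if the Node API itself routes on /api/*. If instead the API expects paths without the prefix, a variant that strips it would look like this (a sketch, assuming the same nodeprocess upstream):
location /api/ {
    # A request for /api/users is forwarded upstream as /users.
    proxy_pass http://nodeprocess/;
}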
I have a VPS which runs under CentOS 7.
The idea is to deploy the Node.js front-end app under maindomain.com and the PHP back-end under api.maindomain.com. Is it possible? Say, add server blocks to Nginx: a reverse proxy to localhost:4000 for the Node.js app and another block on localhost:80 for the PHP back-end.
Maybe there is another solution, I don't know; I would appreciate any ideas! The main goal is to have both apps on the same server.
Solution 1 with www.maindomain.com + api.maindomain.com
Frontend
server {
listen 80;
server_name www.maindomain.com;
location / {
root /path/to/your/files;
try_files $uri $uri/ /index.html;
}
}
Backend php API
server {
listen 80;
server_name api.maindomain.com;
location / {
proxy_pass http://localhost:4000;
}
}
Solution 2: everything on the same domain, www.maindomain.com
server {
listen 80;
server_name www.maindomain.com;
location /api {
proxy_pass http://localhost:4000/api;
}
location / { # always at the end, like a wildcard
root /path/to/your/files;
try_files $uri $uri/ /index.html;
}
}
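One assumption worth flagging: both solutions above proxy the back-end over HTTP on localhost:4000, which only works if the PHP app runs its own HTTP server there. If it instead runs behind php-fpm, the API block would hand requests to the FastCGI socket; a rough sketch, where the socket path and script path are placeholders to adapt:
location /api {
    include fastcgi_params;
    # Front-controller style: every API request is handled by index.php.
    fastcgi_param SCRIPT_FILENAME /path/to/php-backend/index.php;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}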
We have nginx running two server blocks (ports 80 and 443). These proxy_pass to our upstreams:
upstream app_nodes {
ip_hash;
server 127.0.0.1:3000;
server 127.0.0.1:3001;
}
upstream app_nodes_https {
ip_hash;
server 127.0.0.1:8000;
server 127.0.0.1:8001;
}
For port 80, this is fine. However, for 443 this fails because we don't have ssl certs defined within nginx. We need our node.js app (listening on port 8000/8001) to handle the certificates to support many domains dynamically.
Is there a way to have nginx simply proxy our upstream servers and let them handle ssl?
Thank you
EDIT: Here's our server block for 443
server {
listen 443;
gzip on;
gzip_types text/plain application/json application/octet-stream;
location / {
proxy_pass https://app_nodes_https;
add_header X-Upstream $upstream_addr;
add_header X-Real-IP $remote_addr;
include /etc/nginx/proxy_params;
}
}
Running nginx -t actually gives the error that the https protocol requires SSL support.
The solution is to use nginx's stream module:
stream {
upstream https_stream {
hash $remote_addr;
server 127.0.0.1:8000;
server 127.0.0.1:8001;
}
server {
listen 443;
proxy_pass https_stream;
}
}
This assumes you're running your app instances on ports 8000/8001.
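Note that stream is a top-level context: it sits alongside the http block in nginx.conf, not inside it, and the raw TLS traffic is passed through untouched, so the Node apps on 8000/8001 keep handling the certificates themselves. A rough sketch of the placement, assuming the usual /etc/nginx/nginx.conf layout:
# /etc/nginx/nginx.conf (sketch)
http {
    # existing server blocks / includes for port 80 go here
}

stream {
    upstream https_stream {
        hash $remote_addr;
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
    }
    server {
        listen 443;
        proxy_pass https_stream;
    }
}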
My Nginx default file looks like this:
server {
listen 80;
server_name humanfox.com www.humanfox.com;
rewrite ^/(.*)$ https://www.humanfox.com$1 permanent;
}
server {
listen 443 ssl spdy;
server_name humanfox.com www.humanfox.com;
ssl on;
ssl_certificate /root/ca3/www.humanfox.com.crt;
ssl_certificate_key /root/ca3/humanfox.com.key;
access_log /var/log/nginx/humanfox.com.access.log;
error_log /var/log/nginx/humanfox.com.error.log;
rewrite ^/(.*)$ https://www.humanfox.com$1 permanent;
}
Now, Nginx is running properly, but when I try to run my nodejs server on port 443 (https) it fails with EADDRINUSE (address already in use).
When I free the port so my nodejs server can use it instead, that also kills Nginx and Nginx stops working.
How do I run my nodejs server on 443 while making sure nginx keeps running?
You can't have nodejs listening on port 443 and nginx serving SSL on 443 at the same time. What you can do is configure nginx as a reverse proxy for nodejs.
Let's say you are running nodejs on port 3000:
const http = require('http');
http.createServer((req,res) => {
res.writeHead(200, {"Content-Type": "text/plain"});
res.end('Node is Running');
}).listen(3000);
your nginx config should be:
server {
listen 443 ssl spdy;
server_name humanfox.com www.humanfox.com;
ssl on;
ssl_certificate /root/ca3/www.humanfox.com.crt;
ssl_certificate_key /root/ca3/humanfox.com.key;
access_log /var/log/nginx/humanfox.com.access.log;
error_log /var/log/nginx/humanfox.com.error.log;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:3000;
proxy_redirect off;
}
}
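If the Node app needs to know whether the original request arrived over HTTPS (for secure cookies, redirects, and the like), it is common to also forward the scheme. This extra header inside the same location block is optional, not something the original answer included:
proxy_set_header X-Forwarded-Proto $scheme;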
Hope it Helps.
You will not be able to listen on the same port that nginx is already listening on.
What you can do is listen on a different port and configure nginx as a reverse proxy that connects to your node process. That way, from the point of view of external users, it will look exactly how you want (the Node app will be accessible on port 443), but there will be no port conflict.
See these answers for examples of how to do that:
Where do I put my Node JS app so it is accessible via the main website?
How do I get the server_name in nginx to use as a server variable in node
Configuring HTTPS for Express and Nginx