Redirecting Amazon EC2 Node.js instance to HTTPS - node.js

I'm running a node/express app on an amazon ec2 instance, no load-balancer, free tier. I'm trying to redirect everything to HTTPS. Everything I've done up until now was through the EB CLI (eb deploy, eb ssh, and so on).
I got a free certificate from Let's Encrypt (certbot) and set up the nginx.conf as explained in this tutorial. I'm able to access both the HTTP and the HTTPS versions of the app URL. HTTP serves my Node.js app, but HTTPS returns the default nginx HTML page (from /usr/share/nginx/html).
I would like to serve my Node.js app over HTTPS only and redirect all HTTP requests to HTTPS.
My nginx.conf is as follows:
# Elastic Beanstalk managed configuration file
# Some configuration of nginx can be done by placing files in /etc/nginx/conf.d
# using Configuration Files.
# http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
#
# Modifications of nginx.conf can be performed using container_commands to modify the staged version
# located in /tmp/deployment/config/etc#nginx#nginx.conf
# Elastic_Beanstalk
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
port_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
# Elastic Beanstalk Modification(EB_INCLUDE)
log_format healthd '$msec"$uri"'
'$status"$request_time"$upstream_response_time"'
'$http_x_forwarded_for';
server {
listen 80;
server_name localhost;
location / {
# Redirect any http requests to https
if ($http_x_forwarded_proto != 'https') {
rewrite ^ https://$host$request_uri? permanent;
}
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name localhost;
ssl_certificate "/etc/letsencrypt/live/domain/fullchain.pem";
ssl_certificate_key "/etc/letsencrypt/live/domain/privkey.pem";
# It is *strongly* recommended to generate unique DH parameters
# Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
#ssl_dhparam "/etc/pki/nginx/dhparams.pem";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
ssl_prefer_server_ciphers on;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
include /etc/nginx/conf.d/*.conf;
# End Modification
}

To re-route ports, you can add an iptables rule on your EC2 instance, for example:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 443
* Make sure that in the EC2 Security Group, the inbound HTTP port 80 source is set to "Anywhere".
To view the iptables routing entries, run:
sudo iptables -t nat -L
If you need to remove a routing entry (the first line), run:
sudo iptables -t nat -D PREROUTING 1
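Note that the iptables rule only redirects the port at the TCP level; it does not change what nginx serves over HTTPS. In the nginx.conf above, the 443 server block has no location that proxies to the Node app, which is why it falls back to the default page under /usr/share/nginx/html. A minimal sketch of what that block could look like, assuming the Express app listens locally on port 8081 (the port the Elastic Beanstalk Node.js platform normally proxies to; substitute whatever port your app actually binds to):
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name localhost;
    ssl_certificate "/etc/letsencrypt/live/domain/fullchain.pem";
    ssl_certificate_key "/etc/letsencrypt/live/domain/privkey.pem";
    # Proxy everything to the Node/Express app instead of serving
    # the static default page from /usr/share/nginx/html.
    location / {
        proxy_pass http://127.0.0.1:8081;   # assumed app port; adjust to your app
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
With a location like that in place, the port-80 server block (or the iptables rule) handles the redirect, and HTTPS requests actually reach the app.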

Related

Is it dangerous opening port 3000 of the server?

I want to deploy my Angular + NodeJS application. My NodeJS application runs on http://localhost:3000 on the server, and my Angular application sends its requests to the server with this prefix address: http://server.ip.address:3000. I opened port 3000 on the server with the following commands so that my program would work, and it works fine for now.
firewall-cmd --zone=public --add-port=3000/tcp --permanent
firewall-cmd --reload
But I am not sure whether this was a good approach.
My Angular app runs on nginx and my NodeJS app runs on PM2. I also tried setting up a reverse proxy, as you can see below inside /etc/nginx/nginx.conf, but it didn't work and just opening port 3000 worked for me!
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /demo/stock-front9/dist/strategy;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
#proxy_pass http://localhost:3000;
#proxy_http_version 1.1;
# First attempt to serve request as file, then
# as directory, then redirect to index(angular) if no file found.
try_files $uri $uri/ /index.html;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
What is the best way to deploy an Angular + NodeJS application, and how can I do it?
You can deploy the app by assigning the port from process.env.PORT, putting the whole Angular build in a public/src folder, and giving the public folder path to the Node server file.
You can use this as a reference: https://github.com/Ris-gupta/chat-application
There's no single best way, but there are some best practices. Opening a port directly on the server is not a good solution. I would suggest using Docker and publishing your application inside a container with NGINX. You can also deploy your backend server the same way.
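If you'd rather keep port 3000 closed entirely, the reverse proxy that was commented out in the question can be made to work by serving the Angular build with try_files and proxying only an API prefix. A rough sketch, assuming the Node API listens locally on 127.0.0.1:3000 and the Angular app calls it under /api/ (both assumptions; adjust to the real prefix and port):
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /demo/stock-front9/dist/strategy;
    # Angular routes: serve static files, fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }
    # API calls: proxy to the Node process managed by PM2
    location /api/ {
        proxy_pass http://127.0.0.1:3000;   # assumes the API listens only on localhost
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The Angular app can then call relative URLs such as /api/... instead of http://server.ip.address:3000, and the firewalld rule for port 3000 becomes unnecessary.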

Apps daemonized with pm2 not working with nginx

First, I'm sorry about my poor English, and I should warn you that I'm a newbie still learning the technologies I'm going to talk about.
I work at a company and they needed some simple apps. I chose to use React.js with a Node.js API running on Express. (Sorry if I get some terms wrong; I'm not a native English speaker and still a student.)
I've built my two React apps and my API, and they are working correctly. I have to deploy them on CentOS, so I've "daemonized" my two React apps and my API: the first React app on port 8080, the other one on port 3000, and the API on port 8081.
Then I installed Nginx with a simple config, and it worked well. After that I set up HTTPS. But I'm now facing a problem.
When I try to reach one of my apps, I get a blank page with these messages:
GET https://domain_name/src/index.js net::ERR_ABORTED 404 (index):19
GET https://domain_name/static/js/2.3d1c602b.chunk.js net::ERR_ABORTED 404 (index):20
GET https://domain_name/static/js/main.95db8d0e.chunk.js net::ERR_ABORTED 404 manifest.json:1
GET https://domain_name/manifest.json 404 manifest.json:1
Manifest: Line: 1, column: 1, Syntax error.
And if I try to reach one of my API routes I get this:
Cannot GET /api/oneThing
and:
GET https://patt_www_ppd/api/ 404 patt_www_ppd/:1
I couldn't figure out the problem from searching online. I've found some possible solutions, but I either didn't understand them or they didn't work. Can somebody help me?
Here is my nginx.conf:
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 65535;
events {
multi_accept on;
worker_connections 65535;
}
http {
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
client_max_body_size 16M;
# MIME
include mime.types;
default_type application/octet-stream;
# logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log warn;
# SSL
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Diffie-Hellman parameter for DHE ciphersuites
ssl_dhparam /etc/nginx/dhparam.pem;
# Mozilla Intermediate configuration
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220 valid=60s;
resolver_timeout 2s;
# load configs
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And here is my domain_name.conf under the /sites-available/ directory:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name domain_name;
# SSL
ssl_certificate /etc/certifs/domain_name.pem;
ssl_certificate_key /etc/certifs/domain_name.key;
# security
include nginxconfig.io/security.conf;
# logging
access_log /var/log/nginx/domain_name.access.log;
error_log /var/log/nginx/domain_name.error.log warn;
# reverse proxy
location /inventaire/ {
proxy_pass http://127.0.0.1:8080;
include nginxconfig.io/proxy.conf;
}
location /api/ {
proxy_pass http://127.0.0.1:8081;
include nginxconfig.io/proxy.conf;
}
location /ticket/ {
proxy_pass http://127.0.0.1:3000;
include nginxconfig.io/proxy.conf;
}
# additional config
include nginxconfig.io/general.conf;
}
# subdomains redirect
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name *.domain_name;
# SSL
ssl_certificate /etc/certifs/domain_name.pem;
ssl_certificate_key /etc/certifs/domain_name.key;
return 301 https://domain_name$request_uri;
}
# HTTP redirect
server {
listen 80;
listen [::]:80;
server_name .domain_name;
return 301 https://domain_name$request_uri;
}
I'd really appreciate it if anyone can bring me some answers... And again, sorry for my English and my limited abilities in this domain; I'm still learning.
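The 404s for /static/js/... and /manifest.json suggest the React builds request their assets from the site root, while nginx only proxies the /inventaire/, /api/ and /ticket/ prefixes, so those asset requests never match any location and fail. One common approach is to serve each production build directly from nginx under its subpath and rebuild the app with a matching base path (for create-react-app, "homepage": "/inventaire" in package.json, so the asset URLs carry the prefix). A sketch for one app, assuming a hypothetical build directory copied to /var/www/inventaire:
location /inventaire/ {
    # serve the production build from disk instead of proxying to the dev port
    root /var/www;                                  # hypothetical build location: /var/www/inventaire
    try_files $uri $uri/ /inventaire/index.html;    # client-side routing fallback
}
This replaces the proxy_pass to port 8080 for that app; the /api/ proxy to the Express API can stay as it is.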

Nginx Reverse Proxy displays default page instead of remote home page

I have configured nginx as a reverse proxy and load balancer on one server, and a web application is running on another server. When I access the public URL of nginx, it displays the default RHEL page instead of the homepage of the application on the remote server. Also, when I add a path to the nginx IP, the browser redirects me to the IP of the application server instead of staying on the nginx server. I want the IP shown to stay that of the nginx server.
Example:
Nginx IP : 52.2.2.2
Remote Ip : 52.2.2.3
Browser
http://52.2.2.2/admin_portal
IP changes in Browser
http://52.2.2.3/admin_portal
Below is my configuration:
/etc/nginx/conf.d/load_balancer.conf
upstream backend {
    server 10.128.0.2;
}
# This server accepts all traffic to port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.
server {
    listen 80;
    listen [::]:80;
    location / {
        proxy_pass http://backend;
    }
}
My Nginx configuration file
/etc/nginx/nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Can someone please help me?
Before you pass the request to the proxy you have to rewrite it.
location / {
    rewrite ^/reclaimed/Ip / last;
    proxy_pass http://backend;
}
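A hedged alternative sketch: the default RHEL page usually appears because another server block (for example the stock /etc/nginx/conf.d/default.conf) is acting as the default server for port 80, and the jump to the application server's IP usually means the backend issues absolute redirects. Marking this server block as default_server, forwarding the original Host header, and mapping the backend's redirects with proxy_redirect addresses both, assuming that is what is happening here:
server {
    listen 80 default_server;          # make this block, not the stock default, answer port 80
    listen [::]:80 default_server;
    location / {
        proxy_pass http://backend;
        # keep the client on the nginx address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # map absolute redirects issued by the backend back to this server
        # (52.2.2.3 is the example address from the question; adjust as needed)
        proxy_redirect http://52.2.2.3/ /;
    }
}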

Nginx 502 Bad Gateway error on EC2 Instance

I've been having some trouble configuring an nginx server on an EC2 Linux instance. I'm running an application on port 3000 and want to map that to port 80 using nginx.
Here is my configuration file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_names_hash_bucket_size 128;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
include /etc/nginx/default.d/*.conf;
location / {
}
error_page 404 /404.html;
location =/40x.html {
}
error_page 500 502 503 504 /50x.html;
location =/50x.html {
}
}
include /etc/nginx/sites-enabled/default;
}
This is the default file that comes with nginx with very slight changes by me, most notably the inclusion of a custom file called default, whose contents are as follows:
server {
    listen 80;
    server_name [my_domain_name];
    location / {
        proxy_pass http://[my_private_ip]:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
The items in square brackets are replaced with the correct values. Whenever I try to navigate to the website I get 502 Bad Gateway nginx/1.12.1.
My server is a node.js server running on port 3000.
I've tried troubleshooting and reading other stackoverflow questions about bad gateways but I can't figure out the solution. Thank you
Follow a different approach. Allow your application to run on port 3000 (and listen on 3000 as well). In this case, you would then have to open it as
http://url:3000
Now we just need to forward all requests coming to port 80 to port 3000, which can easily be done using iptables:
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Now you should simply be able to open it with the URL, without the port number.

NGINX not showing default page on Amazon EC2 Instance

I installed nginx on Fedora, but I don't know why I cannot get the default nginx page by requesting the server IP through the browser. My request fails with a timeout.
But nginx is running.
$ sudo service nginx status
nginx (pid 20372) is running...
My default generated config is
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
index index.html index.htm;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
# redirect server error pages to the static page /40x.html
#
error_page 404 /404.html;
location = /40x.html {
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl;
# listen [::]:443 ssl;
# server_name localhost;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# # It is *strongly* recommended to generate unique DH parameters
# # Generate them with: openssl dhparam -out /etc/pki/nginx/dhparams.pem 2048
# #ssl_dhparam "/etc/pki/nginx/dhparams.pem";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
}
I have no idea what is happening. Also, /var/log/nginx/access.log and /var/log/nginx/error.log are empty. Help please...
On Amazon EC2, you will need to open up the firewall to allow incoming HTTP connections from your browser to the instance.
Login to the Amazon Web Console
Go to EC2
Find your instance
Click on its Security Group
Click Inbound Tab
Click Edit
Add Rule -> HTTP, set the Source field to My IP
The changes will go into effect immediately.
Please note that if you are accessing your instance from a non-fixed IP (coffee shop wifi), you will need to change the Source IP address every time you connect and get assigned a new IP address. So if it works, and then after a while it seems to hang, that may be why.
If you are also serving HTTPS, you will need to add a specific rule for it (port 443) as well.
