NGINX proxy_pass to CloudFront with custom domains - amazon-cloudfront

I have a CloudFront distribution and I need to set up multiple NGINX proxy_pass rules
for custom domains, as described below:
aaa.example.com -> https://d2c4fb2vqtbx6f.cloudfront.net/channel/vfc-bcxfhh37u/
bbb.example2.com -> https://d2c4fb2vqtbx6f.cloudfront.net/channel/vfd-gcxiuh47u/
ccc.example3.com -> https://d2c4fb2vqtbx6f.cloudfront.net/channel/voi-j7sfmb09w/
I'm getting a 403 error from CloudFront unless I set up the Alternate domain name plus the certificate for one of the subdomains above.
Is there any solution for my scenario to use one distribution with multiple domains?
My NGINX is taking care of SSL.

The solution is to have a single certificate file containing all the domains and to add those domains to CloudFront's Alternate domain names list.
I decided to solve it with a reverse proxy (NGINX) and Let's Encrypt (certbot). Here are the steps:
Step 1: Set up an NGINX server to act as a reverse proxy.
Step 2: Create a CNAME record pointing each custom domain (e.g. aaa.example.com) at the proxy server from step 1.
Step 3: Create a Virtual Host file for the certbot (Let's Encrypt) challenge and reload NGINX (sudo nginx -s reload):
server {
    listen 80;
    server_name aaa.example.com;
    access_log /var/log/nginx/aaa.example.com-access.log;
    # error_log does not expand variables such as $server_name, so use literal paths
    error_log /var/log/nginx/aaa.example.com-error.log;
    location /.well-known/acme-challenge/ {
        root /web/sites/aaa.example.com/www/;
    }
    location / {
        proxy_pass https://d2c4fb2vqtbx6f.cloudfront.net/channel/vfc-bcxfhh37u/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Step 4: Create a bash script that will update the certificate:
#!/bin/bash
CF_Dist_id=ES70C33IOVC7G
Cert_ARN=arn:aws:acm:us-east-1:723678323458:certificate/48g54dre-5t6b-gf20-9c74-5d6435333dd5
Cert_Name=allcustomdomain
Region=us-east-1 # Certs must remain in us-east-1 regardless of your env. region

# List all domains on the existing certificate and add them to an array
mapfile -t domains < <(sudo cat /etc/letsencrypt/live/$Cert_Name/cert.pem | openssl x509 -text | grep DNS | tr , '\n' | sed 's/DNS://g' | tr -s ' ')

# Check if the domain is already covered by the certificate
if [[ "${domains[*]}" == *"$1"* ]]; then
    echo "Domain already exists in the certificate"
    exit 1
fi

# Build the domain list for certbot
domainslst=""
for d in "${domains[@]}"; do
    domainslst="$domainslst -d $d"
done
domainslst="$domainslst -d $1"
echo $domainslst

# Generate the certificate
sudo certbot certonly --expand --nginx --non-interactive --agree-tos --email me@example.com --cert-name $Cert_Name $domainslst

# Update ACM with the new cert
sudo /usr/local/bin/aws acm import-certificate --certificate-arn $Cert_ARN --certificate fileb:///etc/letsencrypt/live/$Cert_Name/cert.pem --private-key fileb:///etc/letsencrypt/live/$Cert_Name/privkey.pem --certificate-chain fileb:///etc/letsencrypt/live/$Cert_Name/fullchain.pem --region $Region

# Get the current CloudFront distribution config
ETag=$(/usr/local/bin/aws cloudfront get-distribution-config --id $CF_Dist_id | jq -r '.ETag')
/usr/local/bin/aws cloudfront get-distribution-config --query 'DistributionConfig' --id $CF_Dist_id > dist.json

# Update the config file with the new custom domain
jq --arg domain "$1" '.Aliases.Items += [$domain]' dist.json > tmp.$$.json && mv tmp.$$.json dist.json
domains_count=$(jq -r '.Aliases.Items | length' dist.json)
echo "Domains count: $domains_count"
jq -r --argjson domains_count "$domains_count" '.Aliases.Quantity = $domains_count' dist.json > tmp.$$.json && mv tmp.$$.json dist.json

# Update the CloudFront distribution with the new config
/usr/local/bin/aws cloudfront update-distribution --if-match $ETag --id $CF_Dist_id --distribution-config file://dist.json >/dev/null

# Remove the config file
rm dist.json
Save the script as certs.sh.
Step 5: Run the script with your new domain; this will generate and import the cert into ACM and update the distribution with the alternate domain name: ./certs.sh aaa.example.com
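To verify that the alias was actually added, you can list the distribution's aliases with the AWS CLI (a quick check, reusing the distribution ID from the script):

/usr/local/bin/aws cloudfront get-distribution-config --id ES70C33IOVC7G --query 'DistributionConfig.Aliases'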
Step 6: Generate the SSL cert for this domain for NGINX to use:
certbot certonly --nginx --non-interactive --agree-tos --email me@example.com -d aaa.example.com
Step 7: Update the same Virtual Host file from step 3 above with its final config, then reload NGINX (sudo nginx -s reload):
server {
    listen 80;
    listen [::]:80;
    server_name aaa.example.com;
    access_log /var/log/nginx/aaa.example.com-access.log;
    error_log /var/log/nginx/aaa.example.com-error.log;
    return 301 https://aaa.example.com$request_uri; # Redirect to https
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name aaa.example.com;
    access_log /var/log/nginx/aaa.example.com-ssl-access.log;
    error_log /var/log/nginx/aaa.example.com-ssl-error.log;
    # "ssl on;" is deprecated and unnecessary with "listen ... ssl"
    ssl_certificate /etc/letsencrypt/live/aaa.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/aaa.example.com/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    location /.well-known/acme-challenge/ {
        root /web/sites/aaa.example.com/www/;
    }
    location / {
        proxy_pass https://d2c4fb2vqtbx6f.cloudfront.net/channel/vfc-bcxfhh37u/;
        proxy_set_header Host $host; # CloudFront matches this against the Alternate domain names
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        proxy_ssl_server_name on;
        proxy_read_timeout 5m;
        # Note: this belongs on responses; proxy_set_header would send it upstream as a request header
        add_header Access-Control-Allow-Credentials true;
    }
}
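One caveat worth noting: certbot's automatic renewal refreshes the files under /etc/letsencrypt, but nothing re-imports them into ACM. A hedged sketch using certbot's --deploy-hook (the hook script name is hypothetical; it would re-run the same aws acm import-certificate command as certs.sh):

sudo certbot renew --deploy-hook /usr/local/bin/reimport-acm.sh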

Related

HAproxy - multiple conditions in ACL

I'm trying to set up a few redirects on HAProxy, and they don't all work at the same time.
So in my config:
bind *:443 ssl crt SOME CERT
mode http
http-request add-header X-Forwarded-Proto http
option forwardfor
option forwardfor header X-Real-IP
http-request set-header X-Forwarded-Proto https if { ssl_fc }
timeout client 300000
acl route_1 hdr(host) -i some.example.com
use_backend some_example1 if route_1
acl route_2 hdr(host) -i some.example.com && path_beg /test
use_backend some_example2 if route_2
acl begins_with_test path_beg /test
use_backend normal_test if begins_with_test
default_backend regular_backend
I tried changing the order of ACLs route_1 and route_2, but only the first of them works. The domain without an alias is OK and works fine. Splitting route_2 into two ACLs and placing them on the use_backend line also didn't work as I wished.
Well, the order counts.
# Because this acl matches first, the second one will never be evaluated
acl route_1 hdr(host) -i some.example.com
use_backend some_example1 if route_1
acl route_2 hdr(host) -i some.example.com && path_beg /test
use_backend some_example2 if route_2
My suggestion is to rearrange the ACLs:
bind *:443 ssl crt SOME CERT
mode http
timeout client 300000
option forwardfor
option forwardfor header X-Real-IP
http-request add-header X-Forwarded-Proto http
http-request set-header X-Forwarded-Proto https if { ssl_fc }
# An acl line takes a single criterion, so declare one ACL per condition;
# the AND between ACLs named on a use_backend line is implicit
acl route_1 hdr(host) -i some.example.com
acl begins_with_test path_beg /test
use_backend some_example2 if route_1 begins_with_test
use_backend some_example1 if route_1
use_backend normal_test if begins_with_test
default_backend regular_backend
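To confirm the evaluation order after the change, you can hit the frontend with explicit Host headers (a hedged check; the address is a placeholder):

# /test on that host should now reach some_example2, everything else some_example1
curl -sk https://<haproxy-address>/test -H "Host: some.example.com" -o /dev/null -w "%{http_code}\n"
curl -sk https://<haproxy-address>/ -H "Host: some.example.com" -o /dev/null -w "%{http_code}\n"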

How to scale node.js / socket.io server?

I am currently running a node.js app and am about to introduce socket.io to allow real time updates (chat, in-app notifications, ...). At the moment, I am running the smallest available setup from DigitalOcean (1 vCPU, 1 GB RAM) for my node.js server. I stress-tested the node.js app connecting to socket.io using Artillery:
config:
  target: "https://my.server.com"
  socketio:
    transports: ["websocket"] # optional, same results if I remove this
  phases:
    - duration: 600
      arrivalRate: 20
scenarios:
  - name: "A user that just connects"
    weight: 90
    engine: "socketio"
    flow:
      - get:
          url: "/"
      - think: 600
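For reference, a definition like this is run with the Artillery CLI (the file name here is an assumption):

artillery run socketio-load-test.yml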
It can handle a couple hundred concurrent connections. After that, I start getting the following errors:
Errors:
ECONNRESET: 1
Error: xhr poll error: 12
When I resize my DigitalOcean droplet to 8 vCPUs and 32 GB RAM, I can get upwards of 1700 concurrent connections. No matter how much more I resize, it always sticks around that number.
My first question: is this normal behavior? Is there any way to increase this number per droplet, so I can have more concurrent connections on a single node instance? Here is my configuration:
sysctl -p
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1
ulimit
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3838
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 10000000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
nginx.conf
user www-data;
worker_processes auto;
worker_rlimit_nofile 1000000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    multi_accept on;
    use epoll;
    worker_connections 1000000;
}

http {
    ##
    # Basic Settings
    ##
    client_max_body_size 50M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 120;
    keepalive_requests 10000;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Another question: I am thinking about scaling horizontally and spinning up more droplets. Let's say 4 droplets to proxy all connections to. How would this be set up in practice? I would use Redis to emit through socket.io to all connected clients. Do I use 4 droplets with the same configuration? Do I run the same stuff on all 4 of them? For instance, should I upload the same server.js app on all 4 droplets? Any advice is welcome.
I can't really answer your first question, but I can try my best on your second.
If you're setting up load balancing, you run the same server.js app on each droplet and put a load balancer in front to distribute the traffic among them. I don't know much about Redis, but I found this: https://redis.io/topics/cluster-tutorial
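As a concrete illustration, an nginx load balancer in front of the droplets could look roughly like this (a hedged sketch; the upstream IPs are placeholders, and ip_hash pins a client to one droplet, which matters for socket.io's polling transport):

upstream socket_nodes {
    ip_hash; # sticky by client IP so long-polling requests keep hitting the same droplet
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
    server 10.0.0.4:3000;
}
server {
    listen 80;
    location / {
        proxy_pass http://socket_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # required for WebSocket upgrades
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

For the Redis part, socket.io's Redis adapter (rather than Redis Cluster) is what lets an emit on one droplet reach clients connected to another; a minimal sketch, assuming the @socket.io/redis-adapter and redis packages and a shared Redis instance at a placeholder address:

// server.js (the same code deployed on every droplet)
const { createServer } = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

const httpServer = createServer();
const io = new Server(httpServer);

const pubClient = createClient({ url: "redis://10.0.0.10:6379" }); // placeholder IP
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
  // io.emit() now reaches clients on all droplets via Redis pub/sub
  httpServer.listen(3000);
});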
I hope this helped.

Nodejs reverse proxy server (nginx)

I already have 2 servers running at DigitalOcean; I installed nginx on the web server and Node.js on the app server.
For the app server:
Node app directory: /var/appdata/myapp/
Node.js app running on port 4680.
However, on the app server I have a couple of iptables rules (firewall).
iptables rules I set up for the app server:
*filter
# Default policy is to drop all traffic
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
# Allow all loopback traffic
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
# Allow ping.
-A INPUT -p icmp -m state --state NEW --icmp-type 8 -j ACCEPT
# Allow incoming SSH, HTTP and HTTPS traffic
-A INPUT -i eth0 -p tcp -m multiport --dports 22,80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m multiport --sports 22,80,443 -m state --state ESTABLISHED -j ACCEPT
# Allow inbound traffic from established connections.
# This includes ICMP error returns.
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Log what was incoming but denied (optional but useful).
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables_INPUT_denied: " --log-level 7
# Allow outgoing SSH, HTTP and HTTPS traffic
# This is useful because we won't be able to download and install
# NPM packages otherwise and use git over SSH
-A OUTPUT -o eth0 -p tcp -m multiport --dports 22,80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m multiport --sports 22,80,443 -m state --state ESTABLISHED -j ACCEPT
# Allow dns lookup
-A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
-A INPUT -p udp -i eth0 --sport 53 -j ACCEPT
# Set rate limits for DOS attack prevention (optional)
# The rates here greatly depend on your application
-A INPUT -p tcp -m multiport --dports 80,443 -m limit --limit 250/minute --limit-burst 1000 -j ACCEPT
# Log any traffic which was sent to you
# for forwarding (optional but useful).
-A FORWARD -m limit --limit 5/min -j LOG --log-prefix "iptables_FORWARD_denied: " --log-level 7
COMMIT
For the web server, the default config is like this:
listen 80 default_server;
listen [::]:80 default_server;

# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;

root /var/www/html;

# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;

server_name _;

location / {
    proxy_pass http://10.135.9.223:4680;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
With all these options I think I've written down almost everything, but if anything is missing, please let me know.
So the main problem here is:
when I enter the URL http://web-server-ip-address, it responds with 504 Gateway Timeout.
EDIT :
When I disable the firewall there is no problem.
Disable the firewall and take advantage of Cloudflare if you're not familiar with these types of errors.
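Since the 504 disappears with the firewall off, the likelier culprit is that the app server's INPUT policy (default DROP, only 22/80/443 allowed) is dropping the proxied connections to port 4680. A hedged sketch of rules that would let the web server through (the source subnet is an assumption based on the 10.135.9.223 upstream address; tighten it to the web server's private IP):

# On the app server: accept proxy traffic from the private network to the Node app
-A INPUT -i eth0 -p tcp -s 10.135.0.0/16 --dport 4680 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -o eth0 -p tcp --sport 4680 -m state --state ESTABLISHED -j ACCEPT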

nginx, php-fpm and tilde user directories

I'm using nginx and php5-fpm on a Debian system.
I want my server to serve like so:
ip/index.html serves the static html page (or files) at the nginx web root
and likewise, ip/somefile.php (or index.php) serves PHP through php-fpm
ip/~user/index.html serves the static html page (or files) in /home/user/public_html
and likewise, ip/~user/somefile.php (or index.php) serves PHP through php-fpm
(where ip is either an IPv4 or IPv6 address).
Here is my configuration for nginx:
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    server_name _;
    root /usr/share/nginx/www;
    index index.php index.html index.htm;

    # Deny access to all dotfiles
    location ~ /\. {
        deny all;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        try_files $uri =404; # Prevents exploit
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
    }

    # Serve user directories
    location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        autoindex on;
    }
}
And for php-fpm:
; Start a new pool named 'www'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('www' here)
[www]
; Per pool prefix
; It only applies on the following directives:
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /usr) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool
; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
user = www-data
group = www-data
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses on a
; specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /var/run/php5-fpm.sock
; Set listen(2) backlog.
; Default Value: 128 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 128
; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions.
; Default Values: user and group are set as the running user
; mode is set to 0666
;listen.owner = www-data
;listen.group = www-data
;listen.mode = 0666
; List of ipv4 addresses of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
;listen.allowed_clients = 127.0.0.1
; ... and more that doesn't matter, just defaults
Both static files and PHP work in the nginx web root (ip/blah.html or ip/blah.php), and static files also work in user directories (ip/~user/blah.html); however, PHP gives a 404 in user directories.
Can someone help me fix my config?
Edit: some ls -la output, in case it's a permission issue.
kvanb@pegasus:~$ ls -la
total 32
drwxr-xr-x 3 kvanb sudo 4096 Jan 4 04:04 .
drwxr-xr-x 6 root root 4096 Jan 4 01:36 ..
-rw------- 1 kvanb kvanb 570 Jan 4 02:54 .bash_history
-rw-r--r-- 1 kvanb sudo 220 Jan 4 01:36 .bash_logout
-rw-r--r-- 1 kvanb sudo 3392 Jan 4 01:36 .bashrc
-rw-r--r-- 1 kvanb sudo 675 Jan 4 01:36 .profile
drwxr-xr-x 2 kvanb sudo 4096 Jan 4 03:41 public_html
-rw------- 1 kvanb sudo 3303 Jan 4 04:04 .viminfo
kvanb@pegasus:~/public_html$ ls -la
total 20
drwxr-xr-x 2 kvanb sudo 4096 Jan 4 03:41 .
drwxr-xr-x 3 kvanb sudo 4096 Jan 4 04:04 ..
-rwxr-xr-x 1 kvanb sudo 21 Jan 4 03:40 index.php
-rwxr-xr-x 1 kvanb sudo 20 Jan 4 03:09 info.php
-rw-r--r-- 1 kvanb sudo 4 Jan 4 03:41 test.html
kvanb@pegasus:/usr/share/nginx/www$ ls -la
total 20
drwxr-xr-x 2 root root 4096 Jan 4 03:28 .
drwxr-xr-x 3 root root 4096 Jan 4 01:34 ..
-rw-r--r-- 1 root root 383 Jul 7 2006 50x.html
-rw-r--r-- 1 root root 151 Oct 4 2004 index.html
-rw-r--r-- 1 root root 20 Jan 4 03:28 info.php
You'll need to add this rule before the initial PHP one:
# Serve user directories' php files
location ~ ^/~(.+?)(/.*\.php)$ {
    alias /home/$1/public_html;
    autoindex on;
    include /etc/nginx/fastcgi_params;
    try_files $2 =404; # Prevents exploit
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
}
This one matches all php files in the user directory, directing them through php-fpm. The php rule you have matches all these php files, but tries to find them in the wrong directory.
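If PHP in the user directories still 404s with that rule, note that depending on the distro, fastcgi_params may not set SCRIPT_FILENAME at all, or may build it from $document_root$fastcgi_script_name, which doesn't account for the alias. A hedged variant (not from the original answer) that resolves the script path through the alias:

location ~ ^/~(.+?)(/.*\.php)$ {
    alias /home/$1/public_html$2;
    include /etc/nginx/fastcgi_params;
    # $request_filename is resolved through the alias, so php-fpm sees the real path
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}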
I came across this whilst trying to solve a similar problem, so I'll add the solution I found when I got to it. This was on Arch, but it is systemd-related.
This solution is for my development machine; for good reasons, you shouldn't run a public site from your /home folder.
I configured php-fpm and nginx to run as my user. Edit the following file, and remove the ProtectHome=true line:
sudo vi /etc/systemd/system/multi-user.target.wants/php-fpm.service
Then reload systemd and restart everything:
systemctl daemon-reload
systemctl restart nginx.service
systemctl restart php-fpm.service

ping or curl invalid domains redirects to local server on Linux

When I try to ping or retrieve an invalid domain, I get redirected to the default domain on my local server.
For example, trying to ping www.invaliddomainnameexample.com from my server s1.mylocaldomain.com:
~: ping www.invaliddomainnameexample.com
PING www.invaliddomainnameexample.com.mylocaldomain.com (67.877.87.128) 56(84) bytes of data.
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=2 ttl=64 time=0.039 ms
or using curl
~: curl -I www.invaliddomainnameexample.com
HTTP/1.1 301 Moved Permanently
Date: Mon, 26 Nov 2012 16:09:57 GMT
Content-Type: text/html
Content-Length: 223
Connection: keep-alive
Keep-Alive: timeout=10
Location: http://mylocaldomain.com/
my resolv.conf:
~: cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
Could it be that your /etc/resolv.conf also contains a
search mylocaldomain.com
statement, and there's an „*“ DNS A RR for your domain?
Because then the search list is applied, the * record matches, and voilà!
Try ping www.invaliddomainnameexample.com. with a dot appended, to mark the domain name as an FQDN, which prevents the search list from being applied.
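To confirm this is what's happening, you can compare resolution with and without the search list (a hedged sketch using the same placeholder domain):

grep ^search /etc/resolv.conf                     # shows the search domain, if any
getent hosts www.invaliddomainnameexample.com     # system resolver: search list applied, wildcard matches
getent hosts www.invaliddomainnameexample.com.    # trailing dot marks an FQDN: should fail to resolve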
It looks like the only way to fix this is to disallow unknown hosts from being processed by the HTTP server, though I did that only for local IPs.
I use Nginx, so the config would be:
# List of server and local IPs
geo $local {
    default 0;
    34.56.23.0/21 1;
    127.0.0.1/32 1;
}

# Deny access to any other (unknown) host
server {
    listen 80 default_server;
    server_name _;
    if ( $local = 1 ){
        return 405;
    }
    rewrite ^(.*) http://mylocaldomain.com$1 permanent;
}
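A quick check of the catch-all behaviour (a hedged example; substitute your server's IP):

# From a local IP expect 405; from anywhere else expect a 301 to mylocaldomain.com
curl -sI http://<server-ip>/ -H "Host: www.invaliddomainnameexample.com.mylocaldomain.com"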
