How to connect an externally hosted website with AWS CloudFront CDN - amazon-cloudfront

I am hosting my site on Vultr and I want to connect it to the CloudFront CDN. How do I do this? I have tried, but it shows an origin connectivity error.

You see, this is a very specific situation, and Vultr does not have the same integration with CloudFront as it does with Cloudflare. For this I had to do the following:
First:
Allow the CloudFront IPs through the server's firewall. CloudFront has about 135 IP ranges, while Vultr's firewall panel can only register 50 entries, so this responsibility has to move to the server itself.
Create a script that adds only the CDN's IPs to UFW.
I started from this repo (written for Cloudflare's IP ranges): https://github.com/Paul-Reed/cloudflare-ufw
So I have this in CRON:
0 0 * * 1 /usr/local/bin/cloudflare-ufw > /dev/null 2>&1
And for my case the script looked like this:
#!/bin/sh
curl -s https://www.cloudflare.com/ips-v4 -o /tmp/cf_ips
curl -s https://www.cloudflare.com/ips-v6 >> /tmp/cf_ips
# Allow all traffic from Cloudflare IPs (no port restrictions)
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip comment 'Cloudflare IP'; done
ufw reload > /dev/null
OTHER EXAMPLES OF RULES
Restrict to port 80:
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80 comment 'Cloudflare IP'; done
Restrict to ports 22 and 443:
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 22,443 comment 'Cloudflare IP'; done
Restrict to ports 80 and 443:
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80,443 comment 'Cloudflare IP'; done
ufw reload > /dev/null
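Since the goal here is CloudFront rather than Cloudflare, the same approach can be pointed at the IP ranges AWS publishes in ip-ranges.json. A minimal sketch, assuming jq is installed (the temp file name is a placeholder):
#!/bin/sh
# Fetch the current CloudFront ranges from AWS's published ip-ranges.json
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix' > /tmp/cloudfront_ips
# Allow HTTP/HTTPS from each CloudFront range (IPv6 ranges live under .ipv6_prefixes)
for cfip in `cat /tmp/cloudfront_ips`; do ufw allow proto tcp from $cfip to any port 80,443 comment 'CloudFront IP'; done
ufw reload > /dev/null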
Second:
I configured CloudFront; my case was specific to WordPress traffic. I followed these steps:
I created an AWS Certificate Manager public certificate
As per the AWS documentation: https://docs.aws.amazon.com/pt_br/acm/latest/userguide/gs-acm-request-public.html#request-public-console
I created the distribution on CloudFront: https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating.html
The distribution will be responsible for the security and performance of the application.
I created a certificate for the origin server: https://www.gocache.com.br/seguranca/como-gerar-certificado-ssl-via-terminal-certbot-com-wildcard/
It is necessary to install a valid SSL certificate on your server to make a secure connection with CloudFront. I recommend Let's Encrypt as a free solution for generating certificates.
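For example, a typical Let's Encrypt issuance with certbot might look like this (the domain and the nginx plugin are assumptions; use the plugin matching your web server):
# Issue and install a certificate for the origin server
sudo certbot --nginx -d example.com -d www.example.com
# Verify that automatic renewal will work
sudo certbot renew --dry-run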
I registered the record in the DNS table: https://docs.aws.amazon.com/pt_br/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
For the distribution to be accessible by the website address, it is necessary to register the address in the DNS table.
The record is a CNAME and its value is the distribution domain name. You can find this information in the Details section on the CloudFront distribution's General tab.
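If the zone is hosted in Route 53, the same record can also be created from the CLI. A sketch with placeholder values for the hosted zone ID, record name, and distribution domain:
aws route53 change-resource-record-sets --hosted-zone-id Z1XXXXXXXXXXXX --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}]
    }
  }]
}'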

Related

"Timeout during connect (likely firewall problem)" while renewing Certbot

I am facing the following error when I try to renew my SSL certificate using
certbot renew
Challenge failed for domain ***********.com
Some challenges have failed.
The following errors were reported by the server:
Domain: arjunbroker.com
Type: connection
Detail: Fetching
http://arjunbroker.com/.well-known/acme-challenge/F9nlyrRQBpJGOpPLHGPCj1vzdJOd_rBISU7q2aX7t_o:
Timeout during connect (likely firewall problem)
I have checked UFW and firewalld, and both ports 80 and 443 are open.
I finally realised that prior to installing SSL on this server, I used to forward port 80 to port 8080 using
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
So I simply forwarded port 80 back to port 80, removing the redirect.
Lesson learnt: for Certbot to work, port 80 must reach the web server without being redirected elsewhere.
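For anyone hitting the same thing, the redirect can be found and removed roughly like this (the rule number is illustrative):
# List NAT PREROUTING rules with their line numbers
sudo iptables -t nat -L PREROUTING -n --line-numbers
# Delete the REDIRECT rule by its line number, e.g. rule 1
sudo iptables -t nat -D PREROUTING 1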
I finally realized that I ONLY had HTTP/HTTPS open to my test client machines. I opened them wide for the certbot run, then closed them again. I'll try to determine which IPs need to be open for Let's Encrypt probes so I can automate the certbot renewals.
For me the issue was that Let's Encrypt uses IPv6, if possible, to do the HTTP challenge, and my site worked fine over IPv4 but not over IPv6 (as I had set it up wrong). You can use this site to test your IPv6 setup.
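A quick way to compare the two paths before renewing (example.com is a placeholder):
# Does the domain publish an AAAA record?
dig +short AAAA example.com
# Does the HTTP challenge path answer over IPv6 as well as IPv4?
curl -6 -I http://example.com/.well-known/acme-challenge/test
curl -4 -I http://example.com/.well-known/acme-challenge/test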
I solved this by disabling 'Permanent SEO-safe 301 redirect from HTTP to HTTPS' (in Hosting Settings for Plesk / CentOS Linux 7.9).
Let's Encrypt wouldn't assign or renew its SSL certificates otherwise. I spent a day reconfiguring DNS, panel.ini, firewall, etc., and eventually pinpointed this as the specific cause.
The issue surfaced about 10 months ago and we only realised what was happening recently.
I fixed that in AWS EC2 by updating the security group to allow the inbound traffic.
More about EC2 Group Security: https://docs.aws.amazon.com/pt_br/AWSEC2/latest/UserGuide/ec2-security-groups.html

Nginx is refusing to connect on AWS EC2

I'm trying to use nginx to set up a simple Node.js server. I'm running the server in the background on port 4000, and my nginx config file is
server {
    listen 80;
    listen [::]:80;
    server_name 52.53.196.173;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://127.0.0.1:4000;
    }
}
I saved it in /etc/nginx/sites-available and also symlinked it to sites-enabled; the nginx.conf file already has the include line to load files from sites-enabled. Then I restarted the service using
sudo service nginx restart
I tried going to 52.53.196.173 and it refuses to connect; however, going to 52.53.196.173:4000 with port 4000 works, but I'm trying to make it listen on port 80 with nginx. I tried putting my .ml domain as server_name with no luck, and I have the IP 52.53.196.173 as the A record in the domain's DNS settings. I'm doing this on an AWS EC2 instance with Ubuntu Server 16.04; I even tried the full EC2 public DNS URL with no luck. Any ideas?
Edit: I solved it by moving the file directly into sites-enabled instead of using a symlink.
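After a change like this, it's worth confirming what nginx actually loaded, for example:
# Check the syntax, then dump the full effective configuration
sudo nginx -t
sudo nginx -T | grep server_name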
There are a few possible things. First of all, you need to verify that the nginx server is running and listening on port 80. You can check the listening ports using the following command.
netstat -tunlp
Then you need to check your server firewall and also the SELinux policies (or disable SELinux for a test).
Then you need to verify that the AWS security group is configured to allow HTTP/HTTPS connections on port 80.
P.S.: Output from the following commands and configurations will be helpful for troubleshooting.
netstat -tunlp
sestatus
iptables -L
* AWS Security Group Rules
* Nginx configurations ( including main configuration if changed )
P.S.: The OP fixed the problem by moving the config file directly into the sites-enabled directory. Refer to the comments for more info if you are having the same issue.
Most probably, port 80 is not open in your security group, or nginx is not running to accept connections. Please post the nginx status and check the security group.
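Both can be checked from a shell; a sketch, with a placeholder security group ID:
# Is nginx running and listening?
sudo systemctl status nginx
# Which inbound rules does the security group actually allow?
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[].IpPermissions'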
Check the following:
In the security group, add HTTP (80) and HTTPS (443) rules in the inbound section with source 0.0.0.0/0.
In the Network ACL, allow inbound HTTP and HTTPS, and set a custom TCP rule for outbound.
Assign an Elastic IP to the EC2 instance and use that IP for public access.

How can I connect a domain to AWS WHM

I have a domain bought from GoDaddy. I have set the custom nameservers to
ns1.domain.com
ns2.domain.com
and set the hostnames
ns1 52.70.xxx.xxx(aws ip)
ns2 52.70.xxx.xxx (aws ip)
I have installed WHM on my Amazon AWS instance, so in WHM I created an account, then went to Edit DNS Zone and added A records.
But I don't see my domain working, and I am not able to see the cPanel of the domain either.
What am I missing?
Please follow these steps to integrate your domain into WHM and create a cPanel account.
Create an account in WHM by going into Account Functions->Create Account: enter your domain here
Go to DNS Functions->Edit DNS Zone, click your domain, and add A records.
Then go to GoDaddy, or whichever company you purchased your domain from, and edit the nameservers. For example, if the nameservers you set in the WHM DNS zone were ns1 and ns2, add the same ones there. In your case it would be
ns1.domain.com
ns2.domain.com
Click Manage Hostnames in GoDaddy and add
ns1 52.70.xxx.xxx(aws ip)
ns2 52.70.xxx.xxx (aws ip)
Your domain should be working now. But if it still doesn't work, then:
Check whether ports 2087, 2083, 53, and 2095 are open. Check from the terminal:
nmap -Pn -sT 172.31.iphere --reason -p 2087,2083,2095,53
If any port is closed, open it from AWS by going into the security group settings (a CLI sketch follows after these steps).
Please note, 2083 and 2095 will always show as closed from external port scans as these ports are only opened publicly based on valid sessions established from within the cPanel server.
Verify again that your DNS port is open:
nmap -Pn -sT 172.31.iphere --reason -sU -p 53
After opening all the ports, rebuild your DNS configuration on the server by typing these commands in the terminal:
cd /etc
mkdir /root/cptechs
mv named.conf /root/cptechs
mv rndc.* /root/cptechs
/scripts/rebuilddnsconfig
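As an aside, if you prefer the CLI to the console for opening the ports mentioned above, it can look roughly like this (the security group ID is a placeholder):
# Open WHM's port 2087 and DNS over TCP/UDP in the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2087 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 53 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 53 --cidr 0.0.0.0/0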
Hope it helps
Are you using nameservers for domain.com that are ns1/ns2.domain.com?
If this is the case, the domain will not be able to resolve without adding ns1/ns2 as "child nameservers".
You can create those for your domain through GoDaddy: https://uk.godaddy.com/help/add-my-own-host-names-as-nameservers-12320
Alternatively, you can post your domain so we can troubleshoot whether it's a DNS issue.

Change the domains of SSL certificate - like Charles Proxy

When generating an SSL certificate, paid or self-signed, you assign it a set of specific domains (wildcard or not), known as common names. If you use this certificate to open domains which are not on the list, Chrome gives the warning NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
I use the same certificate on Charles Proxy, which opens all URLs fine in Chrome, without warnings. Viewing dev tools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles changes the domains on the certificate automatically for any website you visit, which passes all Chrome validations/warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...)
I'm not worried about how to bypass Chrome, but about how a .cer can be modified on the fly for each HTTPS request and served to the browser.
Solved!
There are several options, including mitmproxy, sslsniff, and my favorite, SSLsplit.
It is available prepackaged for all distros; install it via apt-get install sslsplit or yum install sslsplit and that's all.
You simply need to package your certificate and key into one PEM bundle, then run two commands:
Forward the port through NAT via iptables and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
It reissues new certificates on the fly with modified subject names, and can log all traffic if you wish. It bypasses all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
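The bundle itself is just a concatenation. A sketch, assuming a CA key and certificate named myca.key and myca.crt (sslsplit needs a CA-capable certificate, since it signs the forged leaf certificates with it):
# Combine the CA key and certificate into one PEM bundle for sslsplit
cat myca.key myca.crt > certbundle.pem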
--- Edit 1/6/2018
Also found a Node solution, which is beyond what I need: https://www.npmjs.com/package/node-forge

dnsmasq forwards queries to 2 servers instead of 1

I'm having a small issue with dnsmasq on Debian Jessie: it seems to forward incoming DNS queries to two upstreams instead of one.
Background:
dnsmasq runs on a machine whose LAN IP is 192.168.0.10, sitting behind a home router. The home router is configured to forward DNS traffic to 192.168.0.10. That part works; I do see incoming traffic from the LAN on this machine.
dnsmasq configuration:
>cat /etc/dnsmasq.conf | grep -v ^# | grep -v ^\s*$
domain-needed
bogus-priv
server=127.0.0.1#5053
cache-size=10000
My resolv.conf tells local processes to send DNS queries to dnsmasq
>cat /etc/resolv.conf
# Generated by resolvconf
nameserver 127.0.0.1
And dnsmasq, if it can't answer from its cache, forwards incoming DNS queries to another service running locally and listening on port 5053, via the server=127.0.0.1#5053 setting. That service is something I built myself, and it does not forward DNS queries to 8.8.8.8.
This works, but not the way I intended. DNS queries get answered properly. As expected, port 5053 shows traffic and even provides answers (though more slowly than Google DNS):
>tcpdump -l -n -i any '(port 5053) or (port 53)'
13:57:53.817522 IP 127.0.0.1.47207 > 127.0.0.1.53: 7494+ [1au] A? www.example.com. (44) # dnsmasq receives a query from `dig www.example.com` running locally
13:57:53.818609 IP 127.0.0.1.5258 > 127.0.0.1.5053: UDP, length 44 # dnsmasq forwards to local DNS Server listening on 5053
13:57:53.818970 IP 192.168.0.10.5258 > 8.8.8.8.53: 50849+ [1au] A? www.example.com. (44) # dnsmasq forwards to 8.8.8.8 on port 53 (Google DNS)
13:57:53.862170 IP 8.8.8.8.53 > 192.168.0.10.5258: 50849$ 1/0/1 A 93.184.216.34 (60) # dnsmasq receives answer from 8.8.8.8
13:57:53.862559 IP 127.0.0.1.53 > 127.0.0.1.47207: 7494 1/0/1 A 93.184.216.34 (60) # dnsmasq forwards answer to dig running locally
13:57:53.980238 IP 127.0.0.1.5053 > 127.0.0.1.5258: UDP, length 49 # dnsmasq receives answer from local DNS Server
So it appears dnsmasq tees DNS queries to both:
127.0.0.1 on port 5053, and almost immediately afterwards also to
8.8.8.8 on port 53.
Why? What's wrong with my dnsmasq configuration? I expected traffic only on port 5053.
And where is that 8.8.8.8 coming from? Yes, I know that's Google DNS, but where is dnsmasq or Linux getting that IP from, and which config file can I edit to change it?
>grep -r 8\.8\.8\.8 /etc/*.conf
returns nothing.
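Since dnsmasq can pick up extra upstream servers from resolvconf-managed files that live outside /etc/*.conf, a broader search is more telling; a sketch:
# Search all of /etc plus the runtime resolvconf directories
grep -rn '8\.8\.8\.8' /etc /run /var/run 2>/dev/null
# Check whether dnsmasq was started with an explicit resolv-file (-r)
ps aux | grep [d]nsmasq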
