Change the domains of an SSL certificate on the fly - like Charles Proxy - node.js

When generating an SSL certificate, paid or self-signed, you assign it a set of specific domains (wildcard or not), known as common names. If you use this certificate to open domains that are not on the list, Chrome warns with NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
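For context, generating a self-signed certificate pinned to specific names looks something like this (a sketch, assuming OpenSSL 1.1.1+ for -addext; example.com is a placeholder):
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=example.com" -addext "subjectAltName=DNS:example.com,DNS:*.example.com"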
I use the same certificate in Charles Proxy, which opens all URLs fine in Chrome, without warnings. Viewing DevTools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles rewrites the domains on the certificate automatically for any website you visit, which passes all Chrome validations/warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...)
I'm not worried about how to bypass Chrome, but about how a .cer can be modified instantly for each HTTP request and served to the browser.

Solved!
There are several options, including mitmproxy, sslsniff, and my favorite, SSLsplit.
It is prepackaged for all major distros; install it via apt-get install sslsplit or yum install sslsplit and that's all.
You simply need to bundle your certificate and key into one PEM file (a sketch follows below) and run one command:
Forward the port through NAT via iptables and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
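The bundle itself is just the private key and certificate concatenated into one PEM file; assuming files named server.key and server.crt:
cat server.key server.crt > certbundle.pem
Note that sslsplit uses this as the CA it forges leaf certificates from, so it must be a CA certificate the browser trusts.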
It reissues new certificates on the fly with modified subject names and can also log all traffic if you wish. It bypasses all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
--- Edit 1/6/2018
Also found a Node solution, which goes beyond what I need: https://www.npmjs.com/package/node-forge

Related

How to connect an externally hosted website to AWS CloudFront CDN

I am hosting my site on Vultr and I want to connect it to CloudFront CDN. How do I do this? I have tried, but it shows an origin connectivity error.
You see, this is a very specific situation: Vultr does not have the same integration with CloudFront as it does with Cloudflare. For this I had to do the following:
First:
Allow the CloudFront IPs through the server's firewall. CloudFront has around 135 IP ranges and Vultr's firewall panel can only hold 50 entries, so this responsibility has to move to the server itself.
Create a script that only adds Cloudfront IPs to UFW.
I got this repo: https://github.com/Paul-Reed/cloudflare-ufw
So I have this in cron (it runs weekly, at midnight on Monday):
0 0 * * 1 /usr/local/bin/cloudflare-ufw > /dev/null 2>&1
And for my case the script looked like this:
#!/bin/sh
curl -s https://www.cloudflare.com/ips-v4 -o /tmp/cf_ips
curl -s https://www.cloudflare.com/ips-v6 >> /tmp/cf_ips
# Allow all traffic from Cloudflare IPs (no port restrictions)
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any comment 'Cloudflare IP'; done
ufw reload > /dev/null
OTHER EXAMPLES OF RULES
Restrict to port 80
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80 comment 'Cloudflare IP'; done
Restrict to ports 22 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 22,443 comment 'Cloudflare IP'; done
Restrict to ports 80 and 443
for cfip in `cat /tmp/cf_ips`; do ufw allow proto tcp from $cfip to any port 80,443 comment 'Cloudflare IP'; done
ufw reload > /dev/null
Second:
I configured CloudFront; my case was specific to WordPress traffic. I followed these steps:
I created an AWS Certificate Manager public certificate
As per the AWS documentation: https://docs.aws.amazon.com/pt_br/acm/latest/userguide/gs-acm-request-public.html#request-public-console
I created the distribution on CloudFront: https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating.html
The distribution will be responsible for the security and performance of the application.
I created a certificate for the origin server: https://www.gocache.com.br/seguranca/como-gerar-certificado-ssl-via-terminal-certbot-com-wildcard/
It is necessary to install a valid SSL certificate inside your server to make a secure connection with CloudFront. I recommend Let’s Encrypt as a free solution for generating certificates.
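A minimal certbot sketch, assuming Nginx on the origin and placeholder domains:
sudo certbot --nginx -d example.com -d www.example.com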
I registered the record in the DNS table: https://docs.aws.amazon.com/pt_br/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html
For the distribution to be accessible by the website address, it is necessary to register the address in the DNS table.
The record is a CNAME and its value is the distribution's domain name. You can find this information in the Details section of the CloudFront distribution's General tab.
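For illustration, in zone-file terms the record looks something like this (domain and distribution name are placeholders):
www.example.com.  300  IN  CNAME  d111111abcdef8.cloudfront.net.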

"Timeout during connect (likely firewall problem)" while renewing Certbot

I am facing the following error when I try to renew my SSL certificate using
certbot renew
Challenge failed for domain ***********.com
Some challenges have failed.
The following errors were reported by the server:
Domain: arjunbroker.com
Type: connection
Detail: Fetching
http://arjunbroker.com/.well-known/acme-challenge/F9nlyrRQBpJGOpPLHGPCj1vzdJOd_rBISU7q2aX7t_o:
Timeout during connect (likely firewall problem)
I have checked UFW and firewalld, and both ports 80 and 443 are open.
I finally realised that prior to installing SSL on this server, I used to forward port 80 to port 8080 using:
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
So I simply forwarded port 80 back to port 80.
Lesson learnt: for Certbot's HTTP challenge to work, port 80 must not be redirected away from the web server.
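A sketch of undoing the redirect, assuming the rule above is the only matching one (-D deletes the rule with the same spec):
sudo /sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080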
I finally realized that I had HTTP/HTTPS open ONLY to my test client machines. I opened them wide for the certbot run, then closed them again. I'll try to determine which IPs need to be open for the Let's Encrypt probes so I can automate the certbot renewals.
For me the issue was that Let's Encrypt uses IPv6, if available, for the HTTP challenge, and my site worked fine over IPv4 but not over IPv6 (I had it set up wrong). You can use an online IPv6 test site to check your setup.
I solved this by disabling 'Permanent SEO-safe 301 redirect from HTTP to HTTPS' (in Hosting Settings for Plesk / CentOS Linux 7.9).
Let's Encrypt wouldn't issue or renew its SSL certificates otherwise. I spent a day reconfiguring DNS, panel.ini, firewall, etc., and eventually pinpointed this as the specific cause.
The issue surfaced about 10 months ago and we only realised what was happening recently.
I fixed that in AWS EC2 by updating the Security Group to allow inbound HTTP/HTTPS traffic (a sketch follows below).
More about EC2 Group Security: https://docs.aws.amazon.com/pt_br/AWSEC2/latest/UserGuide/ec2-security-groups.html
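A minimal CLI sketch of the same change, assuming a configured aws CLI; the group ID is a placeholder:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0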

How to make WildFly 10.1.0 work on ports 80 and 443 (SSL) with the h2 (HTTP/2) protocol on Ubuntu 16.04

I'm trying to make WildFly work on Ubuntu in production.
I was able to make it work with its standard 8080 and 8443 ports, and managed to redirect port 80 to 8080 and 443 to 8443 using iptables on Ubuntu.
But with this redirection in place, the page opens over HTTPS but the h2 protocol (HTTP/2) and gzip do not work.
If I go directly to the standard WildFly port (www.example.com:8443), gzip and h2 work perfectly.
Here are the iptables redirect commands:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
I've tried using nginx to do the redirect and the same problem happens.
I also tried configuring WildFly to use ports 80 and 443 directly, but Ubuntu does not allow it (ports below 1024 are privileged and require root or special capabilities; see the sketch below).
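Not part of the original post, but one commonly cited workaround is granting the Java binary the capability to bind privileged ports (the JVM path is an assumption; note that setcap'd binaries ignore LD_LIBRARY_PATH, so the JVM's libjli may also need an ld.so.conf entry):
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java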
I have the following firewall status:
(ufw status verbose output from the server)
Is there a way to run WildFly on ports 80 and 443 directly, or to make the redirect work with h2 and gzip?
System:
Ubuntu : 16.04.1
Wildfly : 10.1.0.Final
Please help me solve this problem.
Thank you very much.
I just found the solution.
The problem was my Windows 10 antivirus (more specifically, Bitdefender 2017).
All my tests were on Windows 10; when I switched to Linux (I have dual boot), the site finally got HTTP/2.
That's when I saw that the issuer name on the certificate being served was: Bitdefender Personal CA.Net-Defender.
At this point I realized that my certificate created by Let's Encrypt was being replaced by a Bitdefender certificate.
SOLUTION: In Bitdefender, open the module settings, go to the Internet module, and disable the option to verify SSL. Restart your browser and you're done.
So beware when testing a website while an antivirus is running.

Configure squid to handle relative URLs

I built and installed squid 3.5.23 as follows:
./configure --prefix=/usr/local/squid
make all
make install
Here is the default squid.conf used by this version. I made minimal modifications to the file to make my setup anonymous:
forwarded_for delete
request_header_access Via deny all
request_header_access Cache-Control deny all
After I got the (remote) proxy server running, I confirmed that I could configure my (local) browser to send traffic through it. I then took it to the next step, and had my router send all traffic originating from my local network to my proxy server:
iptables -t nat -A PREROUTING -s 192.168.11.0/24 ! -d 192.168.11.0/24 -p tcp -j DNAT --to-destination 100.200.30.40:3128
However, all of my requests came back with a 400 from squid (BAD REQUEST). On investigating further, I discovered that the request headers were using relative URLs (my browser is smart enough to always use absolute URLs if it knows it is talking to a proxy server).
I know HTTP/1.1 requests are required to have a Host header, which squid can use to determine the original destination of the packets it receives. How do I configure the proxy server to use that header? I am looking for the squid 3.5 equivalent of httpd_accel_uses_host_header on
Running squid in accelerator mode fixed this:
http_port 3128 accel
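If accel alone doesn't pick up the Host header, the vhost flag (squid 3.x's descendant of httpd_accel_uses_host_header) may be what's needed; this is an assumption worth testing:
http_port 3128 accel vhost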

Trace a particular IP and port

I have an app running on port 9100 on a remote server, serving HTTP pages. After I SSH into the server, I can curl localhost:9100 and receive a response.
However, I am unable to access the same app from the browser using http://ip:9100
I am also unable to telnet to it from my local PC. How do I debug this? Is there a way to traceroute to a particular IP and port combination, to see where it is being blocked?
Any Linux tools/commands/utilities will be appreciated.
Thanks,
Murtaza
You can use the default traceroute command for this purpose; then there is nothing to install.
traceroute -T -p 9100 <IP address/hostname>
The -T argument is required so that the TCP protocol is used instead of UDP.
In the rare case when traceroute isn't available, you can also use ncat.
nc -Czvw 5 <IP address/hostname> 9100
tcptraceroute xx.xx.xx.xx 9100
If you don't have it, you can install it:
yum -y install tcptraceroute
or
aptitude -y install tcptraceroute
You can use tcpdump on the server to check whether the client even reaches the server:
tcpdump -i any tcp port 9100
Also make sure your firewall is not blocking incoming connections.
EDIT: You can also write the dump to a file and view it with Wireshark on your client if you don't want to read it in the console.
2nd EDIT: You can check whether you can reach the port via
nc <ip> 9100 -z -v
from your local PC.
First, check the IP address that your application has bound to. It could be binding only to a local address, for example, which would mean you'd never see it from a different machine regardless of firewall state.
You could try using a port scanner like nmap to see if the port is open and visible externally... it can tell you whether the port is closed (nothing is listening there), open (you should be able to see it fine), or filtered (by a firewall, for example).
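A quick sketch of both checks (the hostname is a placeholder):
# On the server: what address is the app listening on? (127.0.0.1 = local only, 0.0.0.0 = all interfaces)
ss -tlnp | grep 9100
# From your local PC: is the port reachable?
nmap -p 9100 example.com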
It can be done by using this command: tcptraceroute -p <destination port> <destination IP>, e.g. tcptraceroute -p 9100 10.0.0.50. But don't forget to install the tcptraceroute package on your system; tcpdump and nc are installed by default. Regards.
If you use the 'openssl' tool, this is one way to extract the CA cert for a particular server:
openssl s_client -showcerts -servername server -connect server:443
The certificate will have "BEGIN CERTIFICATE" and "END CERTIFICATE" markers.
If you want to see the data in the certificate, you can do: "openssl x509 -inform PEM -in certfile -text -out certdata" where certfile is the cert you extracted above. Look in certdata.
If you want to trust the certificate, you can add it to your CA certificate store or use it stand-alone as described. Just remember that the security is no better than the way you obtained the certificate.
https://curl.se/docs/sslcerts.html
After getting the certificate, use keytool to install it.
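A sketch of the import, assuming the extracted cert was saved as server.pem and the stock Java trust store (the keystore path and password vary by install; 'changeit' is the default):
keytool -importcert -alias myserver -file server.pem -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit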
