How to activate two ports (80 & 4000) for a single SSL-enabled domain? - linux

I am new to SSL encryption and need help! (Using certbot.)
I recently activated SSL on a website that runs on Apache and Linux on port 80. So the current website looks like:
http://example.com --> https://example.com (done)
However, I have a backend running on port 4000 and want to encrypt that as well to avoid the "Mixed Content" page error:
http://example.com:4000 --> https://example.com:4000 (Not done yet)
This is exactly what I need and no workaround would help. Please guide.
Thanks in advance! :-)

You can create a new subdomain, subdomain.example.com, point it at the backend running on example.com:4000, and then request a new SSL certificate from Let's Encrypt. You can specify multiple (sub)domains when requesting a certificate with certbot:
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
When you have the certificate and key, add them to your webserver config.
Check out the official certbot documentation for details.
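If you go the subdomain route, a minimal Apache virtual host sketch could look like the following (the hostname, certificate paths and backend address are assumptions, and mod_ssl plus mod_proxy/mod_proxy_http must be enabled):
<VirtualHost *:443>
    ServerName subdomain.example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/subdomain.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/subdomain.example.com/privkey.pem
    # Pass the decrypted traffic to the backend listening on port 4000
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:4000/
    ProxyPassReverse / http://127.0.0.1:4000/
</VirtualHost>
The backend keeps speaking plain HTTP locally while the browser only ever sees https://subdomain.example.com, which avoids the mixed-content error.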

Related

Certbot DNS problem - not using /etc/hosts

I am trying to install a certificate using certbot from Let's Encrypt on a Raspberry Pi. I have installed Apache2 and created a webserver at http://subdomain.mydomain.com on the Raspberry Pi. The certbot command tries to obtain a certificate by writing a challenge file to http://subdomain.mydomain.com/.well-known/acme-challenge/<etc.>
Background info: I am doing this because I need a local server to address IoT devices, and my Ajax calls are failing because I am not allowed to mix http with https. The IoT devices are incapable of hosting a webserver with SSL - they use a simple http://192.168.1.xx/<string> format.
I don't want to create a DNS entry at my registrar/ISP because I am trying to create a scalable solution and creating hundreds (perhaps thousands if we do well) of subdomain entries there is impractical. Creating my own DNS server is a possibility, but I would rather just do it all on the Pi - my bash installation script will take care of everything (once I get it to work!).
I tried first to create an entry into the local hosts (/etc/hosts) file which looks like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 SubDomain
192.168.1.111 subdomain.mydomain.com
This works for commands like ping, but not for nslookup or dig and definitely not for certbot. The certbot command finds my main server - DNS is configured with a * to go to my Public IP for all unknown subdomains:
A * xx.xx.xx.xx //My public IP address
So then I installed dnsmasq (See: When using proxy_pass, can /etc/hosts be used to resolve domain names instead of "resolver"?) and followed the configuration options shown here: How to Setup a Raspberry Pi DNS Server
However, that doesn't work either. certbot still looks at my main (external) DNS and finds my public (wildcard) IP. Here's a summary of the changes made in /etc/dnsmasq.conf:
domain-needed ## enabled
bogus-priv ## enabled
no-resolv ## enabled
server=8.8.8.8 ## added (#server=/localnet/192.168.0.1 left as is)
server=8.8.4.4 ## added
cache-size=1500 ##increased from 150
How can I force certbot to find and use my local/private IP 192.168.1.111? Any alternative solutions using scripts/redirection?
Create a wildcard certificate using Let's Encrypt DNS validation. You will then have to renew the certificate manually. Otherwise, your server must be on the public Internet with correct DNS settings.
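For reference, a wildcard certificate via the DNS-01 challenge can be requested with certbot's manual mode, roughly like this (the domain is an assumption; certbot will ask you to create a TXT record at your DNS provider):
sudo certbot certonly --manual --preferred-challenges dns -d "mydomain.com" -d "*.mydomain.com"
Older certbot versions also need --server https://acme-v02.api.letsencrypt.org/directory for wildcard issuance, since wildcards are only available over ACMEv2.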
I finally solved my problem, but I abandoned Let's Encrypt entirely. The answer was not in DNS but in approaching it from a completely different angle. This was pretty much 95% of the solution.
Important! This only works if you have control over the browser. We do, since it is for our kiosk application which runs in a browser.
Step 1: Become your own CA
Step 2: Sign your SSL certificate as a CA
Step 3: Import the signed CA (.pem file) into the browser (under Authorities)
Step 4: Point your Apache conf file to the local SSL (the process generates .key and .crt files for this as well).
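A rough openssl sketch of steps 1 and 2 (all file names, lifetimes and subject details here are assumptions):
# Step 1: create your own CA key and self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
# Step 2: create a key and CSR for the local host, then sign it with the CA
openssl genrsa -out subdomain.mydomain.com.key 2048
openssl req -new -key subdomain.mydomain.com.key -out subdomain.mydomain.com.csr
openssl x509 -req -in subdomain.mydomain.com.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out subdomain.mydomain.com.crt -days 825 -sha256
myCA.pem is what gets imported into the browser in step 3, and the .key/.crt pair is what the Apache conf points to in step 4. Note that recent browsers also expect a subjectAltName on the server certificate, which can be supplied via an -extfile when signing.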

How to setup Caddy to get HTTPS on my server

I've been having issues getting HTTPS working for my server. Let's say I have a domain www.mydomain.com.
If I run this command, it works just fine and I get HTTPS:
caddy -host www.domain.com
But I have some proxies that I use for Django, so I have a Caddyfile. This is how the Caddyfile is set up:
# Django
www.mydomain.com {
    root /root/my_projects/my_project
    proxy / 127.0.0.1:8000 {
        transparent
        except /static
    }
    log /var/log/caddy.log
}
So if I run this command
caddy -host CaddyFile
it's not giving me HTTPS. Instead, this is the output:
Activating privacy features... done.
Serving HTTP on port 2015
http://.:2015/caddyfile
So how should I configure the file or what command should I use to get HTTPS on my server with the proxy and the root folder that I set in the CaddyFile?
Thanks.
I'm guessing you are using Caddy v1.
The Caddy docs say:
-host
The default hostname or IP address to listen on. Sites defined in the Caddyfile without a hostname will assume this one. This is usually used with -port to quickly get simple sites up and running without a Caddyfile.
So with the -host option, your Caddyfile may have been ignored.
If your Caddyfile is in the same directory as the caddy binary, try removing all arguments and just run caddy. It will automatically pick up the Caddyfile.
Otherwise, try this: caddy -conf <path/to/your/Caddyfile>
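For example (the Caddyfile path and email are assumptions; -agree and -email are Caddy v1 flags for accepting the Let's Encrypt terms and registering an account):
caddy -conf /root/my_projects/my_project/Caddyfile -agree -email you@example.com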

Subdomain www connection is not secure / SSL on VPS with different Domainhost

I have the following problem:
My webpage example.eu works perfectly with an SSL connection set up by certbot with Apache on an Ubuntu VPS. If I now enter www.example.eu, Firefox says "Your connection is not secure" and "The certificate is only valid for example.eu". The domain is hosted with a different service provider than the VPS.
My first attempt was to point the A-record of the subdomain www.example.eu to the same IP (the VPS) as example.eu. Same problem.
The next attempt was to create a CNAME-record for www.example.eu that points to example.eu. Same problem.
Now I am out of ideas. What can I do?
Thanks in advance and best regards from Germany, Joachim
OK, I got it now. The problem was with Apache/certbot on the VPS:
I checked the virtual host:
sudo nano /etc/apache2/sites-available/example.eu.conf
It showed:
ServerName example.eu
ServerAlias www.example.eu
So no issue there. But certbot seemed to be configured only for example.eu, which I verified with:
ls /etc/letsencrypt/live
So I had to
sudo certbot --apache -d www.example.eu
and configure the A-record of the www subdomain to point to the IP of the VPS, and now it works :)
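For completeness, both names can also be put on a single certificate in one go; the --expand flag tells certbot to replace the existing example.eu certificate with one that covers both names:
sudo certbot --apache -d example.eu -d www.example.eu --expand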

Change the domains of SSL certificate - like Charles Proxy

When generating an SSL certificate, paid or self-signed, you assign a set of specific domains (wildcard or not), known as common names. If you use this certificate to open domains which are not on the list, Chrome gives a warning - NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
I use the same certificate on Charles Proxy, which opens all URLs fine in Chrome, without warning. Looking at dev tools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles changes the domains on the cert automatically for any website you visit, which passes all Chrome validations/warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...)
I'm not worried about how to bypass Chrome, but about how a .cer can be modified on the fly for each HTTP request and served to the browser.
Solved!
There are several options, including mitmproxy, sslsniff, and my favorite, SSLsplit.
It is available prepackaged for all distros; install it via apt-get or yum install sslsplit and that's all.
You only need to run one command: package your certificate and key into a single PEM bundle, then run the following.
Forward the port through NAT via iptables and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
It reissues new certificates on the fly with modified subject names and can log all traffic if you wish. It bypasses all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
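In case it helps, the certbundle.pem used above can be created by simply concatenating the CA key and certificate (file names are assumptions):
cat ca.key ca.crt > certbundle.pem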
--- Edit 1/6/2018
Also found a Node solution which is beyond what I need: https://www.npmjs.com/package/node-forge

Varnish/Nginx cached SSL Certificate mystery

I have Varnish load balancing three front-end Rails servers, with Nginx acting as a reverse proxy for FastCGI workers. Yesterday our certificate expired, so I got a new certificate from GoDaddy and installed it. When accessing static resources directly, I see the updated certificate, but when accessing them from a "virtual subdomain" I'm seeing the old certificate. My nginx config only references my new chained certificate, so I'm wondering how the old certificate is being served. I've even removed it from the directory.
example:
https://www212.doostang.com/javascripts/base_packaged.js?1331831461 (no certificate problem with SSL)
https://asset5.doostang.com/javascripts/base_packaged.js?1331831461 (the old certificate is being used!) (maps to www212.doostang.com)
I've reloaded and even stopped-and-restarted nginx, tested nginx to make sure that it's reading from the right config, and restarted varnish with a new cache file.
When I curl the file at asset5.doostang.com I get a certificate error:
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
When I add the -k option, I get the file requested, and I can see it in my nginx access log. I don't get an nginx error when I don't provide the -k; nginx is silent about the certificate error.
10.99.110.27 - - [20/Apr/2012:18:02:52 -0700] "GET /javascripts/base_packaged.js?1331831461 HTTP/1.0" 200 5740 "-"
"curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o
zlib/1.2.3.4 libidn/1.18"
I've put what I think is the relevant part of the nginx config, below:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    server_name www.doostang.com, *.doostang.com;
    passenger_enabled on;
    rails_env production;

    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    ssl_protocols SSLv3;

    # doc root
    root /.../public/files;

    if ($host = 'doostang.com' ) {
        rewrite ^/(.*)$ https://www.doostang.com/$1 permanent;
    }
}

# Catchall redirect
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    rewrite ^(.*)$ https://www.doostang.com$1;
}
Ba dum ching. My non-standardized load balancer actually had nginx running for SSL termination. I failed to notice this, but I think I did everything else correctly. Point being, when you take over operations upon acquisition, standardize and document! There are some really odd engineers out there :)
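As a side note, a quick way to check which certificate a host is actually serving (and therefore which box is terminating SSL) is openssl's s_client:
openssl s_client -connect asset5.doostang.com:443 -servername asset5.doostang.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
The -servername flag sends SNI, so you see the certificate selected for that exact hostname.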
