Varnish/Nginx cached SSL Certificate mystery - linux

I have Varnish load balancing three front-end Rails servers, with Nginx acting as a reverse proxy for FastCGI workers. Yesterday our certificate expired, so I got a new certificate from GoDaddy and installed it. When accessing static resources directly, I see the updated certificate, but when accessing them through a "virtual subdomain" I see the old certificate. My nginx config only references my new chained certificate, so I'm wondering how the old certificate is still being served. I've even removed it from the directory.
example:
https://www212.doostang.com/javascripts/base_packaged.js?1331831461 (no certificate problem with SSL)
https://asset5.doostang.com/javascripts/base_packaged.js?1331831461 (the old certificate is being used!) (maps to www212.doostang.com)
I've reloaded and even stopped-and-restarted nginx, tested nginx to make sure that it's reading from the right config, and restarted varnish with a new cache file.
When I curl the file at asset5.doostang.com I get a certificate error:
curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
When I add the -k option, I get the file requested, and I can see it in my nginx access log. I don't get an nginx error when I don't provide the -k; nginx is silent about the certificate error.
10.99.110.27 - - [20/Apr/2012:18:02:52 -0700] "GET /javascripts/base_packaged.js?1331831461 HTTP/1.0" 200 5740 "-"
"curl/7.21.3 (x86_64-pc-linux-gnu) libcurl/7.21.3 OpenSSL/0.9.8o
zlib/1.2.3.4 libidn/1.18"
I've put what I think is the relevant part of the nginx config, below:
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    server_name www.doostang.com *.doostang.com;
    passenger_enabled on;
    rails_env production;
    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    ssl_protocols SSLv3;
    # doc root
    root /.../public/files;
    if ($host = 'doostang.com') {
        rewrite ^/(.*)$ https://www.doostang.com/$1 permanent;
    }
}
# Catchall redirect
server {
    # port to listen on. Can also be set to an IP:PORT
    listen 443;
    ssl on;
    ssl_certificate /.../doostang_combined.crt;
    ssl_certificate_key /.../doostang.com.key;
    rewrite ^(.*)$ https://www.doostang.com$1;
}

Ba dum ching. My non-standardized load balancer actually had nginx running on it for SSL termination. I failed to notice this, but I think I did everything else correctly. The point being: when you take over operations after an acquisition, standardize and document! There are some really odd engineers out there :)
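For anyone hitting a similar mismatch, a quick way to check which certificate each endpoint is actually presenting (and therefore which process is terminating TLS) is to ask it directly with openssl; the hostnames below are just the ones from the question:
# Print the subject, issuer, and validity dates of the certificate served for each hostname
openssl s_client -connect www212.doostang.com:443 -servername www212.doostang.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
openssl s_client -connect asset5.doostang.com:443 -servername asset5.doostang.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
If the two commands report different expiry dates, the second name is being terminated by a different listener than the nginx instance you reconfigured.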

Related

How to prevent the origin server IP address behind a CDN (like Cloudflare) from being exposed?

I use Cloudflare/Google Cloud Platform as a CDN. How do I hide my server IP from detection by scanners?
There are some methods which can help protect your server from detection, such as an IP whitelist, hostname/port changes, an OpenSSL/SNI patch, website/backend faking, header/client-certificate authorization, etc.
In short: think like a scanner, and you will be fine.
I have also published this answer on my blog, if you are interested.
Before going into detail: if you need to protect your server completely, the things I introduce here are far from enough. Security follows Liebig's barrel: the weakest stave sets the level, and any minor inattention can have unpredictable consequences. In short, you are in charge of your own security. The only thing covered here is how to prevent IP leaks from the web server; if there is a neglected spot elsewhere, such as a design error in the application that leaks the IP, this won't help.
In general, the way to find your origin node is to scan every possible IP with requests that look like a regular user's, and filter the results for your content. In most situations you can prevent this with an IP whitelist, but it depends: you may not know which IPs the CDN nodes use to request your origin server, or they may change, and applying this policy blindly can cause service interruptions.
Outline
IP Whitelist
Change hostname/listen port
Prevent certificate leaks from aimless batch scans
Domain info for the origin server will not end up in scan databases built this way
If possible, change the port the webserver listens on
Give false information by feigning other real, existing websites/CDN nodes
Prevent unauthorized access by feigning other self-handcrafted websites/returning nothing
Needs to cooperate with other controls that the CDN provides
Client certificate authentication is also an uncommon option
Conclusion
If you are confused, you can check the flow chart in the conclusion first, then continue reading.
Strategies
Assuming Debian/Ubuntu as OS, and Nginx as web server.
IP Whitelist
In fact, the most direct and efficient method to prevent the origin server IP from leaking is an IP whitelist. If you can do so, you should. However, remember the following:
If the CDN provider does not publish the IP list it uses, do not use this strategy, or service interruptions may occur;
If HTTPS is the scheme used when requesting the origin server, you should use iptables instead of Nginx's built-in access module, or a searcher can still find your server by reading the certificate's SNI info;
Applying only an IP whitelist while using Cloudflare as the CDN may give a searcher a chance to bypass Cloudflare's protection and find your origin IP address:
If it's worth the effort, a searcher can upload a script to Cloudflare Workers and scan your IP from Cloudflare's own IPs, which bypasses your whitelist;
Enabling Custom Hostnames (an Enterprise feature) or Authenticated Origin Pulls/client certificate authentication correctly avoids this issue.
If you are using iptables, do remember to install iptables-persistent, or you may lose your filter rules after a reboot:
apt-get install iptables-persistent
Example of dropping requests from IPs that are not whitelisted:
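A minimal iptables sketch, assuming your CDN publishes the ranges its nodes fetch from; 203.0.113.0/24 and 198.51.100.0/24 below are placeholders for that published list:
# Allow web traffic only from the CDN's published ranges, drop the rest
iptables -A INPUT -p tcp -m multiport --dports 80,443 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443 -s 198.51.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP
# Persist the rules (command provided by the iptables-persistent package)
netfilter-persistent save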
Change hostname/listen port
Generally, aimed scanners scan all IPs on the standard ports (http/80, https/443) using your website's exposed domain/hostname. So if you can change these, you will usually be okay.
You can customize the origin hostname/domain that CDN nodes use for their requests, to prevent a searcher from detecting your origin server IP via the hostname
A few CDN providers support a custom port for requests to the origin server
However, if you somehow let the searcher learn your hostname, or the IP ranges you use, your origin server is still at risk of being exposed, so be careful. A rough sketch follows.
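As a sketch of what that looks like on the origin (the hostname origin-pull.example.net and port 8443 here are made-up placeholders), the site is served only under a secret pull hostname and, if the CDN supports it, a non-standard port:
server {
    # Answer only on a non-standard port, under a hostname used solely for CDN pulls
    listen 8443 ssl;
    server_name origin-pull.example.net;
    ssl_certificate     /etc/nginx/certs/origin-pull.crt;
    ssl_certificate_key /etc/nginx/certs/origin-pull.key;
    location / {
        proxy_pass http://127.0.0.1:8080; # your real backend
    }
}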
Prevent certificate SNI info leaks
The intention of rejecting the SSL handshake is to prevent the certificate's SNI info (which can loosely be considered the domain info) from leaking to aimless batch scans. A searcher can build a website-to-IP relation database from such scans for quick lookups in the future.
Domain information is included in the certificate, which reveals what websites are (or at least may be) running there:
If your Nginx version is 1.19.4 or higher, you can simply use the ssl_reject_handshake feature to prevent the SNI info leak. Otherwise, you will need to apply the strict-sni patch.
N.B. This measure only matters if you use HTTPS as the scheme for CDN nodes requesting the origin server. If you only intend to use HTTP for those requests, you can simply return 444; in the default server block (sketched below) and skip or just skim this part.
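For the HTTP-only case, that default block might look like this (a sketch):
server {
    listen 80 default_server;
    server_name _;
    return 444; # close the connection without sending any response
}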
Configuration of ssl_reject_handshake (Nginx ≥ 1.19.4)
Two parts are involved in the configuration of ssl_reject_handshake: the default block and the normal block:
server { # Default block returns null for SSL requests with the wrong hostname
    listen 443 ssl;
    ssl_reject_handshake on;
}
server { # With the correct hostname, the server processes requests normally
    listen 443 ssl;
    server_name test.com;
    ssl_certificate test.com.crt;
    ssl_certificate_key test.com.key;
}
If you are using Nginx 1.19.3 or below, you can use the strict-sni patch instead. The patch is developed by Hakase and returns a truly empty response for invalid requests on Nginx versions before 1.19.4.
Steps for installing the strict-sni patch (Nginx ≤ 1.19.3)
First, install necessary packages:
apt-get install git curl gcc libpcre3-dev software-properties-common \
build-essential libssl-dev zlib1g-dev libxslt1-dev libgd-dev libperl-dev
Then, download the OpenSSL version you need from its releases page.
Clone the openssl-patch repository:
git clone https://git.hakase.app/Hakase/openssl-patch.git
Based on the OpenSSL version you chose, change into the OpenSSL source directory and apply the matching patch:
cd openssl
patch -p1 < ../openssl-patch/openssl-equal-1.1.1d_ciphers.patch
Note from the developer: OpenSSL 3.x has many API changes, and this patch (the ChaCha20 and Equal Preference patch) is no longer useful there. Using version 1.1.x is recommended whenever possible.
Download the Nginx source package with the version you need.
Decompress the Nginx package, change into the Nginx directory, and patch Nginx:
cd nginx/
curl https://raw.githubusercontent.com/hakasenyang/openssl-patch/master/nginx_strict-sni_1.15.10.patch | patch -p1
Specify the OpenSSL directory in the configure arguments:
./configure --with-http_ssl_module --with-openssl=/root/openssl
N.B. In practice these arguments alone are far from enough to make a website work as expected; add whatever else you need. For example, if you want your website to speak HTTP/2, the argument --with-http_v2_module must be added, or that module won't be built.
If you intend to feign other real, existing websites to aimless batch scanners, i.e. to give the scanner false information instead of nothing, you can also add extra arguments here:
./configure --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module --with-openssl=/root/openssl
P.S. This refers to "give false information by feigning other real, existing websites/CDN nodes" in the outline, which only feeds false information to aimless scanners and is hard to make work well against an aimed scan. If you only want to show a fake website to unauthorized clients, e.g. a handcrafted fake site or a reverse proxy (and return nothing to aimless scanners), you should skip this part, or add these arguments only as a precaution.
After configuration, build and install Nginx.
make && make install
And the installation is finished.
For convenience, I also prefer to do the following afterwards:
ln -s /usr/lib/nginx/modules/ /usr/share/nginx
ln -s /usr/share/nginx/sbin/nginx /usr/sbin
cat > /lib/systemd/system/nginx.service <<-EOF
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
EOF
systemctl enable nginx
Configuration of the strict-sni patch (Nginx ≤ 1.19.3)
The configuration is similar to ssl_reject_handshake. There are three elements to configure:
Control options
Fake (default) server block
Normal server blocks
http {
    # control options
    strict_sni on;
    strict_sni_header on;
    # fake server block
    server {
        server_name localhost;
        listen 80;
        listen 443 ssl default_server; # "default_server" is necessary
        ssl_certificate /root/cert.crt;     # Can be any certificate here
        ssl_certificate_key /root/cert.key; # Can be any certificate here
        location / {
            return 444;
        }
    }
    # normal server blocks
    server {
        server_name normal_domain.tld;
        listen 80;
        listen 443 ssl;
        ssl_certificate /root/cert.crt;          # Your real certificate here
        ssl_certificate_key /root/cert/cert.key; # Your real certificate here
        location / {
            echo "Hello World!";
        }
    }
}
Now an aimless batch scanner cannot tell what website you are running on this server, except in the situation where they already know the hostname and scan your server with it, which is the aimed-scanner case.
P.S. return 444; means returning literally nothing, and it applies to plain HTTP (not HTTPS) requests. Without the strict-sni patch, certificate information is still returned while the client tries to establish the TLS connection.
N.B. Once strict_sni on; is set, CDN nodes need to send SNI with their requests or they will fail. See proxy_ssl_name; a sketch follows.
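If the node pulling from your origin is itself an nginx proxy you control, sending SNI upstream looks roughly like this (origin-pull.example.net is a placeholder for your origin hostname):
location / {
    proxy_pass https://origin-pull.example.net;
    proxy_ssl_server_name on;               # include SNI in the upstream TLS handshake
    proxy_ssl_name origin-pull.example.net; # name to send (defaults to the proxy_pass host)
}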
Results
You can see that the certificate information is hidden once the option is turned on.
Before:
curl -v -k https://35.186.1.1
* Rebuilt URL to: https://35.186.1.1/
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to 35.186.1.1 (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
CApath: /etc/ssl/certs
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=normal_domain.tld
* start date: Nov 15 05:41:39 2019 GMT
* expire date: Nov 14 05:41:39 2020 GMT
* issuer: CN=normal_domain.tld
> GET / HTTP/1.1
> Host: 35.186.1.1
> User-Agent: curl/7.58.0
> Accept: */*
* Empty reply from server
* Connection #0 to host 35.186.1.1 left intact
curl: (52) Empty reply from server
After:
curl -v -k https://35.186.1.1
* Rebuilt URL to: https://35.186.1.1/
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to 35.186.1.1 (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, Server hello (2):
* error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
Just in case, you should know that certificate information is still returned when a request uses the target hostname, even if you have configured client checks (e.g. HTTP header checks) behind it. This is also why this measure only prevents aimless scans: it only works as long as the attacker doesn't know which website you are running on this server. To cope with aimed scans, I highly recommend changing the origin hostname if possible.
Request with the wrong hostname: (Certificate info is not returned if the hostname is wrong)
curl -v -k --resolve wrong_domain.tld:443:35.186.1.1 https://wrong_domain.tld
* Added wrong_domain.tld:443:35.186.1.1 to DNS cache
* Rebuilt URL to: https://wrong_domain.tld/
* Hostname wrong_domain.tld was found in DNS cache
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to wrong_domain.tld (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, Server hello (2):
* error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
Request with the right hostname: (Only if the hostname is correct, the certificate info will be returned)
curl -v -k --resolve normal_domain.tld:443:35.186.1.1 https://normal_domain.tld
* Added normal_domain.tld:443:35.186.1.1 to DNS cache
* Rebuilt URL to: https://normal_domain.tld/
* Hostname normal_domain.tld was found in DNS cache
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to normal_domain.tld (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=normal_domain.tld
* start date: Nov 15 05:41:39 2019 GMT
* expire date: Nov 14 05:41:39 2020 GMT
* issuer: CN=normal_domain.tld
> GET / HTTP/1.1
> Host: normal_domain.tld
> User-Agent: curl/7.58.0
> Accept: */*
< HTTP/1.1 200 OK
< Server: nginx/1.17.5
< Date: Fri, 15 Nov 2019 05:53:19 GMT
< Content-Type: text/plain
< Connection: keep-alive
* Connection #0 to host normal_domain.tld left intact
P.S. If you know the IP ranges that known aimless scanners use, you can also block them with iptables as another minor protective measure, such as the IP ranges of Censys's scanners listed below:
74.120.14.0/24
192.35.168.0/23
162.142.125.0/24
167.248.133.0/24
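For example, a small sketch that drops those ranges and saves the rules:
# Drop traffic from the Censys scanner ranges listed above
for net in 74.120.14.0/24 192.35.168.0/23 162.142.125.0/24 167.248.133.0/24; do
    iptables -A INPUT -s "$net" -j DROP
done
netfilter-persistent save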
Give false information by feigning other real, existing websites/CDN nodes
With this strategy you feed false information to the aimless scanner so that it builds its database with false data. You may want to convince the scanner that your server is a CDN node; you may also want to blend your real site in, so that an aimed scanner cannot tell whether the server it found is the origin server or a CDN node, etc.
Personally, I am not willing to use this strategy, because it requires me to consider many factors to keep the searcher confused: which IDC provider the real CDN nodes use (and host my website on the same one), the ASN (Autonomous System Number) of its IPs, the ports it opens, the HTTP headers added by the CDN, and so on. This is a real pain.
N.B. You should set HTTPS as the only scheme for CDN nodes requesting your origin server if possible. Otherwise, you need to take care of the behavior on the HTTP port as well: for example, the target server/website you want to imitate may always redirect http/80 requests to https/443, while you forget to redirect http/80 requests to your own website to https/443.
P.S. In fact, feigning the server as a Cloudflare CDN node is not a bad decision, though not a great one either. Even though Cloudflare's IP ranges are published on its official website, which might make you think Cloudflare only uses those IPs for CDN nodes, some existing servers actually run Cloudflare's CDN node application on IPs outside that list (or they run a forward proxy like the one described next). Once upon a time, I ran a scan and found servers doing exactly that without using Cloudflare's IPs. So feigning a Cloudflare CDN node is feasible: you do not actually need to have or use Cloudflare's IPs.
However, it is also not ideal, because you must use your own certificate (self-signed or not) for your real website, whereas most Cloudflare users use certificates signed by Cloudflare. If you do want to feign your server as Cloudflare's, consider carefully what you want to achieve.
Configuration
P.S. If you don't know how to install ngx_stream_module, check the steps for installing the strict-sni patch for Nginx 1.19.3 or below; the relevant configure arguments are there.
There are three main points in the configuration:
A feigning/default block for port http/80 in the http block;
A feigning/default block for port https/443 in the stream block;
A block that routes your real domain/website to the backend.
Example of the configuration:
load_module "modules/ngx_stream_module.so";
http { # Design the http block by yourself
    server {
        listen 80 default_server;
        server_name localhost;
        location / {
            proxy_pass http://104.27.184.146:80; # Feign as Cloudflare's CDN node
            proxy_set_header Host $host;
        }
    }
    server {
        listen 80;
        # If you set https as the only scheme for CDN nodes requesting your origin server,
        # you should not configure the block of your real website in the http{} block, aka here
        # (except when the listen address is "localhost" instead of the public network IP)
        server_name yourwebsite.com;
        location / {
            proxy_pass http://127.0.0.1:8080; # Your backend
            proxy_set_header Host $host;
        }
    }
}
stream {
    map $ssl_preread_server_name $name {
        yourwebsite.com website-upstream; # Your real website's route
        default cloudflare;               # Default route
    }
    upstream cloudflare {
        server 104.27.184.146:443; # Cloudflare IP
    }
    upstream website-upstream {
        server 127.0.0.1:8080; # Your real website's backend
    }
    server {
        listen 443;
        proxy_pass $name;
        proxy_ssl_name $ssl_preread_server_name;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
        ssl_preread on;
    }
}
Result
It will return content with the real, existing certificate of the other website:
curl -I -v --resolve www.cloudflare.com:443:127.0.0.1 https://www.cloudflare.com/
* Expire in 0 ms for 6 (transfer 0x55f3f0ae0f50)
* Added www.cloudflare.com:443:127.0.0.1 to DNS cache
* Hostname www.cloudflare.com was found in DNS cache
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55f3f0ae0f50)
* Connected to www.cloudflare.com (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: businessCategory=Private Organization; jurisdictionC=US; jurisdictionST=Delaware; serialNumber=4710875; C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=cloudflare.com
* start date: Oct 30 00:00:00 2018 GMT
* expire date: Nov 3 12:00:00 2020 GMT
* subjectAltName: host "www.cloudflare.com" matched cert's "www.cloudflare.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert ECC Extended Validation Server CA
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55f3f0ae0f50)
> HEAD / HTTP/2
> Host: www.cloudflare.com
> User-Agent: curl/7.64.0
> Accept: */*
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
< HTTP/2 200
HTTP/2 200
< date: Tue, 06 Oct 2020 06:26:50 GMT
* Connection #0 to host www.cloudflare.com left intact
(Successfully feigned the real website; some output omitted.)
Prevent unauthorized access by feigning other self-handcrafted websites/returning nothing
Before starting this section, you should know this tactic can only be used when CDN nodes can send something that distinguishes them from a normal user. Here is an example:
HTTP header settings for requests to the origin server in GCP
An HTTP header check is a common way to verify whether a request comes from the CDN.
P.S. GCP (Google Cloud Platform)'s HTTP load balancing service provides an option to set the request headers that GCP CDN nodes add when they fetch from origin servers[^1]. This lets the origin server tell CDN node requests apart from normal/malicious clients.
[^1]: Though GCP load balancing/CDN services only accept GCP VM instances as backends, the mechanism is the same.
P.S. In some products, engineers add a header to requests to the origin server for debugging rather than as a documented feature, so it won't appear in the product's documentation (CDN.net is one such case) and the customer service staff won't know about it either. If you want to discover whether the CDN product you use includes such a special header, writing a simple script that dumps all received headers is a good choice; this won't be detailed here.
The configuration is self-explanatory.
Configuration if you want to return nothing:
server {
    listen 80;
    server_name yourweb.site;
    if ($http_auth_tag != "here_is_the_credential") {
        return 444;
    }
    location / {
        echo "Hello World!";
    }
}
Configuration if you want to return a fake website/backend:
server {
    listen 80;
    server_name yourweb.site;
    # nginx cannot "return" directly to a named location, so route through error_page instead
    error_page 403 = @fake;
    if ($http_auth_tag != "here_is_the_credential") {
        return 403;
    }
    location / {
        echo "Hello World!";
    }
    location @fake {
        root /var/www/fakesite/; # Highly recommended: hand-craft the fake website yourself
    }
}
P.S. If you intend to configure this on the https/443 port, I recommend self-signing a certificate for an unrelated domain. Using a real certificate for the exposed domain may let the scanner find your origin server easily. Nginx allows you to use a certificate whose SNI info does not match server_name.
N.B. Some may consider using a real certificate for a subdomain of the exposed domain, most probably with free certificates from Let's Encrypt. You had better care about Certificate Transparency, which can reveal what certificates exist under a given domain. In particular, Let's Encrypt submits all certificates it issues to CT logs. (Reference: Original, Archive.ph)
If you want to see whether your certificate is logged in a CT log, you can visit crt.sh.
If you cannot tell whether the CA you want to use submits all the certificates it issues to CT logs, you had better self-sign the certificate.
The self-signing commands are below:
cat > csrconfig.txt <<-EOF
[ req ]
default_md=sha256
prompt=no
req_extensions=req_ext
distinguished_name=req_distinguished_name
[ req_distinguished_name ]
commonName=yeet.com
countryName=SG
[ req_ext ]
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=critical,serverAuth,clientAuth
subjectAltName=@alt_names
[ alt_names ]
DNS.0=yeet.com
EOF
cat > certconfig.txt <<-EOF
[ req ]
default_md=sha256
prompt=no
req_extensions=req_ext
distinguished_name=req_distinguished_name
[ req_distinguished_name ]
commonName=yeet.com
countryName=SG
[ req_ext ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=critical,serverAuth,clientAuth
subjectAltName=@alt_names
[ alt_names ]
DNS.0=yeet.com
EOF
openssl genpkey -outform PEM -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out cert.key
openssl req -new -nodes -key cert.key -config csrconfig.txt -out cert.csr
openssl req -x509 -nodes -in cert.csr -days 365 -key cert.key -config certconfig.txt -extensions req_ext -out cert.pem
Considering that some readers may use the commands above to generate a CSR, which can then be used to apply for a real certificate, I keep the countryName field (some CAs require this field to exist in the CSR). If you don't need it, you can simply delete it.
N.B. A self-signed certificate may raise the risk of MITM (man-in-the-middle) attacks, unless the underlying infrastructure is trustworthy or the CDN provider supports requests with a provided client certificate, a.k.a. Authenticated Origin Pulls in Cloudflare.
Enable "Authenticated Origin Pulls" in Cloudflare
A client certificate check is also a way to verify whether a request comes from CDN nodes. Only a few CDN providers support requesting with a client certificate, but whichever provider has this feature, the configuration on your server is similar. Here's an example:
server {
    listen 443 ssl;
    ssl_certificate cert.crt;
    ssl_certificate_key cert.key;
    server_name yourdomain.com;
    ssl_client_certificate cloudflare.crt;
    ssl_verify_client on;
    # Override the default response when the error relates to client certificate auth
    error_page 495 496 = @444;
    location @444 { return 444; }
    location / {
        echo "Hello World!";
    }
}
It will return nothing when client certificate errors occur.
P.S. Feigning another website/backend is also possible here; just imitate the approach from the "HTTP header check" part.
N.B. Whatever method you use, take care with the default return: make the default return the same as the return for invalid requests.
Make the default return nothing, the same as the return for invalid requests:
server {
    listen 80 default_server;
    listen 443 ssl default_server;
    ssl_certificate /etc/nginx/certs/cert.crt;
    ssl_certificate_key /etc/nginx/certs/cert.key;
    server_name localhost;
    location / {
        return 444;
    }
}
Result
curl http://127.0.0.1:80
curl: (52) Empty reply from server
curl -k https://127.0.0.1:443
curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
Conclusion
In short, to protect your origin server IP from detection, you can:
Set an IP whitelist if possible
Change the hostname of your website on the origin server if possible/change the listening port if possible
Set a default return for unmatched hostnames
Set an authorization method for the matched hostname
Think like the scanner itself: how would you expect the server to behave?
The whole process can be roughly drawn as a flow chart.

Nginx ssl_trusted_certificate directive problem

I have my nginx configured with client certificate authentication:
ssl_client_certificate /etc/nginx/ssl/cas.pem;
ssl_verify_client optional;
ssl_verify_depth 2;
And it is working fine, but I need to NOT send the CA list to the client during the handshake.
I've seen http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate in the documentation. So, I've changed it to:
ssl_trusted_certificate /etc/nginx/ssl/cas.pem;
ssl_verify_depth 2;
But now ssl_client_verify is always NONE, as if no certificate info was sent in the request.
[EDIT] Saw in wireshark that actually the client is not sending the certificate.
What am I doing wrong?

Difficulties configuring nginx for Https

I'm currently configuring two Raspberry Pis on my home network. One serves data from sensors via a Node server to the second Pi (a webserver, also running Node). Both of them are behind an nginx proxy. After a lot of configuring and searching I found a working solution. The webserver uses Dataplicity to make it accessible from the web. I don't use Dataplicity on the second Pi (the one serving sensor data):
server {
    listen 80;
    server_name *ip-address*;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
    }
}
server {
    listen 443 ssl;
    server_name *ip-address*;
    ssl on;
    ssl_certificate /var/www/cert.pem;
    ssl_certificate_key /var/www/key.pem;
    location / {
        add_header "Access-Control-Allow-Origin" "*";
        proxy_pass http://127.0.0.1:3000;
    }
}
This config works, however, ONLY on my computer. From other computers I get ERR_INSECURE_RESPONSE when trying to access the API with an AJAX request. The certificates are self-signed. Help is much appreciated.
EDIT:
Still no fix for this problem. I signed up for Dataplicity for my second device as well. This fixed my problem, but it now runs through a third party. I will look into this in the future. So if anyone has an answer to this, please do tell.
It seems that your certificate isn't correct; is the root certificate missing? (It can work on your computer if you have already accepted the insecure certificate in your browser.)
Check that your certificates are good; the following commands must all give the same result:
openssl x509 -noout -modulus -in mycert.crt | openssl md5
openssl rsa -noout -modulus -in mycert.key | openssl md5
openssl x509 -noout -modulus -in mycert.pem | openssl md5
If one output differs from the others, the certificate has been badly generated.
You can also check it directly on your computer with curl:
curl -v -i https://yourwebsite
If the top of the output shows an insecure warning, the certificate has been badly generated.
The post above looks about right.
The certificates and/or SSL is being rejected by your client.
This could be a few things, assuming the certificates themselves are publicly signed (they probably are not).
Date and time mismatch is possible (certificates are sensitive to the system clock).
If your certs are self-signed, you'll need to make sure your remote device is configured to accept your private root certificate.
Lastly, you might need to configure your server to use only modern encryption methods. Your client may be rejecting some older methods if it has been updated since the POODLE attacks.
This post should let you create a certificate: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04, though I think you've already made it this far.
This post https://unix.stackexchange.com/questions/90450/adding-a-self-signed-certificate-to-the-trusted-list will let you add your new private root cert to the trusted list on your client; a sketch is below.
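On a Debian/Ubuntu client, that roughly comes down to copying your certificate (assumed here to be cert.pem from your nginx config) into the system trust store; note that browsers keep their own certificate stores, so you may have to import it there separately:
# The file must be PEM-encoded and end in .crt for update-ca-certificates to pick it up
sudo cp cert.pem /usr/local/share/ca-certificates/sensor-api.crt
sudo update-ca-certificates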
And finally, this is the recommended SSL config on Ubuntu (sourced from https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-on-ubuntu-14-04).
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
Or if you get really stuck, just PM me your account details I'll put a second free device on your Dataplicity account:)
Cool project, keen to help out.
Dataplicity Wormhole redirects a service listening on port 80 on the device to a public URL in the form https://*.dataplicity.io, and puts a dataplicity certificate in front. Due to the way HTTPS works, the port being redirected via dataplicity cannot use HTTPS, as it would mean we are unable to forward the traffic via the dataplicity.io domain. The tunnel from your device to Dataplicity is encrypted anyway.
Is there a reason you prefer not to run Dataplicity on the second Pi? While you can run a webserver locally of course, this would be a lot easier and more portable across networks if you just installed a second instance of Dataplicity on your second device...

How to activate 2 ports 80 & 4000 for a single SSL enabled domain?

I am new to SSL encryption and need help! (Using certbot.)
I recently activated SSL on a website that runs on Apache and Linux on port 80. So the current website looks like:
http://example.com --> https://example.com (done)
However, I have a backend running on port 4000 and want to encrypt that as well to avoid the "Mixed Content" page error:
http://example.com:4000 --> https://example.com:4000 (not done yet)
This is exactly what I need and no workaround would help. Please guide me.
Thanks in advance! :-)
You can create a new subdomain subdomain.example.com, point it at your backend on example.com:4000 (for example via a reverse proxy), and then request a new SSL certificate from Let's Encrypt; you can specify multiple (sub)domains when requesting a certificate with certbot.
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
When you have the certificate and key, add them to your webserver config; a rough example follows.
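For example, assuming nginx (or any reverse proxy) sits in front of the backend, a hedged sketch that serves the new subdomain over standard HTTPS and forwards to port 4000 (the names and certificate paths are assumptions based on certbot's default layout):
server {
    listen 443 ssl;
    server_name api.example.com; # the new subdomain covered by the certificate
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:4000; # the existing backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
The frontend then calls https://api.example.com instead of http://example.com:4000, which avoids the mixed-content error.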
Check out the official certbot documentation here

Use a Docker registry with an SSL certificate without IP SANs

I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and have run into some problems:
I created an SSL certificate with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used them in nginx like this:
upstream docker-registry {
    server localhost:5000;
}
server {
    listen 443;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    ssl on;
    ssl_certificate /var/certs/certificate.crt;
    ssl_certificate_key /var/certs/private.key;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
        proxy_pass http://docker-registry;
    }
}
Nginx and the registry both start and run. I can go to my server in my browser, which presents a warning about my SSL certificate (so nginx runs and finds the SSL certificate), and when I enter my credentials I can see a ping message from the Docker registry (so the registry is also running).
But when I try to login via Docker I get the following error:
vagrant#ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that the server's IP address is not in my certificate, but is it possible to use the Docker client and ignore the missing IP?
EDIT:
If I use a certificate that includes the IP of the server, it works. But is there any way to use an SSL certificate without the IP?
It's a Go issue. Actually, it's a spec issue: Go refuses to follow the industry hack of falling back to the Common Name when there are no SANs, and that's why it's not working. See https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
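For reference, with OpenSSL 1.1.1 or newer you can put the server's IP address into the certificate as a SAN at generation time, which is what the Go-based Docker client insists on (XX.XX.XX.XX is the placeholder from the question; older OpenSSL versions need a config file with a subjectAltName entry, as in the csrconfig.txt example earlier on this page):
openssl req -x509 -batch -nodes -newkey rsa:2048 -days 365 \
    -keyout private.key -out certificate.crt \
    -subj "/CN=XX.XX.XX.XX" \
    -addext "subjectAltName=IP:XX.XX.XX.XX"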
