Certificate issue with curl on Linux

My CentOS 7 server, which is in an AWS private cloud (company network), is unable to connect to some sites. After some work I managed to narrow it down to the following.
(1) The following internal site is not accessible (SSL by a public CA):
curl -v https://git.example.com
which returns:
About to connect() to git.example.com port 443 (#0)
Trying 10.62.124.6...
Connected to git.example.com (10.62.124.6) port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
(2) But the following internal site works (SSL by a public CA):
curl -v https://alm.example.com
which returns:
About to connect() to alm.example.com port 443 (#0)
Trying 10.64.167.137...
Connected to alm.example.com (10.64.167.137) port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
...
...
...
Accept: */*
Any idea why number (1) is not working? These are both internal sites signed by the same public CA.
Thanks for the help.

It turned out to be the following situation in our company.
git.example.com was hosted in private Azure, and
alm.example.com was hosted in private AWS. My working server also happens to be in AWS, which is why the Azure-hosted site was having trouble on the network. As advised by the network team, I set the MTU size in the Linux kernel to 1350 and this was resolved.
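For reference, a minimal sketch of that MTU change, assuming the interface is eth0 (adjust for your system):
# Apply immediately (lost on reboot)
sudo ip link set dev eth0 mtu 1350
# Persist on CentOS 7 via the interface config file
echo 'MTU=1350' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth0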
Moreover, our company had also started intercepting SSL traffic, for which they installed an intermediate certificate on the proxy and expect all internal servers to trust this certificate. My problem stated above was due to a mix of both of these issues; the certificate part could have been sorted by trusting that certificate or by skipping SSL verification.
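If you take the trust route on CentOS 7, the usual approach is to add the proxy's certificate to the system trust store; a sketch, with proxy-ca.crt as a hypothetical file name:
# Install the corporate proxy's intermediate certificate system-wide
sudo cp proxy-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
# Or skip verification for a single request (testing only)
curl -k https://git.example.com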
Hope this helps someone.

Related

How to implement TLS for DNS (DoH)?

I wrote a tool that generates a DNS server under Docker by a simplified method: http://tobelucky.fr
My BIND server does not recognize my directive:
listen-on port 5050 tls local-tls http default {any;};
https://github.com/Maissacrement/automate_dns/blob/main/etc/bind/named.conf.options
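For what it's worth, the listen-on ... tls ... http form of the directive requires BIND 9.18 or later (which introduced DoH support), and the named tls block must be defined elsewhere in the configuration. A minimal sketch of named.conf.options, assuming key and certificate paths you would need to adjust:
tls local-tls {
key-file "/etc/bind/ssl/server.key";
cert-file "/etc/bind/ssl/server.crt";
};
options {
listen-on port 5050 tls local-tls http default { any; };
};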

Azure Databricks connectivity from Azure VM

We have the following setup:
Azure Linux VM in subnet1 inside VNET01
Azure Databricks hosted using a custom connected VNet inside VNET01.
While making a connection from the Azure VM to ADB, we face the following issue:
export DATABRICKS_TOKEN=sdgsdgsyd2382732
curl -X GET --header "Authorization: Bearer $DATABRICKS_TOKEN" https://adb-xyz.azuredatabricks.net/api/2.0/clusters/list -vvv
About to connect() to adb-xyz.azuredatabricks.net port 443 (#0)
Trying 40.74.30.80...
Connected to adb-xyz.azuredatabricks.net (30.85.20.80) port 443 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
NSS error -5938 (PR_END_OF_FILE_ERROR)
Encountered end of
It is able to connect to the ADB host but fails afterward. We suspect it's related to certificates. If so, how can this be resolved? If not, can someone please explain how to handle this issue and make a connection?
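NSS error -5938 (PR_END_OF_FILE_ERROR) generally means the remote side closed the connection during the TLS handshake (often a firewall, NSG rule, or intercepting proxy) rather than a certificate validation failure. A quick diagnostic sketch to see how far the handshake gets:
openssl s_client -connect adb-xyz.azuredatabricks.net:443 -servername adb-xyz.azuredatabricks.net
If this prints no certificate chain at all, the connection is being cut off before TLS completes, which points at network policy rather than the CA bundle.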

"Timeout during connect (likely firewall problem)" while renewing Certbot

I am facing the following error when I try to renew my SSL certificate using
certbot renew
Challenge failed for domain ***********.com
Some challenges have failed.
The following errors were reported by the server:
Domain: arjunbroker.com
Type: connection
Detail: Fetching
http://arjunbroker.com/.well-known/acme-challenge/F9nlyrRQBpJGOpPLHGPCj1vzdJOd_rBISU7q2aX7t_o:
Timeout during connect (likely firewall problem)
I have checked UFW and firewalld, and both ports 80 and 443 are open.
I finally realised that prior to installing SSL on this server, I used to forward port 80 to port 8080 using
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
So I simply removed the redirect so that port 80 traffic stays on port 80.
Lesson learnt: for Certbot to work, the HTTP-01 challenge on port 80 must actually reach the server Certbot expects, so any port forwarding has to be accounted for.
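For anyone undoing the same NAT redirect: iptables' -D deletes a rule given the same spec used to add it, so a sketch of the removal would be:
sudo /sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080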
I finally realized that I ONLY had HTTP/HTTPS open to my test client machines. I opened them wide for the certbot run, then closed them again. I'll try to determine what IPs need to be open for Let's Encrypt's probes so I can automate the certbot renewals.
For me the issue was that Let's Encrypt uses IPv6 if possible to do the HTTP challenge, and my site worked fine over IPv4 but not over IPv6 (as I had it set up wrong). You can use this site to test your IPv6 setup.
I solved this by disabling 'Permanent SEO-safe 301 redirect from HTTP to HTTPS' (in Hosting Settings for Plesk / CentOS Linux 7.9).
LetsEncrypt wouldn't assign or renew its SSL certificates otherwise. Spent a day re-configuring, DNS, panel.ini, firewall, etc., and eventually pinpointed this as the specific cause.
The issue surfaced about 10 months ago and we only realised what was happening recently.
I fixed that in AWS EC2 by updating the Security Group to allow inbound HTTP/HTTPS (screenshot omitted):
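Roughly the same thing with the AWS CLI (sg-0123456789abcdef0 is a placeholder group ID):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0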
More about EC2 Security Groups: https://docs.aws.amazon.com/pt_br/AWSEC2/latest/UserGuide/ec2-security-groups.html

How to prevent the origin server IP address behind a CDN (like Cloudflare) from being exposed?

I use Cloudflare/Google Cloud Platform as a CDN. How do I hide my server IP from detection via scanners?
There are several methods that can help protect your server from detection, such as an IP whitelist, hostname/port changes, an OpenSSL/SNI patch, website/backend faking, header or client-certificate authorization, etc.
In short: think like a scanner, and you will be fine.
I have also published this answer on my blog; check it out if you are interested.
Before going into detail: if you need to protect your server completely, the measures I introduce here are far from enough. Security follows Liebig's law of the minimum (the barrel holds only as much as its shortest stave); any minor inattention can have unpredictable consequences. In short, you are in charge of your own security. What I cover here is only how to prevent IP leaks from the web server; if there is a neglected spot elsewhere, such as a design error in the application that leaks the IP, this won't help.
In general, the way to find your origin node is to scan every possible IP, making requests like a regular user, and filter the results for the target. In most situations you can prevent this by setting an IP whitelist, but it depends: you may not know which IPs the CDN nodes use to request your origin server, or they may change, in which case this policy is likely to cause service interruptions.
Outline
IP Whitelist
Change hostname/listen port
Prevent certificate leaks from aimless batch scans
Domain info on the origin server stays out of scan databases this way
If possible, change the port the web server listens on
Give false information by feigning to be other real, existing websites/CDN nodes
Prevent unauthorized access by feigning to be a self-handcrafted website/returning null
Needs to cooperate with other mechanisms the CDN provides
Client certificate authentication is also an uncommon option
Conclusion
If you are confused, you can check the flow chart in the conclusion first, then continue reading.
Strategies
Assuming Debian/Ubuntu as the OS and Nginx as the web server.
IP Whitelist
In fact, the most direct and efficient method to prevent origin server IP leaks is an IP whitelist. If you can set one, you should. However, keep the following in mind:
If the CDN provider does not publish the IP list it uses, do not use this strategy, or service interruptions may occur;
If HTTPS is the scheme used for requests to the origin server, use iptables rather than Nginx's built-in access module, or a searcher can still find your server by probing the certificate's SNI;
If Cloudflare is your CDN, applying only an IP whitelist may still give a searcher a chance to bypass Cloudflare's protection and find your origin IP address.
If it's worth the effort, a searcher can upload a script to a Cloudflare Worker and scan your IP from Cloudflare's own IPs, which bypasses your whitelist;
Enabling Custom Hostnames (an Enterprise feature) or Authenticated Origin Pulls/client certificate authentication "correctly" can avoid this issue.
If you are using iptables, remember to install iptables-persistent, or you may lose your filter rules on reboot:
apt-get install iptables-persistent
Example of dropping requests from non-whitelisted IPs:
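A minimal sketch, with 203.0.113.0/24 standing in for the CDN's published ranges (replace it with the real list):
# Accept 80/443 only from the CDN range, drop everything else
iptables -A INPUT -p tcp -m multiport --dports 80,443 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP
netfilter-persistent save  # persist the rules (provided by iptables-persistent)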
Change hostname/listen port
Generally, aimless scanners scan all IPs on the standard ports (http/80, https/443) using your website's exposed domain/hostname. So if you can change either, you will usually be okay; see the sketch after this list.
You can customize the origin hostname/domain that CDN nodes request, to prevent a searcher from finding your origin server IP via the hostname
Only a few CDN providers support a custom port for requests to the origin server
However, if you somehow let the searcher learn your hostname or the IP ranges you use, your origin server risks exposure. So take care.
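As an illustration only (the port and hostname below are hypothetical), the origin could serve the site on a non-obvious hostname and a non-standard port, so an aimless scan of 80/443 against the public hostname finds nothing:
server {
listen 8443 ssl; # non-standard port, if the CDN supports one
server_name origin-3f9c.example.net; # non-obvious origin hostname
ssl_certificate origin.crt;
ssl_certificate_key origin.key;
location / {
return 200 "ok";
}
}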
Prevent certificate SNI leaks
The intention of rejecting the SSL handshake is to prevent the certificate's SNI info (which can simply be read as domain info) from leaking to aimless batch scans. From such leaks, a searcher can build a website-to-IP relation database for quick lookups in the future.
Domain information is included in the certificate, which reveals what websites are (nominally) running on the server:
If your Nginx version is 1.19.4 or newer, you can simply use the ssl_reject_handshake feature to prevent SNI info leaks. Otherwise, you will need to apply the strict-sni patch.
N.B. This measure only matters if you use HTTPS as the scheme for CDN nodes requesting the origin server. If you only intend to use HTTP for those requests, you can simply return 444; in the default server block, and there is no need to continue reading (or just skim this part).
Configuration of ssl_reject_handshake (Nginx ≥ 1.19.4)
Two parts are involved in configuring ssl_reject_handshake: the default block and a normal block:
server { # Default block returns null for SSL requests with wrong hostname
listen 443 ssl;
ssl_reject_handshake on;
}
server { # With the correct hostname, server will process requests
listen 443 ssl;
server_name test.com;
ssl_certificate test.com.crt;
ssl_certificate_key test.com.key;
}
If you are using Nginx 1.19.3 or below, you can use the strict-sni patch instead. This patch, developed by Hakase, returns a truly empty response for invalid requests on Nginx versions before 1.19.4.
Steps for installing the strict-sni patch (Nginx ≤ 1.19.3)
First, install necessary packages:
apt-get install git curl gcc libpcre3-dev software-properties-common \
build-essential libssl-dev zlib1g-dev libxslt1-dev libgd-dev libperl-dev
Then, download the OpenSSL version you need from the release page.
Clone the openssl-patch repository:
git clone https://git.hakase.app/Hakase/openssl-patch.git
Based on the OpenSSL version you chose, change into the OpenSSL source directory and apply the matching patch:
cd openssl
patch -p1 < ../openssl-patch/openssl-equal-1.1.1d_ciphers.patch
Note from the developer: OpenSSL 3.x has many API changes, and this patch (the ChaCha20 and Equal Preference patch) is no longer useful. It is recommended to use version 1.1.x whenever possible.
Download the Nginx source package with the version you need.
Decompress the Nginx package, change into its directory, and apply the patch:
cd nginx/
curl https://raw.githubusercontent.com/hakasenyang/openssl-patch/master/nginx_strict-sni_1.15.10.patch | patch -p1
Specify the OpenSSL directory in the configure arguments:
./configure --with-http_ssl_module --with-openssl=/root/openssl
N.B. In practice, these arguments alone are far from enough to make a website work as expected; add whatever else you need. For example, if you want your website served over HTTP/2, the argument --with-http_v2_module must be added, or that module won't be built.
If you intend to feign being another real, existing website, giving aimless batch scanners false information instead of null, also add the extra stream arguments here:
./configure --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module --with-openssl=/root/openssl
P.S. This refers to "Give false information by feigning to be other real, existing websites/CDN nodes" in the outline, which only gives false information to aimless scanners and is unlikely to work well against an aimed scan. If you only want to show a fake website to unauthorized clients, such as a handcrafted fake site or a reverse proxy (while returning null to aimless scanners), you should skip this part, or add these arguments only as a spare.
After configuration, build and install Nginx.
make && make install
Installation is now finished.
For convenience, I also like to do the following afterwards:
ln -s /usr/lib/nginx/modules/ /usr/share/nginx
ln -s /usr/share/nginx/sbin/nginx /usr/sbin
cat > /lib/systemd/system/nginx.service <<-'EOF' # quoted delimiter keeps $MAINPID literal
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
EOF
systemctl enable nginx
Configuration of the strict-sni patch (Nginx ≤ 1.19.3)
The configuration is similar to ssl_reject_handshake. There are three elements to configure:
Control options
Fake(default) server block
Normal server blocks
http {
# control options
strict_sni on;
strict_sni_header on;
# fake server block
server {
server_name localhost;
listen 80;
listen 443 ssl default_server; # "default_server" is necessary
ssl_certificate /root/cert.crt; # Can be any certificate here
ssl_certificate_key /root/cert.key; # Can be any certificate here
location / {
return 444;
}
}
# normal server blocks
server {
server_name normal_domain.tld;
listen 80;
listen 443 ssl;
ssl_certificate /root/cert.crt; # Your real certificate here
ssl_certificate_key /root/cert.key; # Your real certificate here
location / {
echo "Hello World!";
}
}
}
Now an aimless batch scanner cannot tell what website you are running on this server, except when it already knows the hostname and scans your server with it, which makes it an aimed scanner.
P.S. return 444; means returning literally nothing for HTTP (not HTTPS) requests. Without the strict-sni patch, certificate information would still be returned while a client tries to establish the TLS connection.
N.B. Once strict_sni on; is set, CDN nodes must send SNI in their requests or they will fail. See: proxy_ssl_name.
Results
You can see that certificate information is hidden once the option is turned on.
Before:
curl -v -k https://35.186.1.1
* Rebuilt URL to: https://35.186.1.1/
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to 35.186.1.1 (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
CApath: /etc/ssl/certs
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=normal_domain.tld
* start date: Nov 15 05:41:39 2019 GMT
* expire date: Nov 14 05:41:39 2020 GMT
* issuer: CN=normal_domain.tld
> GET / HTTP/1.1
> Host: 35.186.1.1
> User-Agent: curl/7.58.0
> Accept: */*
* Empty reply from server
* Connection #0 to host 35.186.1.1 left intact
curl: (52) Empty reply from server
After:
curl -v -k https://35.186.1.1
* Rebuilt URL to: https://35.186.1.1/
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to 35.186.1.1 (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, Server hello (2):
* error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
Just in case, you should know that certificate information will still be returned when requesting with the target hostname, even if you have configured client-check rules (HTTP header checks, etc.) afterwards. This is also why this measure only prevents aimless scans: it works only as long as the attacker doesn't know what website you are running on this server. To cope with aimed scans, I highly recommend changing the origin hostname, if possible.
Request with the wrong hostname: (Certificate info is not returned if the hostname is wrong)
curl -v -k --resolve wrong_domain.tld:443:35.186.1.1 https://wrong_domain.tld
* Added wrong_domain.tld:443:35.186.1.1 to DNS cache
* Rebuilt URL to: https://wrong_domain.tld/
* Hostname wrong_domain.tld was found in DNS cache
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to wrong_domain.tld (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, Server hello (2):
* error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name
Request with the right hostname: (Only if the hostname is correct, the certificate info will be returned)
curl -v -k --resolve normal_domain.tld:443:35.186.1.1 https://normal_domain.tld
* Added normal_domain.tld:443:35.186.1.1 to DNS cache
* Rebuilt URL to: https://normal_domain.tld/
* Hostname normal_domain.tld was found in DNS cache
* Trying 35.186.1.1...
* TCP_NODELAY set
* Connected to normal_domain.tld (35.186.1.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=normal_domain.tld
* start date: Nov 15 05:41:39 2019 GMT
* expire date: Nov 14 05:41:39 2020 GMT
* issuer: CN=normal_domain.tld
> GET / HTTP/1.1
> Host: normal_domain.tld
> User-Agent: curl/7.58.0
> Accept: */*
< HTTP/1.1 200 OK
< Server: nginx/1.17.5
< Date: Fri, 15 Nov 2019 05:53:19 GMT
< Content-Type: text/plain
< Connection: keep-alive
* Connection #0 to host normal_domain.tld left intact
P.S. If you know the IP ranges that known aimless scanners use, you can also block them with iptables as another minor protective measure (see the sketch after this list). For example, the IP ranges of Censys's scanners:
74.120.14.0/24
192.35.168.0/23
162.142.125.0/24
167.248.133.0/24
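A minimal sketch of blocking those ranges (run as root; persist with iptables-persistent as above):
for net in 74.120.14.0/24 192.35.168.0/23 162.142.125.0/24 167.248.133.0/24; do
iptables -A INPUT -s "$net" -j DROP # silently drop all traffic from the scanner range
done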
Give false information by feigning to be other real, existing websites/CDN nodes
With this strategy, you feed false information to aimless scanners so their databases fill with false entries. You may want to convince the scanner that your server is a CDN node; you may also want to blend your real site in so that an aimed scanner cannot tell whether the server it found is the origin or a CDN node; and so on.
Personally, I am not willing to use this strategy, because it requires considering many factors to make sure a searcher ends up confused: which IDC provider the real CDN nodes use (and hosting my website at the same IDC), the ASN (Autonomous System Number) of its IPs, the ports it opens, the HTTP headers the CDN adds, etc. It is a real nuisance.
N.B. You should set HTTPS as the only scheme for CDN nodes requesting your origin server if possible. Otherwise, you must also mind the behavior on the HTTP port; for example, the target server/website you are feigning may always redirect http/80 requests to https/443, while you forget to redirect http/80 requests to your own website the same way.
P.S. In fact, feigning that your server is a Cloudflare CDN node is neither a clearly bad nor a clearly good decision. Although Cloudflare's IP ranges are published on its official website, which might make you assume Cloudflare only uses those IPs for CDN nodes, there are existing servers actually running Cloudflare's CDN node application whose IPs are not on that list (or they are running a forward proxy like the one I describe next). I once ran a scan and found servers doing exactly this from non-Cloudflare IPs. So feigning being a Cloudflare CDN node is plausible: you don't actually need to have or use Cloudflare's IPs.
However, it is also not entirely convincing, because you must use your own certificate (whether self-signed or not) for your real website, while, as we know, most Cloudflare users serve certificates signed by Cloudflare. If you do want to feign being Cloudflare, consider carefully what purpose you are doing it for.
Configuration
P.S. If you don't know how to install ngx_stream_module, check the steps for installing the strict-sni patch (Nginx ≤ 1.19.3) above; the relevant configure arguments are there.
There are 3 main points in the configuration:
Feigning/default block for the port http/80 in the http block;
Feigning/default block for the port https/443 in the stream block;
The block to route your real domain/website to the backend.
Example of the configuration:
load_module "modules/ngx_stream_module.so";
http{ # Design the http block by yourself
server {
listen 80 default_server;
server_name localhost;
location / {
proxy_pass http://104.27.184.146:80; # Feign as Cloudflare's CDN node
proxy_set_header Host $host;
}
}
server {
listen 80;
server_name yourwebsite.com; # If HTTPS is the only scheme CDN nodes use to reach your origin, do not configure your real website's block here in http{} (unless it listens on localhost instead of the public IP)
location / {
proxy_pass http://127.0.0.1:8080; # Your backend
proxy_set_header Host $host;
}
}
}
stream{
map $ssl_preread_server_name $name {
yourwebsite.com website-upstream; # Your real website's route
default cloudflare; # Default route
}
upstream cloudflare {
server 104.27.184.146:443; # Cloudflare IP
}
upstream website-upstream {server 127.0.0.1:8080;} # Your real website's backend
server {
listen 443;
proxy_pass $name;
proxy_ssl_name $ssl_preread_server_name;
proxy_ssl_protocols TLSv1.2 TLSv1.3;
ssl_preread on;
}
}
Result
It will return content with the real certificate of the other website:
curl -I -v --resolve www.cloudflare.com:443:127.0.0.1 https://www.cloudflare.com/
* Expire in 0 ms for 6 (transfer 0x55f3f0ae0f50)
* Added www.cloudflare.com:443:127.0.0.1 to DNS cache
* Hostname www.cloudflare.com was found in DNS cache
* Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55f3f0ae0f50)
* Connected to www.cloudflare.com (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: businessCategory=Private Organization; jurisdictionC=US; jurisdictionST=Delaware; serialNumber=4710875; C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=cloudflare.com
* start date: Oct 30 00:00:00 2018 GMT
* expire date: Nov 3 12:00:00 2020 GMT
* subjectAltName: host "www.cloudflare.com" matched cert's "www.cloudflare.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert ECC Extended Validation Server CA
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55f3f0ae0f50)
> HEAD / HTTP/2
> Host: www.cloudflare.com
> User-Agent: curl/7.64.0
> Accept: */*
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
< HTTP/2 200
HTTP/2 200
< date: Tue, 06 Oct 2020 06:26:50 GMT
* Connection #0 to host www.cloudflare.com left intact
(Successfully feigned the real website; some output omitted)
Prevent unauthorized access by feigning to be a self-handcrafted website/returning null
Before starting this section, you should know this tactic can only be used when CDN nodes send something that distinguishes them from normal users. Here is an example:
HTTP header settings for requesting origin server in GCP
An HTTP header check is a common way to verify whether a request comes from the CDN.
P.S. GCP (Google Cloud Platform)'s HTTP load balancing service provides an option to set the request headers that GCP CDN nodes send when contacting origin servers[^1]. This lets the origin server distinguish CDN node requests from normal/malicious clients.
[^1]: Though GCP's load balancing/CDN service only accepts GCP VM instances as backends, the mechanism is the same.
P.S. In some products, engineers add a header to origin requests for debugging rather than as a documented feature, so it won't appear in the product's documentation (CDN.net, for example), and the customer service staff won't know about it either. If you want to discover whether the CDN product you use includes such a special header, writing a simple script to dump all received headers is a good choice. This won't be detailed here.
The configuration is self-explanatory.
Configuration if you want to return null:
server {
listen 80;
server_name yourweb.site;
if ($http_auth_tag != "here_is_the_credential") {
return 444;
}
location / {
echo "Hello World!";
}
}
Configuration if you want to return a fake website/backend:
server {
listen 80;
server_name yourweb.site;
if ($http_auth_tag != "here_is_the_credential") {
return 410; # handed off to the @fake location via error_page below
}
error_page 410 = @fake;
location / {
echo "Hello World!";
}
location @fake {
root /var/www/fakesite/; # Highly recommended: hand-craft the fake website yourself
}
}
P.S. If you intend to configure this on the https/443 port, I recommend self-signing a certificate for an unrelated domain. Using a real certificate with the exposed domain may let a scanner find your origin server easily. Nginx allows you to use a certificate whose SNI info does not match server_name.
N.B. Some may consider using a real certificate for a subdomain of the exposed domain, most likely free certificates from Let's Encrypt. You had better care about Certificate Transparency, which can reveal what certificates exist under a specific domain; notably, Let's Encrypt submits all certificates it issues to CT logs. (Reference: Original, Archive.ph)
If you want to see whether your certificate is logged in a CT log, you can visit crt.sh (see the query sketch below).
If you cannot tell whether the CA you want to apply to submits all the certificates it issues to CT logs, you'd better self-sign.
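A quick sketch of querying crt.sh from the command line; %25 is a URL-encoded % wildcard, and example.com is a placeholder:
curl -s 'https://crt.sh/?q=%25.example.com&output=json'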
The self-sign commands are below:
cat > csrconfig.txt <<-EOF
[ req ]
default_md=sha256
prompt=no
req_extensions=req_ext
distinguished_name=req_distinguished_name
[ req_distinguished_name ]
commonName=yeet.com
countryName=SG
[ req_ext ]
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=critical,serverAuth,clientAuth
subjectAltName=@alt_names
[ alt_names ]
DNS.0=yeet.com
EOF
cat > certconfig.txt <<-EOF
[ req ]
default_md=sha256
prompt=no
req_extensions=req_ext
distinguished_name=req_distinguished_name
[ req_distinguished_name ]
commonName=yeet.com
countryName=SG
[ req_ext ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
keyUsage=critical,digitalSignature,keyEncipherment
extendedKeyUsage=critical,serverAuth,clientAuth
subjectAltName=@alt_names
[ alt_names ]
DNS.0=yeet.com
EOF
openssl genpkey -outform PEM -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out cert.key
openssl req -new -nodes -key cert.key -config csrconfig.txt -out cert.csr
openssl req -x509 -nodes -in cert.csr -days 365 -key cert.key -config certconfig.txt -extensions req_ext -out cert.pem
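To double-check the generated certificate (plain openssl, nothing assumed):
openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'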
Since some readers may use the commands above to generate a CSR for applying for a real certificate, I kept the countryName field (some CAs require that field in the CSR). If you don't need it, simply delete it.
N.B. A self-signed certificate may raise the risk of MITM (man-in-the-middle) attacks, unless the underlying infrastructure is trustworthy or the CDN provider supports requests with a provided client certificate, aka Authenticated Origin Pulls in Cloudflare.
Enable "Authenticated Origin Pulls" in Cloudflare
A client certificate check is another way to verify whether a request comes from CDN nodes. Only a few CDN providers support requesting with a client certificate, but whichever provider offers the feature, the configuration on your server looks similar. Here's an example:
server {
listen 443 ssl;
ssl_certificate cert.crt;
ssl_certificate_key cert.key;
server_name yourdomain.com;
ssl_client_certificate cloudflare.crt;
ssl_verify_client on;
error_page 495 496 = @444; # Return our own response instead of the default error page on client-certificate auth errors
location @444 {return 444;}
location / {
echo "Hello World!";
}
}
It will return null on client certificate errors.
P.S. Feigning another website/backend is also possible; simply imitate the example in the HTTP header check part.
N.B. Whatever method you use, mind the default return. Make the default return the same as the return for invalid requests.
Making the default return null, the same as for invalid requests:
server {
listen 80 default_server;
listen 443 ssl default_server;
ssl_certificate /etc/nginx/certs/cert.crt;
ssl_certificate_key /etc/nginx/certs/cert.key;
server_name localhost;
location / {
return 444;
}
}
Result
curl http://127.0.0.1:80
curl: (52) Empty reply from server
curl -k https://127.0.0.1:443
curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
Conclusion
In short, to protect your origin server IP from detection, you can:
Set IP whitelist if possible
Change the hostname of your website on your origin server if possible/Change the listen port if possible
Set default return for unmatched hostname
Set authorization method for matched hostname
Think like the scanner itself: how would you interpret the server's behavior?
The whole process can be roughly drawn as a flow chart (not reproduced here).

AWS Load Balancer, enable listener for HTTPS and route to 80

I have an AWS load balancer in front of two servers running SailsJS with PM2. The LB works very well and routes incoming HTTP requests to the servers, which is perfect.
Now, I need to add support for HTTPS, so I followed this guide:
AWS: Create a Classic Load Balancer with an HTTPS Listener. I used a self-generated SSL certificate and this configuration for the ports:
LB Port 80 - Instance 80
LB Port 443 - Instance 80
And the security group has these ports opened:
22,
80,
443
So, if I understood correctly, the LB will receive the HTTPS request on port 443 and will forward it to port 80 of the instance. My instance, of course, is listening on port 80.
The problem is, this doesn't work! I can make HTTP requests to the LB and everything is routed perfectly to the Sails instance, and the response is perfect. But if I use exactly the same URL with HTTPS, it doesn't work and I get an "ERR_SSL_PROTOCOL_ERROR".
What am I doing wrong, what am I missing?
Thank you!
EDIT 1
This is what I get if I try curl -v https://example.com
* Trying xx.xx.xx.xx...
* Connected to example.com (xx.xx.xx.xx) port 443 (#0)
* Unknown SSL protocol error in connection to example.com:-9838
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to mydomain.com:-9838
EDIT 2
Found another thread which suggested a different way of creating the certificate. So I tried it, but now AWS won't even accept the private key and the certificate:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
EDIT 3
OK, so I found more info on why I couldn't upload the certificates to AWS, and after a few tries, I managed to upload and use one.
After this, it appears to be working (with warnings that it's not a valid cert and so on, which is expected):
* Trying XXX.XXX.XXX.XXX...
* Connected to example.com (XXX.XXX.XXX.XXX) port 443 (#0)
* SSL certificate problem: Invalid certificate chain
* Closing connection 0
curl: (60) SSL certificate problem: Invalid certificate chain
So it appears to be working and, as @MarkB suggested, the certificate was wrong. Using the info found in EDIT 2, I created a new one and uploaded it (with the info from EDIT 3), and it appears to be working.
I'll perform more tests to make 100% sure this works and will report back soon.
OK, so the problem was a wrongly generated certificate. I used the first method I found and it wasn't working, so use these commands instead:
openssl genrsa -out client-key.pem 2048
openssl req -new -key client-key.pem -out client.csr
openssl x509 -req -in client.csr -signkey client-key.pem -out client-cert.pem
Even after that, AWS told me:
Server Certificate not found for the key: arn:aws:iam::111111111:server-certificate/CertificateMyName
But that last error is misleading: even with the error, the certificate WAS uploaded, and I used it in my HTTPS 443 listener, tested the services again, and everything worked. So just create the certificate with the instructions above, import it, and if it gives you an error like the above, just ignore it; your cert will be in place and ready to use.
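For reference, a sketch of the certificate upload with the AWS CLI (the ARN in the error suggests an IAM server certificate, which is where Classic Load Balancers look; names and paths are placeholders):
aws iam upload-server-certificate \
--server-certificate-name CertificateMyName \
--certificate-body file://client-cert.pem \
--private-key file://client-key.pem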
Hope this helps others!
