Nginx: curl: (60) SSL certificate problem: unable to get local issuer certificate - linux

I am running nginx in Docker over SSL. When I try to access it via the URL I get the error below:
root@54a843786818:/# curl --location --request POST 'https://10.1.1.100/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "username": "testuser",
> "password": "testpassword"
> }'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
With the no-certificate-check option (-k) it works:
curl -k --location --request POST 'https://10.1.1.100/login' --header 'Content-Type: application/json' --data-raw '{
"username": "testuser",
"password": "testpassword"
}'
{"access_token": "xxxxxxxxxxxxxxxxxxxxxxxkkkkkkkkkkkkkkkkkkkk", "refresh_token": "qqqqqqqqqoooooooooxxxx"}
My config file:
root@54a843786818:/# cat /etc/nginx/sites-enabled/api.conf
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /root/certs/my_hostname.my.domain.name.com.pem;
ssl_certificate_key /root/certs/my_hostname.my.domain.name.com.key;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header HOST $http_host;
proxy_pass http://10.1.1.100:5000;
proxy_redirect off;
}
}
I suspect something is wrong with my certificate setup.
Below are the exact steps I followed.
1) Took the private key and removed its password using the command below:
# openssl rsa -in my_hostname.my.domain.name.com_password_ask.key -out my_hostname.my.domain.name.com.key
2) Converted the .crt file to .pem:
# openssl x509 -in my_hostname.my.domain.name.com.crt -out my_hostname.my.domain.name.com.pem -outform PEM
3) Copied the .pem and .key files into /root/certs on the nginx Docker container (pasted via cat and the vim editor).
4) Verified that the private and public keys match; below are the commands used:
root@54a843786818:~/certs# openssl rsa -noout -modulus -in my_hostname.my.domain.name.com.key | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
root@54a843786818:~/certs# openssl x509 -noout -modulus -in my_hostname.my.domain.name.com.pem | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
I received the certs below separately. I am not sure whether I need to bundle them; if so, what is the command?
1) Certificate.pem
2) private_key
3) ca_intermediate_certificate.pem
4) ca_trusted_root
Can someone help me fix this issue? I am not sure what I am doing wrong. Is there a way to validate my certificates and check that they can serve HTTPS?
Or, certificates aside, is there some other issue with my config or setup?

An SSL/TLS server, including HTTPS, needs to send the certificate chain, optionally excluding the root cert. Assuming your filenames are not actively perverse, you have a chain of 3 certs (server, intermediate, and root) and the server must send at least the entity cert and the 'ca_intermediate' cert; it may or may not include the 'trusted_root'.
In nginx this is done by concatenating the certs into one file; see the overview documentation which links to the specific directive ssl_certificate.
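Using the filenames listed in the question (the exact contents are assumed), the concatenation step could look like the sketch below; the placeholder files only exist to make the snippet self-contained, and in practice you would use the real files from your CA. Order matters: the server (leaf) certificate must come first, then the intermediate(s).

```shell
# Placeholder inputs so this snippet is self-contained; in practice these
# are the files you received from your CA.
printf '%s\n' '-----BEGIN CERTIFICATE-----' '(leaf cert body)' '-----END CERTIFICATE-----' > Certificate.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' '(intermediate cert body)' '-----END CERTIFICATE-----' > ca_intermediate_certificate.pem

# Leaf certificate first, then intermediate(s); appending the root is
# optional, since clients are expected to have it already.
cat Certificate.pem ca_intermediate_certificate.pem > fullchain.pem
```

Point the ssl_certificate directive at the resulting fullchain.pem and reload nginx.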
Also, the root cert for your server's cert must be in the truststore of the client (every client if more than one). If that root cert is one of the established public CAs (like Digicert, GoDaddy, LetsEncrypt/ISRG) it will normally be already in the default truststores of most clients, usually including curl -- although curl's default can vary depending on how it was built; run curl -V (upper-vee) to see which SSL/TLS implementation it uses. If the root cert is a CA run by your company, the company's sysadmins will usually add it to the truststores on company systems, but if you are using a system that they don't know about or wasn't properly acquired and managed it may not have been set up correctly. If you need curl to accept a root cert that isn't in its default truststore, see the --cacert option on the man page, either on your system if Unixy or on the web. Other clients are different, but you didn't mention any.
Finally, as discussed in comments, the hostname you use in the URL must match the identity(ies) specified in the cert, and certificates are normally issued using only the domain name(s) of the server(s), not the IP address(es). (It is technically possible to have a cert for an IP address, or several, but by cabforum policy public CAs, if they issue certs for addresses at all, must not do so for private addresses such as yours -- 10.0.0.0/8 is one of the private ranges in RFC 1918. A company-run CA might certify such private addresses or it might not.) If your cert specifies domain name(s), you must use that name or one of those names as the host part of your URL; if you don't have your DNS or hosts file set up to resolve that domain name correctly to the host address, you can use curl option --resolve (also on the man page) to override.
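To answer the "is there a way I can validate my certificates" part: openssl verify can check the chain locally before nginx ever serves it. The sketch below shows the command shape using the filenames from the question (in a comment, since those files aren't reproduced here), and then runs a self-contained demonstration with a throwaway self-signed cert, which by definition verifies against itself.

```shell
# With the real files from the question the check would be:
#   openssl verify -CAfile ca_trusted_root \
#       -untrusted ca_intermediate_certificate.pem Certificate.pem
# Self-contained demonstration with a throwaway self-signed cert:
openssl req -x509 -newkey rsa:2048 -nodes -batch \
  -keyout demo.key -out demo.pem -subj "/CN=demo" -days 1
openssl verify -CAfile demo.pem demo.pem
```

If the chain is broken, verify names the certificate it could not find an issuer for instead of reporting OK.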

Related

Simulate MQTT TLS login by MQTT.FX in linux?

I am having some problems using the mosquitto client in Linux; more specifically, I need to use mosquitto_sub, but I don't really get how I should authenticate.
All I have is a JSON config file for MQTT.Fx, which works fine when imported into that application. I can see there are a username and password, as well as host information, and that SSL/TLS is enabled.
My question is: how can I do the same thing that MQTT.Fx does automatically when the option "CA signed server certificate" is selected? I have tried a lot of alternatives, like downloading the server certificate and passing it as --cafile, generating new certificates, signing them, and editing mosquitto.conf, but I didn't hit the right combination of operations.
Any suggestions, please?
Edit: here is the current command:
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t XXXXXXXXXXXX/# --cafile /etc/mosquitto/trycert.crt
where the file trycert.crt contains the response to the following request (of course, only the part between BEGIN CERTIFICATE and END CERTIFICATE):
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
Every time I have had problems with MQTT over SSL, it has been because the server's certificate chain of trust was broken on my client. In other words, the server I am connecting to has a cert. This cert is authorized by another cert, and so forth. Each of the certs in the chain needs to be on the client.
If any of these certs are missing, the chain of trust is broken and the stack will abort the connection.
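Note that keeping only the first BEGIN/END block, as in the question, drops the rest of the chain. One way to capture every certificate the broker presents is to filter the full s_client output; this is a sketch using the example hostname from the question, so it will only produce output when that host is actually reachable.

```shell
# Capture ALL the PEM blocks the broker presents, not just the first one.
openssl s_client -showcerts -servername myhost.example \
    -connect myhost.example:8883 </dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > chain.crt
```

The resulting chain.crt can then be passed as --cafile. This only helps if the broker actually sends the full chain; a root that is never sent still has to be obtained out of band.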

Difficulties configuring nginx for Https

I'm currently configuring two Raspberry Pis on my home network: one serves data from sensors (on a Node server) to the second Pi (a web server, also running Node). Both of them are behind an nginx proxy. After a lot of configuring and searching I found a working solution. The web server uses Dataplicity to make it accessible from the public web. I don't use Dataplicity on the second Pi (the server of sensor data):
server {
listen 80;
server_name *ip-address*;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://127.0.0.1:3000";
}
}
server {
listen 443 ssl;
server_name *ip-address*;
ssl on;
ssl_certificate /var/www/cert.pem;
ssl_certificate_key /var/www/key.pem;
location / {
add_header Access-Control-Allow-Origin *;
proxy_pass http://127.0.0.1:3000;
}
}
This config works, however, ONLY on my computer. From other computers I get ERR_INSECURE_RESPONSE when trying to access the API with an AJAX request. The certificates are self-signed. Help is much appreciated.
EDIT:
Still no fix for this problem. I signed up for Dataplicity for my second device as well. This fixed my problem, but it now runs through a third party. I will look into this in the future. So if anyone has an answer to this, please do tell.
It seems that your certificates aren't correct; is the root certificate missing? (It can work on your computer if you have already accepted the insecure certificate in your browser.)
Check that your certificates are good; the following commands must all give the same result:
openssl x509 -noout -modulus -in mycert.crt | openssl md5
openssl rsa -noout -modulus -in mycert.key | openssl md5
openssl x509 -noout -modulus -in mycert.pem | openssl md5
If one output differs from the others, the certificate was generated incorrectly.
You can also check it directly on your computer with curl:
curl -v -i https://yourwebsite
If the top of the output shows an insecure warning, the certificate was generated incorrectly.
The post above looks about right.
The certificates and/or SSL are being rejected by your client.
This could be a few things, assuming the certificates themselves are publicly signed (they probably are not).
Date and time mismatch is possible (certificates are sensitive to the system clock).
If your certs are self-signed, you'll need to make sure your remote device is configured to accept your private root certificate.
Lastly, you might need to configure your server to use only modern encryption methods. Your client may be rejecting some older methods if it has been updated since the POODLE attacks.
This post should let you create a certificate: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04, though I think you've already made it this far.
This post https://unix.stackexchange.com/questions/90450/adding-a-self-signed-certificate-to-the-trusted-list will let you add your new private root cert to the trusted list on your client.
And finally, this is a recommended SSL config on Ubuntu (sourced from https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-on-ubuntu-14-04):
listen 443 ssl;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
Or if you get really stuck, just PM me your account details and I'll put a second free device on your Dataplicity account :)
Cool project, keen to help out.
Dataplicity Wormhole redirects a service listening on port 80 on the device to a public URL in the form https://*.dataplicity.io, and puts a dataplicity certificate in front. Due to the way HTTPS works, the port being redirected via dataplicity cannot use HTTPS, as it would mean we are unable to forward the traffic via the dataplicity.io domain. The tunnel from your device to Dataplicity is encrypted anyway.
Is there a reason you prefer not to run Dataplicity on the second Pi? While you can run a webserver locally of course, this would be a lot easier and more portable across networks if you just installed a second instance of Dataplicity on your second device...

Download SSL Corporate Proxy Cert via Linux CLI

How do I download a proxy's SSL cert and save it to a file using the Linux command line?
It's possible to download an SSL cert via the openssl tool: https://superuser.com/questions/97201/how-to-save-a-remote-server-ssl-certificate-locally-as-a-file. But this does not work from behind a corporate proxy that rewrites the SSL cert. I would like to download the proxy's SSL cert. Changing the HOST and PORT to my proxy's host and port does not work either.
Downloading the cert using my browser works but I need to do this in a bash script.
You can only extract certificates that actually get sent inside the connection. With a MITM proxy, the root CA you want usually does not get sent, since it is expected to be installed locally as trusted, similar to a public root CA. And the reason you can extract this MITM CA with your browser is that the browser already has this CA as trusted in its CA store and can therefore export it.
As mentioned here, openssl 1.1.0 and above support the -proxy argument, so you can get the proxy's certificates with a command like the following (jcenter.bintray.com is just an example host to connect to):
openssl s_client -showcerts -proxy $https_proxy -connect jcenter.bintray.com:443
Also see this script for a more complete example how to import the certificate(s) to a JVM keystore and the system certificates.

How to read TLS certificate sent by a client on the server side?

I am trying to call a service with mutual TLS using the Curl command.
Curl command used:
curl -k -vvvv \
--request POST \
--header "Content-Type: application/json" \
--cert client.pem:password \
--key key.pem \
"https://test.com:8443/testing"
I am trying to find out the following:
Which HTTP request header should I look at on the server side to pull the client certificate out of the HTTP request?
If I cannot pull the client certificate out of the HTTP request on the server side, can I add a custom request header and send the client certificate as its value? It would be great if someone could provide an example of this approach.
This is how I did it:
curl -v \
--key ./admin-key.pem \
--cert ./admin.pem \
https://xxxx/api/v1/
A client sends a TLS certificate when mutual TLS is used.
In the mutual TLS handshake, the TLS client certificates are not sent in HTTP headers. They are transmitted by the client as part of the TLS messages exchanged during the handshake, and the server validates the client certificate during the handshake.
Broadly there are two parts to the mTLS handshake, the client validates and accepts the server certificate and the server validates and accepts the client certificate.
If the client certificate is accepted, most web servers can be configured to add headers for transmitting the certificate or information contained on the certificate to the application. Environment variables are populated with certificate information in Apache and Nginx which can be used in other directives for setting headers.
As an example of this approach, the following Nginx config snippet will validate a client certificate, and then set the SSL_CLIENT_CERT header to pass the entire certificate to the application. This will only be set when the certificate was successfully validated, so the application can then parse the certificate and rely on the information it bears.
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/chainedcert.pem; # server certificate
ssl_certificate_key /path/to/key; # server key
ssl_client_certificate /path/to/ca.pem; # client CA
ssl_verify_client on;
proxy_set_header SSL_CLIENT_CERT $ssl_client_cert;
location / {
proxy_pass http://localhost:3000;
}
}

Use Docker registry with SSL certificate without IP SANs

I have a private Docker registry (using this image) running on a cloud server. I want to secure this registry with basic auth and SSL via nginx. But I am new to SSL and ran into some problems:
I created SSL certificates with OpenSSL like this:
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private.key -out certificate.crt
Then I copied both files to my cloud server and used it in nginx like this:
upstream docker-registry {
server localhost:5000;
}
server {
listen 443;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
ssl on;
ssl_certificate /var/certs/certificate.crt;
ssl_certificate_key /var/certs/private.key;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/sites-enabled/.htpasswd;
proxy_pass http://XX.XX.XX.XX;
}
}
Nginx and the registry both start and run. I can visit my server in my browser, which presents a warning about my SSL certificate (so nginx runs and finds the SSL certificate), and when I enter my credentials I see a ping message from the Docker registry (so the registry is also running).
But when I try to login via Docker I get the following error:
vagrant@ubuntu-13:~$ docker login https://XX.XX.XX.XX
Username: XXX
Password:
Email:
2014/05/05 08:30:59 Error: Invalid Registry endpoint: Get https://XX.XX.XX.XX/v1/_ping: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs
I know this exception means that the certificate does not contain my server's IP address, but is it possible to make the Docker client ignore the missing IP?
EDIT:
If I use a certificate with the server's IP, it works. But is there any chance of using an SSL certificate without the IP?
It's a Go issue. Actually it's a tech issue: Go refused to follow the industry hack, and that's why it's not working. See https://groups.google.com/forum/#!topic/golang-nuts/LjhVww0TQi4
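If regenerating the certificate is an option, the usual way to satisfy Go's check is to put the IP address in a subjectAltName. This sketch extends the openssl req command from the question; it assumes OpenSSL 1.1.1+ for the -addext flag, and the IP shown is a documentation placeholder to be replaced with the cloud server's real address.

```shell
# Self-signed cert whose subjectAltName carries the server's IP address
# (203.0.113.10 is a placeholder; substitute your server's address).
openssl req -x509 -batch -nodes -newkey rsa:2048 \
  -keyout private.key -out certificate.crt \
  -subj "/CN=registry" \
  -addext "subjectAltName=IP:203.0.113.10" -days 365
# Confirm the SAN is present:
openssl x509 -in certificate.crt -noout -ext subjectAltName
```

With the SAN in place, `docker login https://XX.XX.XX.XX` no longer trips the "doesn't contain any IP SANs" error (the cert still has to be trusted by the Docker host, of course).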
