Description:
Disable the use of the TLSv1.0 protocol in favor of a cryptographically stronger protocol such as TLSv1.2.
The following openssl command can be used as a manual test: openssl s_client -connect ip:port -tls1. If the handshake succeeds, the target still supports TLSv1.0.
Env:
Nuxt application in dockerfile (same as any other nodejs web application)
To fix this issue, add the following to your nginx conf file:
ssl_protocols TLSv1.2;
Don't forget to replace the default configuration in nginx via your Dockerfile:
COPY ./nginx/conf.d /etc/nginx/conf.d
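For reference, here is a minimal sketch of what such a conf.d file might look like (the server name and certificate paths are placeholders, not taken from the original setup):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;                         # placeholder

    ssl_certificate     /etc/nginx/certs/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Allow only TLSv1.2 (add TLSv1.3 if your nginx/OpenSSL build supports it)
    ssl_protocols TLSv1.2;
}
```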
This may not solve the issue on its own; if that's the case, upgrade your Node version (in my case I'm using 16.17.0-alpine).
You can test it locally before deploying by running these commands:
openssl s_client -connect ip:port -tls1 (and openssl s_client -connect ip:port -tls1_1)
Neither should show your certificate. Then run: openssl s_client -connect ip:port -tls1_2
If you can now see your certificate only with that last command, you have successfully fixed the issue.
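To script the check above, here is a small sketch (the host and port are placeholders; it assumes an openssl build that still ships the legacy -tls1/-tls1_1 options):

```shell
# probe_tls HOST PORT: report which TLS protocol versions the server accepts
probe_tls() {
  host=$1; port=$2
  for proto in tls1 tls1_1 tls1_2 tls1_3; do
    if openssl s_client -connect "$host:$port" -"$proto" </dev/null >/dev/null 2>&1; then
      echo "$proto accepted"
    else
      echo "$proto rejected"
    fi
  done
}
# Example (hypothetical host): probe_tls myapp.example.com 443
```

With TLSv1.0/1.1 disabled correctly, only tls1_2 (and tls1_3, if enabled) should report "accepted".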
Related
I am having some problems using the mosquitto client in Linux; more specifically, I need to use mosquitto_sub but I don't really get how I should authenticate.
All I have is a JSON config file for MQTT.Fx, which works fine when imported in that application. I can see there are a username and password, as well as host information, and that SSL/TLS is enabled.
My question is: how can I do the same thing that MQTT.Fx does automatically when the option CA signed server certificate is selected? I have tried a lot of alternatives, like downloading the server certificate and passing it as --cafile, generating new certificates, signing them, and editing mosquitto.conf, but I never hit the right combination of operations.
Any suggestion, please?
Edit: here is current command:
mosquitto_sub -h myhost.example -p 8883 -i example1 -u myusername -P mypassword -t XXXXXXXXXXXX/# --cafile /etc/mosquitto/trycert.crt
where the file trycert.crt contains the response to the following request (of course, only the part between BEGIN CERTIFICATE and END CERTIFICATE):
openssl s_client -showcerts -servername myhost.example -connect myhost.example:8883 </dev/null
Every time I've had problems with MQTT over SSL, it's been that the server's cert chain of trust was broken on my client. In other words: the server I am connecting to has a cert; that cert is signed by another cert, and so forth. Each of the certs in the chain needs to be on the client.
If any of these certs are missing, the chain of trust is broken and the stack will abort the connection.
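One way to make sure the whole chain ends up in your --cafile is to capture everything the server sends and split it into individual PEM files. A sketch (reads `openssl s_client -showcerts` output on stdin; the broker hostname in the example is hypothetical):

```shell
# Split each PEM certificate block from s_client -showcerts output into
# its own file: chain-0.pem (server cert), chain-1.pem (issuer), ...
split_chain() {
  awk 'BEGIN { n = -1 }
       /-----BEGIN CERTIFICATE-----/ { n++; f = "chain-" n ".pem" }
       f != "" { print > f }
       /-----END CERTIFICATE-----/   { f = "" }'
}
# Example:
# openssl s_client -showcerts -connect myhost.example:8883 </dev/null | split_chain
```

You can then concatenate the intermediate/root certs (chain-1.pem onward) into one bundle and pass that as --cafile.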
I am running nginx in Docker over SSL. When I try to access it by URL I get the error below:
root@54a843786818:/# curl --location --request POST 'https://10.1.1.100/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "username": "testuser",
> "password": "testpassword"
> }'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
With the no-check-certificate option (-k) it works:
curl -k --location --request POST 'https://10.1.1.100/login' --header 'Content-Type: application/json' --data-raw '{
"username": "testuser",
"password": "testpassword"
}'
{"access_token": "xxxxxxxxxxxxxxxxxxxxxxxkkkkkkkkkkkkkkkkkkkk", "refresh_token": "qqqqqqqqqoooooooooxxxx"}
My config file:
root@54a843786818:/# cat /etc/nginx/sites-enabled/api.conf
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /root/certs/my_hostname.my.domain.name.com.pem;
    ssl_certificate_key /root/certs/my_hostname.my.domain.name.com.key;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_pass http://10.1.1.100:5000;
        proxy_redirect off;
    }
}
I suspect something is wrong with my certificate setup.
Below are the exact steps I followed:
1) Took the private key and removed its password using the command below:
# openssl rsa -in my_hostname.my.domain.name.com_password_ask.key -out my_hostname.my.domain.name.com.key
2) Converted the .crt file to .pem:
# openssl x509 -in my_hostname.my.domain.name.com.crt -out my_hostname.my.domain.name.com.pem -outform PEM
3) Copied the .pem and .key files into /root/certs on the nginx Docker container using cat and the vim editor.
4) Verified that the private and public keys match; below are the commands used:
root@54a843786818:~/certs# openssl rsa -noout -modulus -in my_hostname.my.domain.name.com.key | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
root@54a843786818:~/certs# openssl x509 -noout -modulus -in my_hostname.my.domain.name.com.pem | openssl md5
(stdin)= xcccxxxxxxxxxxxxxxxxxxxxxxxxxx
I got the certs below separately; I'm not sure whether I need to bundle them, and if so, what the command is:
1) Certificate.pem
2) private_key
3) ca_intermediate_certificate.pem
4) ca_trusted_root
Can someone help me fix this issue? I am not sure what I am doing wrong. Is there a way I can validate my certificates and check that they can serve HTTPS?
Or, aside from the certificates, is there any issue with my config or setup?
An SSL/TLS server, including HTTPS, needs to send the certificate chain, optionally excluding the root cert. Assuming your filenames are not actively perverse, you have a chain of 3 certs (server, intermediate, and root) and the server must send at least the entity cert and the 'ca_intermediate' cert; it may or may not include the 'trusted_root'.
In nginx this is done by concatenating the certs into one file; see the overview documentation which links to the specific directive ssl_certificate.
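As a concrete sketch of that concatenation (order matters: server cert first, then intermediates, root optional; the helper name is hypothetical, the filenames are the ones from the question):

```shell
# make_fullchain OUT SERVER_CERT INTERMEDIATE...: concatenate certs in the
# order nginx expects (server cert first, then the issuing intermediates)
make_fullchain() {
  out=$1; shift
  cat "$@" > "$out"
}
# Example with the question's filenames:
# make_fullchain fullchain.pem Certificate.pem ca_intermediate_certificate.pem
```

Then point ssl_certificate at the resulting fullchain.pem; ssl_certificate_key stays on the private key.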
Also, the root cert for your server's cert must be in the truststore of the client (every client if more than one). If that root cert is one of the established public CAs (like Digicert, GoDaddy, LetsEncrypt/ISRG) it will normally be already in the default truststores of most clients, usually including curl -- although curl's default can vary depending on how it was built; run curl -V (upper-vee) to see which SSL/TLS implementation it uses. If the root cert is a CA run by your company, the company's sysadmins will usually add it to the truststores on company systems, but if you are using a system that they don't know about or wasn't properly acquired and managed it may not have been set up correctly. If you need curl to accept a root cert that isn't in its default truststore, see the --cacert option on the man page, either on your system if Unixy or on the web. Other clients are different, but you didn't mention any.
Finally, as discussed in comments, the hostname you use in the URL must match the identity(ies) specified in the cert, and certificates are normally issued using only the domain name(s) of the server(s), not the IP address(es). (It is technically possible to have a cert for an IP address, or several, but by cabforum policy public CAs, if they issue certs for addresses at all, must not do so for private addresses such as yours -- 10.0.0.0/8 is one of the private ranges in RFC 1918. A company-run CA might certify such private addresses or it might not.) If your cert specifies domain name(s), you must use that name or one of those names as the host part of your URL; if you don't have your DNS or hosts file set up to resolve that domain name correctly to the host address, you can use curl option --resolve (also on the man page) to override.
I am trying to import an SSL certificate on Ubuntu 14.04. I downloaded the certificate using
openssl s_client -showcerts -connect pypi.python.org:443
Afterward, I copied the certificate to /etc/ssl/certs (this is where $SSL_CERT_DIR points and where my working certificates are). However, when I re-run the command
openssl s_client -showcerts -connect pypi.python.org:443
I still get
Verify return code: 20 (unable to get local issuer certificate)
What am I doing wrong?
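One likely cause, offered as a guess: s_client -showcerts prints the server's own certificate first, but what belongs in the CA directory is the *issuer's* cert, and OpenSSL only finds certs there via hash-named symlinks, so copying a PEM file in is not enough by itself. A sketch (add_ca_cert and the filenames are hypothetical; on Debian/Ubuntu the supported route is placing the cert under /usr/local/share/ca-certificates/ and running update-ca-certificates):

```shell
# Copy the issuing CA's cert (not the server's leaf cert) into the CA
# directory, then rebuild the hash symlinks OpenSSL uses for lookup.
add_ca_cert() {
  cadir=$1; cert=$2
  cp "$cert" "$cadir"/
  openssl rehash "$cadir" 2>/dev/null || c_rehash "$cadir"
}
# Example: add_ca_cert /etc/ssl/certs issuer-ca.pem   (needs root)
```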
How do I download a proxy's SSL cert and save it to a file using the Linux command line?
It's possible to download an SSL cert via the openssl tool: https://superuser.com/questions/97201/how-to-save-a-remote-server-ssl-certificate-locally-as-a-file. But this does not work behind a corporate proxy that rewrites the SSL cert. I would like to download the proxy's SSL cert. Changing the HOST and PORT to my proxy's host and port does not work either.
Downloading the cert using my browser works but I need to do this in a bash script.
You can only extract certificates that are actually sent inside the connection. With a MITM proxy, the root CA you want usually does not get sent, since it is expected to be installed locally as trusted, similar to a public root CA. The reason you can extract this MITM CA with your browser is that the browser already has this CA as trusted in its CA store and can thus export it.
As mentioned here, openssl 1.1.0 and above support the -proxy argument so you can get the proxy's certificates with a command like (jcenter.bintray.com is just an example host to connect to)
openssl s_client -showcerts -proxy $https_proxy -connect jcenter.bintray.com:443
Also see this script for a more complete example how to import the certificate(s) to a JVM keystore and the system certificates.
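Putting the pieces together, a sketch that saves the proxy's chain to a file (pem_only is a hypothetical helper; $https_proxy and the example host are from the answer above):

```shell
# Keep only the PEM certificate blocks from s_client output
pem_only() {
  sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p'
}
# Example:
# openssl s_client -showcerts -proxy "$https_proxy" \
#   -connect jcenter.bintray.com:443 </dev/null | pem_only > proxy-chain.pem
```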
I'm building an MQTT server. I used mosquitto with TLS on the server as a broker.
I encountered this problem:
I created the ca.crt, server certificate, server key, client certificate, and client key via generate-CA.sh.
I can connect to the broker and publish and subscribe to messages via MQTT.fx, but when I tried to connect to the broker with mosquitto_sub, it failed with Error: A TLS error occurred on the client PC (Ubuntu); at the same time, the server prints
New connection from xx.xx.xx.xx on port 8883.
Openssl Error: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown
Openssl Error: error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure
The command I used is:
mosquitto_sub -p 8883 -i test -t mqtt -h 150.xx.xx.xx --cafile ca.crt --cert xx.crt --key xx.key
where 150.xx.xx.xx is the IP of my broker.
When I used the --insecure option with the command above, the problem disappeared, so I think it is the server hostname that leads to this problem.
In the mosquitto_sub command the -h option specifies the hostname, but I need this parameter to point to the IP address of my broker, so how can I specify the hostname of my server?
Old question but perhaps this might help someone:
If the --insecure option makes it work, you have a certificate problem. What hostname did you set whilst signing the certificate? What does openssl s_client -showcerts -connect 150.xx.xx.xx:8883 say?
Related: although it should be possible to use SSL certs for your servers using public IP addresses (see Is it possible to have SSL certificate for IP address, not domain name?), I'd recommend not doing this and just using DNS, even if this means server.localdomain and/or editing your /etc/hosts file if necessary.
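A quick way to see which names the broker's certificate was actually signed for (cert_names is a hypothetical helper; the -ext flag needs OpenSSL 1.1.1+, and the filenames in the example are the ones from the question):

```shell
# Print the subject and subjectAltName entries of a certificate; the
# value passed to mosquitto_sub -h must match one of these names.
cert_names() {
  openssl x509 -in "$1" -noout -subject
  openssl x509 -in "$1" -noout -ext subjectAltName 2>/dev/null
}
# Example:
#   cert_names server.crt
#   echo '150.xx.xx.xx  broker.local' | sudo tee -a /etc/hosts
#   mosquitto_sub -h broker.local -p 8883 -t mqtt \
#     --cafile ca.crt --cert xx.crt --key xx.key
```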