(60) Peer's certificate issuer has been marked as not trusted by the user: Linux/Apache - linux

I am trying to find out why my HTTPS link is not working for my website.
So I ran this command to test it:
curl https://localhost/
I am using a valid signed SSL certificate, and my HTTP link is working fine. The certificate is a multi-domain certificate that was exported from an IIS 6 server. My AWS instance has port 443 open.
Here is a picture of my CA certificates:
I have tried to change the httpd.conf file's VirtualHost following the instructions here: http://ananthakrishnanravi.wordpress.com/2012/04/15/configuring-ssl-and-https-for-your-website-amazon-ec2/
Are there any suggestions on how to get my website working properly over HTTPS?
Let me know if you need any more information.
Thanks,

If you're not sure of the certificate that your web server is serving, you can use this command to view the certificate:
openssl s_client -showcerts -connect hostname.domain.tld:443
Also, the hostname in the certificate must match the site that you are requesting. For example, if you request a page from localhost, but your certificate is for www.yourdomain.com, the certificate check will fail.
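For example, a quick way to list the names the served certificate actually covers (hostname.domain.tld is the same placeholder as above) is to feed the s_client output into openssl x509 and pull out the subject and subjectAltName entries:
echo | openssl s_client -connect hostname.domain.tld:443 -servername hostname.domain.tld 2>/dev/null | openssl x509 -noout -text | grep -E 'Subject:|DNS:'
If the name you are requesting (for example localhost) does not appear in the Subject or as a DNS: entry, the hostname check will fail even when the certificate itself is valid.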

This means that you are using a self-signed certificate (or one whose issuing CA the client does not trust).
In order for this warning not to appear, you need to obtain a certificate from a Certificate Authority that the client already trusts.
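A quick way to confirm whether a certificate is self-signed is to compare its subject and issuer (yourdomain.com is a placeholder for your own host); if the two lines are identical, the certificate is self-signed:
echo | openssl s_client -connect yourdomain.com:443 2>/dev/null | openssl x509 -noout -subject -issuer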

If you are using a self-signed SSL certificate, you will run into this issue.
You can work around it by passing curl the -k option:
curl -k https://yourdomain.com/
And if you are testing with Postman, disable the SSL certificate verification option in its settings.

I got the same error, though in a different context; here is a summary that I hope is useful for others:
OS: CentOS 7
Running Python's pyspider raised this error:
File "/usr/local/lib64/python3.6/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "", line 4, in raise_exc_info
Exception: HTTP 599: Peer's certificate issuer has been marked as not trusted by the user.
Root cause and steps to fix:
There previously existed a soft link:
/usr/lib64/libcurl.so.4 -> /usr/lib64/libcurl.so.4.3.0_openssl
which was invalid, so I changed it to point at the valid target:
/usr/lib64/libcurl.so.4 -> /usr/lib64/libcurl.so.4.3.0
The two files are:
-rwxr-xr-x 1 root root 435192 Nov 5 2018 /usr/lib64/libcurl.so.4.3.0
-rwxr-xr-x 1 root root 399304 May 10 09:20 /usr/lib64/libcurl.so.4.3.0_openssl
Then, for pyspider, reinstall pycurl:
pip3 uninstall pycurl
export PYCURL_SSL_LIBRARY=nss
export LDFLAGS=-L/usr/local/opt/openssl/lib;export CPPFLAGS=-I/usr/local/opt/openssl/include;pip install pycurl --compile --no-cache-dir
in which PYCURL_SSL_LIBRARY is set to nss, because the current curl backend is NSS, according to:
# curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.36 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
...
This fixed my problem.
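As a quick sanity check after reinstalling, you can ask pycurl which SSL backend it was built against; if the rebuild picked up the right backend, the version string should mention NSS, matching the curl --version output above:
python3 -c "import pycurl; print(pycurl.version)"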

Related

Installing ghcup gives an error on MacBook Air

For reference, my MacBook Air is running macOS 10.14.1.
I've tried to download ghcup by running this command in the terminal:
curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
However, when I run this command, I receive the following error:
curl: (60) SSL certificate problem: certificate has expired
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
HTTPS-proxy has similar options --proxy-cacert and --proxy-insecure.
Could this error be caused by my MacBook running an old version?
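One way to check whether the served certificate has actually expired is to print its validity dates (get-ghcup.haskell.org is just the host from the command above):
echo | openssl s_client -connect get-ghcup.haskell.org:443 -servername get-ghcup.haskell.org 2>/dev/null | openssl x509 -noout -dates
This only shows the dates of the leaf certificate; the error can also be triggered by an expired certificate elsewhere in the chain or in the local CA bundle, which is more likely on older systems.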

Git clone from gitlab fails on linux, while working in Windows git bash

I'm new to Linux; I just installed Lubuntu and ran into this problem
when trying to clone my remote work repo from my company's git:
$ sudo git clone https://path/to/repo.git
I keep receiving this error:
Cloning into 'repo'...
fatal: unable to access 'https://path/to/repo.git/': server certificate verification failed. CAfile: none CRLfile: none
I know it's mentioning certificates, but I do not have any. Before, I worked on Windows and was able to simply git clone this repo without any certs.
This error means that the git client cannot verify the integrity of the certificate chain or root. The proper way to resolve this issue is to make sure the certificate from the remote repository is valid, and then added to the client system.
Update list of public CA
The first thing I would recommend is to simply update the list of root CAs known to the system, as shown below.
# update CA certificates
sudo apt-get install apt-transport-https ca-certificates -y
sudo update-ca-certificates
This may help if you are dealing with a system that has not been updated for a long time, but of course won’t resolve an issue with private certs.
Fetch certificates, direct connection
The error from the git client will be resolved if you add the certs from the remote git server to the list of locally checked certificates. This can be done by using openssl to pull the certificates from the remote host:
openssl s_client -showcerts -servername git.mycompany.com -connect git.mycompany.com:443 </dev/null 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > git-mycompany-com.pem
This will fetch the certificate used by “https://git.mycompany.com”, and copy the contents into a local file named “git-mycompany-com.pem”.
Fetch certificates, web proxy
If this host only has access to the git server via a web proxy like Squid, openssl can only use a Squid proxy directly if you are running OpenSSL 1.1.0 or higher. If you are using an older version of OpenSSL, you will need to work around this limitation by using something like socat to bind locally to port 4443 and proxy the traffic through Squid to the final destination.
# install socat
sudo apt-get install socat -y
# listen locally on 4443, send traffic through squid "squidhost"
socat TCP4-LISTEN:4443,reuseaddr,fork PROXY:squidhost:git.mycompany.com:443,proxyport=3128
Then in another console, tell OpenSSL to pull the certificate from the localhost at port 4443.
openssl s_client -showcerts -servername git.mycompany.com -connect 127.0.0.1:4443 </dev/null 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' > git-mycompany-com.pem
Add certificate to local certificate list
Whether by proxy or direct connection, you now have a list of the remote certificates in a file named “git-mycompany-com.pem”. This file will contain the certificate, its intermediate chain, and root CA certificate.
The next step is to have the git client consider these certificates when connecting to the git server. This can be done either by adding the certificates to the file mentioned in the original error, in which case the change is made globally for all users, OR by adding them to this single user's git configuration.
Adding globally:
cat git-mycompany-com.pem | sudo tee -a /etc/ssl/certs/ca-certificates.crt
Adding for a single user:
git config --global http."https://git.mycompany.com/".sslCAInfo ~/git-mycompany-com.pem
Which silently adds the following lines to ~/.gitconfig
[http "https://git.mycompany.com/"]
sslCAInfo = /home/user/git-mycompany-com.pem
Avoid workarounds
Avoid workarounds that skip SSL certification validation. Only use them to quickly test that certificates are the root issue, then use the sections above to resolve the issue.
git config --global http.sslverify false
export GIT_SSL_NO_VERIFY=true
I know there is an answer already. Just for those who use a private network, like Zscaler, this error can also occur if your root cert needs to be updated. Here is a solution for how this update can be achieved when using WSL on a Windows machine:
#!/usr/bin/bash
# I exported the Zscaler certificate out of Microsoft Cert Manager. It was located under 'Trusted Root Certification > Certificates' as zscaler_cert.cer.
# Though the extension is '.cer' it really is a DER formatted file.
# I then copied that file into Ubuntu running in WSL.
# Convert DER encoded file to CRT.
openssl x509 -inform DER -in zscaler_cert.cer -out zscaler_cert.crt
# Move the CRT file to /usr/local/share/ca-certificates
sudo mv zscaler_cert.crt /usr/local/share/ca-certificates
# Inform Ubuntu of new cert.
sudo update-ca-certificates

azure-cli login getting "self signed certificate in certificate chain"

I have installed azure-cli via npm install -g azure-cli and am getting "self signed certificate in certificate chain":
Any hints appreciated.
You can use the troubleshooting guide for the Microsoft Azure Storage Explorer tool for the same issue: https://support.microsoft.com/en-us/help/4021389/storage-explorer-troubleshooting-guide.
It helped me, with only one tweak needed: the openssl command should check the connection to management.azure.com and not microsoft.com.
So, the command in OpenSSL console will be:
OpenSSL> s_client -showcerts -connect management.azure.com:443
In my case, I found a self-signed certificate (subject = issuer) added by my company's IT department. After importing that certificate into ASE (so that ASE can trust that self-issued cert), the problem was solved.
As per these bug reports, it seems that this issue is related to npm using a system/network-wide proxy which intercepts the SSL traffic. Update the npm config with the HTTPS proxy and this issue should be fixed. Some have reported that updating npm also fixed it; check it out.
https://github.com/npm/npm/issues/7519
https://github.com/npm/npm/issues/6916
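A minimal sketch of updating the npm config as suggested above; proxy.example.com:8080 and corporate-root.pem are placeholders for your own proxy and its root certificate:
npm config set proxy http://proxy.example.com:8080
npm config set https-proxy http://proxy.example.com:8080
# if the proxy re-signs TLS traffic, point npm at the proxy's root CA rather than disabling verification
npm config set cafile /path/to/corporate-root.pem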

Failed to install Gitlab - curl (60) ssl certificate

I was trying to install GitLab on my Linux server following this guide and got stuck at the second step, which says:
curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Any idea how I can solve this?
ANSWER: be sure to have the http_proxy and https_proxy variables correctly set.
---- UPDATE ----
After setting the variables, I got the following output from curl:
Detected operating system as Ubuntu/trusty.
Checking for curl...
Detected curl...
Running apt-get update... done.
Installing apt-transport-https... done.
Installing /etc/apt/sources.list.d/gitlab_gitlab-ce.list...curl: (60) SSL certificate problem: self signed certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Unable to run:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&source=script
Double check your curl installation and try again.
Tell curl to ignore SSL warnings with -k/--insecure. Documented in man curl.
Edit: also check your proxy settings, as the host you're trying to curl to does, in fact, have a valid SSL certificate. See the --proxy option of curl.
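For reference, setting the proxy variables before re-running the installer might look like this, where proxy.example.com:3128 is a placeholder for your own proxy:
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128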

Where is the default CA certs used in nodejs?

I'm connecting to a server whose cert is signed by my own CA; the CA's cert has been installed into the system's keychain.
Connecting with openssl s_client -connect some.where says Verify return code: 0 (ok),
but I can't connect with Node.js's tls/https module, which fails with
Error: SELF_SIGNED_CERT_IN_CHAIN
but connecting to a normal server (e.g. google.com:443) works fine.
It seems that Node.js's OpenSSL is not sharing the same keychain as the system's OpenSSL,
but I can't find where it is. I tried to override it with SSL_CERT_DIR, but that did not seem to work.
BTW: I can bypass the server verification by setting NODE_TLS_REJECT_UNAUTHORIZED=0, but that's not pretty enough ;)
I'm using OS X 10.8.3 with OpenSSL 0.9.8r, node v0.9.8.
The default root certificates are static and compiled into the node binary.
https://github.com/nodejs/node/blob/v4.2.0/src/node_root_certs.h
You can make node use the system's OpenSSL certificates. This is done by starting node via:
node --use-openssl-ca
See the docs for further information.
See this answer on how system certificates are extended for Debian and Ubuntu
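On a Debian/Ubuntu system, combining those two suggestions might look like the sketch below; my-ca.pem and app.js are placeholders for your CA certificate and your application:
# add your CA to the system trust store
sudo cp my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates
# start node against the system OpenSSL trust store instead of the compiled-in list
node --use-openssl-ca app.js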
If you're using the tls module (and it seems like you are) with tls.connect you can pass a ca param in the options that is an array of strings or buffers of certificates you want to trust.
