https.createServer ignoring ca array - node.js

I use the following options passed to https.createServer,
options =
    ca: splitca fs.readFileSync sslpaths.capath, encoding:'utf8'
    key: fs.readFileSync sslpaths.keypath, encoding:'utf8'
    cert: fs.readFileSync sslpaths.certpath
Where splitca just splits the two PEM blocks of the CA crt bundle file. However, sometimes Chrome does not like this, and when I load my domain it says that the certificate cannot be trusted. Then sometimes it works just fine and shows two Comodo CA nodes coming from the AddTrust root, followed by my server's certificate. When I use openssl s_client -connect mydomain.com:443 -showcerts I get the error 'unable to get local issuer certificate'. When I remove the ca parameter completely, Chrome will still sometimes work, but openssl still does not show the two CA PEM blocks in the certificate chain. I am guessing that Chrome is doing its own lookup of the CA and caching the certs?
I also tried prepending the CA crt file in front of my server crt file; before upgrading my Node.js (v0.12) I would get some kind of SSLv3 handshake failure from openssl. Now when I try with Node.js v5.0, I get error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch thrown from Node.js. Any help would be appreciated.
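For reference, the snippet above reads like CoffeeScript; a self-contained plain-JavaScript equivalent of the same setup, with placeholder paths and a hypothetical hand-rolled stand-in for the splitca helper, would look roughly like this:

var fs = require('fs');
var https = require('https');

// Placeholder paths; adjust to your own layout.
var sslpaths = {
  capath: '/etc/ssl/mydomain/ca-bundle.crt',
  keypath: '/etc/ssl/mydomain/mydomain.key',
  certpath: '/etc/ssl/mydomain/mydomain.crt'
};

// Hypothetical stand-in for splitca: turn the CA bundle into an array
// with one PEM block per entry, which is what the ca option expects.
function splitBundle(bundle) {
  return bundle
    .split(/(?=-----BEGIN CERTIFICATE-----)/)
    .map(function (block) { return block.trim(); })
    .filter(Boolean);
}

var options = {
  ca: splitBundle(fs.readFileSync(sslpaths.capath, { encoding: 'utf8' })),
  key: fs.readFileSync(sslpaths.keypath, { encoding: 'utf8' }),
  cert: fs.readFileSync(sslpaths.certpath)
};

https.createServer(options, function (req, res) {
  res.end('ok');
}).listen(443);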

Related

SSL Security Error for some mobile users

I moved my website arvandkala.ir to https recently.
The problem is that some users (especially on mobile) get an SSL privacy error.
The users' mobile clocks are correct,
and there is no mixed content on the website.
Firefox error code:
SEC_ERROR_UNKNOWN_ISSUER
The issuer is Certum and is trusted by Firefox.
TLDR: it's the chain cert
You need to get the correct chain cert from the CA and configure it in your server.
Normally a CA provides the correct chain cert (or sometimes certs plural) when you buy or obtain your server cert, and also makes all its chain certs (usually several) available on its website, but since I don't know Polish and don't know any customers of your CA certum.pl I can't address these approaches here. Nowadays a common alternative is for the cert itself to specify a way to obtain its parent cert, in the caIssuers attribute in the AuthorityInfoAccess extension. This can be seen with many tools, including (at least) desktop browsers, OpenSSL (x509 -noout -text -in $file), and Java keytool (-printcert -v -file $file), and your cert does have it, pointing to http://repository.certum.pl/dvcasha2.cer . Fetching that URL with a tool that does not interpret the content (i.e. not a browser, but things like curl wget perl python or javascript) does yield the correct cert, in DER format.
Configuring your server varies hugely depending on the server, which you didn't identify. Your server identifies itself in a response as Server: Apache/2.4.7 (Ubuntu), but this could be falsified, because some people consider that a good way to confuse attackers (not very), or mistaken, because some other terminator is in front. If it is accurate, then although there are other possibilities I'll assume the common default, mod_ssl. The documentation for Apache 2.4 mod_ssl is located on the Apache website under docs / 2.4 / modules / mod_ssl . As that page tells you, for 2.4.8 and up you can include the PEM-format chain cert with the server cert in the file specified by SSLCertificateFile, but below that version you must put the chain cert(s) in a file specified by SSLCertificateChainFile instead. This config (certificate including chain, plus private key) can be per virtualhost, or if you don't need them to be different it can be global. On Ubuntu the usual practice (though not mandatory) is to put each virtualhost config in a separate file under /etc/apache2/sites-available and link it under (same)/sites-enabled.
Since the certificate obtained from the CA is in DER format, you must first convert it to PEM format. This can be done directly by OpenSSL with openssl x509 -inform der -in $derfile -out $pemfile, or by numerous other programs that can import DER format and then write out PEM format (including at least Windows, Firefox/NSS, and Java).
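If you prefer to stay in one tool, a rough Node.js sketch of both steps (fetch the DER cert from the caIssuers URL mentioned above, then do the base64/PEM wrapping by hand) could look like this; the output file name is arbitrary:

var http = require('http');
var fs = require('fs');

// caIssuers URL taken from the AuthorityInfoAccess extension of the server cert.
var url = 'http://repository.certum.pl/dvcasha2.cer';

http.get(url, function (res) {
  var chunks = [];
  res.on('data', function (chunk) { chunks.push(chunk); });
  res.on('end', function () {
    var der = Buffer.concat(chunks);
    // PEM is just the DER bytes base64-encoded, wrapped at 64 columns,
    // between BEGIN/END CERTIFICATE markers.
    var body = der.toString('base64').match(/.{1,64}/g).join('\n');
    var pem = '-----BEGIN CERTIFICATE-----\n' + body + '\n-----END CERTIFICATE-----\n';
    fs.writeFileSync('chain.pem', pem);
    console.log('wrote chain.pem');
  });
}).on('error', console.error);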

iOS 11: ATS (App Transport Security) no longer accepts custom anchor certs?

I am using a self-signed certificate with NSMutableURLRequest, and when the certificate is anchored using a custom certificate with SecTrustSetAnchorCertificates, iOS 11 fails with the following error message:
refreshPreferences: HangTracerEnabled: 1
refreshPreferences: HangTracerDuration: 500
refreshPreferences: ActivationLoggingEnabled: 0 ActivationLoggingTaskedOffByDA:0
ATS failed system trust
System Trust failed for [1:0x1c417dc40]
TIC SSL Trust Error [1:0x1c417dc40]: 3:0
NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9802)
Task <721D712D-FDBD-4F52-8C9F-EEEA28104E73>.<1> HTTP load failed (error code: -1200 [3:-9802])
Task <721D712D-FDBD-4F52-8C9F-EEEA28104E73>.<1> finished with error - code: -1200
What used to work in iOS 10 no longer works in iOS 11.
I am aware that iOS 11 no longer supports the following:
RC4 3DES-CBC AES-CBC
MD5 SHA-1
RSA public keys smaller than 2048 bits (for all TLS connections to servers)
http://
SSLv3
TLS 1.0
TLS 1.1
And the certificate does not use these except for one fingerprint, which is SHA-1, but a SHA-256 fingerprint is also listed.
And by adding the following we can bypass the ATS (App Transport Security) error:
<key>NSAppTransportSecurity</key>
<dict>
<key>NSExceptionDomains</key>
<dict>
<key>mydomain.example</key>
<dict>
<!--Include to allow subdomains-->
<key>NSIncludesSubdomains</key>
<true/>
<key>NSExceptionRequiresForwardSecrecy</key>
<false/>
</dict>
</dict>
</dict>
Installing the root / anchor certificate onto the phone itself also works, without the need to whitelist mydomain.example.
Does this mean that ATS no longer supports self-signed certificates?
The following worked in iOS 10:
SecTrustSetAnchorCertificates(serverTrust, (__bridge CFArrayRef)certs);
Using nscurl on a Mac shows many failures, and after installing the root certificate into the System keychain, nscurl succeeds.
I did this on macOS 10.12.6.
nscurl --verbose --ats-diagnostics https://
How can I make this work with a custom certificate, but without the need to install certificates or whitelist the domain?
Some time ago macOS started enforcing a requirement that CA certificates can't also be used as end-entity (eg webserver) certificates. Is it possible that iOS added this requirement between 10 and 11?
If so, the workaround is simple: you create your self-signed CA certificate, and use that certificate to issue the webserver certificate. The CA certificate (basicConstraints: CA=True) is the trust anchor that goes in your trust store; the end-entity certificate (omit basicConstraints; extendedKeyUsage=serverAuth) is presented by the web server. You're just not allowed to use the exact same certificate for both any more.
(This should be a comment but I don't have enough points to comment yet.)

"This Connection is Untrusted" but only on firefox

I have a NodeJS server on Amazon EC2.
I'm trying to set up SSL using certificates from "COMODO RSA Domain Validation Secure Server CA".
I got it working for all browsers except Firefox. Is this a common issue?
Please check that the server provides all intermediate certificates (the trust chain). A common issue is to forget the intermediate certificates and then get errors in some browsers and no errors in others. This is caused by browsers caching the intermediate certificates: if you've visited a site using the same intermediate certificates before, the browser will dutifully use these cached intermediates if the server forgot to serve them. But if the browser has never visited such a site before, the intermediates are not cached and the verification will fail.
A good test is to use openssl s_client -connect your.https.server:443 and look at the chain of certificates it provides. Also, https://www.ssllabs.com/ssltest/analyze.html will point out such problems.
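Since the question is about a Node.js server, here is a minimal sketch of sending the intermediates along with the server certificate; the file names are placeholders, and the chained file is simply the server cert followed by the Comodo intermediate certs concatenated together:

var fs = require('fs');
var https = require('https');

var options = {
  key: fs.readFileSync('mydomain.key'),
  // Server certificate first, then the intermediate certificates.
  // Node sends every certificate in this file as the chain, so Firefox
  // no longer depends on having cached the intermediates.
  cert: fs.readFileSync('mydomain.chained.crt')
};

https.createServer(options, function (req, res) {
  res.end('ok');
}).listen(443);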

SSL Error: unable to get local issuer certificate

I'm having trouble configuring SSL on a Debian 6.0 32bit server. I'm relatively new with SSL so please bear with me. I'm including as much information as I can.
Note: The true domain name has been changed to protect the identity and integrity of the server.
Configuration
The server is running using nginx. It is configured as follows:
ssl_certificate /usr/local/nginx/priv/mysite.ca.chained.crt;
ssl_certificate_key /usr/local/nginx/priv/mysite.ca.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_verify_depth 2;
I chained my certificate using the method described here
cat mysite.ca.crt bundle.crt > mysite.ca.chained.crt
where mysite.ca.crt is the certificate given to me by the signing authority, and the bundle.crt is the CA certificate also sent to me by my signing authority. The problem is that I did not purchase the SSL certificate directly from GlobalSign, but instead through my hosting provider, Singlehop.
Testing
The certificate validates properly on Safari and Chrome, but not on Firefox. Initial searching revealed that it may be a problem with the CA.
I explored the answer to a similar question, but was unable to find a solution, as I don't really understand what purpose each certificate serves.
I used openssl's s_client to test the connection, and received output which seems to indicate the same problem as the similar question. The error is as follows:
depth=0 /OU=Domain Control Validated/CN=*.mysite.ca
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 /OU=Domain Control Validated/CN=*.mysite.ca
verify error:num=27:certificate not trusted
verify return:1
A full detail of openssl's response (with certificates and unnecessary information truncated) can be found here.
I also see the warning:
No client certificate CA names sent
Is it possible that this is the problem? How can I ensure that nginx sends these CA names?
Attempts to Solve the Problem
I attempted to solve the problem by downloading the root CA directly from GlobalSign, but received the same error. I updated the root CA's on my Debian server using the update-ca-certificates command, but nothing changed. This is likely because the CA sent from my provider was correct, so it led to the certificate being chained twice, which doesn't help.
0 s:/OU=Domain Control Validated/CN=*.mysite.ca
i:/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G2
1 s:/O=AlphaSSL/CN=AlphaSSL CA - G2
i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
2 s:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
Next Steps
Please let me know if there is anything I can try, or if I just have the whole thing configured incorrectly.
jww is right — you're referencing the wrong intermediate certificate.
As you have been issued with a SHA256 certificate, you will need the SHA256 intermediate. You can grab it from here: http://secure2.alphassl.com/cacert/gsalphasha2g2r1.crt
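As a quick cross-check (in addition to openssl s_client and the SSL Labs test), you can ask Node itself which chain the server presents once the new intermediate is in place; the host name below is a placeholder:

var tls = require('tls');

var socket = tls.connect({
  host: 'www.mysite.ca',
  port: 443,
  servername: 'www.mysite.ca',
  // Connect even while the chain is still broken, so we can inspect it.
  rejectUnauthorized: false
}, function () {
  var cert = socket.getPeerCertificate(true);
  // Walk the presented chain: leaf, then intermediates, then (maybe) the root.
  while (cert) {
    console.log('subject:', cert.subject && cert.subject.CN,
                '| issuer:', cert.issuer && cert.issuer.CN);
    if (!cert.issuerCertificate || cert.issuerCertificate === cert) break;
    cert = cert.issuerCertificate;
  }
  socket.end();
});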

Node.js HTTPS server ERR_EMPTY_RESPONSE

I created the server.key and server.csr files using openssl req -nodes -newkey rsa:2048 -keyout server.key -out server.csr. I created a SSL certificate with startssl.com which gave me a certificate file. Then in my node.js application I read the key and certificate files:
var app = module.exports = express.createServer({
key: fs.readFileSync('server.key'),
cert: fs.readFileSync('server.cert')
});
But, now I get an empty response from my application, a "No data received" message. What could be causing this? I'm very new to SSL and how it all works, so any help with this is very much appreciated.
More info: I generated the two files, the key and the CSR, on my VPS server (production server), and now I'm trying to get them to work on my localhost (first, before I commit my code to production, I have to test that it works). So, it could be due to the fact that my localhost (development environment) is on a different domain from my VPS server (production environment). Could this be the case? If so, how can I make it so that the localhost and production environments use the same certificate?
Or, would you suggest I create another certificate for my development environment? The only problem I see with that, is that I wouldn't have a domain for my dev environment because it's done locally. I'd rather much use the same certificate (even if that means a broken lock icon or something on localhost) for the sake of simplicity.
I know this is an old question, but I encountered the same thing today. I would get the same result back from express (ERR_EMPTY_RESPONSE).
The fix? Be sure to specify https, and not http, in your test browser (e.g., https://localhost:8443).
If you previously used middleware to forward all http requests to https you wouldn't have seen this problem before. Also, expect your browser to complain about the certificate, but proceed through anyway (in chrome this takes several clicks).
You can troubleshoot errors by connecting to your application with curl --insecure --verbose. Generally you shouldn't use an SSL certificate on more than one host. You can make a self-signed one to test locally and use the startssl one in production. But in any case, the CN in the cert needs to match the hostname used to connect to the site to avoid annoying browser warnings. You can always make up a domain name for your machine like sam.local and put that in your /etc/hosts file and use that in your self-signed certificate as well as your browser address bar.
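Putting that together, a bare-bones local test server might look like the following (the key/cert file names are whatever your self-signed pair is called); browse to it with https://, not http://:

var fs = require('fs');
var https = require('https');

https.createServer({
  key: fs.readFileSync('localhost.key'),
  cert: fs.readFileSync('localhost.cert')
}, function (req, res) {
  res.end('hello over TLS');
}).listen(8443, function () {
  console.log('listening on https://localhost:8443');
});
// https://localhost:8443 works (after clicking through the self-signed
// certificate warning); a plain http:// request to the same port is what
// shows up in the browser as an empty response.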
