Node.js HTTPS server verification failed

I created a Node.js app that uses HTTPS. I followed a tutorial from nodejitsu: https://docs.nodejitsu.com/articles/HTTP/servers/how-to-create-a-HTTPS-server/
But when I sent a request to the server, I got the following error:
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
When I open it in Chrome, I can only access the page after clicking Advanced and proceeding to the page.
This is what I filled in when generating the certificate:
Country Name (2 letter code) [AU]:ID
State or Province Name (full name) [Some-State]:East Java
Locality Name (eg, city) []:[my city name]
Organization Name (eg, company) [Internet Widgits Pty Ltd]:[some string]
Organizational Unit Name (eg, section) []:[some string]
Common Name (e.g. server FQDN or YOUR name) []:[IP address of the server (Azure server) without port and 'https://']
Email Address []:[my personal yahoo email]
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:[empty]
An optional company name []:[empty]
The app is hosted on an Azure server.
How can I fix it?

You are using a self-signed certificate, which is what causes the verification failure. To remove the validation error you need a certificate signed by a trusted CA, or you have to make each client explicitly trust your self-signed certificate.
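If this is only for testing, explicitly trusting the certificate is often enough: curl accepts a --cacert option pointing at your certificate file, and in Node.js you can pass it as the ca option. A minimal client sketch, assuming the certificate you generated is saved as server-cert.pem (the file name, host and port are placeholders):

// Minimal sketch (not the only fix): make the client trust the self-signed
// certificate directly. 'server-cert.pem', the host name and the port are
// placeholders for illustration.
const https = require('https');
const fs = require('fs');

https.get({
    hostname: 'your-server.example.com', // must match the certificate's CN / subjectAltName
    port: 443,
    path: '/',
    ca: fs.readFileSync('server-cert.pem') // trust this certificate instead of the default CA bundle
}, (res) => {
    console.log('status:', res.statusCode);
    res.resume();
});

Note that the name you connect with still has to match the certificate's Common Name or subjectAltName, so a certificate issued for a bare IP address will keep failing hostname checks in some clients.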

Related

TLS handshake fail. HTTPS request to HAproxy to http and then encrypt it again to forward request to ssl server

I have an HTTPS request and need to intercept it, read values, and forward the same SSL request to the destination. I have all the required crt, key, and CA files with me. I am aware that HAProxy ACLs do not work at layer 4, but I'm trying to find a workaround to decrypt the message, read it, encrypt it again, and forward it. The reason for reading the message is that, using ACLs, I need to read the path differences in various requests and route each request to a different server accordingly. I am trying to intercept the client's request to the server; the request is SSL by default and the server expects an SSL request.
ssl crt: I created a new user with a new crt/key pair and used the server's Certificate Signing Request to authenticate it against the CA on the server.
The scenario is that I have an incoming SSL request which I'm capturing in HAProxy's frontend with the server certificate. While forwarding that request to a test web server I can see that it has changed from HTTPS to HTTP. Now when I try to re-encrypt it, the original destination is not able to accept the request since it is not SSL. I have tried adding the certs in the backend, but it did not help. Please check my current HAProxy config and help if possible; I am not an expert in network communication, encryption, or HAProxy.
frontend test
    bind IP:6443 ssl crt <location>
    option httplog
    mode http
    default_backend testback

backend testback
    mode http
    balance roundrobin
    option http-check
    server <host> IP:6443 check fall 3 rise 2 ssl verify required ca-file <loc> crt <loc>
To verify that my certificates are valid and can connect:
openssl s_client -connect :6443 -cert myuser.crt -key myuser.key -CAfile ca.crt
Output:
SSL handshake has read 1619 bytes and written 2239 bytes
Verification: OK
So I presume there is no problem with the certificates; the problem occurs when using HAProxy for the connection.
Error:
Unable to connect to the server: x509: certificate specifies an incompatible key usage
HAProxy error:
2021-08-12T14:45:36.930478+02:00 parasilo-27 haproxy[21562]: :34672 [12/Aug/2021:14:45:36.927] server/1: SSL handshake failure
2021-08-12T14:45:37+02:00 localhost haproxy[21562]: :34674 [12/Aug/2021:14:45:37.438] server/1: SSL handshake failure
To sum up what was analyzed in the comments, as asked; perhaps it will be useful to somebody someday.
HAProxy's config turned out to be correct, but the generated certificates had the wrong extended key usage (an X509v3 extension).
Command to list extended key usage:
openssl x509 -in /path/to/cert.pem -noout -ext extendedKeyUsage
Certificates bought from a commercial CA usually show X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication. The original poster used self-signed, self-generated certificates, and the certificate used on HAProxy's frontend had only TLS Web Client Authentication, whereas the frontend requires TLS Web Server Authentication if this extension is present at all.
That resulted in the error message:
Error: kubectl get po: Unable to connect to the server: x509: certificate specifies an incompatible key usage
As a consequence, HAProxy logged SSL handshake failure without any further details, as is its habit.
After adding TLS Web Server Authentication to the certificate in HAProxy's frontend section and TLS Web Client Authentication to the certificate in HAProxy's backend section, the original poster reported success.
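If you prefer to check the extended key usage from Node.js rather than with openssl, a sketch like the following should also work; the host and port are placeholders, and the OIDs 1.3.6.1.5.5.7.3.1 and 1.3.6.1.5.5.7.3.2 correspond to serverAuth and clientAuth respectively:

// Sketch: read a server certificate's extended key usage from Node.js.
// Host/port are placeholders; rejectUnauthorized is disabled only because we
// just want to read the certificate here, not to trust it.
const tls = require('tls');

const socket = tls.connect({ host: '127.0.0.1', port: 6443, rejectUnauthorized: false }, () => {
    const cert = socket.getPeerCertificate();
    // ext_key_usage is a list of OIDs:
    // 1.3.6.1.5.5.7.3.1 = TLS Web Server Authentication (serverAuth)
    // 1.3.6.1.5.5.7.3.2 = TLS Web Client Authentication (clientAuth)
    console.log(cert.ext_key_usage);
    socket.end();
});
socket.on('error', (err) => console.error(err));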

Node won't make connection to server with self-signed certificate

A little background:
I have a Tesla Powerwall which has its own built-in web server that can be accessed on the local network. It only allows SSL connections and uses a self-signed certificate. I have set up port forwarding that allows me to connect to the web server remotely. For a while, I've had working node.js apps both on a local Pi and on a remote AWS instance that made requests to the Powerwall web server to retrieve bits of information.
Since yesterday, Tesla updated my Powerwall and now everything has stopped working. I can only assume they have changed something regarding how the web server handles its self-signed SSL certificate.
Firstly, my Pi running on the local network would not make successful node.js requests to the local server. I managed to get this working by adding an entry to my /etc/hosts file like this:
192.168.1.42 powerwall
and now my node.js app can successfully connect again using https://powerwall
When using Safari or Chrome to connect remotely, I can connect if I use my IP address (after trusting the self-signed cert) but cannot connect when using my DDNS address that points to home (I have confirmed the DDNS is working). It gives me the error:
Safari can’t open the page “https://home.xxxxxx.com:4444” because Safari can’t establish a secure connection to the server “ home.xxxxxx.com”.
My AWS node.js app will not connect regardless of whether I use the IP address or the DDNS address, giving me the error:
Client network socket disconnected before secure TLS connection was established
This is how I am trying to connect:
const request = require('request');

request({
    url: 'https://xx.xx.xx.xx:xxxx/api/system_status/soe',
    method: 'GET',
    rejectUnauthorized: false, // disables certificate verification
    requestCert: true,
    agent: false,
    headers: headers
}, (error, response, body) => {
    // handle the error/response here
});
I have tried adding:
secureProtocol: 'TLSv1_method'
and attempted the methods TLSv1_method, TLSv1_1_method and TLSv1_2_method in case it needed a specific method, with no luck.
Does the above sound like the SSL settings on the server have been screwed down?
What can I do to:
a) access the site remotely through a browser using the DDNS address
b) force node.js to not be interested in the SSL certificate at all and just connect
----- EDIT
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
46:.....
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=US, ST=California, L=Palo Alto, O=Tesla, OU=Tesla Energy Products, CN=335cbec3e3d8baee7742f095bd4f8f17
Validity
Not Before: Mar 29 22:17:28 2019 GMT
Not After : Mar 22 22:17:28 2044 GMT
Subject: C=US, ST=California, L=Palo Alto, O=Tesla, OU=Tesla Energy Products, CN=335cbec3e3d8baee7742f095bd4f8f17
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
04:ca...
ASN1 OID: prime256v1
NIST CURVE: P-256
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Alternative Name:
DNS:teg, DNS:powerwall, DNS:powerpack, IP Address:192.168.90.1, IP Address:192.168.90.2, IP Address:192.168.91.1
With HTTPS, the domain needs to match what’s signed in the cert; it’s usually the public domain.
It’s not supposed to be the IP, and it certainly won't be the DDNS hostname (if I understood correctly) that you’re pointing at it.
There are three possible approaches (a sketch of the first follows below):
Add the certificate from the powerwall as a ‘known’ rootCA (as already suggested),
Tell node.js to skip checking the validity of the certificate, or
Try with HTTP 😬
Proper operation of the HTTPS connection process will also depend on you accessing the powerwall using the domain name registered in the certificate (which may require your DNS server to respond with the appropriate IP when the lookup is made ~> like DNS spoofing proof-of-concept for a CTF).
Also, to your musings in the comments: some browsers may allow you to override an expired or self-signed cert (or one presented when connecting via IP), but it’s very sketchy to connect to a domain and get a cert that specifies an entirely different domain (which is why the browser might not even present you the option).
HTH
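For the first approach, a rough sketch using the same request library as in the question; the certificate file name is an assumption (export the Powerwall's certificate with a browser or openssl s_client first), and https://powerwall relies on the /etc/hosts entry from earlier so that the hostname matches the DNS:powerwall entry in the certificate's subjectAltName:

// Sketch of approach 1: trust the Powerwall's self-signed certificate explicitly
// instead of disabling verification. 'powerwall-cert.pem' is an assumed file name.
const fs = require('fs');
const request = require('request');

request({
    url: 'https://powerwall/api/system_status/soe', // hostname matches the cert's SAN via /etc/hosts
    method: 'GET',
    agentOptions: {
        ca: fs.readFileSync('powerwall-cert.pem') // trust only this certificate
    }
}, (error, response, body) => {
    if (error) return console.error(error);
    console.log(body);
});

Alternatively, the NODE_EXTRA_CA_CERTS environment variable can point Node.js at the same PEM file without any code changes.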
Post-resolution update:
How to get the DNS name to match what's on the certificate:
- add an entry in the client system's /etc/hosts or equivalent
- connect using the hostname (not the IP)
How to get public-internet connections through to the local host (when connecting over the public Internet):
- get a public-facing HTTPS cert (e.g.) that matches your DDNS domain or /etc/hosts entry
- host an HTTP proxy that relays requests from the Internet (hopefully with filtering/validation) to the powerwall; you will have 2 HTTPS connections, one from AWS -> proxy and one from proxy -> powerwall (see the sketch further below)
- or host a custom API that returns exactly the [minimum] info needed by the AWS service
How to trust a self-signed certificate? (this wasn't the blocking factor)
Try this for debugging:
openssl s_client \
-connect 192.168.1.42:4444 \
-CAfile /path/to/self-signed-cert \
-verify_hostname powerwall \
-debug
You can find more options with openssl s_client -help.
Do you have any servers running on your home network (apache, nginx, etc.)? You're probably trying to connect to https://my.ddns.com and passing it directly to the powerwall, which has a certificate for powerwall.
Connecting to a host that returns a certificate which does not contain that hostname will cause a TLS error. You probably want to run a reverse proxy, where your server hosts my.ddns.com, sets up the TLS connection and then forwards the traffic (without TLS) to 192.168.1.44.
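Both the update above and this answer amount to terminating TLS yourself and relaying requests to the Powerwall. A minimal sketch of such a relay in Node.js, assuming you already have a key/certificate pair that is valid for your DDNS name; the file names, the listening port and the Powerwall address are placeholders:

// Minimal relay sketch: terminate TLS with a certificate that matches the public
// DDNS name, then forward each request to the Powerwall on the LAN.
// File names, the listening port and the Powerwall address are assumptions.
const https = require('https');
const fs = require('fs');

const powerwallCa = fs.readFileSync('powerwall-cert.pem'); // the Powerwall's self-signed cert

const server = https.createServer({
    key: fs.readFileSync('ddns-key.pem'),   // key/cert valid for home.xxxxxx.com
    cert: fs.readFileSync('ddns-cert.pem')
}, (clientReq, clientRes) => {
    const proxyReq = https.request({
        host: '192.168.1.42',        // Powerwall on the LAN
        servername: 'powerwall',     // name checked against the cert's subjectAltName
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
        ca: powerwallCa
    }, (proxyRes) => {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);
    });
    proxyReq.on('error', (err) => {
        clientRes.writeHead(502);
        clientRes.end('Bad gateway: ' + err.message);
    });
    clientReq.pipe(proxyReq);
});

server.listen(4444);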

Azure oauth v2.0 interaction_required error with trusted ip and MFA

I have set up this Azure AD authentication workflow on my web server:
1 - The user logs in from this URL:
https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?
client_id={client_id}
&redirect_uri=https://example.com/callback
&scope=openid%20https%3A%2F%2Fgraph.windows.net%2Fuser.read
&response_mode=query
&response_type=code
2 - (MFA) The user submits a form with a code received on their phone
3 - The user is redirected to https://example.com/callback?code={azure_given_code}
4 - I exchange the {azure_given_code} for a token via the following POST request, server side (a form-encoded sketch of this call follows after step 5):
POST https://login.microsoftonline.com/{tenant_id}/oauth2/token
{
"client_id": "{client_id}",
"client_secret": "{client_secret}",
"code": "{azure_given_code}",
"grant_type": "authorization_code",
"redirect_uri": "https://example.com/callback"
}
5 - I receive an access token and can retrieve the logged-in user from this URL, again server side:
https://graph.windows.net/me?api-version=1.6
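A Node.js sketch of the step 4 exchange, just for illustration (the token endpoint expects the parameters as an application/x-www-form-urlencoded body rather than JSON; the placeholders are the same as above):

// Sketch of step 4 as a form-encoded POST to the token endpoint.
// {client_id}, {client_secret}, {azure_given_code} and {tenant_id} are placeholders.
const https = require('https');
const querystring = require('querystring');

const body = querystring.stringify({
    client_id: '{client_id}',
    client_secret: '{client_secret}',
    code: '{azure_given_code}',
    grant_type: 'authorization_code',
    redirect_uri: 'https://example.com/callback'
});

const req = https.request({
    hostname: 'login.microsoftonline.com',
    path: '/{tenant_id}/oauth2/token',
    method: 'POST',
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body)
    }
}, (res) => {
    let data = '';
    res.on('data', (chunk) => (data += chunk));
    res.on('end', () => console.log(JSON.parse(data)));
});

req.write(body);
req.end();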
I added our office IP address to the trusted IP list so that users can bypass MFA when connecting from our network.
Everything works fine if I do this workflow from outside the office network (from an untrusted IP that triggers MFA).
But with my office IP, step 2 is bypassed (as expected) and at step 3 I get the following error:
{
"error": "interaction_required",
"error_description": "AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access …", "error_codes": [50076],
"timestamp": "2020-03-13 12:54:58Z",
"trace_id": '...'
}
What am I missing here to make this workflow work in both cases (from a trusted and from an untrusted IP)?
I am really stuck with this issue; many thanks for your help.
Here is how I solved my problem.
When the user is redirected to the login URL https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?..., their IP is used to determine whether or not they can bypass MFA according to the trusted IP rules.
Then, when the azure_given_code is exchanged, the request for the token is made server side, using the server's IP, which is the cause of the error (the server IP is not a trusted one).
Doing the POST to https://login.microsoftonline.com/{tenant_id}/oauth2/token client side solved the issue, since the IP used for that request is a trusted one.

Insomnia and NodeJS: "Error: Peer certificate cannot be authenticated with given CA certificates"

I'm trying to send a GET request using the Insomnia app to a NodeJS server app -- I didn't write the app but have joined the team.
Although I get a reasonable JSON response when I hit the URL -- https://127.0.0.1:9999 -- from the browser, I get the error "Error: Peer certificate cannot be authenticated with given CA certificates" when I'm using Insomnia. Using a Mac, MacOS 10.12.4. Node v6.3.1.
The Insomnia timeline says:
* Preparing request to https://127.0.0.1:9999/
* Enable automatic URL encoding
* Enable SSL validation
* Enable cookie sending with jar of 2 cookies
* Hostname in DNS cache was stale, zapped
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 9999 (#8)
* WARNING: using IP address, SNI is being disabled by the OS.
* SSL certificate problem: Invalid certificate chain
* Curl_http_done: called premature == 1
* Closing connection 8
Thanks for any help!
There is little documentation on how Insomnia handles certificates. As long as they are normal certificates signed by a typical CA, there is usually no problem. Since you refer to your loopback address (127.0.0.1), I assume that you're testing with a self-signed certificate.
I noticed that Insomnia uses the Mozilla list of certificate authorities; it does not use your operating system's list.
The list is stored in a text file in a directory like C:\Temp\insomnia_5.12.4. In my case it was, for example, 2017-01-18.pem. You can add your own signing authority's certificate to this file.
I didn't look into how stable this file is or how it is created.
You can also work around the certificate errors by disabling validation in the settings (Settings > Validate SSL Certificates).

Will a Windows Store app always disallow a self-signed certificate even if explicitly trusted?

I've seen both this and this — same problem, different question.
I'm trying to connect my Windows 8.1 Store app to an ASP.NET Web API web service, secured over HTTPS using a self-signed certificate. It's a proof-of-concept application that will end up on < 5 different machines and be seen only internally, so I was planning to just install the certificate as trusted on each of the target machines.
When I try this on my development setup, both HttpClient APIs fail to establish the trust relationship when calling the service.
Windows.Web.Http.HttpClient exception: "The certificate authority is invalid or incorrect"
System.Net.Http.HttpClient exception: "The remote certificate is invalid according to the validation procedure."
My self-signed certificate (public-key-only .cer version) is installed in both the "User" and "Local Machine" Trusted Root Certification Authorities on the client. I'm really surprised that this isn't enough to get WinRT to trust it. Is there something I'm missing, or is there just no way to set up the trust relationship for a self-signed SSL certificate that will make HttpClient happy?
Details on my setup:
ASP.NET Web API
Azure web role running in Azure emulator
Cert issuer: 127.0.0.1
Cert subject: 127.0.0.1
Cert key: 2048-bit
Windows 8.1 Store application
Certificate (.cer file with public key only) installed in User\Trusted Root Certification Authorities
Certificate (.cer file with public key only) installed in Local Machine\Trusted Root Certification Authorities
Certificate (.cer file with public key only) added to Windows Store app manifest under "CA"
I am not asking for a workaround to configure HttpClient to accept self-signed or invalid certificates in general — I just want to configure a trust relationship with THIS one. Is this possible?
You should be able to find out what the problem with the certificate is by making a request like this:
// using Windows.Web.Http;
private async void Foo()
{
    HttpRequestMessage request = null;
    try
    {
        request = new HttpRequestMessage(
            HttpMethod.Get,
            new Uri("https://localhost"));
        HttpClient client = new HttpClient();
        HttpResponseMessage response = await client.SendRequestAsync(request);
    }
    catch (Exception ex)
    {
        // Something like: 'Untrusted, InvalidName, RevocationFailure'
        Debug.WriteLine(String.Join(
            ", ",
            request.TransportInformation.ServerCertificateErrors));
    }
}
Using a HttpBaseProtocolFilter you can ignore certificate errors:
// using Windows.Web.Http;
// using Windows.Web.Http.Filters;
// using Windows.Security.Cryptography.Certificates;
HttpBaseProtocolFilter filter = new HttpBaseProtocolFilter();
filter.IgnorableServerCertificateErrors.Add(ChainValidationResult.Untrusted);
filter.IgnorableServerCertificateErrors.Add(ChainValidationResult.InvalidName);
filter.IgnorableServerCertificateErrors.Add(ChainValidationResult.RevocationFailure);
HttpClient client = new HttpClient(filter);
HttpResponseMessage response = await client.SendRequestAsync(request);
The piece I was missing turned out to be that the certificate wasn't in the list of IIS Server Certificates on my local machine!
Opening IIS Manager and checking out the Server Certificates section, I did find a 127.0.0.1 SSL certificate already set up by the Azure emulator:
CN = 127.0.0.1
O = TESTING ONLY
OU = Windows Azure DevFabric
However, my own self-signed certificate that I made outside of IIS, also with CN=127.0.0.1, was not in the list. I imported it, and now my Windows Store app's HttpClient connects happily (certificate warnings went away in Chrome and IE as well!)
If anyone can firm up the technical details on this, please comment — this fix feels a bit magical and I'm not sure I can pinpoint precisely why this worked. Possibly some confusion on my part between the two certs for 127.0.0.1, even though the thumbprint I had configured in my Azure project was always the one I was intending to use?
