Securing service calls from Node using SSL - node.js

I am trying to secure the service calls that are made from my Node app to other services. All the other services have HTTPS enabled. I tried the following methods:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'
which, as per my understanding, ignores all certificate errors, so I removed it from the code.
I am using the request module, where you can configure:
- key: my private key file,
- cert: my certificate file,
- ca: the certificate authority chain.
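In code, the configuration looked roughly like this (a sketch of the setup described above; the URL and file paths are placeholders):

    // Sketch of the request configuration described above.
    const fs = require('fs');
    const request = require('request');

    request({
        url: 'https://other-service.example.com/api', // placeholder URL
        key: fs.readFileSync('my-private-key.pem'),   // my private key file
        cert: fs.readFileSync('my-certificate.pem'),  // my certificate file
        ca: fs.readFileSync('ca-chain.pem')           // certificate authority chain
    }, (err, res, body) => {
        if (err) return console.error(err); // this is where the TLS errors showed up
        console.log(body);
    });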
With that configuration, it was throwing UNABLE_TO_VERIFY_LEAF_SIGNATURE.
I found out that Node doesn't read CAs from the system; it ships with its own CA list. So I included node-ssl-root-cas, which fetches the latest CAs from the internet.
Then, using an SSL analyser, I found that my domain wasn't serving the intermediate CA certificate. I downloaded it from our CA, built a CA chain, and attached it to ssl-root-cas. After that I was able to make requests successfully.
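The fix looked roughly like this (a sketch based on the ssl-root-cas README; the intermediate file name is a placeholder):

    // Load the bundled root CAs and add our missing intermediate certificate.
    const rootCas = require('ssl-root-cas').create();
    rootCas.addFile(__dirname + '/intermediate-ca.pem'); // downloaded from our CA

    // Make all https requests use this CA list by default.
    require('https').globalAgent.options.ca = rootCas;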
But even if I remove key and cert from my request, I am still able to make the request and get a result. How can I check that my request and response are actually encrypted? Or is Node just ignoring errors?

FYI, Node will use its built-in list of well-known certificate authorities (bundled with Node, not read from the operating system) if you don't provide your own with the "ca" property. When you do provide your own, the built-in ones are ignored. This is by design, as providing your own CA usually means that you want to trust only certificates signed by your own CA. If you aren't using your own CA, you can skip setting the "ca" property. If you are, then I'm not sure why you would need to provide the full list of commonly trusted CAs as well; that seems like a pretty odd use case.
You can use the https module to make requests without providing your own key and cert. This is expected and documented behaviour. In TLS, client certificates are optional: the client only needs a private key and certificate if the server explicitly requests one during the handshake. In the default case, the server doesn't verify the client in any way, so no client key or certificate is needed at all.
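As for checking that the request and response are actually encrypted: every https request in Node runs over a tls.TLSSocket, which you can inspect. A minimal sketch (the URL is a placeholder):

    // Inspect the TLS socket of a response to confirm encryption and verification.
    const https = require('https');

    https.get('https://other-service.example.com/api', (res) => {
        const socket = res.socket;              // a tls.TLSSocket for https requests
        console.log(socket.encrypted);          // true - the connection is TLS
        console.log(socket.authorized);         // true if the server cert chain was verified
        console.log(socket.getCipher());        // the negotiated cipher suite
        console.log(socket.getPeerCertificate().subject); // who you are talking to
        res.resume();
    });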
The use case for providing your own key and cert when performing https requests is when the server has client certificate checks enabled. For example, when connecting to Apple's servers to deliver push messages to iOS, you have a client certificate issued by Apple that Apple's servers use to verify that you are allowed to send push messages (the certificate was issued by Apple) and which app you are sending to (via the fingerprint/checksum of the certificate).
Unless the https services you talk to require specific client certificates, you're better off not setting "key" and "cert" - there's no real reason to do that.
So, in summary, you can probably skip setting all three of key, cert and ca, as the real problem seems to have been the misconfigured server (it didn't serve the full CA chain).

PKI: web server certificate

I'm learning the process of authentication in a PKI environment.
Imagine I have two web servers (IIS, for example) that are configured with the same TLS certificate to bind websites.
Now, suppose I remove the certificate on one of the two servers and add a new one. It still has the same common name, but it was generated with a different private key.
Will having two certificates with the same common name (subject) but two different private keys be a problem?
What information is used by the client to choose a specific public key to encrypt information?
I hope that my question is clear. Let me know if you need more information.
Many thanks for your help.
No matter how many certificates you install in the Windows certificate stores, IIS only uses the mappings in the Windows HTTP API to determine the actual server certificate to use for a specific site:
https://docs.jexusmanager.com/tutorials/https-binding.html#background
If you are familiar with the SSL/TLS handshake process, you can easily see that, this way, the browser always knows which public key to use.
Will it be a problem? The answer is maybe.
Assuming a modern browser, the PKI works broadly like this:
- the browser first uses DNS to resolve the FQDN into an address;
- the browser then sends a TLS client hello to that address, containing an SNI matching the FQDN;
- the server receives the TLS client hello, checks whether the FQDN matches one it owns, then responds with a TLS server hello containing the X.509 certificate;
- the browser receives the TLS server hello, then checks that the certificate is valid: that the SubjectAltName (SAN) field contains the FQDN (SAN has replaced CN), and that the certificate is signed by a CA that it trusts.
At this point the browser doesn't actually care whether you have multiple servers with multiple different certificates (which can be normal during a patch where they are being revoked and replaced).
The "maybe" comes afterwards, because there is a collection of HTTP extensions that can cause the browser to reject the certificate if it does not comply with what the browser is expecting, such as Expect-CT and HSTS.
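If you want to observe these steps yourself, here is a minimal Node.js sketch (my own illustration, with a placeholder FQDN) that sends an SNI and inspects the certificate the server returns:

    // Walk through the handshake steps described above for a given FQDN.
    const tls = require('tls');

    const fqdn = 'example.com'; // placeholder FQDN
    const socket = tls.connect({
        host: fqdn,       // resolved via DNS (step 1)
        port: 443,
        servername: fqdn  // the SNI sent in the client hello (step 2)
    }, () => {
        // The server has responded with its certificate (step 3); Node has
        // already checked the CA signature and the SAN/hostname match (step 4).
        const cert = socket.getPeerCertificate();
        console.log(socket.authorized);   // signed by a CA the client trusts?
        console.log(cert.subjectaltname); // SAN - should contain the FQDN
        socket.end();
    });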

Azure API Management - Client Certificate Authentication Responsibilities?

When using the Azure API Management gateway, it's possible to implement client certificate authentication to secure access to APIs. You can validate incoming request certificates using policy expressions, such as thumbprint checks.
When using the client certificate authentication method, what's the recommended process for certificate generation/management?
Cert responsibility?
- Should I (the gateway owner) generate the .pfx file (either self-signed or from a trusted CA), import it into the gateway service, and provide external clients with the .cer to install locally and authenticate with?
- Should I (the gateway owner) generate the .pfx file (either self-signed or from a trusted CA), import the .pfx into the API Management gateway service (normally I'd imagine importing the .cer on a server/gateway, but that doesn't seem possible in Azure), and provide external clients with the .pfx to install locally and authenticate with?
- Should the external client be responsible for generating their public/private key pair in their org, getting it signed by a CA, installing it locally, and providing me (the gateway owner) with a .cer file to import into the gateway (as above, I'm not sure it's possible to import a .cer; I read that only .pfx is accepted in the import process), or with a thumbprint for me to store/validate in policy?
Does anyone have advice on whether to issue clients requiring access to the same API the same (shared) cert, or to generate a new cert per client? They would all be using the cert to access the same API (plus additional auth methods; the cert is just an extra step).
I've read online tutorials describing all of the above approaches, with both client-specific certs and a single cert per API implemented, so I'm a little confused about which is the recommended approach.
The easiest way would be to have a single issuing CA certificate; you'd only need to upload its public key to APIM, as that is all APIM needs to validate incoming certificates. You'd then be responsible for generating client certificates and distributing them to clients. In APIM you can set up a policy that requires a certificate, checks its issuer, and validates it; that should be enough to ensure that a certificate is valid and was issued by you.
Relying on self-signed certificates would be a hassle, as you'd have to somehow let APIM know of each new certificate; having a common issuing CA frees you of that worry.
The same goes for allowing remote clients to generate certificates - they would have to let you know of each certificate, and you'd need to list it in APIM one way or another.
You're free to decide exactly how to distribute certificates; a few things to consider:
- The certificate will likely be your main way to tell clients apart. If that matters, you may want different clients to have different certificates.
- If you want to deny access to a particular client, you'll "revoke" that certificate, so you need to make sure that other legitimate clients won't be affected.
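For illustration, presenting the distributed certificate from the client's side could look like this in Node.js (a sketch; the host, path, file name and passphrase are placeholders):

    // Call an APIM-fronted API while presenting a client certificate (.pfx).
    const https = require('https');
    const fs = require('fs');

    const req = https.request({
        host: 'my-gateway.azure-api.net',        // placeholder gateway host
        path: '/my-api/resource',                // placeholder API path
        method: 'GET',
        pfx: fs.readFileSync('client-cert.pfx'), // the cert distributed to this client
        passphrase: 'pfx-password'               // placeholder passphrase
    }, (res) => {
        console.log(res.statusCode); // e.g. 403 if the certificate policy rejects us
        res.resume();
    });
    req.end();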

Secure frontend connecting to backend with self signed certificates

Our frontend at domain.com uses an API located at api.domain.com.
For domain.com, we use SSL certificates from Let's Encrypt. For the backend, however, it is much simpler to use self-signed certificates.
Will users be presented with a red warning banner if they go to domain.com, which connects to https://api.domain.com with a self-signed certificate? Is this okay practice? Furthermore, can we use an external IP address instead of https://api.domain.com?
That's not a good idea in general. The purpose of a certificate is to allow a client to verify that it is actually talking to the right server, and not for example to a man-in-the-middle attacker. The way it does that (somewhat simplified) is the certificate includes the public key of the server, and the domain name ("common name") of the server the client intended to communicate with. The certificate is then signed by another certificate of about the same type and contents and so on, until the chain reaches a certificate that does not need to be signed by another, because it is already trusted by the client (it is in the trusted roots list of your OS for example).
A self-signed certificate doesn't have this chain of signatures, it is called self-signed, because the certificate used to sign it is itself. There is no way for a client to verify the certificate (unless it explicitly has it listed as trusted of course). This means an attacker can spoof (impersonate) your API by self-signing a different certificate for the same domain name, but with a different keypair. This might allow stealing credentials or serving fake data. Note that the attacker might also relay the information entered by the user (all the requests made) to the real API, so responses (also received by the attacker first, but relayed to the victim) can easily look very real without much background information.
This could (in theory) be solved by certificate pinning, but in the case of a JavaScript client that's going to be difficult (if at all possible). HPKP would seem like a solution, but HPKP won't work with a self-signed (not verifiable) certificate. I'm not sure whether JavaScript has an appropriate level of access to the server certificate to implement pinning.
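For what it's worth, in a Node.js client (as opposed to browser JavaScript) pinning can be approximated by trusting exactly your own certificate and comparing its fingerprint; a sketch, with the file name and fingerprint as placeholders:

    // Pin a self-signed certificate in a Node.js client.
    const https = require('https');
    const fs = require('fs');

    // Placeholder: the SHA-256 fingerprint of the known api.domain.com certificate.
    const PINNED_FINGERPRINT = 'AA:BB:CC:...';

    https.get({
        host: 'api.domain.com',
        port: 443,
        ca: fs.readFileSync('api-selfsigned.pem'), // trust exactly this certificate
        checkServerIdentity: (host, cert) => {
            // In addition to the ca check, require the exact pinned certificate.
            if (cert.fingerprint256 !== PINNED_FINGERPRINT) {
                return new Error('certificate does not match the pinned fingerprint');
            }
            return undefined; // pin matched, accept the connection
        }
    }, (res) => {
        console.log(res.statusCode);
        res.resume();
    });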
Even if you did implement pinning, a self-signed certificate also cannot be revoked. Think about what would happen if you discovered a compromise of the TLS key used for https in your api domain. You would have no way to revoke the key, so clients would still accept a MitM attacker serving the compromised key.
That is a lot of effort, and you would end up implementing something that is non-standard, hard to get right, and error-prone.
Or you could just use a free letsencrypt certificate for api.domain.com as well, for which all the necessary infrastructure and setup is already done on the main domain. :)

Client Authentication good enough for k8s?

I'm trying to secure my k8s cluster, and I'm looking at the client certificate authentication support in k8s. My requirement is that I want to be able to uniquely identify myself (the client) to the k8s apiserver, but everything I've read so far about client authentication suggests it is not the solution.
My understanding is that the server will just ensure that the client certificate provided is in fact signed by the certificate authority. What if a hacker gets another certificate signed by the same certificate authority (which isn't hard to do in my org) and uses that to talk to my server? It appears that popular orchestrators like Swarm and k8s support this option and tout it as the most secure, so there must be a reason for doing this. Can someone shed some light?
The server does not only verify that the certificate is signed by the CA. The client certificate also contains the Common Name (CN), which can be used with a simple ABAC authorization policy to limit access to specific users or groups.
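To illustrate the mechanism (a minimal Node.js sketch of the general pattern, not of the apiserver itself; file names and the user name are placeholders): a TLS server first verifies the client certificate against the CA, then reads the CN to authorize the request:

    // Verify a client certificate against a CA, then authorize by its CN.
    const https = require('https');
    const fs = require('fs');

    const server = https.createServer({
        key: fs.readFileSync('server-key.pem'),
        cert: fs.readFileSync('server-cert.pem'),
        ca: fs.readFileSync('client-ca.pem'), // the CA client certs must be signed by
        requestCert: true,                    // ask the client for a certificate
        rejectUnauthorized: true              // drop clients the CA didn't sign
    }, (req, res) => {
        // The CA check has already passed; now authorize based on the CN.
        const cn = req.socket.getPeerCertificate().subject.CN;
        if (cn !== 'alice') { // placeholder user
            res.writeHead(403);
            return res.end('forbidden');
        }
        res.end('hello ' + cn);
    });

    server.listen(8443);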
Also, it shouldn't be easy to get a signed certificate. IMO access to the root CA should be very limited, and it should be traceable who is allowed to sign certs and when each signing happened. Ideally the root CA should live on an offline host.
Besides that, it sounds like the CA is also used for other purposes. If so, you could consider creating a separate root cert for client authentication. You can use a different CA for the server certificate by setting different CA files for --client-ca-file and --tls-ca-file on the apiserver. That way you can restrict who is able to create client certificates and still verify the server identity with the CA of your organization (which might already be distributed on all org computers).
Other Authentication Methods
As mentioned, Kubernetes also has some other authentication methods. The static token file and the static password file have the disadvantage that the secrets have to be stored in plain text on disk. The apiserver also has to be restarted on every change.
Service account tokens are designed to be used by applications that run in the cluster.
OpenID might be a secure alternative to client certificates, but AFAIK it is much harder to set up, especially when there is no OpenID server yet.
I don't know much about the other authentication methods, but they look like they are intended for integrating with existing single-sign-on services.

Can I use a self-signed X.509 certificate on a different HTTPS server?

I have created my SSL certificate using SelfSSL7.exe on server1 (so the CN is server1) and hosted the certificate on server2. I started to get a certificate error when browsing from Firefox on Linux saying:
This certificate is invalid, the certificate is not trusted and is self signed, the certificate is only valid for server1
But when I browse the URL from Windows IE I just get the regular error saying that it's not trusted and I can easily add it to exceptions.
Can we use self-signed certificates generated on server1 on a different server?
You can, and you may, but you are pretty much undermining every aspect of authenticity by doing so.
A self-signed certificate is generally a problem because other users will not know this certificate in advance, so their browser dutifully issues a warning. That's why you have to pay for TLS certificates that will be recognized - they are issued by CAs whose certificates are contained in the default trust store of your browser. CAs had to pay to "be part of the club", but otherwise anyone can create certificates; it's just a matter of being recognized by default settings.
But you open another hole by reusing a certificate that was issued for a dedicated server on a different server. A TLS certificate's subject distinguished name (or, nowadays, its SubjectAltName) must match the host name of the server it is deployed on. This is mandated by the HTTPS specification because it is the only effective measure to prevent man-in-the-middle attacks when using TLS. After you open a TLS connection to a server, your client checks whether the host name you connected to matches the subject of the certificate the server sent. Only if it does can you be sure you are talking to the right server.
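You can see this check with Node's tls module, which exposes the same hostname verification it applies internally; a sketch, using placeholder host names:

    // Compare a server certificate against the host name the client intended.
    const tls = require('tls');

    const socket = tls.connect({ host: 'server1.example.com', port: 443 }, () => {
        const cert = socket.getPeerCertificate();
        // Matching host name: returns undefined (the check passes).
        console.log(tls.checkServerIdentity('server1.example.com', cert));
        // Different host name (e.g. the certificate reused on server2): returns
        // an Error - which is why browsers reject the reused certificate.
        console.log(tls.checkServerIdentity('server2.example.com', cert));
        socket.end();
    });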
So, in conclusion, if you reuse a server certificate on a different host, then you are severely impacting the security of TLS. It's still possible, sure, but if you cripple security to this extent, then you are probably better off using plain HTTP in the first place.
