NiFi: how to make ListenHTTP work with SSL

Objective
Because NiFi integrates with other tools over HTTP, I have to make the ListenHTTP processor public-facing. An API gateway on all three environments is too expensive for me, so I closed all VM ingress ports to outside networks except the one ListenHTTP needs.
Issue
My configuration of ListenHTTP with StandardRestrictedSSLContextService doesn't work. Without SSL it worked, but it was insecure.
user$ curl -X POST -H "Content-Type: application/json" --data "test" https://localhost:7070/test
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
.....
user$ curl -X POST -H "Content-Type: application/json" --data "test" --cacert cacerts.jks https://localhost:7070/test
curl: (77) error setting certificate verify locations:
CAfile: cacerts.jks
CApath: none
Question
How do I make ListenHTTP work with SSL certificates? What am I doing wrong?
More detailed questions:
Should I copy cacerts.jks to the machine from which I issue the query? As far as I understand, StandardRestrictedSSLContextService will verify that the client has a certificate in the TrustStore.
If I only need to protect a single port with the ListenHTTP processor, then I don't need the nifi.security.needClientAuth property or any of the environment variables defined in the "Standalone Instance, Two-Way SSL" section, right? I'm a little bit confused because both the Docker image and StandardRestrictedSSLContextService contain the same configs, e.g. KEYSTORE_TYPE.
Already done
I have a general idea about KeyStore & TrustStore from this question and the documentation.
I have launched a NiFi v1.10.0 Docker container with an up-and-running ListenHTTP processor on port 7070.
I have created the keystore.jks and cacerts.jks files following the instructions, inside the NiFi container.
I have configured ListenHTTP to use a StandardRestrictedSSLContextService controller with the following configs:
[configuration screenshot not included]

The SSLContextService you're using probably doesn't contain a certificate which is signed by a publicly-accessible certificate authority (CA) like (for explanation purposes only; not endorsement) Comodo, Verisign, Let's Encrypt, etc.
Certificates signed by those CAs are generally trusted automatically by arbitrary clients because whoever builds the client (Java, Google/Microsoft/Mozilla/Apple for a browser, Microsoft/Apple/Linux distro for the OS) has preemptively included those top-level public certs in the client's truststore. The truststore you created, cacerts.jks, is in Java KeyStore format, which curl doesn't happen to understand. You can export the public certificate from that keystore to a standalone file in PEM format using the commands here, but that will only solve the immediate problem of allowing curl with an arbitrary truststore to connect.
If you want generic external clients to be able to connect over TLS, you'll need to use a certificate in NiFi's keystore that is signed by a well-known CA. You can use any commercial CA for this purpose, but Let's Encrypt does offer this service for free and is very widely used. Once you are using a certificate signed by a CA, any* client will be able to connect.
If this is for internal/enterprise use only, and all allowed clients are controllable by you, then you can use a self-signed certificate (like you are doing now if you followed Simon's instructions), and export the public certificate to whatever format your other clients need in order to establish trust with this particular server. Theoretically, you could also enforce that each client attempting to connect also needs to present a certificate that the server (NiFi) can verify -- this is called mutual-authentication TLS and adds another layer of security because only authenticated clients will be able to make requests to this server. If you choose to do so, that's when the SSLContextService in ListenHTTP would need a truststore component as well.
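If you do go the mutual-TLS route, a client such as curl then has to present its own certificate as well; the file names here are illustrative (a PEM export of the server's cert, plus a client cert/key pair that the server's truststore has been configured to trust):

```shell
# --cacert: trust the server's exported public cert
# --cert/--key: present this client's certificate so the server can verify us
curl -X POST -H "Content-Type: application/json" --data "test" \
     --cacert nifi-cert.pem \
     --cert client-cert.pem --key client-key.pem \
     https://localhost:7070/test
```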
Without knowing your explicit situation, I would heavily recommend the first option (the CA-signed cert).

Related

How to get axios to work with an AWS ACM public certificate?

I'm surprised to discover that public certificates issued by AWS ACM trigger the error "unable to verify the first certificate" when using axios and node-fetch. However, when I use curl from the command line, I don't get an error. So my questions are:
Why does node behave this way? Curl can use the underlying OS, it seems, which recognizes the CA authority of the AWS ACM issued certificates; does node have its own set of CA authorities?
How can I solve this problem without enabling the rejectUnauthorized option within a configured httpsAgent? Is there a way to get node to behave like curl by e.g. using the OS's set of recognized CA authorities? Is there some setting within the AWS ACM console that might make the certificates more amenable to axios?
NOTE: I am not interested in the solution of configuring axios to recognize any particular CA certificate (I'd like a general solution to enable me to ping multiple AWS ACM issued certificates that I do not necessarily control).
Edit: I'm using OSX 11.3.
Thanks!
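Two Node switches are worth knowing in this situation, though neither is guaranteed to fix ACM chains if the server omits its intermediates (a sketch; behaviour varies by OS, and on macOS the OpenSSL store is not the system keychain):

```shell
# Node compiles in its own bundled CA list; this flag tells it to use the
# OpenSSL/system CA store instead, closer to curl's behaviour on many setups
node --use-openssl-ca -e 'console.log("using openssl CA store")'

# for a real app:  node --use-openssl-ca app.js
# or append extra trusted CAs for one run, without touching code:
#   NODE_EXTRA_CA_CERTS=/path/to/extra-cas.pem node app.js
```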

Implementing HTTPS Server using Rustls with Hyper in Rust

I am trying to implement an HTTPS server using Rustls with Hyper, but I have not been able to find a proper example of how to do it. I followed and tried the example from the hyper-rustls repository (Hyper Rustls server example).
It always gives this error:
FAILED: error accepting connection: TLS Error: Custom { kind: InvalidData, error: AlertReceived(CertificateUnknown) }
I am completely new to Rust and hence don't know how to properly implement HTTPS over Hyper. I have also gone through a related question here,
but still could not find the solution. If more information is required, do let me know.
It looks like your problem is not with Hyper or Rust; it is with TLS. By default, when you establish a connection via HTTPS, the client verifies the server certificate's authenticity. The certificate needs to be signed by a trusted authority: for details see, for example, this page.
To verify, use curl:
$ curl https://localhost:1337/echo -X POST -v --insecure
...
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
...
< HTTP/2 200
< date: Sun, 12 Apr 2020 12:45:03 GMT
<
So this works fine. If you remove the --insecure flag, curl will refuse to establish the connection:
$ curl https://localhost:1337/echo -X POST -v
...
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
To fix this, you need to either:
Use a properly-signed certificate instead of a self-signed one, or
Configure your client not to verify the certificate, or
Configure your client to trust your particular self-signed certificate.
In production, your only choice is (1). While you are developing, you can get away with (2) or (3).
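For option (3), one common pattern is to capture the server's self-signed certificate once and hand exactly that file to the client (port and path taken from the question; this assumes the dev server is running):

```shell
# grab the certificate chain the server presents and keep only the PEM blocks
openssl s_client -connect localhost:1337 -showcerts </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > server.pem

# curl now verifies against that specific certificate -- no --insecure needed
curl https://localhost:1337/echo -X POST --cacert server.pem
```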

OpenSMTPD Mail won't send from client, reason=ca-failure

Any time I attempt to send mail from my mail client (in this case, Thunderbird), it comes up with an arbitrary error for why it couldn't send the email (the error doesn't matter, as it simply tells me the connection got dropped). When I run tail -f /var/log/maillog I see:
smtp disconnected reason=ca-failure
I can't seem to find anywhere online talking about this and how to fix it.
I've attempted to use several different matching keys and certificates, locally sourced (openssl) and from Let's Encrypt. OpenSMTPD accepts all of these with no problem. I have also gone as far as to specify the root CA certificate for Let's Encrypt along with their certificates.
Did you define the mail hostname for the OpenSMTPD server?
This file is supposed to be found in /etc/mail/mailname, and it should match the pkiname that's in the smtpd.conf file: pki 'hostname' cert /etc/letsencrypt/live/www.domain.com/cert.pem
I spent an hour or two fighting with this.
This is defined in the manual:
pki pkiname cert certfile
Associate certificate file certfile with host pkiname, and use that file to prove the identity of the mail server to clients. pkiname is the server's name, derived from the default hostname or set using either /etc/mail/mailname or using the hostname directive. If a fallback certificate or SNI is wanted, the '*' wildcard may be used as pkiname.
A certificate chain may be created by appending one or many certificates, including a Certificate Authority certificate, to certfile. The creation of certificates is documented in starttls(8).
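Put together, the relevant smtpd.conf lines look roughly like this (hostname and paths are illustrative):

```
pki mail.example.com cert "/etc/letsencrypt/live/mail.example.com/fullchain.pem"
pki mail.example.com key "/etc/letsencrypt/live/mail.example.com/privkey.pem"
listen on all tls pki mail.example.com
```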

How to implement valid https in web2py

I am using the following web2py slice in an attempt to use https for a service worker function in a page.
http://www.web2pyslices.com/slice/show/1507/generate-ssl-self-signed-certificate-and-key-enable-https-encryption-in-web2py
I have tried opening web2py with the following line (with and without [-i IP and -p PORT]):
python web2py.py -c myPath/ssl_certificate.crt -k myPath/ssl_self_signed.key -i 127.0.0.1 -p 8000
but https is declared 'not private' and is crossed out. Because of this, I am getting an SSL certificate error when the registration of the service worker is attempted.
Please indicate what is going wrong or whether more information is needed.
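For reference, a certificate/key pair like the one named in that command can be produced with plain openssl (a sketch; the subject is illustrative, and browsers will still flag the result as self-signed):

```shell
# generate a self-signed certificate and unencrypted key for local testing
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout ssl_self_signed.key -out ssl_certificate.crt \
        -subj "/CN=127.0.0.1"
```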
You mention "https is declared 'not private' and is crossed out". This has to do with browsers disliking untrusted (self-signed) certificates, because that's what trust is all about. If any hacker could just make up a certificate and the https client didn't respond with at least a frown, you could still be hacked or sniffed without noticing. Since you don't mention any other error, I assume you otherwise get valid results from the web2py server?
If so, you have set up your self-signed certificate well. If you don't get any valid html response (outside your browser's complaint, of course), you still have an issue with the setup.
If your service worker won't accept the certificate, what you can do (in a test environment at least) is import the self-signed certificate into the machine or service worker certificate repository. The process differs per OS and version.
Hope this helps. If it doesn't, please provide more detail.
The best way to use ssl with web2py is to use one of the deployment recipes with a production-grade web server like Apache, nginx, or Lighttpd.
Any of the mentioned scripts creates a self-signed certificate, and then you have to point the generated server config files at a real certificate.
You can buy a real ssl certificate from any of many resellers, or get one for free from Let's Encrypt if you have a real IP, as on a VPS or server.
A simple way to fix the config files is to create a symbolic link from the real certificate to the one mentioned in the server config file.
To just test your service worker on your machine or an internal test server, use a non-ssl port, or, as Remco suggested, import the self-signed certificate into the client environment.

Importing Pem/der certificate into kdb file

I have an IBM HTTP Server and a Server [X].
I need to create a secure connection [SSL]:
by creating a KDB file (ibmhttpserverkey.kdb) in the IBM HTTP Server using the iKeyman utility and importing Server [X]'s certificate [cert.PEM] or [cert.der] into ibmhttpserverkey.kdb.
Is that doable or not?
I have tried a lot, and every time it returns "Error Handshake, no certificate found", even though I installed it using the certificate manager!
You should be able to import certificates from other key file types, such as a p12 database or another kdb. After doing the import, check the personal certificates using IKEYMAN to see if the certificate is there. If you then see "Error Handshake, no certificate found" in the IHS error log, it may be that you have not specified the certificate as the default. Also check the VirtualHost entry for port 443 (or whatever ssl port is used) and see if an SSLServerCert directive is defined. This directive can be used to point at a label that identifies the needed certificate. The "no certificate found" message means that IHS opened the kdb defined by the Keyfile directive and could not find either a default certificate or one specified using the SSLServerCert directive.
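For reference, the corresponding httpd.conf section would look something like this (label and paths are illustrative):

```
Listen 443
<VirtualHost *:443>
  SSLEnable
  Keyfile "/opt/IBM/HTTPServer/ibmhttpserverkey.kdb"
  SSLServerCert serverx_label
</VirtualHost>
```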
Guide to setting up SSL within IHS:
http://www-01.ibm.com/support/docview.wss?uid=swg21179559

Resources