NiFi: Configuring SSLContextService for GetHTTP or InvokeHTTP

I am trying to use WMATA's (the DC system) Metro API with NiFi to pull in some live Train Position data. I have tried both GetHTTP and InvokeHTTP, but with no luck. My confusion comes from two areas:
1) How do I configure the processor itself?
2) How do I configure the SSLContextService?
The Metro website gives a Primary and Secondary key - but I'm not sure how that maps to what the SSLContextService configuration asks for (KeyStore filename, etc.).
My GetHTTP config:
And my SSL config:
I get errors when I run the GetHTTP processor:
I hope my issue makes sense. Thanks

For the specific error message you have shown, the URL you specified has contentType={contentType}, which is invalid. If you wanted to reference a flow file attribute or variable, it would need to be ${contentType}. Otherwise, if you really do want to pass {contentType} literally, then I think you would need to URL-encode the brackets first.
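For example, assuming the flow actually carries a contentType attribute (the endpoint and parameter shown here are only illustrative), the URL property would look like:
https://api.wmata.com/TrainPositions/TrainPositions?contentType=${contentType}
whereas passing the braces literally would require URL-encoding them:
https://api.wmata.com/TrainPositions/TrainPositions?contentType=%7BcontentType%7D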
For your SSL Context service, I believe in this case you want to set the truststore to the CA certs instead of the keystore. It is similar to how your browser has truststores and verifies server identities when you go to an HTTPS page. You would only specify the keystore if you needed the GetHTTP/InvokeHTTP processor to also present an identity so the other server could verify the identity of the processor.
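As a rough sketch (the alias, file names, and password below are placeholders), you could build a JKS truststore from the service's CA certificate with keytool and point the SSL context service at it:
keytool -importcert -trustcacerts -alias wmata-ca -file wmata-ca.pem -keystore truststore.jks -storepass changeit
Then fill in the Truststore Filename, Truststore Type (JKS), and Truststore Password properties on the SSL context service, and leave the keystore properties empty.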

Related

WebRTC over Local Network

I'm building a React website and I want to use WebRTC to make audio/video calls to other devices, only on my local network. Because getUserMedia requires HTTPS, I'm running into issues where I basically have to bypass the SSL warnings (the "visit website anyway" buttons), which I don't want to do.
I'm using my laptop as the connection broker/signaling server to let the clients connect to each other. If I downgrade the capabilities to HTTP for text chat only, this works great, but the whole purpose is to use audio/video, so I need that SSL layer.
My question is: how do I setup the SSL layer properly so that I don't have to bypass the warnings and accept a self-signed certificate?
Strictly speaking, the self-signed certificate does work and I can do this using it, but it seems self-defeating, so it's not really the way I want to go.
Again, this is only for intranet usage, so I don't know if that makes it easier or harder, but that's my constraint.
EDIT:
The server is written in NodeJS. I've found some documentation suggesting that Node can be given additional CAs (e.g. NODE_EXTRA_CA_CERTS). Is this something that I can leverage? Would a client html page utilize this in any meaningful way?
This link seems promising: https://engineering.circle.com/https-authorized-certs-with-node-js-315e548354a2. The main thing I'm not understanding is how I would utilize that ca: fs.readFileSync('ca-crt.pem') line for a given request, as it seems like the code there is actually making the request (but one would have already been made to the server in my case, no?). https://nodejs.org/api/https.html#https_https_request_options_callback seems to indicate something similar, as well.
It is totally possible to register a domain name and then point it at something in the private address range. I do this for local development sometimes; I registered pion.io and got a wildcard cert via Let's Encrypt.
You could also use mkcert. Then either in /etc/hosts or in your router itself you can give a FQDN to your signaling/web server.
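A rough sketch of the mkcert route (the host name and IP below are made up): after installing mkcert, run
mkcert -install
mkcert signaling.home.arpa
which writes signaling.home.arpa.pem and signaling.home.arpa-key.pem to the current directory. Then add something like
192.168.1.50   signaling.home.arpa
to /etc/hosts (or a DNS entry on your router) and point your Node HTTPS server at that certificate/key pair. Any other device that should trust it needs the mkcert root CA installed as well.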
There is also the --unsafely-treat-insecure-origin-as-secure flag for Chromium; I haven't used it lately, so I'm not sure if it still works.

JMX: How can I support both secure and insecure access at the same time (different URLs)

I've been asked to support 2 URLs for JMX access to our server:
A secure one (service:jmx:rmi://localhost/jndi/rmi://localhost:2020/jmxrmi)
An insecure one: (service:jmx:rmi://localhost/jndi/rmi://localhost:2020/insecure-jmxrmi)
The insecure one is primarily for demo purposes - no, it won't be used in production.
I can create a custom ConnectorServer for /jmxrmi and provide an interceptor that uses our security mechanism to verify credentials. However, if I just create a vanilla second ConnectorServer (no 'env' properties), accessing it with jconsole -debug initially tries a secure connection, puts up a dialog about that failing, and then asks if I want to retry insecurely.
The docs I've read from Oracle/Sun indicate that I can disable password auth and SSL using a couple of command-line -D switches. But then does that not mess with the /jmxrmi secure access?
How do I support both secure and non-secure connections at the same time? Note that I don't need them using the same URL, of course.
Thanks!
This is a tough one. When you disable auth and SSL, you do it per JVM.
The JMX RMI protocol cannot distinguish between secured and non-secured connections on the same connector: either you set up security and it is used, or you don't. I think the best shot would be using a custom ConnectorServer and putting up with the messages jconsole produces.

How can you test that an SSL client library is properly verifying the certificate of the server to which it connects?

I want to ensure that client libraries (currently in Python, Ruby, PHP, Java, and .NET) are configured correctly and fail appropriately when SSL certificates are invalid. Shmatikov's paper, The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software, reveals how confusing SSL validation is, so I want to thoroughly test the possible failures.
Based on my research, a certificate is invalid if:
It is used before its activation date
It is used after its expiry date
It has been revoked
Certificate hostnames don't match the site hostname
Certificate chain does not contain a trusted certificate authority
Ideally, I think I would have one test case for each of the invalid cases. To that end I am currently testing an HTTP site accessed over HTTPS, which leads to a failure that I can verify in a test like so:
self.assertRaises(SSLHandshakeError, lambda: api.call_to_unmatched_hostname())
This is incomplete (only covering one case) and potentially wrong, so...
How can you test that non-browser software properly validates SSL certificates?
First off, you'll need a collection of SSL certificates, where each has just one thing wrong with it. You can generate these using the openssl command line tool. Of course, you can't sign them with a trusted root CA. You will need to use your own CA. To make this validate correctly, you'll need to install your CA certificate in the client libraries. You can do this in Java, for example, using the control panel.
Once you have the certificates, you can use the "openssl s_server" tool to serve an SSL socket using each one. I suggest you put one certificate on each port.
You now have to use the client library to connect to a port, and verify that you get the correct error message.
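For instance, if one of the clients is the Python one, a minimal sketch of such a check (assuming you have started something like "openssl s_server -accept 4433 -cert expired.pem -key expired-key.pem -www", and that test-ca.pem is the CA you generated) could look like this:

import socket
import ssl

def expect_verification_failure(host, port, cafile):
    # Full chain verification plus host name checking, trusting only the test CA.
    context = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                pass
    except ssl.SSLError as exc:
        return exc  # e.g. CERTIFICATE_VERIFY_FAILED for the expired certificate
    raise AssertionError("handshake unexpectedly succeeded on port %d" % port)

print(expect_verification_failure("localhost", 4433, "test-ca.pem"))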
I know that Python by default does no certificate validation (look at the manual for httplib.HTTPSConnection). However, m2crypto does do validation. Java by default does do validation. I don't know about other languages.
Some other cases you could test:
1) Wildcard host names.
2) Certificate chaining. I know there was a bug in old browsers where, if you had a certificate A signed by the root, A could then sign B, and B would appear valid. X.509 is supposed to stop this with flags on certificates (the basic constraints "can sign" / CA flag), which A would not have. However, this was not verified in some old browsers.
Good luck! I'd be interested to hear how you get on.
Paul
Certificate hostnames don't match the site hostname
This is probably the easiest to check, and failure (to fail) there is certainly a good indication that something is wrong. Most certificates for well-known services only use host names for their identity, not IP addresses. If, instead of asking for https://www.google.com/, you ask for https://173.194.67.99/ (for example) and it works, there's something wrong.
For the other ones, you may want to generate your own test CA.
Certificate chain does not contain a trusted certificate authority
You can generate a test certificate using your test CA (or a self-signed certificate), but let the default system CA list be used for the verification. Your test client should fail to verify that certificate.
It is used before its activation date, It is used after its expiry date
You can generate test certificates using your test CA, with notBefore/notAfter dates that make the current date invalid. Then, use your test CA as a trusted CA for the verification: your test client should fail to validate the certificate because of the dates.
It has been revoked
This one is probably the hardest to set up, depending on how revocation is published. Again, generate some test certificates that you've revoked immediately, using your own test CA.
Some tools expect to be configured with a set of CRL files next to the set of trusted CAs. This requires some setup for the test itself, but very little online setup: this is probably the easiest. You can also set up a local online revocation repository, e.g. using CRL distribution points or OCSP.
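If the client happens to use Python's ssl module, the CRL case can be wired up roughly like this (the file names are placeholders; the CRL must come from your test CA and list the server certificate as revoked):

import ssl

context = ssl.create_default_context(cafile="test-ca.pem")
context.load_verify_locations("test-ca.crl.pem")    # CRLs are loaded the same way as CA certs
context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF   # the handshake now fails if the leaf cert is revoked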
More generally, PKI testing can be more complex than that. A full test suite would require a fairly good understanding of the specifications (RFC 5280). Indeed, you may need to check the dates of all intermediate certificates, as well as various attributes of each certificate in the chain (e.g. key usage, basic constraints, ...).
In general, client libraries separate the verification process into two operations: verifying that the certificate is trusted (the PKI part) and verifying that it was issued to the entity you want to connect to (the host name verification part). This is certainly due to the fact these are specified in different documents (RFC 3280/5280 and RFC 2818/6125, respectively).
From a practical point of view, the first two points to check when using an SSL library are:
What happens when you connect to a known host, but with a different identifier for which the certificate isn't valid (such as its IP address instead of the host)?
What happens when you connect to a certificate that you know cannot be verified by any default set of trust anchors (for example, a self-signed certificate or one from your own CA)?
Failure to connect/verify should happen in both cases. If it all works, short of implementing a full PKI test suite (which requires a certain expertise), it's often the case that you need to check the documentation of that SSL library to see how these verifications can be turned on.
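As a concrete illustration with Python's ssl module (the IP address is the example from above; "self-signed.test" is a placeholder for a host you control that serves a self-signed certificate):

import socket
import ssl

def check_rejected(host):
    # Default context: system trust anchors, host name checking enabled.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                print("UNEXPECTED: handshake with", host, "succeeded")
    except (ssl.SSLError, ssl.CertificateError) as exc:
        print(host, "rejected as expected:", exc)

check_rejected("173.194.67.99")     # known host, but identified by IP address
check_rejected("self-signed.test")  # certificate no default trust anchor can verify

Both calls should end up in the except branch; if either handshake succeeds, the library is not verifying properly (or verification is simply not enabled).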
Bugs aside, a fair number of problems mentioned in this paper are due to the fact that some library implementations have made the assumption that it was up to their users to know what they were doing, whereas most of their users seem to have made the assumption that the library was doing the right thing by default. (In fact, even when the library is doing the right thing by default, there is certainly no shortage of programmers who just want to get rid of the error message, even if it makes their application insecure.)
It would seem fair to say that making sure the verification features are turned on would be sufficient in most cases.
As for the status of a few existing implementations:
Python: there was a change between Python 2.x and Python 3.x. The ssl module of Python 3.2 has a match_hostname method that Python 2.7 doesn't have. urllib.request.urlopen in Python 3.2 also has an option to configure CA files, which its Python 2.7 equivalent doesn't have. (This being said, if it's not set, verification won't occur. I'm not sure about the host name verification.)
Java: verification is turned on by default for both PKI and host name for HttpsUrlConnection, but not for the host name when using SSLSocket directly, unless you're using Java 7 and have configured its SSLParameters using setEndpointIdentificationAlgorithm("HTTPS") (for example).
PHP: as far as I'm aware, fopen("https://.../") won't perform any verification at all.

Encrypting Amazon S3 URL over the network to secure data access

I want to host copyrighted data in an Amazon S3 bucket (to have more bandwidth available than my servers can handle) and provide access to this copyrighted data for a large number of authorized clients.
My problem is:
I create signed, expiring HTTPS URLs for these resources on the server side
these URLs are sent to clients via an HTTPS connection
when a client uses these URLs to download the content, the URLs can be seen in the clear by any man-in-the-middle
In detail, the URLs are created via a Ruby on Rails server using the fog gem.
The mobile clients I'm talking about are iOS devices.
The proxy I've used for my test is mitmproxy.
The URL I generated looked like this:
https://mybucket.s3.amazonaws.com/myFileKey?AWSAccessKeyId=AAA&Signature=BBB&Expires=CCC
I'm not a network or security expert, but I had found resources stating that nothing goes over an HTTPS connection in the clear (for instance, cf. Are HTTPS headers encrypted?). Is it a misconfiguration of my test that led to this clear-text URL? Any tip on what could have gone wrong here? Is there a real chance I can prevent the S3 URLs from going over the network in the clear?
So firstly, when sending a request over SSL, all parameters are encrypted. If you were to look at the traffic going through a normal proxy, you wouldn't be able to read them.
However, many proxies allow interception of SSL data by creating dummy certificates. This is exactly what mitmproxy does. You may well have enabled this without realising it (although you would have had to install the proxy's CA certificate on the client to do this).
The bottom line is that your AWS URLs could be easily intercepted by somebody looking to reverse engineer your app, either through a proxy or by tapping into the binary itself. However, this isn't a 'bad thing' per se: Amazon themselves know this happens, and that's why they're not sending the secret key directly in the URL itself, but using a signature.
I don't think this is a huge problem for you: after all, you're creating URLs that expire, so even if someone gets hold of them through a proxy, they'll only be able to access the resource for as long as the URL is valid. Accessing your resources after expiry would require direct access to your secret key. Now, it turns out this isn't impossible (if you've hard-coded it into your binary), but it's difficult enough that most users won't bother with it.
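The URLs here are generated with the Ruby fog gem; purely to illustrate the short-expiry point, a roughly equivalent server-side sketch using Python's boto3 (bucket and key names taken from the question) would be:

import boto3

s3 = boto3.client("s3")
# A URL that is only valid for five minutes limits what an intercepted link is worth.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "myFileKey"},
    ExpiresIn=300,
)
print(url)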
I'd encourage you to be realistic with your security and copyright prevention: when you've got client-side native code it's not a matter of if it gets broken but when.

Keygen tag in HTML5

So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it as it is far from implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.
SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its server-certificate during the SSL handshake so that you could make sure that this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you pretend to be. For this you need a client-certificate.
The <keygen> tag generates a public/private key pair and then creates a certificate request. This certificate request will be sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser. Now you are able to use this certificate for user authentication.
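A minimal example of how the element was used in practice (the action URL and field name here are made up):

<form method="post" action="https://example.com/issue-cert">
  <keygen name="pubkey" challenge="randomchallenge" keytype="rsa">
  <input type="submit" value="Request certificate">
</form>

On submit, the browser generates the key pair, keeps the private key in its own key store, and posts the signed public key and challenge (SPKAC) in the pubkey field; the server-side CA signs it and returns a certificate for the browser to install.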
You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.
If you're looking for "exactly" then I'd recommend reading the RFC.
The keygen element is for creating a key for authentication of the user while SSL is concerned about privacy of communication and the authentication of the server. Quoting from the RFC:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.
Deprecated
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time.
Source
The doc is useful for elaborating on what the keygen element is. The requirement for it arises in WebID, which may be understood as part of the Semantic Web of Linked Data; see section 2.1.1 of https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate
This might be useful for websites that provide paid services, like video on demand, or a news website for professionals like Bloomberg. With these keys, people can only watch the content on that computer, not on several computers at once. You decide how the data is stored and processed: you can specify an .asp or .php file that will receive the variables, and your file will store that key in the user's profile. That way your users will not be able to log in from a different computer if you don't want them to; you may force them to check their email to authorize the new computer, just like Steam does. Basically, it allows you to individualize service access if your licensing model is per machine, like an operating system's.
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html
