I would like to know the recommended best practices and validation logic for handling TLS certificate exceptions, similar to OpenSSH's known_hosts file. I know the best practice is to have certificates that can be validated automatically, but here I am talking about the cases where the certificate cannot be validated and the user wants to accept it anyway. I would like to know the following:
1) What information should be stored in each entry
2) How the information should be stored for secure access
3) How the hashing of the certificate should be performed
AFAIK, the known_hosts file contains the following information:
<hostname> <certificate hash>
The biggest problem I can see with this approach is when we connect to the same hostname with different ports, which happens frequently when using port forwarding to map to different machines behind NAT. In this case, extra information should probably be stored this way:
<hostname>:<port> <certificate hash>
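For illustration, a hypothetical exceptions file using this scheme might look like the following (the hostnames are placeholders, and the fingerprints are just the SHA-256 digests of the empty string and of "abc", used here as dummy values):

```
192.0.2.10:443 sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
files.example.net:8443 sha256:ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```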
As for storing the information itself, the known_hosts file is normally stored in a user directory, writable only by its owner. Is this considered "secure"? I mean, any process running as the current user could just add new exceptions for certificates the user has not explicitly accepted.
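If you do rely on file permissions, a minimal sketch of the kind of check a client could perform before trusting the file (similar in spirit to OpenSSH's strict-mode checks; the function name is made up, and a POSIX system is assumed):

```c
#include <sys/types.h>
#include <sys/stat.h>

/* Sketch: refuse to trust an exceptions file unless it is owned by the
 * expected user and not writable by group or others. Note this does not
 * defend against processes running as that same user, only against
 * other local accounts. */
int exceptions_file_trustworthy(const char *path, uid_t expected_owner)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return 0; /* missing or unreadable: treat as untrusted */

    return st.st_uid == expected_owner &&
           (st.st_mode & (S_IWGRP | S_IWOTH)) == 0;
}
```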
As for the hashing, I assume it should be performed on the entire X.509 certificate? I just wanted to check, since the X.509 certificate contains the "TBSCertificate" structure, which excludes the signature. I am not sure what should really be done here. Also, I would like to know the currently recommended algorithms for hashing the certificate for exception purposes.
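For what it's worth, OpenSSL's X509_digest hashes the entire DER encoding of the certificate, outer signature included, which matches the "hash the whole certificate" approach, and SHA-256 is a commonly recommended choice today. A minimal sketch (the wrapper name is mine):

```c
#include <openssl/evp.h>
#include <openssl/x509.h>

/* Compute a SHA-256 fingerprint over the full DER-encoded certificate,
 * exactly as presented by the peer (TBSCertificate plus signature). */
int fingerprint_sha256(X509 *cert, unsigned char md[EVP_MAX_MD_SIZE],
                       unsigned int *md_len)
{
    return X509_digest(cert, EVP_sha256(), md, md_len);
}
```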
Thank you in advance for your recommendations on the question!
Related
I have been asked to develop a highly secure B2B File Transfer system between three companies.
VPN is not an option, and they prefer to use common ports like 80 and 443, so that no extra firewall configuration is needed.
I found solutions like OFTP2 and AS2 to be sufficient, although I have some questions before I can decide:
Is HTTPS file transfer not secure enough, such that I could use ASP.NET/C# to do the task?
What about existing tools like SFTP, rsync, and other *nix tools?
What about using SOAP?
My main concern is to avoid any possible exposure of data in the clear to the outside world.
All ideas are appreciated.
Thanks in advance.
If you use a block cipher like AES to encrypt the data and send the result using RSA encryption, that will do the job. For the RSA you encrypt using the recipient's public key, which you get them to send to you out of band (courier service); they then decrypt with their private key. This is secure provided each company keeps its private key secret. You have a key pair for each of the 3 companies. The extra AES layer is for if you are really paranoid and want to make sure that even if someone got the private keys, they still can't read the data.

You should also sign all messages: send a hash of the rest of the message encrypted with your private key (i.e. an RSA signature); the recipient can then decrypt it with your public key, hash the data themselves, and if their hash does not match the one that was attached, the message was not from you. This prevents man-in-the-middle and similar interception attacks. This would only allow someone to interfere if they got both the public and private keys and the AES password; at that point the estimated crack time with 2048-bit RSA is well over 2 billion years, so I think you're safe.
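One common way to realize this AES-plus-RSA combination in practice is envelope (hybrid) encryption, where a random AES key encrypts the payload and RSA wraps that key for the recipient. A minimal sketch with OpenSSL's envelope API; error handling is trimmed, the caller is assumed to provide adequately sized buffers, and signing would be a separate EVP_DigestSign step:

```c
#include <openssl/evp.h>

/* Sketch: AES-encrypt "in" under a fresh random key, and RSA-encrypt
 * that key for the recipient (envelope encryption). */
int seal_for_recipient(EVP_PKEY *recipient_pub,
                       const unsigned char *in, int in_len,
                       unsigned char *wrapped_key, int *wrapped_key_len,
                       unsigned char *iv,   /* receives the random IV */
                       unsigned char *out, int *out_len)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = 0, ok = 0;

    if (ctx != NULL &&
        EVP_SealInit(ctx, EVP_aes_256_cbc(),
                     &wrapped_key, wrapped_key_len, iv,
                     &recipient_pub, 1) == 1 &&
        EVP_SealUpdate(ctx, out, &len, in, in_len) == 1) {
        total = len;
        if (EVP_SealFinal(ctx, out + total, &len) == 1) {
            *out_len = total + len;
            ok = 1;
        }
    }
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}
```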
Technically you can always do scp/rsync over SSH, if port 22 is among the white-listed ports. If not, you can run an SSH daemon on 80/443, etc.
To answer your question: yes, HTTPS/SFTP are secure enough, and so is rsync if done over an encrypted channel (refer to http://troy.jdmz.net/rsync/index.html).
Another thing you can explore is stunnel (http://www.stunnel.org/).
I can think of more than one way to go about it. It totally depends on your servers' OS and other restrictions you may have.
The main issue with SSL is certificate validation. By default, any certificate matching the target domain that is signed by any of a plethora of CAs is considered valid. If you are paranoid, you should check the certificate used on the connection directly against a certificate stored in your configuration.
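A sketch of such a direct check with OpenSSL, assuming your configuration stores a SHA-256 fingerprint of the expected certificate (the function name is made up):

```c
#include <openssl/evp.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <string.h>

/* Sketch: compare the peer certificate of an established connection
 * against a fingerprint pinned in configuration. */
int peer_matches_pin(SSL *ssl, const unsigned char *pinned,
                     unsigned int pinned_len)
{
    X509 *peer = SSL_get_peer_certificate(ssl); /* ref-counted copy */
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len = 0;
    int ok = 0;

    if (peer == NULL)
        return 0; /* no certificate presented */

    if (X509_digest(peer, EVP_sha256(), md, &md_len))
        ok = (md_len == pinned_len) && memcmp(md, pinned, md_len) == 0;

    X509_free(peer);
    return ok;
}
```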
Using a DHE handshake to achieve perfect forward secrecy would also be nice, but the built-in SSL API in .NET doesn't expose a way to enforce that. So you might or might not get DHE, depending on the version of Windows and .NET.
Another good choice is tunneling something over SSH. For example, scp is an existing file-copying utility that does this.
OK: you don't want to expose the contents of the files exchanged between the three parties to anyone else.
There are two things to consider:
1) Protect the transport. Here, the files are sent over an encrypted link. So, you're basically putting the normal bits into a tunnel that is encrypted to prevent anyone from snooping on the link. This is usually done using SFTP for company-to-company communications, and keys are exchanged and authenticated out-of-band before any transfers occur.
2) Protect the files. Here, each file is encrypted independently and then transported to the destination. You encrypt the files before they leave your network, and they are decrypted once they arrive at their destination. This is usually done using PGP for company-to-company communications, and the PGP keys are exchanged and authenticated out-of-band before any transfers occur.
If you protect the transport, you're just sending the data through a protected pipe, linking the companies. Once the file is received, it's not encrypted (it's only encrypted through the pipe). If you protect the file, you are block-encrypting files themselves, so it's more of a process to encrypt and decrypt the files; only the actual process/system that has the PGP keys at the receiving end can decrypt the file.
So, what do you want to do? That's a risk decision. If you're only concerned about someone intercepting the file contents that is not company A or B (or C), you need to protect the transport (SFTP, et al). If you're concerned about protecting each file independently and making sure that only specific processes at the receiving end can decrypt the file, you want to protect the files. If the data is very sensitive, and under high risk, you may want to do both.
Some very good points have been made about the security issues of developing your own file transfer programs. There are software security, network security, and user authentication security issues involved here. Understanding all the various encryption algorithms and security rules takes years to master, and just keeping up with the intricate changes in digital security standards and laws is a time-consuming endeavor for a development team.
Another option is one of the several very good and affordable managed file transfer (MFT) solutions that have already been developed and address all of these security issues. They have also mastered the workflow of file transfer management, which makes this process much easier on the IT staff. One of these MFT solutions that I've used for the past few years is Linoma Software's GoAnywhere product. It has saved our team months of time and headache, allowing us to focus on our core business.
I hope this helps...
OK, advanced SSL gals and guys - I'll be adding a bounty to this after the two-day period, as I think it's a complex subject that deserves a reward for anyone who thoughtfully answers.
Some of the assumptions here are simply that: assumptions, or more precisely hopeful guesses. Consider this a brain-teaser, simply saying 'This isn't possible' is missing the point.
Alternative and partial solutions are welcome, personal experience if you've done something 'similar'. I want to learn something from this even if my entire plan is flawed.
Here's the scenario:
I'm developing on an embedded Linux system and want its web server to be able to serve out-of-the-box, no-hassle SSL. Here's the design criteria I'm aiming for:
Must Haves:
I can't have the user add my homegrown CA certificate to their browser
I can't have the user add a statically generated (at mfg time) self-signed certificate to their browser
I can't have the user add a dynamically generated (at boot time) self-signed certificate to their browser.
I can't default to HTTP and have an enable/disable toggle for SSL. It must be SSL.
Both the embedded box and the web browser client may or may not have internet access, so the solution must be assumed to function correctly without internet access. The only root CAs we can rely on are the ones shipped with the operating system or the browser. Let's pretend that list is 'basically' the same across browsers and operating systems - i.e. we'll have a ~90% success rate if we rely on them.
I cannot use a fly-by-night operation i.e. 'Fast Eddie's SSL Certificate Clearing House -- with prices this low our servers MUST be hacked!'
Nice to Haves:
I don't want the user warned that the certificate's hostname doesn't match the hostname in the browser. I consider this a nice-to-have because it may be impossible.
Do not want:
I don't want to ship the same set of static keys for each box. Kind of implied by the 'can't' list, but I know the risk.
Yes, yes, I know...
I can and do provide a mechanism for the user to upload their own cert/key but I consider this 'advanced mode' and out of scope of this question. If the user is advanced enough to have their own internal CA or purchase keys then they're awesome and I love them.
Thinking Cap Time
My experience with SSL has been generating certs/keys to be signed by a 'real' root, as well as stepping up my game a little by making my own internal CA and distributing 'self-signed' certs internally. I know you can chain certificates, but I'm not sure what the order of operations is. i.e. Does the browser 'walk up' the chain, see a valid root CA, and treat the certificate as valid - or do you need verification at every level?
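From my own tinkering, my understanding is that OpenSSL's path verifier builds a path from the leaf through any intermediates to a trusted root and checks each link's signature, so verification does seem to happen at every level. A sketch of how I'd exercise that programmatically (the wrapper name is mine):

```c
#include <openssl/x509_vfy.h>

/* Sketch: programmatic chain verification. "trusted" holds the root CAs,
 * "intermediates" the untrusted certs presented alongside the leaf. */
int verify_chain(X509 *leaf, STACK_OF(X509) *intermediates,
                 X509_STORE *trusted)
{
    X509_STORE_CTX *ctx = X509_STORE_CTX_new();
    int ok = 0;

    if (ctx != NULL &&
        X509_STORE_CTX_init(ctx, trusted, leaf, intermediates) == 1)
        ok = X509_verify_cert(ctx) == 1; /* walks and checks every link */

    X509_STORE_CTX_free(ctx);
    return ok;
}
```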
I ran across the description of intermediate certificate authority which got me thinking about potential solutions. I may have gone from 'the simple solution' to 'nightmare mode', but would it be possible to:
Crazy Idea #1
Get an intermediate certificate authority cert signed by a 'real' CA. ( ICA-1 )
ROOT_CA -> ICA-1
This certificate would be used at manufacturing time to generate a unique passwordless sub-intermediate certificate authority pair per box.
ICA-1 -> ICA-2
Use ICA-2 to generate a unique server cert/key. The caveat here is: can you generate a cert/key pair for an IP address (and not a DNS name)? A potential use-case for this would be the user connecting to the box initially via HTTP and then being redirected to the SSL service using the IP in the redirect URL (so that the browser won't complain about mismatches). This could be the card that brings the house down, and since the SSL connection has to be established before any redirects can happen, I can see that also being a problem. But if that all worked magically:
Could I then use ICA-2 to generate new cert/key pairs any time the box changes IP, so that when the web server comes back up it always has a 'valid' key chain?
ICA-2 -> SP-1
Ok, You're So Smart
Most likely, my convoluted solution won't work - but it'd be great if it did. Have you had a similar problem? What did you do? What were the trade-offs?
Basically, no, you can't do this the way you hope to.
You aren't an intermediate SSL authority, and you can't afford to become one. Even if you were, there's no way in hell you'd be allowed to distribute to consumers everything necessary to create new valid certificates for any domain, trusted by default in all browsers. If this were possible, the entire system would come tumbling down (not that it doesn't already have problems).
You can't generally get the public authorities to sign certificates issued to IP addresses, though there's nothing technically preventing it.
Keep in mind that if you're really distributing the private keys in anything but tamper-proof secured crypto modules, your devices aren't really secured by SSL. Anyone who has one of the devices can pull the private key (especially if it's passwordless) and do valid, signed, MITM attacks on all your devices. You discourage casual eavesdropping, but that's about it.
Your best option is probably to get and sign certificates for a valid internet subdomain, and then get the device to answer for that subdomain. If it's a network device in the outgoing path, you can probably do some routing magic to make it answer for the domain, similarly to how many walled-garden systems work. You could have something like "system432397652.example.com" for each system, and then generate a key for each box that corresponds to that subdomain. Have direct IP access redirect to the domain, and either have the box intercept the request, or do some DNS trickery on the internet so that the domain resolves to the correct internal IP for each client. Use a single-purpose host domain for that, don't share with your other business websites.
Paying more for certificates doesn't really make them any more or less legit. By the time a company has become a root CA, it's far from a fly-by-night operation. You should check and see if StartSSL is right for your needs, since they don't charge on a per-certificate basis.
I am building an application that will enable users to connect to the same server. Rather than the application/device using its own certificate/private key, it is important to ensure that each user has their own certificate/private key to use for encryption.
Now I know, from the OpenSSL website documents, that OpenSSL's internal certificate store can hold one certificate/key pair for the RSA cipher. My question is this:
Presume I have a SSL struct named ssl1 that I created from my SSL_CTX where I didn't set the certificate/key to use in the SSL_CTX (thus not inheriting the certificate/key). I then go on to set the certificate/key for ssl1 that is associated with some user. Then suppose I have another SSL struct named ssl2 created from the same SSL_CTX. I then go on to set the certificate/key for ssl2 that is associated with a different user than the first one.
If at this point I call the SSL_connect() API on ssl1, will it use the certificate/key I set for ssl2? I ask since the store supposedly holds only one cert/key pair, and I loaded the cert/key for ssl2 last, so I presume it would overwrite the one I loaded first for ssl1.
Thanks for reading my post. I appreciate any help/wisdom/pointers you can provide.
As far as I understand, the SSL_CTX acts as a template for SSL objects. So when you create a new SSL object, it inherits the properties/attributes of the SSL_CTX from which it was created. This is clearly mentioned here
So for your question, both ssl1 and ssl2 objects will use their own certificate/key.
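A minimal sketch of that behavior with the OpenSSL API (file names are placeholders):

```c
#include <openssl/ssl.h>

/* Sketch: per-connection certificates. Each SSL object gets its own
 * cert/key via SSL_use_*, so setting a pair on ssl2 does not overwrite
 * the pair already set on ssl1; the SSL_CTX is only a template. */
void two_users(SSL_CTX *ctx)
{
    SSL *ssl1 = SSL_new(ctx);
    SSL_use_certificate_file(ssl1, "user1.pem", SSL_FILETYPE_PEM);
    SSL_use_PrivateKey_file(ssl1, "user1.key", SSL_FILETYPE_PEM);

    SSL *ssl2 = SSL_new(ctx);
    SSL_use_certificate_file(ssl2, "user2.pem", SSL_FILETYPE_PEM);
    SSL_use_PrivateKey_file(ssl2, "user2.key", SSL_FILETYPE_PEM);

    /* SSL_connect(ssl1) presents user1.pem; SSL_connect(ssl2) user2.pem. */
}
```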
First, I'm confused about the implementation details you're asking for. Second, I can't think of a circumstance in which different keys would be useful unless you're trying to authenticate your users by their certificates - which you're apparently not doing (users needing different encryption, not authentication).
I believe what you really want is to establish your connections using settings that provide Perfect Forward Secrecy. To do that, you want to use the TLS cipher suites whose names start with TLS_DHE_. Those use Diffie-Hellman key exchange, making the role of the server's key basically one of authenticating the server to users.
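In OpenSSL terms, this can be enforced with a cipher list; the exact string below is an assumption to tune for your deployment:

```c
#include <openssl/ssl.h>

/* Sketch: accept only ephemeral (EC)DHE key exchange so every session
 * gets forward secrecy; also reject anonymous and null ciphers. */
void require_forward_secrecy(SSL_CTX *ctx)
{
    SSL_CTX_set_cipher_list(ctx, "ECDHE:DHE:!aNULL:!eNULL");
}
```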
I have the following question:
In security deployments, what is the standard practice if revocation checks are performed on certificates, but for some reason, at some specific moment, it is not possible to determine the status of the target certificate?
E.g. because the network is down, or the OCSP responder is down, etc. (any reason that would essentially not give a conclusive indication of the certificate's actual status).
At first, I thought that the certificate should be considered rejected (and, for example, the session dropped).
On the other hand though, if I was a valid user and was denied access to resources, due to unrelated issues (such as network problems) I would not like it at all.
So I am not sure what happens here: will it depend on the security environment, or is there actually some standard approach to handling this?
Any input is highly welcome.
Web browsers have the same issue. When you connect to a site, they check the site's certificate for revocation using OCSP. However, if the OCSP server is down (which happens fairly often, as CAs do not compete on OCSP uptime), they cannot. In that case they assume the certificate is valid. Of course, it always depends on your use-case and threat model. If the cost of such an assumption is high (say, a country goes bankrupt or several people die), then it might be wise not to assume validity unless revocation has actually been checked.
Some systems cache revocation lists and/or revocation verification results for a fixed or configurable duration. Some request a user decision. Some do both (i.e.: request user decision only if cached result indicates certificate was not yet revoked).
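As a sketch of how such a policy could be wired in with OpenSSL's verify callback (whether soft-fail is acceptable remains the risk decision discussed above; the flag and callback names are mine):

```c
#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>

/* Assumption: set from your configuration. 1 = tolerate inconclusive
 * revocation checks (soft-fail), 0 = reject the session (hard-fail). */
static int soft_fail = 1;

/* Verify callback: pass through every verdict except "could not fetch
 * revocation data", which is downgraded to acceptance under soft-fail. */
static int verify_cb(int preverify_ok, X509_STORE_CTX *ctx)
{
    int err = X509_STORE_CTX_get_error(ctx);

    if (!preverify_ok && soft_fail &&
        (err == X509_V_ERR_UNABLE_TO_GET_CRL ||
         err == X509_V_ERR_UNABLE_TO_GET_CRL_ISSUER))
        return 1; /* treat the certificate as not-yet-revoked */

    return preverify_ok;
}

/* Installed with: SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, verify_cb); */
```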
The application I am developing does some kind of server-side authorization. Communication is done via a secure channel (HTTPS in my case, with a valid SSL cert). I plan to implement something that will verify that the remote server is exactly who it claims to be.
I know that no client-side protection is unbreakable, especially given enough time and knowledge. But if I implement what I described above, is this approach "secure enough" to protect data from being tampered with in transit, to prevent man-in-the-middle attacks, and to ensure its validity?
I am considering adding another layer of security around the transferred data (by using a private/public key pair), but I suspect it might be enough to rely on SSL, without reinventing the wheel.
SSL is secure enough with a valid certificate, but ...
A lot of people don't know that an invalid certificate error means "your data may be intercepted by someone else". They will just ignore the warning, and a man-in-the-middle attack will still work. Also, some older browsers like IE6 might not even show a warning if the certificate is invalid. The problem in this case is the user, not the technology. This means that instead of trying to build another layer of security, you should teach the people who use your application what it means to get an invalid certificate error and why they should use a modern browser.
Mr. B,
As you mentioned that the client is going to validate the server SSL certificate and that users are not part of the process, I think you will be just fine validating the server SSL certificate. However, you must take good care with the verification process. I've seen several client applications that don't verify the certificate well enough. By "well enough" I mean that the client should verify: 1) the certifying authority, 2) the validity period, and 3) the site the certificate was issued to.
One of the apps I was pen testing had a bug: it verified only the "CN" of the certificate, which can be spoofed (one could create a bogus certificate with an arbitrary CN).
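A sketch of those three checks with OpenSSL, assuming peer verification was enabled on the connection; X509_check_host matches against subjectAltName entries (falling back to the CN only when no DNS SANs are present), which avoids the CN-spoofing bug described above. The wrapper name is mine:

```c
#include <openssl/ssl.h>
#include <openssl/x509v3.h>

/* Sketch: 1) CA chain and 2) validity period via the built-in verifier,
 * then 3) "site issued to" via X509_check_host. */
int verify_server(SSL *ssl, const char *expected_host)
{
    X509 *peer = SSL_get_peer_certificate(ssl);
    int ok = 0;

    if (peer == NULL)
        return 0; /* no certificate presented */

    if (SSL_get_verify_result(ssl) == X509_V_OK)
        ok = X509_check_host(peer, expected_host, 0, 0, NULL) == 1;

    X509_free(peer);
    return ok;
}
```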