I have a couple of questions about SSL certificates.
I've never used them before, but my current project requires me to.
Question 1.
Where should you use SSL? I know places like logging in and resetting passwords are definite candidates. How about once the user is logged in? Should all requests go through SSL even if the data in their account isn't considered sensitive? Would that slow things down for the important parts, or does it make no difference? (Sort of a "well, you've got SSL, might as well send everything through it" approach.)
Question 2.
I know that in SMTP you can enable SSL as well. I am guessing this would be pretty good to use if you're sending, say, a reset password to someone.
If I enable this setting, how can I tell if SSL is actually working? How do I know it really got enabled? And what happens if the mail server does not have SSL enabled but I have that boolean value set? Will it just send the mail without SSL then?
With an SSL connection, one of the most expensive portions (relatively speaking) is the establishment of the connection. Depending on how it is set up, for example, it might create an ephemeral (created on the fly) RSA key for establishing a session key. That can be somewhat expensive if many of them have to be created constantly. If, though, the creation of new connections is less common (and they are used for longer periods of time), then the cost may not be relevant.
Once the connection has been established, the added cost of SSL is not that great, although it does depend on the encryption type. For example, using 256-bit AES for encryption will take more time than using 128-bit RC4. I recently did some testing with communications all on the same PC where both client and server were echoing data back and forth. In other words, the communications made up almost the entire cost of the test. Using 128-bit RC4 added about 30% to the cost (measured in time), and using 256-bit AES added nearly 50%. But remember, this was on a single PC on the loopback adapter. If the data were transmitted across a LAN or WAN, the relative cost would be significantly lower. So if you already have an SSL connection established, I would continue to use it.
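To make the relative costs concrete, here is a rough sketch (Python, and by no means a rigorous benchmark) that times the TCP connect plus TLS handshake separately from a request sent over the already-established connection; the host name is just a placeholder for your own server.

import socket
import ssl
import time

HOST, PORT = "example.com", 443  # placeholder host; substitute your own server
ctx = ssl.create_default_context()

# Time the TCP connect plus the TLS handshake.
start = time.perf_counter()
raw = socket.create_connection((HOST, PORT))
tls = ctx.wrap_socket(raw, server_hostname=HOST)  # the handshake happens here
handshake_ms = (time.perf_counter() - start) * 1000

# Time a small request/response over the connection that is already up.
start = time.perf_counter()
tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
tls.recv(4096)
request_ms = (time.perf_counter() - start) * 1000
tls.close()

print("handshake: %.1f ms, request over existing connection: %.1f ms" % (handshake_ms, request_ms))

Typically the first number dominates, which supports the point above: once the connection exists, keep using it.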
As far as verifying that SSL is actually being used: there are probably "official" ways of verifying it; using a network sniffer is the poor man's version. I ran Wireshark, sniffed the network traffic, and compared a non-SSL connection with an SSL connection, looking at the raw data. I could easily see raw text data in the non-SSL version, while the SSL version "looked" encrypted. That, of course, proves nothing by itself, but it does show that "something" is happening to the data. In other words, if you think you are using SSL but can recognize the raw text in a network sniff, then something is not working as you expected. The converse is not true, though: just because you can't read it does not mean it is encrypted.
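A slightly more "official" check than sniffing, if you can run a bit of Python against the endpoint, is to ask the TLS library what it actually negotiated (the host below is a placeholder):

import socket
import ssl

HOST = "example.com"  # placeholder; point this at the server you want to check
ctx = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        print("protocol:", tls.version())               # e.g. 'TLSv1.3'
        print("cipher:", tls.cipher())                   # (name, protocol, secret bits)
        print("server cert subject:", tls.getpeercert().get("subject"))

If the handshake fails or the certificate doesn't validate, wrap_socket raises an exception, which is itself a useful signal that SSL is not working the way you expected.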
Use SSL for any sensitive data, not just passwords, but credit card numbers, financial info, etc. There's no reason to use it for other pages.
Some environments, such as ASP.NET, let you require that cookies are only ever sent over SSL. It's good to do this for any authentication or session-ID related cookies, as these can be used to spoof logins or replay sessions. You can turn these settings on in web.config; they're off by default.
ASP.NET also has an option that will require all authenticated pages to use SSL. Non-SSL requests get tossed. Be careful with this one, as it can cause sessions to appear hung. I'd recommend not turning on options like this, unless you really need them.
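For reference, here's a minimal sketch of the web.config entries involved (the loginUrl path is just an example; check the attribute names against the ASP.NET version you're running):

<system.web>
  <!-- only send cookies over SSL connections -->
  <httpCookies requireSSL="true" />
  <!-- require SSL for the forms-authentication cookie -->
  <authentication mode="Forms">
    <forms loginUrl="~/Login.aspx" requireSSL="true" />
  </authentication>
</system.web>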
Sorry, can't help with the SMTP questions.
First off, SSL is used to encrypt communications between client and server. It does this using public-key cryptography to negotiate a key for the session. In my opinion it is good practice to use it for anything that involves personally identifiable or otherwise sensitive information.
Also, it is worth pointing out that there are two types of SSL authentication:
One-way - there is a single server certificate; this is the most common setup.
Two-way - there is a server certificate and a client certificate; the client first verifies the server's identity, and then the server verifies the client's identity. An example is the DoD CAC.
With both, it is important to have up-to-date certificates signed by a reputable CA. This is what verifies your site's identity.
As for question 2: yes, you should use SSL over SMTP if you can. If your emails are routed through an untrusted router, they can be eavesdropped on if sent without encryption. I am not sure about the 'boolean value enabled' question; I don't believe setting up SSL is quite as simple as checking a box, though.
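If it helps, here is a hedged sketch (Python's smtplib, since the asker's mail library isn't specified) of how a client can confirm that TLS is really in effect rather than silently falling back; host names and credentials are placeholders:

import smtplib
import ssl

ctx = ssl.create_default_context()

with smtplib.SMTP("mail.example.com", 587) as smtp:   # placeholder mail server
    smtp.ehlo()
    if not smtp.has_extn("starttls"):
        raise RuntimeError("server does not offer STARTTLS; mail would go out unencrypted")
    smtp.starttls(context=ctx)   # raises if the TLS upgrade fails
    smtp.ehlo()
    print("negotiated:", smtp.sock.version())  # the socket is now an SSLSocket, e.g. 'TLSv1.3'
    # smtp.login("user", "password")
    # smtp.sendmail("noreply@example.com", "user@example.org",
    #               "Subject: Password reset\r\n\r\n...")

Whether a given framework's "enable SSL" boolean fails hard or falls back to plain text when the server lacks support is framework-specific, so it's worth testing against your actual mail server.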
A couple of people have already answered your Question 1.
For question 2 though, I wouldn't characterize SMTP over SSL as protecting the message. There could be plenty of points at which the message is exposed. If you want to protect the message itself, you need S/MIME, or something similar. I'd say SMTP over SSL is more useful for protecting your SMTP credentials, so that someone cannot grab your password.
Living in Syria, I feel really unhappy when a (Facebook, G+, Twitter... etc) plugin doesn't work on 90% of the web.
The problem is that these (social) websites are not welcome in Syria (government decisions), but they still work perfectly over https. However, because their plugins use protocol-relative URLs, and most websites use http, these plugins end up trying (and failing) to load over http.
The question is: why ever use protocol-relative URLs if you can use https? Isn't it always better to use https and have your users' data transferred securely?
I don't think giant websites care about https overhead, so what am I missing about the whole thing?
The only reason I can find for not using SSL (HTTPS) is performance, if you need responses within, say, 300 ms.
The SSL handshake can take a few round-trip times, which could add anywhere from a few milliseconds when client and server are in the same region (say, both in US East) up to 600 ms or more. I'm in South America, so it can take even longer with servers in the US.
Even when the sequence diagram looks simple, TCP's initial congestion window means the protocol needs at least one more RTT (round-trip time) for the server to send the complete certificate to the client, except on servers that tune this initial congestion window (cwnd).
Additionally, the SSL protocol is more complex and there could be a "Change Cipher Spec message" that requires an additional RTT.
After the SSL handshake, the extra work is the server and client encrypting and decrypting with a symmetric key, but in my experience that is not critical (maybe 5% of total CPU utilization).
My comments apply to web services. If we're talking about web sites, I'd do everything over HTTPS.
When we look at the HTTPS sequence diagram
http://blog.expressionsoftware.com/2011/02/https-sequence-diagram.html
we see that only the client's first request (the handshaking step) carries the overhead.
So I agree with you: it is always better to use https. Maybe the missing thing is something you don't want to believe - people are lazy and don't care about quality :)
This is worth reading too:
http://www.codinghorror.com/blog/2012/02/should-all-web-traffic-be-encrypted.html
Due to a couple of issues with my host, I'm unable to use an SSL certificate on my server (I'm not ready to change provider just yet), and therefore can't use HTTPS. This server will communicate with a couple of client computers and will transfer data that's somewhat secret.
Would it be reasonable to simply use AES encryption (encryption on client before sending, decryption on server before processing) instead of HTTPS?
This depends on your deployment environment.
Replacing SSL/TLS (and HTTPS) with your own encryption protocol for use by a web browser is always a bad idea, since it relies on JavaScript code delivered insecurely (for details, see this question on Security.SE, for example).
If the client isn't a web browser, you have more options available. In particular, you can implement message-level security instead of transport-level security (which is what HTTPS uses).
There are a number of attempts to standardise message-level security with HTTP. For example:
HTTPsec had a public specification (still available on WebArchive), but a commercial implementation. I'm not sure whether this has been widely reviewed.
WS-Security, oriented towards the world of SOAP.
Perhaps more simply, if you want to re-use existing tools, you could use S/MIME or PGP (in the same way as you would for e-mails) to encrypt the HTTP message entities. Unlike HTTPS, this won't protect the URL or the HTTP headers, but this might be enough if you don't put any sensitive data there.
The further down you go with "raw encryption" yourself (using AES directly, for example), the more likely you'll have to implement other aspects of security manually (typically, verifying the remote party's identity and dealing with the problem of pre-sharing the keys).
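Purely to illustrate that last point (and not as an endorsement of rolling your own), here is a minimal sketch of what "raw" AES looks like with the third-party Python cryptography package, assuming the 256-bit key has somehow been pre-shared out of band:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in reality this would be the pre-shared key

def encrypt(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # must never repeat for the same key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

print(decrypt(encrypt(b"somewhat secret payload")))

Even in this tiny sketch you already own key distribution and nonce handling, and you still have no peer identity or replay protection, which is exactly the extra work warned about above.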
If you have a small list of clients that don't change often, you could implement your own SSL tunnel using SSH. On each client, run:
ssh -D 4444 nulluser@example.com -N
where nulluser has no shell or file access on example.com.
Then add a FoxyProxy whitelist setting so that, for example.com, the client browsers use the SOCKS proxy at localhost:4444.
It's a hack, and it's totally unscalable, but as I say it would work for a small, static number of clients, and it has the advantage of not reinventing any wheels while being reasonably secure.
Are attacks like MITM possible when using HTTPS?
I know they are possible if the connection starts with HTTP then gets redirected to HTTPS, but what if the initial connection itself is using HTTPS?
I'm implementing a client which connects to a server using HTTPS, and I want to find out whether explicitly verifying the authenticity of the server is necessary (not the server authenticating the client, but the client ensuring the server is who it says it is). I'm doing this on iOS, where an API is available that makes it easy to do, but I'm not sure whether it's necessary, and if it is, how to test that it works.
Thanks
It's absolutely possible to MITM SSL, and it's often pretty easy if you don't actually check the server's certificate.
Consider someone using your app in a coffee shop where a malicious employee has control over the wireless router. They can watch for HTTPS connections to your server and redirect them to a local MITM program. That program accepts the connection using a self-signed SSL certificate, say, and then opens a connection to your real server and proxies traffic between them.
As long as you check the validity of the server's certificate, this simple attack is thwarted. So do that. :-)
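To make "check the validity of the server's certificate" concrete, here is a small Python sketch (the principle is the same on iOS) contrasting a verifying client with one that would happily talk to the coffee-shop MITM; the host is a placeholder:

import socket
import ssl

HOST = "example.com"   # placeholder for your API server

strict = ssl.create_default_context()      # verifies the chain and the hostname

naive = ssl.create_default_context()       # this is what makes the MITM trivial
naive.check_hostname = False
naive.verify_mode = ssl.CERT_NONE

for name, ctx in (("strict", strict), ("naive", naive)):
    try:
        with socket.create_connection((HOST, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                print(name, "connected;", "verified cert" if tls.getpeercert() else "cert NOT verified")
    except ssl.SSLCertVerificationError as exc:
        print(name, "refused to connect:", exc)

Run the naive variant against a test proxy presenting a self-signed certificate and you can see exactly how the attack described above plays out.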
There are much more complicated attacks that have been demonstrated that can still, under special circumstances, MITM an SSL connection even when you check the certificates, but the circumstances that make those attacks work are difficult enough to arrange that most developers needn't worry about them.
It seems that the SSH designers cared a great deal about man-in-the-middle attacks.
Their approach was to save the server's public key fingerprint the first time you connect to the server (and hope that the user isn't connecting from a poisoned network that first time, for instance because of a virus on the computer). The user then uses the fingerprint to verify the server's public key the next time they connect to that server.
In practice, I found that many users simply ignore warnings about unmatched fingerprints and assume they are due to a server re-installation; MITM attacks are so difficult to conduct, and so rare, that nobody worries about them. Moreover, a user often wants to use SSH from many different computers, and won't bother importing all the fingerprints to every computer they might want to SSH from ("hey, can you look at why my site is down? I'm panicking! I'm not in the office, I'll drop into the nearest internet cafe and have a look").
To be fair, one can use DNSSEC and let the DNS servers act as the CA. However, I have never seen that used in practice, and anyhow it's not a mandatory part of the protocol.
For many years I thought you could not avoid MITM without a pre-shared secret, but I've recently been reading Bruce Schneier's excellent "Practical Cryptography", where he mentions the interlock protocol:
1. Alice sends Bob her public key.
2. Bob sends Alice his public key.
3. Alice encrypts her message using Bob's public key. She sends half of the encrypted message to Bob.
4. Bob encrypts his message using Alice's public key. He sends half of the encrypted message to Alice.
5. Alice sends the other half of her encrypted message to Bob.
6. Bob puts the two halves of Alice's message together and decrypts it with his private key. He sends the other half of his encrypted message to Alice.
7. Alice puts the two halves of Bob's message together and decrypts it with her private key.
Now, Mallory has to send something to Bob at step (3) of the protocol, after he receives half of Alice's message, even though he can't decrypt it until he gets everything from Alice at step (5). He must fabricate a message to Bob, and Bob is likely to notice the fabrication, say, after an ls of his home directory.
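For what it's worth, here is a toy Python sketch of the splitting mechanics described above, using RSA-OAEP from the third-party cryptography package; it only illustrates why nothing can be decrypted until step (5), and is in no way a complete or secure implementation:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Steps 1-2: the key exchange (this is exactly where Mallory would substitute his own keys).
alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 3: Alice encrypts for Bob and sends only the first half of the ciphertext.
ciphertext = bob_key.public_key().encrypt(b"ls ~", oaep)
first_half, second_half = ciphertext[:len(ciphertext) // 2], ciphertext[len(ciphertext) // 2:]

# Half a ciphertext is useless on its own; Bob (or Mallory) must wait for step 5.
# Steps 5-6: Alice sends the rest, Bob reassembles and decrypts.
print(bob_key.decrypt(first_half + second_half, oaep))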
Why didn't SSH use such a scheme? It seems to really fit its goals. It doesn't require any other entity, and it makes MITM attacks substantially more difficult.
Is it something inherent? A flaw in my understanding of the problem? Or did the designers just think the extra security wasn't worth complicating the protocol?
PS:
If you think it would cause too much overhead, you could have the protocol use interlock only for the first 10K of data in the connection, so in practice it wouldn't matter much, but MITM would nevertheless be more difficult.
Update:
The attack on the interlock protocol described here does not mean a MITM attack is possible; it does mean that if a single password is sent during the communication, the MITM can intercept it and the user would only see a timeout error.
Update 2:
The point Eugene raises is valid: the interlock protocol doesn't provide authentication. That is, you still can't be sure that when you connect to example.com it really is example.com, and not malicious.com impersonating example.com. You can't know that for sure without, say, DNSSEC. So, for example, if you're SSHing to the missile silos and type launch_missile -time now (without, say, running ls first to verify the server really is the one in the missile silos), you might actually have typed that on a malicious server, and now the enemy knows you're about to launch missiles against him. This type of attack indeed won't be prevented by the interlock protocol.
However, if I understand the protocol correctly, a much more dangerous and much more practical attack might be prevented: with the interlock protocol, even if you know nothing about example.com, nobody can silently eavesdrop on an entire SSH session between you and your own server. I think that type of attack is far more likely.
Maybe SSH just doesn't care about MITM attacks? I think not; see for instance the PuTTY FAQ:
Those annoying host key prompts are the whole point of SSH. Without them, all the cryptographic technology SSH uses to secure your session is doing nothing more than making an attacker's job slightly harder; instead of sitting between you and the server with a packet sniffer, the attacker must actually subvert a router and start modifying the packets going back and forth. But that's not all that much harder than just sniffing; and without host key checking, it will go completely undetected by client or server.
He's clearly talking about a MITM attack and not about server authentication. I think using the interlock protocol would clearly prevent the attack mentioned in the PuTTY FAQ, and I still don't understand why they didn't use it.
I don't see how the interlock protocol prevents MITM.
The problem is not how to exchange keys, but how to trust them. You correctly point out that people ignore warnings when the keys don't match. This really is the biggest flaw, but the protocol you describe doesn't solve the problem of verifying where a key came from. SSL uses X.509 certificates and a PKI to establish trust. SSH can also use certificates, but almost no SSH software supports them.
I'm looking for options for securing UDP traffic (mainly real-time video) on a wireless network (802.11). Any suggestions apart from Datagram Transport Layer Security (DTLS)?
Thanks.
You need to be clearer about the attacks you are trying to defend against. For instance, if your only concern is spoofing, then you can use a Diffie–Hellman key exchange to establish a secret between the two parties, and that secret can then be used to generate a Message Authentication Code for each packet.
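A minimal Python sketch of that per-packet MAC idea, assuming the two parties have already derived a shared secret (for example via the Diffie–Hellman exchange mentioned above):

import hashlib
import hmac
import os

shared_secret = os.urandom(32)   # stand-in for the secret derived from the key exchange

def protect(packet: bytes) -> bytes:
    tag = hmac.new(shared_secret, packet, hashlib.sha256).digest()
    return packet + tag           # append the 32-byte MAC to each datagram

def accept(datagram: bytes) -> bytes:
    packet, tag = datagram[:-32], datagram[-32:]
    expected = hmac.new(shared_secret, packet, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch - spoofed or corrupted packet")
    return packet

print(accept(protect(b"video frame 0001")))

This only gives integrity and authenticity; it does nothing for confidentiality or replay protection, which is where DTLS starts to look attractive.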
If you need any more protection than that, I strongly recommend using DTLS. It should be noted that TLS/SSL sessions can be resumed, so you can cut down on the number of full handshakes. Also, certificates are free.
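On the resumption point, here is a hedged sketch with Python's ssl module showing a client reusing a session so a later connection can skip the full handshake; the host is a placeholder:

import socket
import ssl

HOST = "example.com"   # placeholder server
ctx = ssl.create_default_context()

def connect(session=None):
    raw = socket.create_connection((HOST, 443))
    return ctx.wrap_socket(raw, server_hostname=HOST, session=session)

first = connect()
first.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
first.recv(4096)               # with TLS 1.3 the session ticket may only arrive once data flows
saved = first.session          # the session ticket/ID the server handed out
first.close()

second = connect(session=saved)
print("resumed:", second.session_reused)   # True if the server accepted the resumption
second.close()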
Are you trying to wrap an existing application or writing your own? What client/server setup do you have? Do you want to prevent snooping or tampering?
I am assuming here that you
are developing an application
are trying to prevent snooping
have access to client and server.
The simple approach is to use any off-the-shelf strong encryption. To prevent tampering, use any signing algorithm with a private/public key scheme. You can use the same key pair for encryption and authentication.
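As a sketch of the sign-then-verify half of that suggestion (using a dedicated Ed25519 signing key from the third-party Python cryptography package, rather than reusing the encryption key pair):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()       # shipped to the receiver ahead of time

packet = b"video frame 0002"
signed = packet + signing_key.sign(packet)  # Ed25519 signatures are always 64 bytes

payload, signature = signed[:-64], signed[-64:]
try:
    verify_key.verify(signature, payload)   # raises InvalidSignature if tampered with
    print("accepted:", payload)
except InvalidSignature:
    print("rejected")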
The drawback of this approach is that it is on layer 7 and you have to do most of the work on your own. On the other hand, DTLS is a viable option...
Have you considered IPSEC? This article provides some good guidance on when and when not to use it.
You can look into ssh with port forwarding. That comes at the cost of maintaining a TCP connection over which the UDP traffic can be secured.