Is session hijacking / MITM etc. possible with HTTPS?

Are attacks like MITM possible when using HTTPS?
I know they are possible if the connection starts with HTTP then gets redirected to HTTPS, but what if the initial connection itself is using HTTPS?
I'm implementing a client which connects to a server using HTTPS and want to find out whether explicitly verifying the authenticity of the server is necessary (no, not the server authenticating that the client is who it says it is, but the client ensuring the server is who it says it is). I'm doing this in iOS, where an API is available which makes it easy to do, but I'm not sure if it's necessary, and if it is, how to test that it works.
Thanks

It's absolutely possible to MITM SSL, and it's often pretty easy if you don't actually check the server's certificate.
Consider someone using your app in a coffee shop where a malicious employee has control over the wireless router. They can watch for HTTPS connections to your server and redirect them to a local MITM program. That program accepts the connection using a self-signed SSL certificate, say, and then opens a connection to your real server and proxies traffic between them.
As long as you check the validity of the server's certificate, this simple attack is thwarted. So do that. :-)
There are much more complicated attacks that have been demonstrated that can still, under special circumstances, MITM an SSL connection even when you check the certificates, but the circumstances that make those attacks work are difficult enough to arrange that most developers needn't worry about them.
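Not iOS-specific, but as a rough sketch of what "checking the certificate" means in practice (Python here, with api.example.com as a placeholder host): a client that keeps default verification on will refuse the coffee-shop MITM's self-signed certificate, while one that turns verification off will happily complete the handshake. On iOS the stock URL-loading APIs perform this validation by default, so the thing to test is that a connection to a host presenting an invalid certificate fails rather than succeeds.
import socket
import ssl

hostname = "api.example.com"  # placeholder

# Verifying client: uses the system's trusted CA roots and checks the hostname,
# so a self-signed or mismatched certificate aborts the handshake.
verifying = ssl.create_default_context()

# Non-verifying client: accepts any certificate, which is exactly what makes
# the coffee-shop MITM described above possible.
trusting = ssl.create_default_context()
trusting.check_hostname = False
trusting.verify_mode = ssl.CERT_NONE

for label, ctx in [("verifying", verifying), ("trusting", trusting)]:
    try:
        with socket.create_connection((hostname, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                print(label, "handshake succeeded with", tls.version())
    except ssl.SSLError as exc:
        print(label, "handshake failed:", exc)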

Related

Why do browsers forbid non-SSL connections when a webpage was already served over HTTPS?

Once a webpage is served over HTTPS, we can be fairly certain that the site is who we intended to reach.
At this point, the only security risk left is that the website itself is malicious or has a security vulnerability.
For example, you may enter your credit card details which are sent to their server, and their server could release those details to the public.
I'm now trying to figure out the reasons why browsers do not allow non-SSL connections when the webpage was already served over HTTPS.
For example, browsers will stop allowing non-SSL HTTP and WS content, and don't expose UDP or TCP socket APIs.
To me the exact same risk exists anyway: they might not use SSL between their own servers. If anything, HTTPS could now be giving a false sense of security.
I could only identify two reasons:
To prevent webpages from accidentally using non-SSL connections. I can understand that a form or an image should only be allowed over HTTPS. But I believe that browsers should allow, for example, UDP sockets, provided they are created like so (confirming that the programmer is aware of the security risks):
udp = new SomeBrowserAPI.CreateUDPSocket()
udp.amAwareThat("Nothing is encrypted over UDP and I should not send any sensitive data here")
udp.amAwareThat("I cannot confirm the identity of who I am sending data to or receiving data from")
A client-side developer should not have to worry about security risks, but rather about UI etc. By being forced to communicate with the server over SSL, it is up to the backend developers to worry about security only. However, this is already not the case anyway: if you are a client-side developer, you could easily write malicious client-side code that reads password input and sends it to your own server, as long as your own server is also over SSL (although SSL might at least allow you to identify who was responsible).
Are there any other reasons? Are my reasons / solutions / info correct?
There is a basic post in Google's Web Fundamentals: What is mixed content?
It is very basic, but it lists the three things at risk: data authentication, data integrity and data confidentiality.
When you connect over UDP, you don't know:
who actually serves your connection. It might have been intercepted by an attacker;
whether the data you received is actually the data that was sent. It might have been tampered with by a man-in-the-middle;
who else has read your messages. Big Brother is watching you.
Mixed content ruins the concept of secure webpages.

What is the difference between http and https in an intranet?

Is HTTP OK vs. HTTPS in an intranet connected to the internet?
If a bank has a website which uses HTTP and is only accessible when connected to the internal network, is this a security risk?
Not sure if I understood your question, but let me try an answer.
The objective of HTTPS is to guarantee that no one in the middle of the connection is able to see what you and the server are saying to each other. It does this using encryption, so even in an intranet, using HTTPS has its advantages.
Imagine a wireless connection: everyone in the bank sits between you and the router (and consequently the server) if you are using it. So if this website handles sensitive information like a password, even in an intranet you would want it secured, so that no one inside that network can sniff packets and easily pull your password out of a cleartext HTTP request.

How to redirect HTTPS to insecure WS?

I may not be doing something right, security wise, so please do inform me.
User visits the site (and is forced to use HTTPS). Valid certificate. They connect via WSS to a "middleman" server. Third-party orgs expose a normal WS server to the internet, and broadcast its public address/port to the "middleman" server, which is then broadcast to the user.
So, the user, on this HTTPS site, gets a url such as ws://203.48.15.199:3422. Is there any way I can possibly allow this sort of connection to happen?
One such way is to allow the "middleman" to also be a proxy - every third-party server address is assigned a path on the WSS-enabled middleman server. Users would connect to wss://example.com/somepath and I would simply proxy it back to the third-party, insecure websocket. The downside is that I must maintain that proxy, which defeats the purpose of allowing third parties to even run their own server.
Any ideas?
Is there any way I can possibly allow this sort of connection to happen?
This is a form of active mixed content. All recent browsers, such as Firefox and Google Chrome, prohibit this kind of content, much like they prohibit a secure (HTTPS-loaded) page from loading insecure JavaScript.
The reasoning is fairly straightforward: it undermines the purpose of HTTPS in the first place.
The "correct" way to fix this is to force all 3rd parties to HTTPS on their websockets as well, and avoid the whole issue.
One such way is to allow the "middleman" to also be a proxy
Yes. You could set up a proxy server for the 3rd parties that you want to allow proxying for. There are numerous examples of doing this with nginx scattered around the internet. A simple example might be:
# untested configuration
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/cert.pem;
    ssl_certificate_key /etc/ssl/private/cert.key;

    location /3rdpartyproxy {
        # hand the WebSocket upgrade handshake through to the insecure backend
        proxy_pass http://203.48.15.199:3422;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Something like this may work, but keep in mind it is not in the best interest of your users. The connection between your proxy and the 3rd party is still insecure and vulnerable to all the same attacks that regular HTTP is.
Another thing to watch out for is to make sure your proxy isn't used incorrectly by other parties. I've seen some people set up a proxy with a dynamic path that would forward the traffic to wherever the path pointed. Something like:
ws://myproxy.example/203.48.15.199/3422
Then configure the web server to proxy it to whatever was in the URL. This seems like a nice solution because it'll just do the right thing depending on whatever the URL is. The major downside is that, unless you have some kind of authentication to the proxy, anyone can use it to proxy traffic anywhere.
To summarize:
You cannot allow ws: on https: pages.
The best solution is to move the burden on to the 3rd parties. Have them set up proper HTTPS. This is the most beneficial because it fully serves the purpose of HTTPS.
You can proxy it. The caveat is that the connection is still insecure from your proxy to the 3rd party, and you now need to manage a proxy server. Nginx makes this fairly easy.
If you go the proxy route, make sure it's configured in a manner that cannot be abused.
If I require SSL, does this mean they won't be able to just publish an IP to connect to?
Getting an HTTPS certificate requires a domain name. To be more specific, getting a certificate for an IP address is possible, but it is very out of the ordinary and far more effort than getting one for a domain name.
How much more of a burden is it?
That's mostly a matter of opinion, but it will mean picking a domain, registering it, and paying to maintain the registration with a registrar. The cost varies, but a domain can usually be had for less than $15 a year if there is nothing special about the domain.
Can the process of exposing HTTPS be entirely automatic?
Getting an HTTPS certificate is mostly automatable now. Let's Encrypt offers HTTPS certificates for free (really free, no strings attached) and is now a wildly popular Certificate Authority. Their approach is a bit different: they issue 3-month certificates, but provide software called Certbot that automates the renewal and installation of the certificate. Since the process is entirely automated, the duration of the certificate's validity doesn't really matter, since it all just gets renewed in the background, automatically.
Other web servers take this a step further. Apache supports ACME (the protocol that Let's Encrypt uses) natively now. Caddy is another web server that takes HTTPS configuration to the absolute extreme of simplicity.
For your 3rd parties, you might consider giving them examples of configuration that have HTTPS properly set up for various web servers.
I would love just a little more description of the "certs for IP addresses" possibility, as I feel you glossed over it.
Certificates for IP addresses are possible but exceedingly rare for publicly accessible websites. https://1.1.1.1 is a recent example.
Ultimately, a CA has to validate that the Applicant (the individual or organization requesting the certificate) can demonstrate control of the IP address. Section 3.2.2.5 of the CA/B Baseline Requirements describes how this is done, but practically it is left up to the CA to validate ownership of the IP address. As such, "Domain Validated" certificates are not available for IP addresses; "Organization Validated" (OV) certificates are required at a minimum. OV certificate issuance is more stringent and cannot be automated. In addition to proving ownership of the IP address, you will need to provide documentation of the Applicant's identity, such as a driver's license for an individual or tax documentation for an organization.
Let's Encrypt does not issue certificates for IP addresses, so you would likely have to find a different CA that will issue them, and likely pay for them. Registering a domain name and pointing it at the IP address will likely come out more cost-effective.
I'm not aware of any CA that will issue certificates for an IP address in an automated fashion like Let's Encrypt / CertBot.
Using a domain name affords more flexibility. If an IP address needs to be changed, then it is a simple matter of updating the A/AAAA records in DNS. If the certificate is for the IP address, then a new certificate will need to be issued.
There have been historical compatibility issues with IP addresses in certificates for some software and browsers.
An idea, but you still need your server to behave as a reverse proxy: instead of example.com/somepath, implement a thirdparty.example.com virtual host for each of your third parties. You then need a wildcard certificate for *.example.com, but this prevents people from trying to access example.com/somepathdoesntexist.

Questions about SSL

I have a couple questions about SSL certificates.
I never used them before but my current project requires me to do so.
Question 1.
Where should you use SSL? Like, I know places like logging in and resetting passwords are definite places to put it. How about once they are logged in? Should all requests go through SSL, even if the data in their account is not considered sensitive? Would that slow down SSL for the important parts? Or does it make no difference? (Sort of: well, you've got SSL, might as well make everything go through it no matter what.)
Question 2.
I know in SMTP you can enable SSL as well. I am guessing this would be pretty good to use if you're sending, say, a password reset to them.
If I enable this setting, how can I tell if SSL is working? Like, how do I know it really enabled it? What happens if the mail server does not have SSL enabled but you have that boolean value enabled? Will it just send it as non-SSL then?
With an SSL connection, one of the most expensive portions (relatively speaking) is the establishment of the connection. Depending on how it is set up, for example, it might create an ephemeral (created on the fly) RSA key for establishing a session key. That can be somewhat expensive if many of them have to be created constantly. If, though, the creation of new connections is less common (and they are used for longer periods of time), then the cost may not be relevant.
Once the connection has been established, the added cost of SSL is not that great, although it does depend on the encryption type. For example, using 256-bit AES for encryption will take more time than using 128-bit RC4. I recently did some testing with communications all on the same PC, where both client and server were echoing data back and forth. In other words, the communications made up almost the entire cost of the test. Using 128-bit RC4 added about 30% to the cost (measured in time), and using 256-bit AES added nearly 50%. But remember, this was on one single PC on the loopback adapter. If the data were transmitted across a LAN or WAN, then the relative cost is significantly less. So if you already have an SSL connection established, I would continue to use it.
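As a small illustration of amortizing that setup cost (a sketch assuming the third-party requests library and a placeholder URL): keeping one session alive means the TCP and TLS handshakes happen once, and later requests reuse the already-established connection.
import requests

# One Session = one connection pool; the TLS handshake for the host is paid
# once, and subsequent requests reuse the already-established connection.
with requests.Session() as session:
    for i in range(10):
        response = session.get(f"https://api.example.com/items/{i}", timeout=10)
        response.raise_for_status()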
As far as verifying that SSL is actually being used? There are probably "official" ways of verifying it; using a network sniffer is the poor man's version. I ran Wireshark, sniffed network traffic, and compared a non-SSL connection with an SSL connection, looking at the raw data. I could easily see raw text data in the non-SSL version, while the SSL version "looked" encrypted. That, of course, means absolutely nothing. But it does show that "something" is happening to the data. In other words, if you think you are using SSL but can recognize the raw text in a network sniff, then something is not working as you expected. The converse is not true, though: just because you can't read it does not mean it is encrypted.
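For a check that doesn't involve a packet capture at all (a small sketch; example.com and port 443 are stand-ins): once the handshake completes, Python's ssl module reports which protocol version and cipher suite were actually negotiated.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher:", tls.cipher())      # (suite name, protocol, secret bits)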
Use SSL for any sensitive data, not just passwords, but credit card numbers, financial info, etc. There's no reason to use it for other pages.
Some environments, such as ASP.NET, allow you to require SSL for cookies, so they are only ever sent over encrypted connections. It's good to do this for any authentication or session-ID related cookies, as these can be used to spoof logins or replay sessions. You can turn these options on in web.config; they're off by default.
ASP.NET also has an option that will require all authenticated pages to use SSL. Non-SSL requests get tossed. Be careful with this one, as it can cause sessions to appear hung. I'd recommend not turning on options like this, unless you really need them.
Sorry, can't help with the smtp questions.
First off, SSL is used to encrypt communications between the client and the server. It does this using public-key cryptography to establish the encrypted channel. In my opinion it is good practice to use it for anything that carries personally identifiable or otherwise sensitive information.
Also, it is worth pointing out that there are two types of SSL authentication:
One Way - in which there is a single server certificate; this is the most common
Two Way - in which there is a server certificate and a client certificate; the client first verifies the server's identity and then the server verifies the client's identity - an example is the DoD CAC
With both, it is important to have up-to-date certificates signed by a reputable CA. This verifies your site's identity.
As for question 2: yes, you should use SSL over SMTP if you can. If your emails are routed through an untrusted router, they can be eavesdropped on if sent without encryption. I am not sure about the 'boolean value enabled' question; I don't believe setting up SSL is simply as easy as checking a box, though.
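On the "boolean value" question above, one way to avoid a silent fallback to cleartext is to make the TLS upgrade an explicit, checkable step. A minimal sketch with Python's smtplib (host, port and addresses are placeholders): if the server never advertises STARTTLS, the code refuses to send rather than quietly mailing the password reset in the clear.
import smtplib
import ssl

context = ssl.create_default_context()

with smtplib.SMTP("mail.example.com", 587) as server:      # placeholder host
    server.ehlo()
    if not server.has_extn("starttls"):
        raise RuntimeError("server does not offer STARTTLS; not sending in the clear")
    server.starttls(context=context)   # raises smtplib.SMTPException if the upgrade fails
    server.ehlo()                      # re-identify over the now-encrypted channel
    server.login("mailer", "app-password")                  # placeholder credentials
    server.sendmail("noreply@example.com", "user@example.com",
                    "Subject: Password reset\r\n\r\nYour reset link goes here.")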
A couple people have already answered your Question 1.
For question 2 though, I wouldn't characterize SMTP over SSL as protecting the message. There could be plenty of points at which the message is exposed. If you want to protect the message itself, you need S/MIME, or something similar. I'd say SMTP over SSL is more useful for protecting your SMTP credentials, so that someone cannot grab your password.

Can a proxy server cache SSL GETs? If not, would response body encryption suffice?

Can a (or any) proxy server cache content that is requested by a client over HTTPS? As the proxy server can't see the query string or the HTTP headers, I reckon it can't.
I'm considering a desktop application, run by a number of people behind their company's proxy. This application may access services across the internet and I'd like to take advantage of the built-in internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL-delivered content, would simply encrypting the content of a response be a viable option?
I am considering having all GET requests that we wish to be cacheable requested over HTTP, with the body encrypted using asymmetric encryption, where each client has the decryption key. Any time we wish to perform a GET that is not cacheable, or a POST operation, it will be performed over SSL.
The comment by Rory that the proxy would have to use a self-signed cert is not strictly true.
The proxy could be implemented to generate a new cert for each new SSL host it is asked to deal with and sign it with a common root cert. In the OP's scenario of a corporate environment, the common signing cert can rather easily be installed as a trusted CA on the client machines, and they will gladly accept these "faked" SSL certs for the traffic being proxied, as there will be no hostname mismatch.
In fact, this is exactly how software such as the Charles Web Debugging Proxy allows for inspection of SSL traffic without causing security errors in the browser, etc.
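To make the "generate a new cert per host and sign it with a common root" idea concrete, here is a hedged sketch using Python and the third-party cryptography package (the function name and parameters are mine, and this is not how Charles itself is implemented). The proxy holds a root key and certificate that the corporate clients have installed as trusted, and mints a short-lived leaf certificate for whichever hostname the client asked for, so there is no hostname mismatch.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def issue_leaf_cert(hostname, ca_key, ca_cert):
    """Mint a certificate for `hostname`, chained to the proxy's trusted root."""
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    builder = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)                  # chains up to the installed root
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=7))
        # A matching SAN is what prevents the hostname-mismatch warning.
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                       critical=False)
    )
    return leaf_key, builder.sign(ca_key, hashes.SHA256())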
No, it's not possible to cache HTTPS directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache the content, it needs to be able to read it, i.e. decrypt it.
You can do something to cache it, though. You basically do the SSL on your proxy, intercepting the SSL connection the client intended for the server. The data is encrypted between the client and your proxy, where it is decrypted, read and cached, then re-encrypted and sent on to the server. The reply from the server is likewise decrypted, read, cached and re-encrypted. I'm not sure how you do this in major proxy software (like Squid), but it is possible.
The only problem with this approach is that the proxy will have to use a self-signed cert to encrypt the traffic to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
I think you should just use SSL and rely on an HTTP client library that does caching (e.g. WinInet on Windows). It's hard to imagine that the benefits of enterprise-wide caching are worth the pain of writing a custom security encryption scheme or certificate fun on the proxy. Worse, in the encryption scheme you mention, doing asymmetric ciphers on the entity body sounds like a huge perf hit on the server side of your application; there is a reason that SSL uses symmetric ciphers for the actual payload of the connection.
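To illustrate that last point (a sketch using the third-party cryptography package; key sizes and names are illustrative, not a recommendation): a home-grown body-encryption scheme would almost certainly end up re-inventing the hybrid approach TLS already uses, where the bulk payload is encrypted with a fast symmetric cipher and only the small symmetric key is wrapped with RSA.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
body = b"large cacheable response body..." * 10_000   # the thing RSA is too slow for

# Fast symmetric encryption of the bulk payload.
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, body, None)

# Slow asymmetric encryption, but only of the 16-byte key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = client_key.public_key().encrypt(aes_key, oaep)

# The client, holding the private key, unwraps the AES key and decrypts the body.
recovered_key = client_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == body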
How about setting up a server cache on the application server behind the component that encrypts https responses? This can be useful if you have a reverse-proxy setup.
I am thinking of something like this:
application server <---> Squid or Varnish (cache) <---> Apache (performs SSL encryption)
