I've read OWASP's HSTS cheat sheet at
https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support
and also watched the related video:
https://www.youtube.com/watch?v=zEV3HOuM_Vw
but I still can't understand how this helps against man-in-the-middle attacks when the user types http://site.com. OWASP claims it helps.
Let's imagine the following scenario: the man in the middle gets a request from the victim for http://site.com. He then fires an HTTPS request himself to https://site.com and returns the content to the user, stripping the HSTS header. All further user input is visible to the attacker.
In my mind, there's no way to protect against MITM unless we're using HTTPS from the beginning.
Does HSTS header really help against MITM attacks?
HSTS helps only if the user agent has visited the site before and there was no interference from a MITM at the time of that first visit. In other words, you are vulnerable the first time you go to the site, but never again.
Since you are still vulnerable the first time, HSTS is far from perfect. But it's better than nothing, since it does protect against an attacker who targets you after you have already visited the site.
(Except if the user was careful to use https on the first visit: in that case they are protected the first time, and also protected against forgetting to use https on subsequent visits.)
Firefox is also working on an HSTS preloaded list: http://blog.mozilla.org/security/2012/11/01/preloading-hsts/
The browsers typically maintain the HSTS information in an implementation-dependent secure store of some form. Of course with Firefox and Chrome the code is browseable. See for example https://code.google.com/p/chromium/source/search?q=stsheader&origq=stsheader&btnG=Search+Trunk
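For concreteness, here is a minimal sketch of the server side of this, assuming a Java servlet stack (the HstsFilter class name and the one-year max-age are illustrative choices, not anything the question requires):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Adds the HSTS header to every response served over HTTPS. A browser that has
// seen it will internally rewrite future http:// requests for this host to
// https:// for the next max-age seconds, before anything goes on the wire.
public class HstsFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (req.isSecure() && res instanceof HttpServletResponse) {
            ((HttpServletResponse) res).setHeader(
                    "Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}

Per RFC 6797, browsers only honour the header when it arrives over a secure connection, hence the req.isSecure() check.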
Related
The MDN page on HTTP Strict Transport Security (HSTS) has an example of HSTS settings as below:
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
where I can find the corresponding meanings of max-age and includeSubDomains in RFC 6797, but it does not give the meaning of preload.
I have tested in the latest Chrome and Firefox, and it seems that preload does not do anything at all. With and without preload, when I make an http request, Chrome shows a 307 Internal Redirect made by the browser itself without any request being sent to the server, which is what HSTS is expected to do.
So what is the purpose of preload?
In addition, even if I add the HSTS header, there is still a chance of being attacked the first time the user visits the website over HTTP. How can we mitigate this risk? That is, how can we tell the browser to add the domain to its HSTS list before any request is sent to the server?
P.S.
I have found https://hstspreload.org/, which, if I want to register the domain, requires me to add the max-age and preload directives. Is that the reason why preload is necessary? And is this the page where I should add my domain to ensure new users are safe from an SSL stripping attack?
Preload is a big commitment. The domain will effectively be hardcoded into a browser's code, and given that it takes several months at a minimum to roll out a new browser version, it's basically irreversible.
Also, as it's done at the domain level, mistakes have been made: for example, preloading domain.com without realising that blog.domain.com or intranet.domain.com have not been upgraded to HTTPS. At that point your options are 1) upgrade the affected sites to HTTPS and live with zero users on them until then, or 2) reverse the preload, wait the months for that to roll out to all browsers, and deal with zero users until then.
HTTPS is much more common now, so the risks are reduced, but when HSTS preload first came out these were real risks.
Therefore the preload directive was a signal that the site owner was ready for that commitment. It also prevents someone else from submitting a site that isn't using this header (whether maliciously or with good, but misguided, intentions).
You are correct in that it doesn’t “do” anything in the browser.
There was also talk of checking whether the preload directive was still being sent and, if not, removing the domain from the preload list, but I'm not sure whether that was ever done.
Is it okay that a website displays the csrf_token as a URL parameter? I have a feeling that I shouldn't be able to see it, but I am not quite sure. If someone can clear this up a bit, I would be grateful!
No, it's not acceptable.
Passing tokens in URLs isn't normally an acceptable solution; in fact, in some cases it's considered a vulnerability.
What if the website is not running under HTTPS?
What if it's running under HTTPS but HSTS isn't enabled on the server? Then SSL-stripping techniques and other MITM attacks would be possible.
Even if it's running under HTTPS and HSTS is enabled, that won't solve the issue.
The token could be exposed in:
Referer Header
Web Logs
Shared Systems
Browser History
Browser Cache
For more information refer to:
Information exposure through query strings in URL
OWASP CSRF Cheatsheet
The typical characteristics of a CSRF Token are as follows:
- Unique per user session
- Large random value
- Generated by a cryptographically secure random number generator
CSRF tokens in GET requests are potentially leaked in several locations: browser history, HTTP log files, network appliances that log the first line of an HTTP request, and the Referer header if the protected site links to an external site. So passing the token in a GET request is not recommended.
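To make those characteristics concrete, here is a minimal sketch assuming a Java servlet application (the csrfToken attribute and parameter names are invented for illustration): the token is produced by a cryptographically secure RNG, tied to the session, and meant to travel in a hidden form field of a POST rather than in the URL.

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a large random value once per session and cache it in the session.
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            byte[] bytes = new byte[32];                       // 256 bits of randomness
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute("csrfToken", token);
        }
        return token;
    }

    // Compare the token submitted in a hidden form field (POST body, not the URL)
    // against the one stored in the session.
    public static boolean isValid(HttpServletRequest request) {
        String expected = (String) request.getSession().getAttribute("csrfToken");
        String submitted = request.getParameter("csrfToken");
        return expected != null && expected.equals(submitted);
    }
}

In the page, the token would then be emitted as <input type="hidden" name="csrfToken" value="..."> so it goes in the request body rather than the query string; in production code a constant-time comparison (e.g. MessageDigest.isEqual) is preferable to String.equals.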
HTTPS is slow to start up, especially on low-bandwidth and high-latency connections, or on low-spec machines. Unfortunately it seems to be the standard method for securing logins used by all major websites.
But we visit a lot of websites simply to read information. If we only occasionally want to make a write/update, then waiting to get logged in is an unnecessary time overhead.
The most upsetting example for me is:
GitHub. I often want to visit a GitHub page just to read a project's overview or view a file. But I must wait for the SSL handshake, even if I don't want to do anything related to my personal account. GitHub always redirects my browser from HTTP to HTTPS. Why?!
I understand a secure connection is important to authenticate a user account. But when this impacts the user experience of simply viewing public pages, we should try to work out an alternative (and encourage major sites to adopt it).
Here is a possible workaround (1):
Allow users to make HTTP connections to our website, so we can present pages quickly without the need for an SSL handshake.
Allow the login to occur after the page has loaded. Perhaps an Ajax request over HTTPS could authenticate the user, and provide relevant updates to the page. (Is this fundamentally insecure? Edit: Yes, it is not fully secure, see answer below.)
Another alternative might be (2):
Instead of long-lived cookies over HTTPS, use a combination of long-lived one-time-key cookies for persistent login, and short-lived cookies for non-linear browsing, over HTTP. Replace them frequently. (This may be less secure than HTTPS, but more secure than normal long-lived cookie usage over HTTP.)
Do these solutions seem secure enough, or can you suggest something better?
(It might not be a coincidence that I am writing this from somewhere near Indonesia, which is a long way from the USA net!)
Workaround #1 in the question cannot provide full security to the first page, because a man-in-the-middle attack could have injected or modified scripts on the page before the login occurs.
Therefore we should not ask for a username/password on the HTTP page. However, the HTTPS Ajax operation might be able to inform the user that a persistent login session has/can be restored. (A script could then replace all HTTP links on the page with HTTPS links.)
But even if that succeeds, we still should not fully trust any user clicks or <form> POSTs originating from the first page. (Of course, requests to view other pages are fine, but it might be wise to reject updates to settings, password, and finance-related actions.)
This technique could at least be a way to perform the HTTPS setup in the background, without making the user wait for initial content. (StackOverflow uses something like this procedure.) Hopefully the browser will cache the HTTPS connection, or at least the keys, avoiding any delay on subsequent requests.
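As a rough sketch of that background step, assuming a Java servlet backend (the SessionStatusServlet name, the persistentLogin cookie name, and the JSON shape are all invented for illustration), the HTTPS-only Ajax endpoint could do no more than report whether a persistent login can be restored:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Called over HTTPS by a script on the (possibly HTTP) page to ask whether a
// persistent login can be restored. It never receives a password, and it only
// reads the Secure, HttpOnly login cookie, which the browser will not send
// over plain HTTP.
public class SessionStatusServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        boolean loggedIn = false;
        Cookie[] cookies = req.getCookies();
        if (req.isSecure() && cookies != null) {
            for (Cookie c : cookies) {
                if ("persistentLogin".equals(c.getName()) && isStillValid(c.getValue())) {
                    loggedIn = true;
                }
            }
        }
        resp.setContentType("application/json");
        resp.getWriter().write("{\"loggedIn\":" + loggedIn + "}");
    }

    // Placeholder: look the token up in a server-side store and check its expiry.
    private boolean isStillValid(String token) {
        return false;
    }
}

Even with this in place, as noted above, the HTTP page itself remains untrusted, so the endpoint should only report state and never perform account changes.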
Here is one alternative I can think of, albeit slightly restrictive:
Allow browsing of public pages over HTTP, but don't perform any user login. This avoids all security concerns.
The 'Login' link would then send us to an HTTPS page, and may be able to recover the user's account automatically from a long-lived HTTPS cookie.
Make an option available "Always log me in through HTTPS", for users who are not bothered by the handshake overhead, and prefer to be logged in at all times. Note that a cookie for this setting would need to be set on the HTTP domain, since it needs to work without the user being logged in!
In reality, we would probably offer the converse: default to the existing prevalent behaviour of redirecting to HTTPS automatically, but provide an opt-out "Do not always switch to HTTPS for login" for those users wishing to avoid the SSL handshake.
But there are still issues with this approach:
Unfortunately cookies are not namespaced to the protocol (http/https). We can mark cookies as "secure" to prevent them ever being sent over HTTP, but some browsers will wipe them entirely if an HTTP request does occur. One way to keep the cookies separate would be to use different domains for unauthenticated and authenticated access to the site. But then we find ourselves violating REST, with two different addresses pointing to essentially the same resource...
Can this be resolved?
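For reference, a minimal sketch of the "secure" marking discussed above, assuming a Java servlet backend (the persistentLogin cookie name and lifetime are illustrative): the flag tells the browser never to send the cookie over plain HTTP.

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public final class LoginCookies {

    // Issue the long-lived login cookie so that it is only ever sent over HTTPS
    // and is invisible to page scripts (setHttpOnly requires Servlet 3.0+).
    public static void issue(HttpServletResponse response, String tokenValue) {
        Cookie cookie = new Cookie("persistentLogin", tokenValue);  // name is illustrative
        cookie.setSecure(true);               // never transmitted over plain HTTP
        cookie.setHttpOnly(true);             // not readable from JavaScript
        cookie.setMaxAge(60 * 60 * 24 * 30);  // roughly 30 days
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}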
So this is a somewhat broad question, I know, but I'm hoping someone who is wiser than I can provide a summary answer that can help wrap up all of the ins and outs of SSL for me.
Recently I watched a video of Moxie Marlinspike giving a presentation at BlackHat, and after the hour was up, I thought to myself, "It doesn't really matter what I do. There's always a way in for a determined hacker." I recall his final example, in which he demonstrated how even using a redirect when the user typed in an HTTP address to go directly to HTTPS, there is still an opportunity in that transition for an attacker to insert himself via MITM.
So if browsers always default to HTTP, and users very rarely enter an HTTPS address directly in the address bar, then an attacker who is listening for accesses to Bank X's website will always have an opportunity during the HTTP -> HTTPS redirect to gain control. I think they have to be on the same network, but that's little consolation. Seems like Marlinspike's point was that until we go straight HTTPS as a standard rather than an alternative, this will always be a problem.
Am I understanding this correctly? What is the point in redirecting to HTTPS if an attacker can use MITM during the transition to gain control? Does anyone have any clue as to preventative measures one might take to protect himself? Would redirecting via javascript that obfuscates the HTTPS links (so they can't be stripped out in transit) be of any help? Any thoughts would be appreciated!
You can use HSTS to tell the browser that your site must always be accessed using HTTPS.
As long as the browser's first HTTP connection is not attacked, all future connections will go directly over HTTPS, even if the user doesn't type HTTPS in the address bar.
There is actually no information leak in the HTTP-to-HTTPS redirect if it is implemented properly on the server side (i.e. if all sensitive cookies are marked as Secure, so they are only sent over HTTPS, and are inaccessible to JavaScript).
Of course, if the end user doesn't know anything about security, they can be hacked; but on the other hand, if a user doesn't know even basic security rules and there is no system administrator to explain them, that user can be hacked in so many ways that HTTP-to-HTTPS redirect problems are not really the trouble.
I have seen many users who downloaded and ran unknown EXE files from a server just for the promise of "winning 1000000 dollars", which is also a good way to hack them (even better than exploiting the redirect: if your EXE runs on the user's computer, you are already king there and can steal any of the user's cookies under default security settings).
So, if a user collaborates with the hacker and helps the hacker to compromise his own computer, then yes, such a user will get hacked. But security is about professionals who are ready to protect themselves, not about a user who publishes his PayPal cookies somewhere in a blog post and is then surprised that he got hacked.
Hi, I have recently been reading about JSP and came across its technologies, mainly sessions. Under sessions, I read about URL rewriting, one of the methods used to maintain the session with the client. But URL rewriting changes the URL to include the session ID, which is therefore visible to the client.
Is that not a security issue? Let's say, for example, that someone other than the particular user notes this session ID and makes bad use of it? Or are there techniques for preventing this?
Correct me if I am wrong.
Certainly this is a security concern. If you quickly take note of the jsessionid value, either from a URL that someone else mistakenly copy-pasted in public or from a publicly posted screenshot of some HTTP debugging tool (such as Firebug) showing the request/response headers, and the website in question maintains users by a login, then you'll be able to log in as that user just by appending the jsessionid to the URL or to the request headers. Quickly, because those sessions expire by default after 30 minutes of inactivity. This is called a session hijacking attack.
You can disable URL rewriting altogether so that the jsessionid never appears in the URL. But you're still vulnerable to session hijacking: a hacker might have installed an HTTP traffic sniffer on a public network or via some trojan/virus, or even used XSS, to learn those cookies. To be clear, this security issue is not specific to JSP; a PHP, ASP or whatever website that maintains the login with a cookie-based session is just as vulnerable.
To be really safe with regard to logins, let the login and logged-in traffic go over HTTPS instead of HTTP, and make the session cookie Secure (HTTPS-only).
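A minimal sketch of that last step on a Servlet 3.0+ container (the listener class name is illustrative): mark the container-managed session cookie as Secure and HttpOnly once at startup.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

// Runs once at application startup and hardens the JSESSIONID cookie:
// Secure   -> the browser only sends it over HTTPS,
// HttpOnly -> page scripts cannot read it, which limits what XSS can steal.
@WebListener
public class SessionCookieHardener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        SessionCookieConfig config = sce.getServletContext().getSessionCookieConfig();
        config.setSecure(true);
        config.setHttpOnly(true);
    }

    public void contextDestroyed(ServletContextEvent sce) { }
}

The same two flags can also be set declaratively in web.xml under <session-config><cookie-config>.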
URL rewriting of the session identifier is discouraged in most (if not all) security circles. The OWASP ASVS explicitly discourages its use, as it results in exposure of the session identifier via an insecure medium.
When URL rewriting of the session identifier is enabled, the URL could be transmitted (with the session identifier) to other sites, resulting in disclosure of the session identifier via the HTTP Referer header. In fact, a simple request by the browser for a resource located on another domain can result in hijacking (via a man-in-the-middle attack) or fixation of the session; this is as bad as a cross-site scripting vulnerability in the site.
On a different note, additional protection mechanisms like the HttpOnly and Secure cookie flags, introduced into various browsers to protect the session cookie in different ways, can no longer be used when a site performs URL rewriting of the session identifier.
I believe you're referring to cookieless sessions, although I have seen it referred to as 'URL rewriting' in Java circles.
There are some extra session hijacking concerns (and they apply across all web development frameworks that support cookieless sessions--not just JSP). But session hijacking is possible even with cookies.
Here's a pretty good in-depth article on MSDN about cookieless sessions and the risks/benefits. Again, these are all platform agnostic.
http://msdn.microsoft.com/en-us/library/aa479314.aspx (toward the bottom)
This is what I came across when checking the OWASP recommendations for URL rewriting. Exposing session information in the URL is a growing security risk (moving from place 7 in 2007 to place 2 in 2013 on the OWASP Top 10 list).
Options for managing URL rewriting include:
disabling them at the server level.
disabling them at the application level.
An attractive option is a Servlet filter.
The filter wraps the response object with an alternate version, which changes response.encodeURL(String) and related methods into no-operations.
(The WEB4J tool includes such a filter.)
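A minimal sketch of such a filter, assuming the standard Servlet API (the class name is illustrative and this is not the WEB4J implementation): the response is wrapped so that the encodeURL family returns the URL untouched and no jsessionid is ever appended.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Wraps every response so that the encodeURL/encodeRedirectURL family becomes a
// no-op: the URL is returned unchanged and the session id is never written into it.
public class DisableUrlSessionFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (res instanceof HttpServletResponse) {
            res = new HttpServletResponseWrapper((HttpServletResponse) res) {
                @Override
                public String encodeURL(String url) { return url; }

                @Override
                public String encodeRedirectURL(String url) { return url; }
            };
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}

On Servlet 3.0+ containers, a declarative alternative is to restrict session tracking to cookies, e.g. <tracking-mode>COOKIE</tracking-mode> inside <session-config> in web.xml.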