Why do browsers not keep status 302 pages in back-button history?

When a request for an HTML page gets an HTTP 302 Found response (aka 'temporary redirect'), Firefox loads the redirect target 'in place', without keeping the originally opened URL U in 'back-button history'.
One popular use for 302 (and a correct use of the code, I think) seems to be redirecting to a /cookieAbsent page, alerting the user that their browser doesn't 'support' cookies (or, more likely, that the user has disabled them).
The consequence of this browser behaviour is that, if the user then decides to enable cookies, reloading of course just reloads /cookieAbsent (the server couldn't reliably send you back even if it wanted to), which is no good, and the back button goes back to wherever they were before opening the original U (whether by hyperlink or by typing it). This would make sense to me for 301 Moved Permanently (aka 'permanent redirect'), but seems undesirable for 302, especially when used like this.
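For concreteness, the detection pattern usually looks something like this minimal PHP sketch (the cookie name and the /cookieAbsent path here are assumptions for illustration):
<?php
// First visit: set a test cookie and bounce back to ourselves.
if (!isset($_GET['cookiecheck'])) {
    setcookie('cookietest', '1');
    header('Location: ' . $_SERVER['PHP_SELF'] . '?cookiecheck=1', true, 302);
    exit;
}
// Second visit: if the test cookie did not come back, cookies are
// disabled (or unsupported), so send the 302 the question describes.
if (!isset($_COOKIE['cookietest'])) {
    header('Location: /cookieAbsent', true, 302);
    exit;
}
// ...cookies work; continue serving the real page...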
If I am implementing a browser - or, perhaps, hoping to report a bug or feature request in an existing one - is this behaviour required by a common specification, or is it simply up to the browser to do as it sees fit?

You can find this very relevant bug from 20 years ago (!) in the Mozilla bug tracker.
A short summary of the justification: they did it that way because some websites used 302 the wrong way, as a redirect chain:
/product?limitedtimetoken
302 to
/product?superproduct
302 to
/superproduct
and some got used to the "wrong" behavior of not saving those URLs, and relied on it for pretend security:
/checkauth
302 to
/connected?gotoaccount
302 to
/account
and while it would have been "technically right" to save those in history, it was just the wrong behavior for end users to have all those intermediate URLs saved, since they were fake 302s that would always redirect in practice.
If you've ever hit the case of "the back button of my browser doesn't work, it always bounces me forward again", you can understand their reasoning.
They mention that it was also what IE and Netscape did, though I could not find the why for those (I assume it's mostly the same: it would have been bad for the end user to have all of that saved in history).
by the way, it makes no sense to undo "this change" - this change is the right fix.. you're right that the site in question is doing the wrong thing, but we should still be fixing this problem.
this patch makes mozilla behave identically to Netscape 4x and IE w.r.t. storing redirect urls in global history. and it is important that we also implement this behavior because (unfortunately) many sites have grown up depending on it for some level of perceived security. this fact is something that can't be changed overnight, nor will it help the reputation of mozilla to not compromise on this issue.

Suppose URL $u$ redirects to URL $v$. Suppose also that $u$ is pushed to the history and then $v$ is pushed to the history. When the user clicks the back button, the user will be taken to $u$, which will redirect to $v$. The user will find himself in an infinite loop, making the back button useless.

This is because browser history does not (have to) conform to cache semantics.
The goal of the history (back button) is to present the user with the page that they would have gotten given the (possibly expired) responses the browser got earlier.
For a more detailed explanation you can look at: https://svn.tools.ietf.org/svn/wg/httpbis/draft-ietf-httpbis/latest/p6-cache.html#history.lists

Related

HTTPS iframe within an HTTP page, how can I stop that?

I'm looking at buying an airline ticket, and I'm having to enter my credit card details in an http:// page.
If I look at the source code, this is actually an iframe with an HTTPS source, so it actually is secure, but a non-tech-savvy user has no way of knowing that. Obviously, this is horrible (even for tech-savvy users).
Now, my question is, if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages? Is there a technique that Verified by Visa really should be using here?
I'm looking at buying an airline ticket, and I'm having to enter my credit card details in an http:// page
Ouch! Someone's breaking the PCI-DSS terms of their merchant agreement huh.
If I look at the source code, this is actually an iframe with an HTTPS source, so it actually is secure, but a non-tech-savvy user has no way of knowing that.
Indeed. You'd have to look at all the source code, including every piece of script on the parent page, to ensure that there is nothing interfering with the iframe (eg via clickjacking) and that the image you see in the browser page actually is the secure iframe. And ensure there were no other tabs open from the same domain with a reference to the window to cross-document-script into it... a non-starter.
if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages?
I believe you could do it using Content Security Policy Level 2, eg with the header:
Content-Security-Policy: frame-ancestors https:
However, support is patchy: at the time of writing, even the latest IE and Safari don't support it, and obviously it didn't exist at the time 3-D Secure implementations were being written. Still, if it just complains some of the time, that would be enough to let an unwitting merchant know they'd messed up their payment integration.
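Sending that header is a one-liner; here is a minimal PHP sketch (nothing here beyond the policy string above):
<?php
// Allow this page to be framed only by pages served over HTTPS.
// Browsers without CSP Level 2 support simply ignore the header.
header('Content-Security-Policy: frame-ancestors https:');
// Note: the older X-Frame-Options header cannot express
// "any HTTPS ancestor", so there is no legacy equivalent.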
One other thing they might have been able to do back then would be to check the Referer header for an http: address. Still not reliable (and maybe tricky to make work for all possible flows, including redirects, pop-ups, and in-between redirections), but it could have helped.
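A rough PHP sketch of that Referer check, with the caveat above that the header is optional and often stripped, so this is best-effort only:
<?php
// Refuse to render the payment frame if the embedding page appears
// to be plain http. An absent Referer proves nothing, so we only
// act on a positive match.
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (strpos($referer, 'http://') === 0) {
    http_response_code(403);
    exit('This payment frame must be embedded in an HTTPS page.');
}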

Is there any way to tell a browser that this is a bad URL to remember?

I'm sending emails to customers, and I'm providing a custom URL for each, which when they go to, will log them in.
This is fine, except if they are using a shared browser that will remember the URL.
Is there any way at all to suggest to the browser that it shouldn't remember a URL?
Edit: This question has nothing to do with caching of the page.
Have the link log them in once. Then make them create credentials that let them access the site in the future. What's to stop a random person from typing in the URL and gaining access to the content?
Yes. You can redirect them with a 301 or 302. Then the browser won't save the URL they went to. At least that works with Mozilla-based browsers, and I would imagine others too.
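A minimal sketch of that idea under the question's setup (lookup_user_by_token() and invalidate_token() are hypothetical helpers, and the token parameter name is assumed):
<?php
// Consume the emailed one-time token, start a session, then 302 away
// so the tokenized URL is not what the browser ends up remembering.
session_start();
$token = isset($_GET['token']) ? $_GET['token'] : '';
$user = lookup_user_by_token($token);   // hypothetical helper
if ($user === null) {
    http_response_code(403);
    exit('Invalid or expired link.');
}
$_SESSION['user_id'] = $user->id;
invalidate_token($token);               // hypothetical: one-time use
header('Location: /account', true, 302);
exit;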
Another way, though it is uglier, is to reply with an error and include a body which does a refresh. Whether that works in most browsers? Probably not. However, browsers do not cache pages that return an error (404 Page Not Found would work; you could also use 403 Forbidden).
Other than that, there isn't much you can do. JavaScript does not allow you to tamper with the history anymore...

OWASP TOP10 - #10 Unvalidated Redirects and Forwards

I read many of the articles on this topic, including the OWASP page and the Google blog article about open redirects...
I also found this question on open redirects here on Stack Overflow, but it's a different one.
I know why I should not redirect ... this makes total sense to me.
But what I really don't understand: where exactly is the difference between redirecting and putting this in a normal <a href> link?
Maybe some users are looking at the status bar, but I think most of them are not really looking at the status bar when they click a link.
Is this really the only reason?
Like in this article, where they wrote:
<a href="http://bank.example.com/redirect?url=http://attacker.example.net">Click here to log in</a>
The user may assume that the link is safe since the URL starts with their trusted bank, bank.example.com. However, the user will then be redirected to the attacker's web site (attacker.example.net) which the attacker may have made to appear very similar to bank.example.com. The user may then unwittingly enter credentials into the attacker's web page and compromise their bank account. A Java servlet should never redirect a user to a URL without verifying that the redirect address is a trusted site.
So, if you have something like a guestbook where the user can post a link to their homepage, then the only difference is that there is no redirect, but the link still goes to the evil webpage.
Am I seeing this problem right?
From my understanding, it is not the redirect itself that is the problem. The main problem here is allowing a redirect (where the target is potentially controllable by the user) that contains an absolute URL.
The fact that the URL is absolute (meaning it begins http://host/etc) means that you are unintentionally allowing cross-domain redirects. This is very similar to classic XSS vulnerabilities, whereby JavaScript can be reflected to make cross-domain calls (and leak your domain's information).
So, as I understand it, the way to fix most of these problems is to make sure that any redirect (on the server) is done relative to the root. Then there is no way for the user-controlled query-string value to go somewhere else.
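A minimal PHP sketch of that root-relative check (the next parameter name is just an example):
<?php
// Only follow redirect targets that are site-relative paths.
// Rejects absolute URLs (http://evil.example/...) and
// protocol-relative ones (//evil.example/...).
$target = isset($_GET['next']) ? $_GET['next'] : '/';
if (!preg_match('#^/(?!/)#', $target)) {
    $target = '/'; // anything suspicious falls back to the homepage
}
header('Location: ' . $target, true, 302);
exit;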
Does that answer your question or just create more?
The main problem is that it's possible for an attacker to make the URL appear trustworthy, as it actually is a URL to a web site the victim trusts, i.e. bank.example.com.
The redirect target does not need to be as obvious as in the example. In practice, the attacker will probably use further techniques to trick both the user and possibly even the web application, with special encodings, parameter pollution, and other tricks to spoof a legitimate URL.
So even if a victim is security-conscious enough to check a URL before clicking a link or requesting its resource otherwise, all they can verify is that the URL points to the trustworthy web site bank.example.com. And that alone is too often enough.
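For example, the link from the article above can be made far less obvious simply by percent-encoding the target (a hypothetical illustration):
http://bank.example.com/redirect?url=http%3A%2F%2Fattacker.example.net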

Preventing other websites to see the 'correct' referer

On my website users can post stuff anonymously.
When they have posted something they will be redirected to their post, let's say:
http://example.com/post/2/title-of-the-anonymous-post
The user who submitted the post and the admins are the only ones with access to that post (until it is made public). Once it is made public the post would still be anonymous (i.e. people cannot see who submitted the post).
However, on that page there are also some external links. If the user decides to click an external link the target website has the ability to log the http referer (which would contain the link to the hidden page). This means it would be possible to find out who posted it once it is made public.
Is there a way to change the HTTP referer (/ referrer) when a users clicks on a link to another website?
By for example first redirecting the user to another url and let that page redirect to the external website:
user clicks on: http://example.com/referer-hider?url={urlencoded(url)}
and let the referer-hider redirect the user to the external page so that the referer will contain: http://example.com/referer-hider?url={urlencoded(url)}
Will this work? Or is there another solution for this (which doesn't require client side modifications)?
Since the referrer is provided by the browser to a web server, I only see a few ways to ensure that external sites don't get a view of this "hidden" URL.
The first way would be (as you said) to remove the external links from your hidden page by running them through a redirector which uses header("Location: ..."). Yes, that will work. You might just want to use this in general, so that you can track the exits from your site.
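A minimal sketch of such a redirector (the url parameter matches the question's /referer-hider?url=... scheme; note that without a whitelist of allowed targets this is itself an open redirect, as the OWASP A10 discussion above explains):
<?php
// referer-hider: external sites see (at most) this page's URL as the
// referer, not the hidden post URL. Restrict $url in real use.
$url = isset($_GET['url']) ? $_GET['url'] : '/';
header('Location: ' . $url, true, 302);
exit;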
Second way would be to stop hiding this URL. It won't stay hidden forever, after all. A Google/Alexa/whatever toolbar hits it, and bam, it's indexed. So instead, build this hidden functionality into something session based. Make a script that changes its output depending on session variables, and only allow the hidden content to show up if people have logged in or previewed their post or whatever.
The third (and probably best) way would be to implement proper access control, so that anonymous users CANNOT visit the page with the restricted content. If you want an anonymous original poster to be able to visit THEIR OWN post, you can send them a cookie, then validate the cookie upon the visit to the unapproved post.
For example, upon submission for approval:
setcookie('postkey', mysql_insert_id());
Then:
// explode() needs a delimiter as its first argument
$pieces = explode('/', $_SERVER['PHP_SELF']);
$postid = $pieces[2]; // or wherever the id sits in your URL scheme
// no cookie, or a cookie for a different post: send them away
if (!isset($_COOKIE['postkey']) || $_COOKIE['postkey'] != $postid) {
    header("Location: http://example.org/");
    exit;
}
etc. You probably want better protection than this, but it should give you some ideas.
The HTTP referer is not transmitted by the browser when a link goes from HTTPS to HTTP. So a simple solution is to have an HTTPS redirect page: https://yoursite/redirect?url=... . However, this page is also vulnerable to OWASP A10 (Unvalidated Redirects and Forwards), but that might not matter to you. Another solution that doesn't expose you to OWASP A10 is to use a free redirect service.
The Meta referrer proposal from Adam Barth would help with your case; in short you could tell browsers via a <meta> tag that the Referer header should be stripped on all outgoing links.
This isn't a complete answer since it's only implemented in WebKit thus far, but it's something to keep an eye on.
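For reference, the tag under that proposal looks like this (the value "never" is from Barth's draft; the later standardized Referrer Policy spelling is "no-referrer"):
<meta name="referrer" content="never">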

Is there a way to tell the browser to bookmark a different URL than is in the address bar?

I have an application that utilizes rather unfriendly dynamic URLs most of the time. I am providing friendly URLs to some content, but these are used only as an entry point into the application, after which point all of the generated URLs will be the unfriendly variety.
My question is, if I know that the user is on a page for which a friendly URL could be generated and they choose to bookmark it, is there a way to tell the browser to bookmark the friendly one instead of what is in the address bar?
I had hoped that rel="canonical" would help here, but it seems as if it's only used for indexing. Maybe one day browsers will utilise it.
No. This is by design, and a Good Thing.
Imagine the following scenario: Piskvor surfs to http://innocentlookingpage.example.com/ and clicks "bookmark". He doesn't notice that the bookmark he saved points to http://evilsite.example.net/. Next time he opens that bookmark, he might get a bit of a surprise.
Another example without cross-domain issues:
Piskvor the site admin clicks "bookmark" on the homepage of http://security-holes-r-us.example.org/ - unfortunately, the page is vulnerable to script injection, and the injected code changes the bookmark to http://security-holes-r-us.example.org/admin?action=delete&what=everything&sure=absolutely . If he's still logged in the next time he opens the bookmark, he may find his site purged of data. (Granted, it was his fault not to prevent script injection AND to have non-idempotent GET resources, but that is all too common.)
