HTTPS iframe within an HTTP page, how can I stop that?

I'm looking at buying an airline ticket, and I'm having to enter my credit card details in an http:// page.
If I look at the source code, this is actually an iframe with an HTTPS source, so this is actually secure, but a non-tech-savvy user has no way of knowing that. Obviously, this is horrible (even for tech-savvy users).
Now, my question is, if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages? Is there a technique that Verified by Visa really should be using here?

I'm looking at buying an airline ticket, and I'm having to enter my credit card details in an http:// page
Ouch! Someone's breaking the PCI-DSS terms of their merchant agreement, huh.
If I look at the source code, this is actually an iframe with an HTTPS source, so this actually secure, but a non-tech-savvy user has no way of knowing that.
Indeed. You'd have to look at all the source code, including every piece of script on the parent page, to ensure that there is nothing interfering with the iframe (e.g. via clickjacking) and that the image you see in the browser page actually is the secure iframe. And you'd have to ensure there were no other tabs open from the same domain with a reference to the window to cross-document-script into it... a non-starter.
if I was the site offering this iframe (Verified by Visa in this case), is there a way that I could force modern browsers to not allow my page to be used as an iframe on http:// pages, but still allow it to be used as an iframe on https:// pages?
I believe you could do it using Content Security Policy Level 2, e.g. with the header:
Content-Security-Policy: frame-ancestors https:
However, support is patchy: at the time of writing, even the latest IE and Safari don't support it, and obviously it didn't exist at the time 3-D Secure implementations were being written. Still, if it just complains some of the time, that would be enough to let an unwitting merchant know they'd messed up their payment integration.
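If the 3-D Secure endpoint were built on something like PHP, sending that header is a one-liner; a minimal sketch (hypothetical file name, and note that browsers without CSP Level 2 support simply ignore the directive, so it fails open there):

// acs.php - the card-entry page that merchants embed in an iframe
// Allow framing only by HTTPS pages (of any origin); HTTP parents are refused.
header('Content-Security-Policy: frame-ancestors https:');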
One other thing they might have been able to do back then would be to check the Referer header for an http:// address. Still not reliable (and maybe tricky to make work for all possible flows, including redirects, pop-ups, and in-between redirections), but it could have helped.
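A rough sketch of that Referer check (hypothetical; the header is optional and easily spoofed, so it can only warn, never guarantee anything):

// Look at where the framing page claims to live
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
if (stripos($referer, 'http://') === 0) {
    // Parent page is plain HTTP: log it so the merchant's broken
    // integration gets noticed, instead of silently serving the form.
    error_log('3-D Secure page framed from an insecure parent: ' . $referer);
}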

Related

Could I use an iframe to implement a web payment procedure?

Hello, I would like to implement payments on my web site.
I have a requirement to do it in an iframe with a hidden address bar.
But in that case the user wouldn't be able to see that we are using the HTTPS protocol for sending data, etc.
Is this good practice, or does it look like a security issue?
I don't think it is a good idea to hide HTTPS information from end users. If you look at any "web security for dummies" kind of guide, they all say that when you enter private/financial information, you should make sure your address bar displays the lock.
Even though your HTML may show that you are using HTTPS, do you really expect users to "view source" your HTML and/or use Fiddler? No, right?
So, do the right thing: show the HTTPS URL.
BTW, from a security perspective, if the first page you serve is NOT over SSL, someone could just modify the HTML and inject a malicious HTTPS link with a valid cert. That is why it is very important to have SSL enabled on your whole website.
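A minimal sketch of forcing that in PHP (hypothetical; in practice this is often done at the web server or load balancer instead):

// Bounce any plain-HTTP request to the HTTPS version of the same URL
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}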
No wonder HTTP 2.0 is going to be all SSL :)
Technically you don't need HTTPS on your own page if you are using iframes for checkout; the third-party site is of course still protected. But since you cannot explain this to your customers/clients, you have to have HTTPS even when you are using iframes. Even though it is secure, to make your customers actually feel secure you should have SSL (HTTPS), or many of your customers will simply leave your website. So yes, you do need it.

OWASP Top 10 - #10 Unvalidated Redirects and Forwards

I read many of the articles on this topic, including the OWASP page and the Google blog article about open redirects...
I also found this question on open redirects here on Stack Overflow, but it's a different one.
I know why I should not redirect... that makes total sense to me.
But what I really don't understand: where exactly is the difference between redirecting and putting the same URL in a normal <a href> link?
Maybe some users look at the status bar, but I think most of them don't really check it when they click a link.
Is this really the only reason?
For example, in that article they wrote:
<a href="http://bank.example.com/redirect?url=http://attacker.example.net">Click here to log in</a>
The user may assume that the link is safe since the URL starts with their trusted bank, bank.example.com. However, the user will then be redirected to the attacker's web site (attacker.example.net) which the attacker may have made to appear very similar to bank.example.com. The user may then unwittingly enter credentials into the attacker's web page and compromise their bank account. A Java servlet should never redirect a user to a URL without verifying that the redirect address is a trusted site.
So, if you have something like a guestbook, where the user can put a link to their homepage, then the only difference is that the link is not redirected, but it still goes to the evil webpage.
Am I seeing this problem right?
From my understanding, it is not that the redirect is the problem. The main problem here is allowing a redirect (where the target is potentially controllable by the user) that contains an absolute url.
The fact that the url is absolute (meaning it begins with http://host/...) means that you are unintentionally allowing cross-domain redirects. This is very similar to classic XSS vulnerabilities whereby javascript can be reflected to make cross-domain calls (and leak your domain's information).
So, as I understand it, the way to fix most of these sorts of problems is to make sure that any redirect (on the server) is done relative to the root. Then there is no way for the user-controlled query string value to go somewhere else.
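A minimal sketch of that idea (hypothetical script and parameter names): accept only root-relative targets, so the user-controlled value can never name another host.

// redirect.php?next=/some/page - same-site redirects only
$target = isset($_GET['next']) ? $_GET['next'] : '/';

// Accept "/path" but not "//host" (protocol-relative) and not
// absolute URLs like http://evil.example.net; backslashes are
// rejected too, since some browsers treat "\" like "/".
if (!preg_match('#^/(?![/\\\\])[^\\\\]*$#', $target)) {
    $target = '/';
}

header('Location: ' . $target);
exit;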
Does that answer your question or just create more?
The main problem is that it's possible for an attacker to make the URL appear trustworthy, as it actually is a URL to a web site the victim trusts, i.e. bank.example.com.
The redirect target does not need to be as obvious as in the example. Actually, the attacker will probably use further techniques to trick both the user and possibly even the web application, with special encodings, parameter pollution, and other tricks to spoof a legitimate URL.
So even if a victim is security-conscious enough to check a URL before clicking a link (or otherwise requesting its resource), all they can verify is that the URL points to the trustworthy web site bank.example.com. And that alone suffices too often.

Preventing other websites from seeing the 'correct' referer

On my website users can post stuff anonymously.
When they have posted something they will be redirected to their post, let's say:
http://example.com/post/2/title-of-the-anonymous-post
The user who submitted the post and the admins are the only ones with access to that post (until it is made public). Once it is made public the post would still be anonymous (i.e. people cannot see who submitted the post).
However, on that page there are also some external links. If the user decides to click an external link the target website has the ability to log the http referer (which would contain the link to the hidden page). This means it would be possible to find out who posted it once it is made public.
Is there a way to change the HTTP referer (/ referrer) when a user clicks on a link to another website?
For example, by first redirecting the user to another URL and letting that page redirect to the external website:
user clicks on: http://example.com/referer-hider?url={urlencoded(url)}
and let the referer-hider redirect the user to the external page so that the referer will contain: http://example.com/referer-hider?url={urlencoded(url)}
Will this work? Or is there another solution for this (one which doesn't require client-side modifications)?
Since the referrer is provided by the browser to a web server, I only see a few ways to ensure that external sites don't get a view of this "hidden" URL.
First way would be (as you said) to remove the external links from your hidden page by running them through a redirector which uses header("Location: ..."). Yes, that will work (there's a sketch after the code example below). You might just want to use this in general, so that you can track the exits from your site.
Second way would be to stop hiding this URL. It won't stay hidden forever, after all. A Google/Alexa/whatever toolbar hits it, and bam, it's indexed. So instead, build this hidden functionality into something session based. Make a script that changes its output depending on session variables, and only allow the hidden content to show up if people have logged in or previewed their post or whatever.
The third (and probably best) way would be to implement proper access control, so that anonymous users CANNOT visit the page with the restricted content. If you want an anonymous original poster to be able to visit THEIR OWN post, you can send them a cookie, then validate the cookie upon the visit to the unapproved post.
For example, upon submission for approval:
setcookie('postkey', mysql_insert_id());
Then:
$pieces = explode('/', $_SERVER['PHP_SELF']); // split the request path on "/"
$postid = $pieces[2]; // or whatever segment holds the post id
if (!isset($_COOKIE['postkey'])) {
    // No cookie at all: this visitor never submitted the post
    header("Location: http://example.org/");
    exit;
} else if ($_COOKIE['postkey'] != $postid) {
    // Cookie is for a different post: deny access to this one
    header("Location: http://example.org/");
    exit;
}
etc. You probably want better protection than this, but it should give you some ideas.
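Going back to the first way, a minimal sketch of such a redirector (hypothetical file name; note that, as written, it is itself an open redirect, which the next answer touches on):

// referer-hider.php?url={urlencoded target}
$url = isset($_GET['url']) ? $_GET['url'] : '';

// Forward only to http/https targets, never javascript: or data: URLs.
if (preg_match('#^https?://#i', $url)) {
    header('Location: ' . $url);
} else {
    header('Location: /');
}
exit;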
The HTTP referer is not transmitted by the browser when a link goes from HTTPS to HTTP. So a simple solution is to have an HTTPS redirect page: https://yoursite/redirect?url=... However, this page is also vulnerable to OWASP A10 (Unvalidated Redirects and Forwards), but that might not matter to you. Another solution that doesn't expose you to OWASP A10 is to use a free redirect service.
The Meta referrer proposal from Adam Barth would help with your case; in short you could tell browsers via a <meta> tag that the Referer header should be stripped on all outgoing links.
This isn't a complete answer since it's only implemented in WebKit thus far, but it's something to keep an eye on.
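Under Barth's proposal the tag looked like this (the "never" token was later renamed "no-referrer" in the standardized Referrer Policy):

<meta name="referrer" content="never">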

Firefox or Chrome plugin to block and filter all outgoing connections

In Firefox or Chrome I'd like to prevent a private web page from making outgoing connections, i.e. if the URL starts with http://myprivatewebpage/ or https://myprivatewebpage/ in a browser tab, then that browser tab must be restricted so that it is allowed to load images, CSS, fonts, JavaScript, XmlHttpRequest, Java applets, flash animations and all other resources only from http://myprivatewebpage/ or https://myprivatewebpage/. For example, an <img src="http://www.google.com/images/logos/ps_logo.png"> (or the corresponding new Image(...) in a script) must not be able to load that image, because it's not on myprivatewebpage. I need a 100% foolproof solution: not even a single resource outside myprivatewebpage can be accessible, not even at low probability. There must be no resource-loading restrictions on web pages other than myprivatewebpage, e.g. http://otherwebpage/ must be able to load images from google.com.
Please note that I assume that the users of myprivatewebpage are willing to cooperate to keep the web page private unless it's too much work for them. For example, they would be happy to install a Chrome or Firefox extension once, and they wouldn't be offended if they see an error message stating that access is denied to myprivatewebpage until they install the extension in a supported browser.
The reason why I need this restriction is to keep myprivatewebpage really private, without exposing any information about its use to the webmasters of other web pages. If http://www.google.com/images/logos/ps_logo.png were allowed, then the use of myprivatewebpage would be logged in the access.log for Google's ps_logo.png, so Google's webmasters would have some information about how myprivatewebpage is used, and I don't want that. (In this question I'm not interested in whether the restriction is reasonable; I'm only interested in the technical solutions and their strengths and weaknesses.)
My ideas how to implement the restriction:
Don't impose any restrictions, just rely on the same origin policy. (This doesn't provide the necessary protection, the same origin policy lets all images pass through.)
Change the web application on the server so it generates HTML, JavaScript, Java applets, flash animations etc. which never attempt to load anything outside myprivatewebpage. (This is almost impossible to make foolproof everywhere on a complicated web application, especially with user-generated content.)
Over-sanitize the web page using an HTML output filter on the server, i.e. remove all <script>, <embed> and <object> tags, restrict the target of <img src=, <link rel=, <form action= etc. and also restrict the links in the CSS files. (This can prevent all unwanted resources if I can remember all HTML tags properly, e.g. I mustn't forget about <video>. But this is too restrictive: it removes all dynamic web page functionality like JavaScript, Java applets and flash animations; without these most web applications are useless.)
Sanitize the web page, i.e. add an HTML output filter into the webserver which removes all offending URLs from the generated HTML. (This is not foolproof, because there can be a tricky JavaScript which generates a disallowed URL. It also doesn't protect against URLs loaded by Java applets and flash animations.)
Install an HTTP proxy which blocks requests based on the URL and the HTTP Referer, and force all browser traffic (including myprivatewebpage, otherwebpage, google.com) through that HTTP proxy. (This would slow down traffic to sites other than myprivatewebpage, and maybe it doesn't protect properly if XmlHttpRequest()s, Java applets or flash animations can forge the HTTP Referer.)
Find or write a Firefox or Chrome extension which intercepts all outgoing connections, and blocks them based on the URL of the tab and the target URL of the connection. I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
Some links I've found for the Chrome extension:
http://www.chromium.org/developers/design-documents/extensions/notifications-of-web-request-and-navigation
https://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/90645ce11e1b3d86?pli=1
http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
As far as I can see, only the Firefox or Chrome extension is feasible from the list above. Do you have any other suggestions? Do you have some pointers how to write or where to find such an extension?
I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
I am the author of the latter extension, though I have yet to update it to support newer versions of Firefox. My initial guess is that, yes, it will do what you want:
User visits your web page without plugin. Web page contains ThinkAhead block that would send a simple version header to the server, but this is ignored as plugin is not installed.
Since the server does not see that header, it redirects the client to a page to install the plugin.
User installs plugin.
User visits web page with plugin. Page sends version header to server, so server allows access.
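A minimal sketch of the server-side half of that handshake (the header name here is hypothetical; it is whatever you configure the ThinkAhead block to send):

// Gate the private site on the custom header injected by the extension
if (!isset($_SERVER['HTTP_X_THINKAHEAD_VERSION'])) {
    // No plugin: send the visitor to the install page instead
    header('Location: /install-plugin.html');
    exit;
}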
The ThinkAhead block matches all pages that are not myprivatewebpage, and does something like set the HTTP status to 403 Forbidden. Thus:
When the user visits any webpage that is in myprivatewebpage, there is normal behaviour.
When the user visits any webpage outside of myprivatewebpage, access is denied.
If you want to catch bad requests earlier, instead of modifying incoming headers, you could modify outgoing headers, perhaps screwing up "If-Match" or "Accept" so that the request is never honoured.
This solution is extremely lightweight, but might not be strong enough for your concerns. This depends on what you want to protect: given the above, the client would not be able to see blocked content, but external "blocked" hosts might still notice that a request has been sent, and might be able to gather information from the request URL.

Setting a cookie in an iframe on a different domain

We have our site integrated as an iframe into another site that runs on a different domain. It seems that we cannot set cookies. Has anybody encountered this issue before? Any ideas?
Since your content is being loaded into an iframe from a remote domain, it is classed as a third-party cookie.
The vast majority of third-party cookies are provided by advertisers (these are usually marked as tracking cookies by anti-malware software) and many people consider them to be an invasion of privacy. Consequently, most browsers offer a facility to block third-party cookies, which is probably the cause of the issue you are encountering.
As of the Chromium update of February 4, 2020 (Chrome 80), cookies default to SameSite=Lax, according to this link.
To fix this, you just need to mark your cookies as SameSite=None and Secure.
To understand what SameSite cookies are, please see this document.
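A minimal sketch in PHP (assuming PHP 7.3+, where setcookie() accepts a 'samesite' option; the cookie name and value are placeholders):

setcookie('widget_session', 'opaque-value', [
    'expires'  => time() + 3600,  // one hour
    'path'     => '/',
    'secure'   => true,           // mandatory when SameSite=None
    'httponly' => true,
    'samesite' => 'None',         // allow the cookie inside a third-party iframe
]);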
After reading through Facebook's docs on iframe canvas pages, I figured out how to set cookies in iframes with different domains. I created a proof of concept sinatra application here: https://github.com/agibralter/iframe-widget-test
There is more discussion on how Facebook does it here: How does Facebook set cross-domain cookies for iFrames on canvas pages?
IE requires you to set a P3P policy before it will allow third-party frames to set cookies, under the default privacy settings.
Supposedly P3P allows the user to limit what information goes to what parties who promise to handle it in certain ways. In practice it's pretty much worthless as users can't really set any meaningful limitations on how they want information handled; in the end it's just a fairly uniform setting acting as a hoop that all third parties have to jump through, saying “I'll be nice with your personal information” even if they have no intention of doing so.
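Concretely, that hoop is just an extra response header; a minimal sketch (the compact-policy tokens here are a commonly copied placeholder set, not a policy anyone has actually reviewed):

// Send a P3P compact policy so IE will accept this third-party cookie
header('P3P: CP="CAO PSA OUR"');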
Despite adding SameSite=None and Secure to the cookie, you might not see the cookie being sent in the request. This might be because of browser settings: e.g. on Brave, you have to explicitly disable its third-party cookie blocking.
As more and more people switch to Brave or block third-party cookies using browser extensions, you should not rely on this mechanism.
