Why does the Same Origin Policy prevent reading GET responses?

I've done a bit of research on the web and searched through a few questions about SOP and what kinds of abuse it mitigates, but most answers are focused on preventing stolen credentials. This makes sense to me.
What doesn't make sense to me is why browsers following SOP rules block the response outright, rather than blocking cookie and local storage access.
In other words, if cookies and local storage didn't exist, would there still be a need to prevent reading GET responses? Presumably this is already what happens to some degree with <img>, <script>, and <iframe>.

According to the Mozilla Developer Network:
The same-origin policy restricts how a document or script loaded from one origin can interact with a resource from another origin. It is a critical security mechanism for isolating potentially malicious documents.
According to RFC 6454:
Although user agents group URIs into origins, not every resource in an origin carries the same authority (in the security sense of the word "authority", not in the [RFC3986] sense). For example, an image is passive content and, therefore, carries no authority, meaning the image has no access to the objects and resources available to its origin. By contrast, an HTML document carries the full authority of its origin, and scripts within (or imported into) the document can access every resource in its origin.
To answer your question: even if cookies and local storage didn't exist, it would still be dangerous to execute unknown scripts in the context of the document. These scripts could issue XHR requests from the same client IP as the authorized scripts and behave badly.
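To make the RFC 6454 quote above concrete, here is a minimal sketch of the origin comparison a browser performs: two URLs share an origin only when scheme, host, and port all match. The function name is illustrative, not a real browser API.

```javascript
// Sketch of the same-origin check per RFC 6454: an origin is the
// (scheme, host, port) tuple, so all three must match.
function sameOrigin(urlA, urlB) {
  const a = new URL(urlA);
  const b = new URL(urlB);
  // Note: URL.port is "" for a scheme's default port, so this simple
  // comparison treats an explicit default port as a different origin.
  return a.protocol === b.protocol &&
         a.hostname === b.hostname &&
         a.port === b.port;
}

console.log(sameOrigin('https://example.com/a', 'https://example.com/b')); // true
console.log(sameOrigin('https://example.com/', 'http://example.com/'));    // false
console.log(sameOrigin('https://example.com/', 'https://evil.example/'));  // false
```

Paths and query strings play no role in the comparison, which is why two pages on the same site can always script each other.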


Block access to an AWS CloudFront URL from Chrome, Safari and all browsers

What I have done:
I uploaded KYC documents and attachments to an S3 bucket
Integrated S3 with CloudFront
Blocked all public access on the S3 bucket
The only way of accessing the content is the CloudFront URL
My requirement is:
Anyone can access the documents if the CloudFront URL is known
So I want to restrict access to the URL except from my application
Mainly, block access to that URL in Chrome, Safari and all browsers
Is it possible to restrict the URL? How?
Lambda@Edge will let you do almost anything you want with a request as it's processed by CloudFront.
You could look at the User-Agent header, then return a 403 if it doesn't match what you expect. Beware, however, that it's not difficult to change the user-agent string. A better approach is to use an authentication token.
To be honest, I don't understand your question well, and you should make an attempt to describe the issue again. From a bird's-eye view, it sounds like you are describing an IDOR vulnerability. But I will address multiple parts in my response.
AWS WAF will allow you to perform quite a bit of blocking on a wide variety of request content.
Specifically for this problem, if you choose to use AWS WAF, you can do the following to address this issue:
Create a WAF ACL; it should be global, not regional, and its default action should be set to allow
Build regex pattern sets of what you would like to block, or hard-code specific examples
Create a rule that blocks requests whose User-Agent header matches your regex pattern set
But at the end of the day, you might just be fighting a battle which should not be fought in the first place. Think about it like this: if you want to block all User-Agent headers which identify a browser, that is fine. The problem is that the User-Agent header can easily be overwritten and spoofed such that you won't see the typical browser User-Agent header. I don't suggest blocking requests based on this criterion, because at the end of the day I can just use a proxy, have it replace that request content before forwarding the traffic to the server, and bypass the WAF or even Lambda@Edge.
What I would suggest is to develop some sort of authorization/authentication requirement for access to these specific files. Since KYC documents can be sensitive, this would be a good control to put in place to be sure the files are not accessed by those who should not access them.
It seems to me like you are running into a case where an attacker can exploit an IDOR vulnerability. If that is the case, you need to program this logic in the application layer. There will be no way to prevent this at the AWS WAF layer.
If you truly wanted to fix the issue and you were dealing with an IDOR, I would use Lambda@Edge to validate that the cookie included in the request should be able to access the KYC document. You should store in a database which KYC documents can be accessed by which specific user, and check that the Cookie header includes the cookies of the user who uploaded the KYC document. This would effectively implement authorization/authentication not just at the application layer, but also at the Lambda@Edge (or CDN) layer.
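The cookie-based authorization check described above might look like this sketch. The `grants` table and the `session` cookie name are illustrative assumptions; in production the lookup would hit a real database (e.g. DynamoDB) rather than an in-memory map.

```javascript
// Illustrative grant table: session token -> set of document keys the
// holder may fetch. A real system would query a database here.
const grants = new Map([
  ['session-token-abc', new Set(['kyc/user-1/passport.pdf'])],
]);

// Minimal Cookie-header parser: "a=1; b=2" -> { a: '1', b: '2' }.
function parseCookies(cookieHeader) {
  const out = {};
  for (const pair of (cookieHeader || '').split(';')) {
    const idx = pair.indexOf('=');
    if (idx > 0) out[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  return out;
}

// Authorization decision: does the session in the Cookie header have a
// grant for this specific document key?
function isAuthorized(cookieHeader, documentKey) {
  const session = parseCookies(cookieHeader)['session'];
  const allowed = session && grants.get(session);
  return Boolean(allowed && allowed.has(documentKey));
}
```

Inside a Lambda@Edge handler, `isAuthorized` would gate whether the request is forwarded to the S3 origin or answered with a 403.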

Same Origin Policy easily circumvented?

I've read an article which used Cors-Anywhere to make an example url request, and it made me think about how easily the Same Origin Policy can be bypassed.
While the browser prevents you from accessing the error directly, and cancels the request altogether when it doesn't pass a preflight request, a simple node server does not need to abide to such rules, and can be used as a proxy.
All that needs to be done is to prepend 'https://cors-anywhere.herokuapp.com/' to the requested URL in the malicious script and voilà, you don't need to pass CORS.
And as sideshowbarker pointed out, it takes a couple of minutes to deploy your own Cors-Anywhere server.
Doesn't it make SOP as a security measure pretty much pointless?
The purpose of the SOP is to segregate data stored in browsers by their origin. If you got a cookie from domain1.tld (or it stored data for you in a browser store), Javascript on domain2.tld will not be able to gain access. This cannot be circumvented by any server-side component, because that component will still not have access in any way. If there was no SOP, a malicious site could just read any data stored by other websites in your browsers.
Now this is also related to CORS, as you somewhat correctly pointed out. Normally, your browser will not let a page read the response of a JavaScript request made to a different origin than the one the page is running on. The purpose of this is that if it worked, you could gain information from sites where the user is logged in. If you send the request through Cors-Anywhere, though, you will not be able to send the user's session cookie for the other site, because you still don't have access to it; the request goes to your own server acting as the proxy.
Where Cors-Anywhere matters is unauthenticated APIs. Some APIs might check the origin header and only respond to their own client domain. In that case, sure, Cors-Anywhere can add or change CORS headers so that you can query it from your own hosted client. But the purpose of SOP is not to prevent this, and even in this case, it would be a lot easier for the API owner to blacklist or throttle your requests, because they are all proxied by your server.
So in short, SOP and CORS are not access control mechanisms in the sense I think you meant. Their purpose is to prevent, and/or securely allow, cross-origin requests to certain resources; they are not meant to, for example, prevent server-side components from making any request, or to authenticate your client-side JavaScript itself (which is not technically possible).
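What a CORS-Anywhere-style proxy actually does to the response is simple to sketch: it relays the upstream body and layers permissive CORS headers on top, so the browser hands the result to the calling page. The function below shows just the header step (a real proxy also forwards the request and streams the body); the header names are the real CORS response headers, but `relaxCors` itself is an illustrative helper, not part of any library.

```javascript
// Copy the upstream response headers and add permissive CORS headers,
// which is the essence of what a CORS-Anywhere-style proxy does before
// relaying a response back to the browser.
function relaxCors(upstreamHeaders) {
  return {
    ...upstreamHeaders,
    'access-control-allow-origin': '*',
    'access-control-allow-headers': '*',
  };
}
```

Note that the proxy can only do this because the request carries none of the victim's cookies, which is exactly why, as argued above, this does not defeat the SOP's actual purpose.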

Is it safe to fetch an image over plain http on a bank's home-banking website?

I ask here instead of on https://security.stackexchange.com/ because I don't think this question is on a professional level.
I just saw something weird on my bank's website: they are fetching an image from another domain, using http instead of https. On Firefox it raises a "mixed content" security alert; on Chrome it just shows an alert in the security tab.
This is the site: https://www.bancoprovincia.com.ar/Principal/BipPersonal
The unsafe content (an image) happens to be on the page just before the user logs in to their home banking. I was worried that some attacker could intercept the content and replace it with something that could be a security risk.
Any chance this is a security risk for the bank and its clients?
It's not a direct vulnerability, but still bad practice.
Some risks that come to mind:
An attacker with access to users' connections (a man in the middle) could replace the image with a malicious one, exploiting potentially zero-day (as yet unknown) flaws in the image-processing libraries of the browser or operating system. This could lead to remote code execution on the client.
Replacing the image could also be used to facilitate phishing. The malicious image could tell the user to call a phone number because of some kind of a problem, etc.
It is an information leak. An attacker may receive information about users browsing to the bank website; also, if the image is in a header included on all pages, they may receive information about what the user does. This is inherently the case for every external site whose images are linked, even over https, but over http this also applies to any MitM attacker.
It is a potential availability problem. If the image on the external site times out (waits too long to download), the page will not load for some time in some browsers, and an attacker could exploit that. However, I think this is not affected by the image being served over plain http; it would affect an externally linked https image as well.
It's also very bad practice: instead of reinforcing good security habits in users, like always checking the browser's indications of a secure website, it teaches them that it's OK if there are warnings. It is not.

How secure is it to use fragment identifiers to hold private data in URLs?

We know the URL itself is not a secure way to pass or store information. Too many programs will perform unexpected processing on the URL or even ship it over the network, and generally speaking, the URL is not treated with a high regard for its privacy.
In the past we've seen Bitcoin wallets, for example, which have relied on keeping a URL secret, but they found out the hard way there are too many ways in which a URL (sent via Skype, or emailed, or even just typing it into the Google Chrome omnibar) will get stored by a remote server, and possibly displayed publicly.
And so I thought the URL would be forsaken forever as a means of carrying any private data... despite being extremely convenient. Except now I've seen a few sites which are using URL fragments -- the portion of the URL after the '#' -- as a kind of 'secure' storage. I think the expectation is that Google won't parse the fragment and allow it to show up in search results, so the data shouldn't be published.
But that seems like a pretty weak basis for the security of your product. There would be a huge benefit to having a way to securely move data in URL fragments, but can we really rely on that?
So, I would really like to understand... Can anyone explain, what is the security model for fragment identifiers?
Tyler Close and the others who did the security architecture for Waterken did the relevant research on this. They use unguessable strings in URI fragments as web-keys:
This leakage of a permission bearing URL via the Referer header is only a problem in practice if the target host of a hyperlink is different from the source host, and so potentially malicious. RFC 2616 foresaw the danger of such leakage of information and so provided security guidance in section 15.1.3:
"Because the source of a link might be private information or might reveal an otherwise private information source, … Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol."
Unfortunately, clients have implemented this guidance to the letter, meaning the Referer header is sent if both the referring page and the destination page use HTTPS, but are served by different hosts.
This enthusiastic use of the Referer header would present a significant barrier to implementation of the web-key concept were it not for one unrelated, but rather fortunate, requirement placed on use of the Referer header. Section 14.36 of RFC 2616, which governs use of the Referer header, states that: "The URI MUST NOT include a fragment." Testing of deployed web browsers has shown this requirement is commonly implemented.
Putting the unguessable permission key in the fragment segment produces an https URL that looks like: <https://www.example.com/app/#mhbqcmmva5ja3>.
Fetching a representation
Placing the key in the URL fragment component prevents leakage via the Referer header but also complicates the dereference operation, since the fragment is also not sent in the Request-URI of an HTTP request. This complication is overcome using the two cornerstones of Web 2.0: JavaScript and XMLHttpRequest.
So, yes, you can use fragment identifiers to hold secrets, though those secrets could be stolen and exfiltrated if your application is susceptible to XSS, and there is no equivalent of http-only cookies for fragment identifiers.
I believe Waterken mitigates this by removing the secret from the fragment before it runs any application code in the same way many sensitive daemons zero-out their argv.
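The client-side half of the web-key pattern described above can be sketched in a few lines: the unguessable key lives in the fragment, which the browser sends in neither the Request-URI nor the Referer header, so the page's own script must read it out and submit it via XMLHttpRequest/fetch. The function name is illustrative.

```javascript
// Sketch: split a web-key URL into the secret (fragment) and the part
// that actually goes on the wire (path + query, with no fragment).
function extractWebKey(pageUrl) {
  const url = new URL(pageUrl);
  const key = url.hash.slice(1); // drop the leading '#'
  const requestUri = url.pathname + url.search; // what the server sees
  return { key, requestUri };
}

const { key, requestUri } = extractWebKey('https://www.example.com/app/#mhbqcmmva5ja3');
console.log(key);        // 'mhbqcmmva5ja3'
console.log(requestUri); // '/app/'
// The script would then send the key in a request body or header, as
// the Waterken paper describes, rather than in the URL itself.
```

This also makes the XSS caveat above concrete: any script running in the page can call code like this and read the secret, which is why Waterken scrubs the fragment before application code runs.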
The part after the # is not any more secure than any other part of the URL. The only difference is that it MAY be omitted from the web server access log. But the web server is not the threat.
As long as you store the secret, either in a URL or somewhere else where it can become public, it is insecure. That is why we invented passwords: they are supposed to exist only in people's heads.
The problem is not finding a way to store a secret in a URL.
That is impossible, because as you say: it will probably become public. And if all you need is the URL, and it goes public, nobody cares what the original data is, because they already have what they need: the URL. So relying on the URL alone for authentication is unwise.
The problem is to store your secrets in a secure way, and to build secure systems.

Why same origin policy for XMLHttpRequest

Why do browsers apply the same origin policy to XMLHttpRequest? It's really inconvenient for developers, but it appears it does little in actually stopping hackers.
There are workarounds; sites can still include JavaScript from outside sources (the power behind JSONP).
It seems like an outdated "feature" in a web that's largely interlinked.
Because an XMLHttpRequest passes the user's authentication tokens. If the user were logged onto example.com with basic auth or some cookies, then visited attacker.com, the latter site could create an XMLHttpRequest to example.com with full authorisation for that user and read any private page that the user could (then forward it back to the attacker).
Because putting secret tokens in webapp pages is the way to stop simple Cross-Site-Request-Forgery attacks, this means attacker.com could take any on-page actions the user could at example.com without any consent or interaction from them. Global XMLHttpRequest is global cross-site-scripting.
(Even if you had a version of XMLHttpRequest that didn't pass authentication, there are still problems. For example an attacker could make requests out to other non-public machines on your intranet and read any files it can download from them which may not be meant for public consumption. <script> tags already suffer from a limited form of this kind of vulnerability, but the fully-readable responses of XMLHttpRequest would leak all kinds of files instead of a few unfortunately-crafted ones that can parse as JavaScript.)
