Based on this blog post, in Chrome extension Manifest V3, cross-origin requests initiated from content scripts are disallowed, but it also mentions that "Extension pages, such as background pages, popups, or options pages, are unaffected by this change".
But what about cross-origin requests initiated from a static HTML page (listed under web_accessible_resources)? I saw that these are also blocked; is there any way to bypass the check in this case?
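The usual escape hatch here is to relay the request through the extension's background service worker, which keeps its cross-origin privileges under MV3 (given the relevant host permissions). A minimal sketch, using a hypothetical "fetchUrl" message type:

// background.js (MV3 service worker): performs the fetch on behalf of
// pages that no longer have cross-origin privileges themselves.
chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
  if (msg.type === 'fetchUrl') {
    fetch(msg.url)
      .then((r) => r.text())
      .then((text) => sendResponse({ ok: true, text }))
      .catch((err) => sendResponse({ ok: false, error: String(err) }));
    return true; // keep the channel open for the async sendResponse
  }
});

// In the web-accessible page (served from the chrome-extension:// origin,
// so chrome.runtime is available):
chrome.runtime.sendMessage(
  { type: 'fetchUrl', url: 'https://example.com/data' },
  (reply) => console.log(reply)
);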
Related
With the new release of the Chrome browser, version 85.0.4183.121, the referrer URL is now being stripped when our online shoppers are redirected to Microsoft's Azure AD B2C in order to log in. According to this article, https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy, the fix is to set the Referrer-Policy on https://missionb2c.b2clogin.com.
I don't see any way to do this within AAD B2C. Please help.
Our problem occurs only with the new version of Chrome: 85.0.4183.121. It does not happen in Edge or with older versions of Chrome.
The workaround is to disable the "strict-origin-when-cross-origin" policy in Chrome via chrome://flags/#reduced-referrer-granularity.
However, we cannot rely on this as a workaround; too many of our customers are experiencing the issue.
There is no error, so I cannot send you a screenshot. Simply put, when shoppers are redirected to Microsoft's B2C login page, the referrer URL is stripped out by the browser. This causes a 'generic' login page to be displayed instead of one carrying our customer's logo. Additionally, there is no option to "sign up now", because this, too, comes from the referrer URL.
Edit: This is a problem related to Chrome only.
https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default
You need to declare, inside your anchor tag or in your JavaScript, that the referrer policy you want is no-referrer-when-downgrade.
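For the anchor-tag form, that is the referrerpolicy attribute; a minimal example (the href is a placeholder):

<a href="https://example.com/login" referrerpolicy="no-referrer-when-downgrade">Sign in</a>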
If you are using user flows to perform login, you can use a custom layout with your own HTML and CSS files, as seen here: customize user flow.
With that setup, you can add a script to your custom HTML that adds a meta tag setting the referrer policy for you:
// Create a <meta name="referrer"> tag and append it to the document head.
const meta = document.createElement('meta');
meta.name = 'referrer';
meta.content = 'unsafe-url';
document.getElementsByTagName('head')[0].appendChild(meta);
or simply add it to your custom HTML:
<meta name='referrer' content='unsafe-url'>
If you need the referral information in Microsoft's B2C login page that was being sent from your app before Chrome's update, then you'll need to add this tag to your site.
Edit: You may use no-referrer-when-downgrade to improve security; I've used unsafe-url here because I'm doing this only in development.
I would like to serve files from a Chrome extension under a different origin than the Chrome extension itself. Ideally, there would be multiple such origins, and they would be configurable. The idea is that I can then load the files inside an <iframe> and they have their own origin, isolated from everything else.
Use a sandbox (either as an iframe attribute or via the "sandbox" key in the manifest file) without the allow-same-origin directive. Then the page will have a unique origin (and it won't have access to APIs specific to the extension origin).
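A minimal sketch of both forms, assuming a hypothetical sandboxed.html bundled with the extension. In the manifest:

"sandbox": {
  "pages": ["sandboxed.html"]
}

Or as an attribute on the embedding iframe (allow-same-origin is deliberately omitted, so the framed page gets a unique opaque origin):

<iframe src="sandboxed.html" sandbox="allow-scripts"></iframe>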
After reading blogs and some Stack Overflow answers while building a Chrome extension, I had for some reason thought that we cannot make an AJAX call to a REST API hosted on a server under a different domain than the host page. Is this correct? While developing my extension, I mistakenly made a call from a content script on clicking a button in my extension UI (the UI is injected into the DOM using a content script). I did not run into any error; everything went smoothly. The host page in my test case is in fact a page from Stack Overflow, and the REST API is hosted on my localhost. Could it be because the API was on localhost?
From the Chrome XHR documentation:
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
Furthermore, from the Content Script documentation:
Content scripts can also make cross-site XMLHttpRequests to the same sites as their parent extensions [...]
So the only thing you need is to add your API endpoint to host permissions in the manifest:
"permissions" : [
"*://api.example.com/*"
]
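With that permission in place (this is Manifest V2 behavior, where content scripts share their parent extension's cross-origin privileges), the call can be made straight from the content script; a minimal sketch against a hypothetical endpoint path:

// content-script.js: the matching host permission lifts the
// same-origin restriction for this URL.
fetch('https://api.example.com/v1/items')  // hypothetical endpoint
  .then((response) => response.json())
  .then((items) => console.log('API returned:', items))
  .catch((err) => console.error('Request failed:', err));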
We've partnered with a company whose website will display our content in an IFRAME. I understand what the header is, what it does, and why; what I need help with is tracking down where it's coming from!
Windows Server 2003/IIS6
Container page: https://testDomain.com/test.asp
IFRAME Content: https://ourDomain.com/index.asp?lots_of_parameters,_wheeeee
Testing in Firefox 24 with Firebug installed. (IE and Chrome do the same thing.) Also running Fiddler so I can watch network traffic while I'm at it.
For simplicity's sake, I created a page with nothing on it but the IFRAME in question - same physical server, different domain/site - and it failed with
Load denied by X-Frame-Options: https://www.google.com/ does not permit cross-origin framing.
(That's in the Firebug console.) I'm confused because:
Google is not referenced anywhere in the containing app or in the IFRAMEd app. All JavaScript libraries are kept locally; there are no analytics in the app. No Google, nowhere.
The containing page has NOTHING on it, except the IFRAME. No html tags, no head tag, no body tag. IFRAME. That's it.
The X-FRAME-OPTIONS header does not exist in IIS on the server: not at the "Websites" node, not in the individual sites.
So where the h-e-double-sticks is that coming from? What am I missing?
Interesting point: if I remove the "S" from "https" in the IFRAME URL, it works. Given the nature of the data, though, SSL is required.
You might check global.asax.cs; the app could be adding the header to every response automatically. If you search the app's source for "x-frame-options", you might also find something.
In Firefox or Chrome, I'd like to prevent a private web page from making outgoing connections. That is, if the URL in a browser tab starts with http://myprivatewebpage/ or https://myprivatewebpage/, then that tab must be restricted so that it is allowed to load images, CSS, fonts, JavaScript, XmlHttpRequests, Java applets, flash animations and all other resources only from http://myprivatewebpage/ or https://myprivatewebpage/. For example, an <img src="http://www.google.com/images/logos/ps_logo.png"> (or the corresponding <script>new Image(...) snippet) must not be able to load that image, because it's not on myprivatewebpage. I need a 100% foolproof solution: not even a single resource outside myprivatewebpage may be accessible, not even at low probability. There must be no resource-loading restrictions on web pages other than myprivatewebpage; e.g., http://otherwebpage/ must still be able to load images from google.com.
Please note that I assume that the users of myprivatewebpage are willing to cooperate to keep the web page private unless it's too much work for them. For example, they would be happy to install a Chrome or Firefox extension once, and they wouldn't be offended if they see an error message stating that access is denied to myprivatewebpage until they install the extension in a supported browser.
The reason I need this restriction is to keep myprivatewebpage really private, without exposing any information about its use to the webmasters of other web pages. If http://www.google.com/images/logos/ps_logo.png were allowed, then each use of myprivatewebpage would show up in the access.log of the server hosting ps_logo.png, so Google's webmasters would have some information about how myprivatewebpage is used, and I don't want that. (In this question I'm not interested in whether the restriction is reasonable; I'm only interested in the technical solutions and their strengths and weaknesses.)
My ideas for how to implement the restriction:
Don't impose any restrictions; just rely on the same-origin policy. (This doesn't provide the necessary protection: the same-origin policy lets all images through.)
Change the web application on the server so that it generates HTML, JavaScript, Java applets, flash animations etc. which never attempt to load anything outside myprivatewebpage. (This is almost impossible to make foolproof across a complicated web application, especially with user-generated content.)
Over-sanitize the web page using an HTML output filter on the server, i.e. remove all <script>, <embed> and <object> tags, restrict the targets of <img src=, <link rel=, <form action= etc., and also restrict the links in the CSS files. (This can prevent all unwanted resources if I remember every HTML tag properly, e.g. I mustn't forget about <video>. But it is too restrictive: it removes all dynamic web page functionality like JavaScript, Java applets and flash animations; without these, most web applications are useless.)
Sanitize the web page, i.e. add an HTML output filter to the web server which removes all offending URLs from the generated HTML. (This is not foolproof, because tricky JavaScript can generate a disallowed URL. It also doesn't protect against URLs loaded by Java applets and flash animations.)
Install an HTTP proxy which blocks requests based on the URL and the HTTP Referer, and force all browser traffic (including myprivatewebpage, otherwebpage and google.com) through that proxy. (This would slow down traffic to sites other than myprivatewebpage, and it may not protect properly if XmlHttpRequests, Java applets or flash animations can forge the HTTP Referer.)
Find or write a Firefox or Chrome extension which intercepts all outgoing connections, and blocks them based on the URL of the tab and the target URL of the connection. I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
Some links I've found for the Chrome extension:
http://www.chromium.org/developers/design-documents/extensions/notifications-of-web-request-and-navigation
https://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/90645ce11e1b3d86?pli=1
http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
As far as I can see, only the Firefox or Chrome extension is feasible from the list above. Do you have any other suggestions? Do you have any pointers on how to write, or where to find, such an extension?
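For the Chrome route, the experimental webRequest API linked above later stabilized as chrome.webRequest. A rough sketch of the blocking approach, assuming a Manifest V2 background page with the "webRequest", "webRequestBlocking" and host permissions, and assuming details.initiator identifies the requesting page:

// background.js: cancel any request that originates from
// myprivatewebpage but targets a different host.
const PRIVATE_HOST = 'myprivatewebpage';

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (!details.initiator) {
      return {}; // no initiator (e.g. top-level navigation): allow
    }
    const fromPrivate = new URL(details.initiator).hostname === PRIVATE_HOST;
    const toPrivate = new URL(details.url).hostname === PRIVATE_HOST;
    return { cancel: fromPrivate && !toPrivate };
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);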
I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
I am the author of the latter extension, though I have yet to update it to support newer versions of Firefox. My initial guess is that, yes, it will do what you want:
User visits your web page without the plugin. The page contains a ThinkAhead block that would send a simple version header to the server, but this is ignored since the plugin is not installed.
Since the server does not see that header, it redirects the client to a page to install the plugin.
User installs the plugin.
User visits the web page with the plugin. The page sends the version header to the server, so the server allows access.
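Server side, the gate described in steps 2 and 4 could be a few lines of middleware. A sketch, assuming a Node/Express server and a hypothetical X-ThinkAhead-Version header name:

const express = require('express');
const app = express();

// Allow the install page itself through; otherwise require the version
// header that the ThinkAhead block adds (header name is hypothetical).
app.use((req, res, next) => {
  if (req.path === '/install-plugin.html' || req.get('X-ThinkAhead-Version')) {
    return next(); // step 4: header present (or install page), allow access
  }
  res.redirect('/install-plugin.html'); // step 2: no header, go install
});

app.listen(8080);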
The ThinkAhead block matches all pages that are not myprivatewebpage, and does something like setting the HTTP status to 403 Forbidden. Thus:
When the user visits any webpage that is in myprivatewebpage, there is normal behaviour.
When the user visits any webpage outside of myprivatewebpage, access is denied.
If you want to catch bad requests earlier, instead of modifying incoming headers, you could modify outgoing headers, perhaps screwing up "If-Match" or "Accept" so that the request is never honoured.
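A rough WebExtension sketch of that header-mangling variant (again assuming details.initiator is available; the header choice follows the suggestion above):

// Corrupt a validator header on any request leaving myprivatewebpage
// for another host, so a conforming server refuses to honour it.
chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => {
    const from = details.initiator ? new URL(details.initiator).hostname : '';
    const to = new URL(details.url).hostname;
    if (from === 'myprivatewebpage' && to !== 'myprivatewebpage') {
      const headers = details.requestHeaders || [];
      // A non-matching If-Match makes a conforming server answer
      // 412 Precondition Failed instead of serving the resource.
      headers.push({ name: 'If-Match', value: '"no-such-entity"' });
      return { requestHeaders: headers };
    }
    return {};
  },
  { urls: ['<all_urls>'] },
  ['blocking', 'requestHeaders']
);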
This solution is extremely lightweight, but might not be strong enough for your concerns. This depends on what you want to protect: given the above, the client would not be able to see blocked content, but external "blocked" hosts might still notice that a request has been sent, and might be able to gather information from the request URL.