How to enable cross-origin isolation? (the specifics) - browser

I am hosting a website with 1and1 (IONOS), and it serves an HTML page with imported CSS and JS. I am trying to figure out how to enable cross-origin isolation, but all I can find is that certain response headers need to be set: https://web.dev/cross-origin-isolation-guide/.
Specifically, the guide says to set Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp on the top-level document.
What does it mean to set a header on a top-level document, and how does one accomplish this? I have done plenty of searching but have not found details on how to create/enable these response headers.
I need to do this in order to use SharedArrayBuffer in Firefox.
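For reference, "setting a header on the top-level document" means that the HTTP response which delivers the HTML page itself must carry those two headers; they cannot be added from inside the page. A minimal sketch in Node, assuming you can run your own server (on shared hosting like IONOS the equivalent is usually an .htaccess rule or a hosting-panel setting):

// Minimal sketch: serve index.html with the two headers required for
// cross-origin isolation (file name and port are placeholders).
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  res.setHeader('Content-Type', 'text/html');
  fs.createReadStream('index.html').pipe(res);
}).listen(8080);

Once both headers are present on the document response, self.crossOriginIsolated is true and SharedArrayBuffer becomes available in Firefox.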

Related

Azure CDN CORS setup

New at this, so forgive me if the answer is obvious.
I created a CDN for use with a standard JavaScript library (a commercial graphing library).
The library has many files, spread across four directories (modules, fonts, styles, etc.).
I stored all of the above in a storage account, and then created and linked a CDN account to it. Access is anonymous, as the files will be fetched by IoT devices that need the JavaScript libraries for operation (and the devices have insufficient memory to serve the files themselves).
All seemed to work, except that I am getting CORS policy errors when the library tries to access a font in the same CDN, but in a different directory.
Here is the error in the web browser:
192.168.0.202/:1
Access to font at 'https://xxx.azureedge.net/htmlelements/styles/font/smart-icons.woff2' from origin 'http://192.168.0.202' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Is there a way to globally allow cross-origin access between files in the same CDN but in different directories? (I looked at the online help and got nowhere.)
Thanks
I added this line in the head part of my HTML page:
<meta http-equiv="Access-Control-Allow-Origin" content="*"/>
I also added
DefaultHeaders::Instance().addHeader("Access-Control-Allow-Origin", "*");
before my first server.on() routine in the AsyncWebServer responses area on the server.
The errors seem to have gone away.
Not sure if that is what is required, but I'm willing to take suggestions if anything is incomplete or wrong.
Thanks
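One note and a sketch: CORS is enforced on response headers, so the DefaultHeaders line on the server is the part doing the work; a <meta http-equiv> tag cannot set Access-Control-Allow-Origin. The same global-header idea looks like this in Node terms (a hypothetical server, purely for illustration):

// Add a permissive CORS header to every response, as the
// DefaultHeaders::Instance() call above does on the ESP server.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.end('...'); // serve the requested asset here
}).listen(8080);

On the Azure side, the equivalent would be a CORS rule on the storage account, or a CDN rule that adds the Access-Control-Allow-Origin header to responses.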

Node.js - Bundler for HTTP/2

I'm currently using Babel to transform ES6 code to ES5, and browserify to bundle it for use in the browser. Now I've begun using an HTTP/2 server (Nginx).
HTTP/2 is more effective when it can load multiple small files instead of one big bundle.
What is the best way to serve multiple JS files instead of one big bundle?
I know that SystemJS can load multiple files in development without bundling, and for production you can use a depCache to define the dependency trees of the modules you are importing:
https://github.com/systemjs/systemjs/blob/master/docs/production-workflows.md
This approach would require you to ditch browserify and change to SystemJS, since browserify only produces bundles.
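A hedged sketch of what the depCache configuration from those docs looks like (module names are made up, and the exact config API depends on your SystemJS version):

// With depCache, importing a module triggers parallel downloads of its
// declared dependencies instead of discovering them one request at a time.
SystemJS.config({
  depCache: {
    'app/main.js': ['app/dep1.js', 'app/dep2.js'],
    'app/dep1.js': ['app/util.js']
  }
});
SystemJS.import('app/main.js'); // dep1 and dep2 begin loading in parallel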
I see that you didn't get an answer to your question till now, so I'll try to help even though HTTP/2 is new to me too (which explains the length of my answer :-)).
Good information about HTTP/2 can be found on the page https://blog.cloudflare.com/http-2-for-web-developers/. To summarize briefly:
stop concatenating files
stop inlining assets
stop sharding domains
continue minifying CSS/JavaScript files
continue loading from CDNs
continue DNS prefetching via <link rel='dns-prefetch' href='...' /> included in <head>
...
I want to add two additional points about the importance of setting the Cache-Control and Link HTTP headers:
think about setting Cache-Control HTTP headers (especially max-age, expires and etag) on all content of your page. See details below. I strongly recommend reading the Caching Tutorial.
set the Link HTTP header to use the SERVER PUSH feature of HTTP/2.
Setting Link: HTTP headers is important for using the server push feature of HTTP/2 (see here, here). RFC 5988 and Section 19.6.1.2 of RFC 2068 describe a feature that existed in HTTP/1.1 already. Everybody knows Content-Type: application/json, but in the same way one can set the less well-known Link: <...>; rel=prefetch, described here. For example, one can use
Link: </app/script.js>; rel=preload; as=script
Link: </fonts/font.woff>; rel=preload; as=font
Link: </app/style.css>; rel=preload; as=style
Such links, set on an HTML page (like index.html), inform the HTTP server to push the resources together with the response for the HTML page. As a result you save round-trips: the later requests (after parsing the HTML file) become unnecessary and the resources can be displayed immediately. You can consider setting Link headers for all images on your page to improve its perceived load time. See here for additional information, with nice pictures demonstrating the advantage of HTTP/2 server push. If you use PHP then the code could be interesting for you.
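To make that concrete, a sketch in Node rather than PHP (the paths are the illustrative ones from above; an HTTP/2 front end such as Nginx with http2_push_preload on can turn the header into an actual push):

// Attach preload Link headers to the HTML response itself.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Link', [
    '</app/script.js>; rel=preload; as=script',
    '</app/style.css>; rel=preload; as=style'
  ]);
  res.setHeader('Content-Type', 'text/html');
  res.end('<html>...</html>');
}).listen(8080);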
Most web developers do some optimization steps, directly or indirectly. The steps are done either during the build process or by setting HTTP headers on HTTP responses. One has to review the existing process, switch some steps off and include others. I'll try to summarize my results.
you can consider using webpack instead of browserify to exclude some dependencies from merging. I don't know browserify well enough, but I know that webpack supports externals (see here), which allows you to load some modules from a CDN. In the next step you can remove merging altogether, and instead minify and set Cache-Control on all your modules.
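A sketch of the externals option (the module name and global are just the usual jQuery example, not something from your build):

// webpack.config.js: keep jquery out of the bundle and resolve it from
// the global jQuery object that a CDN <script> tag provides.
module.exports = {
  entry: './src/index.js',
  output: { filename: 'app.js' },
  externals: {
    jquery: 'jQuery'
  }
};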
It's strongly recommended to load CSS/JS/fonts which you use, but didn't develop yourself, from a CDN. You should never merge such resources with your own JavaScript files (which is probably what you do with browserify now). Loading Bootstrap CSS from your own server is not a good idea. One should rather follow the advice from here and use a CDN instead of downloading all the files locally.
The main reason for using a CDN is very easy to understand if you examine the HTTP headers of the response from https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js, for example. You will find something like cache-control: public, max-age=30672000 and expires: Mon, 06 Mar 2017 21:25:04 GMT. Chrome will typically show Status Code: 200 (from cache) and you will see no traffic over the wire. If you explicitly reload the page (by pressing F5) then you will see a 222-byte response with Status Code: 304. In other words, the file is typically not loaded at all. jQuery 2.2.1 stays the same forever; the next version will have another URL. The usage of HTTPS makes sure that the user really loads jQuery 2.2.1. If that's not enough, you can use https://www.srihash.org/ to calculate the sha384 value and use the extended form of <link> or <script>:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js"
integrity="sha384-8C+3bW/ArbXinsJduAjm9O7WNnuOcO+Bok/VScRYikawtvz4ZPrpXtGfKIewM9dK"
crossorigin="anonymous"></script>
If the user opens your page with that link then the sha384 hash will be recalculated and verified (by Chrome and Firefox). If the file is not yet in the local cache then it will be loaded really quickly too. One short remark: by loading the same file from https://code.jquery.com/jquery-2.2.1.min.js one uses HTTP/1.1 today, but from https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.1/jquery.min.js the HTTP/2 protocol is used. I recommend testing the protocol when choosing a CDN. You can find here the list of CDNs which support HTTP/2 now. In the same way, loading Bootstrap from https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css uses HTTP/1.1 today, but one would use HTTP/2 by loading the same data from https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.6/css/bootstrap.min.css.
I spent a lot of time on CDNs to make clear that their main advantage is the setting of caching headers on the HTTP response and the usage of immutable URLs. You can do the same for your own modules too.
One should think about the caching time of every piece of content returned from the server. You can use URLs for your modules which contain the version number of your component (like /script/mycomponent1.1.12341) and change the version part of the URL every time you change the module. Then you can set a long enough max-age in Cache-Control and your components will be cached by the client's web browser.
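A sketch of that policy in Node (the URL scheme is the one proposed above; everything else is illustrative):

// Versioned module URLs never change their content, so they can be
// cached aggressively; a new version gets a new URL and bypasses the cache.
const http = require('http');

http.createServer((req, res) => {
  if (req.url.startsWith('/script/')) {
    res.setHeader('Cache-Control', 'public, max-age=31536000'); // ~1 year
    res.setHeader('Content-Type', 'application/javascript');
    res.end('...'); // serve the versioned module file here
    return;
  }
  res.end();
}).listen(8080);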
Finally, I'd recommend verifying that you have installed the latest version of OpenSSL and the latest version of Nginx. I also recommend checking your web site at http://www.webpagetest.org/ and https://www.ssllabs.com/ssltest/ to be sure that you haven't forgotten any simple steps.

Tracking down X-Frame-Options header

We've partnered with a company whose website will display our content in an IFRAME. I understand what the header is, what it does, and why; what I need help with is tracking down where it's coming from!
Windows Server 2003/IIS6
Container page: https://testDomain.com/test.asp
IFRAME Content: https://ourDomain.com/index.asp?lots_of_parameters,_wheeeee
Testing in Firefox 24 with Firebug installed. (IE and Chrome do the same thing.) Also running Fiddler so I can watch network traffic while I'm at it.
For simplicity's sake, I created a page with nothing on it but the IFRAME in question - same physical server, different domain/site - and it failed with:
Load denied by X-Frame-Options: https://www.google.com/ does not permit cross-origin framing.
(That's in the Firebug console.) I'm confused because:
Google is not referenced anywhere in the containing app, or in the IFRAMEd app. All JavaScript libraries are kept locally; there is no analytics code in the app. No Google, nowhere.
The containing page has NOTHING on it, except the IFRAME. No html tags, no head tag, no body tag. IFRAME. That's it.
The X-FRAME-OPTIONS header does not exist in IIS on the server: not at the "Websites" node, not in the individual sites.
So where the h-e-double-sticks is that coming from? What am I missing?
Interesting point: if I remove the "S" from the "https" in the IFRAME URL, it works. Given the nature of the data, however, SSL is required.
You might check global.asax.cs; the app could be adding the header to every response automatically. If you just search the app for "x-frame-options" you might also find something.

What ways can you secure a web page so that it can ONLY be viewed from within an iFrame?

This thread was created back in 2008: Restricting IFRAME access in PHP
I am looking to do almost exactly the same thing, i.e. I want to have sites which are publicly accessible as long as they are being viewed from a specific iFrame, from a specific app. The iFrame app will have user authentication, giving users access to URLs outside the core application. The URLs are all likely to be built using open-source PHP tools, e.g. WordPress.
Both the viewing iFrame and the viewed sites/pages will be owned by us.
Have there been any developments in the last few years on ways to do this?
For various reasons not related to this particular issue, I am considering using the server-side RIA framework Vaadin (Java) for building the app that will contain the iFrame viewer.
The demo of the embed widget is here: http://demo.vaadin.com/sampler#WebEmbed. Looking at the page source, I don't see anywhere that the address of the embedded web page is exposed. So to some extent I wonder: if I hide my URLs from search engines and give them very long, randomly generated URIs, maybe they will be impossible to find anyway?
You should be able to modify a framekiller to do the opposite. A framekiller is a piece of JavaScript that prevents clickjacking by detecting whether the page has been loaded within an iframe.
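A minimal sketch of the inverted check, run inside the framed page (the redirect target is a made-up placeholder):

// Inverted framekiller: instead of breaking out of frames, refuse to
// render unless the page IS inside a frame.
if (window.top === window.self) {
  document.documentElement.style.display = 'none'; // hide the content
  window.location.replace('https://example.com/viewer'); // hypothetical container app
}

Like any client-side check, this is trivially bypassed (e.g. by disabling JavaScript), so it only deters casual direct visits.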
Limiting the iframe to load within a specific page is more difficult. Looking at the Referer is easy, but also easy to bypass, and if you load the iframe from an HTTPS page the Referer will be blank. A better way would be to require the server to obtain a nonce and include it in the iframe URL, such as http://iframe_url?key=difhj8j84528423j423894hfdj897 or whatever. Having the server make the request to your server would be ideal. Doing it with client-side code and JSONP to fetch the nonce is problematic, because an attacker could deliver modified JavaScript to fetch the nonce.
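A sketch of the nonce handshake on the framed site's server (endpoint names, in-memory storage, and the missing authentication between the two servers are all assumptions):

// /nonce is meant to be called server-to-server by the container app
// just before it renders the iframe tag; /framed is the protected page.
const http = require('http');
const crypto = require('crypto');

const issued = new Set(); // one-time nonces, in memory for the sketch

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  if (url.pathname === '/nonce') {
    const nonce = crypto.randomBytes(16).toString('hex');
    issued.add(nonce);
    res.end(nonce);
  } else if (url.pathname === '/framed') {
    if (issued.delete(url.searchParams.get('key'))) { // valid exactly once
      res.end('<html>framed content</html>');
    } else {
      res.statusCode = 403;
      res.end('Forbidden');
    }
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);

In a real deployment the /nonce endpoint itself needs authentication (otherwise anyone can mint a key), and the nonces need an expiry.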

How to detect which content is not secure on a mixed content SSL page?

I've added an SSL certificate to an existing site, and now in IE I get a mixed content warning. Problem is, I don't know what non-secure content IE is warning me about. It's a simple HTML page with a few Flash movies, a few images, a loaded CSS file and a JS file.
How can I find out what the non-secured content is?
Edit:
I found the culprit: it's the JS file AC_RunActiveContent.js, used to display the Flash movie. So does anyone have an idea how to prevent SSL mixed content when using AC_RunActiveContent.js?
This means that something is requesting content using the http protocol specifically, or you have an absolute path to an image or other content that begins with http instead of https.
A few tips: use relative paths everywhere you can. If you must use an absolute path, and it's to a server you own, use https. If you're loading stuff from outside your site, you're probably stuck with the mixed-content warning.
This also goes for your scripts: check the JS and the CSS template and make sure they're not the guilty parties - if they are, change them to use relative paths, or to request items via https instead of http (assuming you're positive that the server they reference supports https; if it doesn't, you're stuck).
There are a few other details; this might be helpful.
OK, so here is the solution to my particular problem. It was the codebase value in my code that needed to be https as well (I didn't think it would trigger the warning, as my Flash movies were displaying correctly, oh well)...
AC_FL_RunContent( 'codebase','https://download.macromedia.com/pub/shoc...
Link to Adobe info on this: Security Information error in Internet Explorer
I use the Firefox console - it reports the http resources it blocks from fetching on a mixed content page.
Search your source for http: only. Another great tool to help you out is Fiddler, with which you can see what's being downloaded when your page is requested.
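If you'd rather enumerate candidates from the browser console, a quick sketch (it only catches URLs present in the DOM, not ones requested by scripts or CSS at runtime):

// Log every element whose src/href still points at plain http:
[...document.querySelectorAll('[src], [href]')]
  .map(el => el.src || el.href)
  .filter(url => typeof url === 'string' && url.startsWith('http:'))
  .forEach(url => console.log(url));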
