Is an HTTP/2 cross-origin push request possible?

Say I have a server that serves an HTML file at the URL https://example.com/, and this refers to a CSS file at the URL https://test.com/mystyles.css. Is it possible to push the mystyles.css file alongside the HTML content as part of an HTTP/2 connection, so that a browser will use this CSS content?
I have tried to create such a request using a self-signed certificate on my localhost (and I have pre-created a security exception for both hosts in my browser) by sending the HTML file when a request arrives at http://localhost/, and pushing the CSS with a differing hostname/port in the :authority or Host header. However, on a full-page refresh, the CSS file is fetched in a separate request to the server, rather than served from the pushed response.
See this gist for a file that I have been using to test this. If I visit http://localhost:8080/ then the text is red, but if I visit http://test:8080/ it is green, implying that the pushed content is used if the origin is the same.
Is there a combination of headers that needs to be used for this to work? Possibly invoking CORS?

Yes, it is theoretically possible, according to this blog post from a Chrome developer advocate from 2017:
As the owners of developers.google.com/web, we could get our server to push a response containing whatever we wanted for android.com, and set it to cache for a year.
...
You can't push assets for any origin, but you can push assets for origins which your connection is "authoritative" for.
If you look at the certificate for developers.google.com, you can see it's authoritative for all sorts of Google origins, including android.com.
[Image: Viewing certificate information in Chrome]
Now, I lied a little, because when we fetch android.com it'll perform a DNS lookup and see that it terminates at a different IP to developers.google.com, so it'll set up a new connection and miss our item in the push cache.
We could work around this using an ORIGIN frame. This lets the connection say "Hey, if you need anything from android.com, just ask me. No need to do any of that DNS stuff", as long as it's authoritative. This is useful for general connection coalescing, but it's pretty new and only supported in Firefox Nightly.
If you're using a CDN or some kind of shared host, take a look at the certificate, see which origins could start pushing content for your site. It's kinda terrifying. Thankfully, no host (that I'm aware of) offers full control over HTTP/2 push, and is unlikely to, thanks to this little note in the spec: ...
In practice, it sounds like it's possible if your certificate is authoritative for the other domain and both domains are hosted at the same IP address, but it also depends on browser support. I was personally trying to do this with Cloudflare and found that they don't support cross-origin push (consistent with the blog post author's observations about CDNs in 2017).
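For anyone who wants to experiment, here is a minimal sketch of the kind of test server involved, using Node's built-in http2 module. The hostnames, certificate paths, and CSS body are illustrative placeholders; whether the browser actually uses the pushed resource depends on the certificate being authoritative for the pushed origin and on connection coalescing, as described above.

```js
// Sketch: an HTTP/2 server that pushes a stylesheet for a *different*
// :authority than the page itself. The certificate must cover both
// hostnames for the push to stand any chance of being used.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-key.pem'),   // placeholder paths
  cert: fs.readFileSync('localhost-cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] !== '/') {
    stream.respond({ ':status': 404 });
    stream.end();
    return;
  }

  // PUSH_PROMISE for a cross-origin resource: note the :authority.
  stream.pushStream(
    { ':path': '/mystyles.css', ':authority': 'test:8080' },
    (err, pushStream) => {
      if (err) return; // e.g. the client has disabled push
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { color: green; }');
    }
  );

  stream.respond({ ':status': 200, 'content-type': 'text/html' });
  stream.end('<link rel="stylesheet" href="https://test:8080/mystyles.css"><p>Hello</p>');
});

server.listen(8080);
```

With hosts entries pointing both localhost and test at 127.0.0.1, this approximates the red/green experiment from the gist: the pushed stylesheet is only picked up when the browser considers the connection authoritative for the pushed origin.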

Related

How to identify HTTPS clients through a proxy connection

We have developed a corporate NodeJS application served over the HTTP/2 protocol, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the x-forwarded-for header, but it doesn't work for us because the proxies can't modify HTTP headers inside SSL connections.
So I thought I could obtain the IP on the client side and send it back to the server, for example, during the login process.
But, if I'm not wrong, browsers don't expose that information to JavaScript, so we need a way to obtain it first.
Researching this, the only option I found is to get it from a server that can tell me the IP from which I'm reaching it.
Of course, over HTTPS I can't, because of the proxy. But I can easily enable an HTTP service just to serve the client IP.
But then I found out that browsers block HTTP connections from HTTPS-served pages because of the "mixed active content" restriction.
Reading further, I found that "mixed passive content" is allowed, and I succeeded in downloading garbage data as an image file through an <img> tag; but when I try to do the same using an <object> element, I hit the "mixed active content" block again, even though the MDN documentation says <object> is considered passive.
Is there any way to read the data fetched by that (broken) <img> tag, or am I missing something to make the <object> element really passive?
Any other idea to achieve our goal will also be welcome.
Finally I found a solution:
As I said, I was able to perform an HTTP request by inserting an <img> tag.
What I was unable to do was read the downloaded data, regardless of whether it was an actual image or not.
...but the fact is that the request was made, and the URL it goes to is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen and:
Remember it in association with the session data.
Insert a (possibly hidden) <img> tag pointing to an HTTP URL containing that key.
As soon as the HTTP server receives the request for that image, it can read the real IP from the x-forwarded-for header (trusting our proxy, of course) and resolve which active session it belongs to.
Of course, you must also take care to clear keys, used or not, after a short time, both to avoid leaking memory and to prevent them from being reused with malicious intent. A sketch of this flow follows.
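A minimal sketch of that flow (all names, ports, and timeouts here are illustrative, not taken from our actual application):

```js
// HTTPS app side: issue a random key with each served login screen and
// remember it against the session. HTTP side: receive the <img> beacon
// request and read the real IP from x-forwarded-for.
const http = require('http');
const crypto = require('crypto');

const pendingKeys = new Map(); // key -> { sessionId, created }

// Called while rendering the login page (over HTTPS).
function issueBeaconKey(sessionId) {
  const key = crypto.randomBytes(16).toString('hex');
  pendingKeys.set(key, { sessionId, created: Date.now() });
  // The page then embeds: <img src="http://example.com:8081/beacon/KEY" hidden>
  return key;
}

// Plain-HTTP server that receives the beacon request.
http.createServer((req, res) => {
  const match = req.url.match(/^\/beacon\/([0-9a-f]{32})$/);
  if (match && pendingKeys.has(match[1])) {
    const { sessionId } = pendingKeys.get(match[1]);
    pendingKeys.delete(match[1]); // single use
    // Trusting our own proxy: prefer x-forwarded-for, fall back to the socket.
    const ip = (req.headers['x-forwarded-for'] || '').split(',')[0].trim()
      || req.socket.remoteAddress;
    console.log(`session ${sessionId} has real IP ${ip}`);
  }
  res.writeHead(204); // no body needed; the <img> is expected to "break"
  res.end();
}).listen(8081);

// Expire unused keys after a minute, as recommended above.
setInterval(() => {
  const cutoff = Date.now() - 60000;
  for (const [key, value] of pendingKeys) {
    if (value.created < cutoff) pendingKeys.delete(key);
  }
}, 10000);
```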
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers could start blocking mixed passive content by default too.
For this reason I in fact opted for a dual-strategy approach: in addition to the technique explained above, I also implemented an HTTP redirector that does almost the same thing. It redirects all requests for the root route ("/") to our HTTPS app, but it does so via a POST request containing a key that has previously been associated with the client's real IP.
This way, if the first approach stops working some day, users will still be able to enter through HTTP first. ...which is in fact what we are going to do. But the first approach, while it continues to work, avoids problems if users bookmark the page from inside the app (which results in a bookmark to its HTTPS URL).

Can I stop the "this website does not supply identity information" message

I have a site which is served completely over HTTPS and works well, but some of the images served come from other sources, e.g. eBay or Amazon.
This causes the browser to show the message: "This website does not supply identity information."
How can I avoid this? The images must sometimes be served from elsewhere.
"This website does not supply identity information." is not only about the encryption of the link to the website itself but also the identification of the operators/owners of the website - just like it actually says. For that warning (it's not really an error) to stop, I believe you have to apply for the Extended Validation Certificate https://en.wikipedia.org/wiki/Extended_Validation_Certificate. EVC rigorously validates the entity behind the website not just the website itself.
Firefox shows the message
"This website does not supply identity information."
while hovering or clicking the favicon (Site Identity Button) when
you requested a page over HTTP
you requested a page over HTTPS, but the page contains mixed passive content
HTTP
HTTP connections generally don't supply any reliable identity information to the browser. That's normal. HTTP was designed to simply transmit data, not to secure the data it transmits.
On the server side, you could only avoid that message if the server started using an SSL certificate and the code of the page were changed to use HTTPS requests exclusively.
To avoid the message on the client side, you could enter about:config in the address bar, confirm you'll be careful, and set browser.chrome.toolbar_tips = false.
HTTPS, mixed passive content
When you request a page over HTTPS from a site which is using an SSL certificate, the site does supply identity information to the browser, and normally the message won't appear.
But if the requested page embeds at least one <img>, <video>, <audio> or <object> element which loads content over HTTP (which won't supply identity information), then you'll get a so-called mixed passive content * situation.
Firefox won't block mixed passive content by default, but only show said message to warn the user.
To avoid this on server side, you'd first need to identify which requests are producing mixed content.
With Firefox on Windows you can use Ctrl+Shift+K (Control-Option-K on Mac) to open the web console, deactivate the css, js and security filters, and press F5 to reload the page, to show all the requests of the page.
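Alternatively (an illustrative snippet of my own, not a Firefox feature), you can paste this into the web console on the HTTPS page to list the passive-content elements that are still loaded over plain HTTP:

```js
// List <img>, <video>, <audio> and <object> elements whose URL still uses
// plain http: - these are the ones producing "mixed passive content".
const passive = [...document.querySelectorAll('img[src], video[src], audio[src], object[data]')]
  .filter(el => (el.src || el.data || '').startsWith('http://'));
console.table(passive.map(el => ({ tag: el.tagName, url: el.src || el.data })));
```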
Then fix your code for each line which shows "mixed content", i.e. change the appropriate parts of your code to use https:// or, depending on your case, protocol-relative URLs.
If the external site an element is requested from doesn't use an SSL certificate, the only way to avoid the message is to copy the external content to your own site, so your code can refer to it locally via HTTPS.
* Firefox also knows mixed active content, which is blocked by default, but that's another story.
Jürgen Thelen's answer is absolutely correct. If the images displayed on the page are served over "http" (quite often the case), the result will be exactly as described, no matter what kind of cert you have, EV or not. This is very common on e-commerce sites due to the way they're constructed. I've encountered this before on my own site and corrected it by simply making sure that no images have an "http" address, and that was on a site that did not have an EV cert. Use the Ctrl+Shift+K process that Jürgen describes and it will point you to the offending objects. If the path to an object is hard-coded and the image resides on your server (not called from somewhere else), simply remove the "http://servername.com" part and change it to a relative path instead. Correct that and the problem will go away. Note that the problem may also be in one of the configuration files, such as one of the config.php files.
The real issue is that Firefox's message is misleading and has nothing to do with whether the certificate is EV or not. It really means there is mixed content on the page, but doesn't say so. A couple of weeks ago I had a site with the same problem, and Firefox displayed the no-identity message. Chrome, however (which I normally don't use), displayed an exclamation mark instead of a lock. I clicked on it and it said the cert was valid (green dot), the connection was secure (another green dot), and, with a red dot, "Mixed Content. The site includes HTTP resources", which was entirely accurate and the source of the problem. Once the offending paths were changed to relative paths, the messages in both Firefox and Chrome disappeared.
For me, it was a problem of mixed content. I forced everything to make HTTPS requests on the page and it fixed the problem.
For people who come here from Google search, you can use Cloudflare's (free) page rules to accomplish this without touching your source code. Use the "Always use HTTPS" setting for your domain.
You can also transform http links into https links using the URL shortener www.tr.im. That is the only URL shortener I found that provides shortened links over https.
You just have to change it manually from http://tr.im/xxxxxx to https://tr.im/xxxxxx.

The site uses SSL, but Google Chrome has detected insecure content on the page

I'm using SSL on my website and it is giving me the lock with a yellow triangle icon ("The site uses SSL, but Google Chrome has detected insecure content on the page.")
On clicking the lock icon it says:
Your connection to domainname is encrypted with 256-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.0. The connection is encrypted using AES_256_CBC, with SHA1 for message authentication and DHE_RSA as the key exchange mechanism. The connection is not compressed.
How do I ensure I get the green lock?
You must have resources (images, stylesheets, scripts, etc.) embedded on the page that are not served over HTTPS. Make sure all your resources are served over HTTPS, and that warning should go away.
I had the same problem, and it occurred because I included a script from Google Analytics over HTTP.
With a provider like Google, one can simply change HTTP to HTTPS and it will work. This will not work with all providers.
If you are trying to load something from a website that you own, you will have to secure that website with HTTPS.
Google Chrome will detect this and automatically refuse to load the insecure content (from the HTTP domain), which may take away some functionality from the website.
Certain AV/malware software will also detect this and give a security warning, which may frighten your visitors away.
If you are using Google Chrome yourself, you might not notice such a warning, because the AV/malware software never sees the HTTP link: it is blocked by Google Chrome.
And if you do not have the kind of AV/malware software that detects this, you may never notice such a warning even though your visitors do.
What you must do is:
Install Google Chrome and go to the website.
Click on "Tools >> JavaScript Console" and see if any warning appears. (this has been commented by Brad Koch on the question as well)
Go throught the different pages on your website and see if any errors appear - if so, then go change the URLs to HTTPS (if this is possible) or find another provider for this javascript.
Make sure all references to resources such as images, JS files, CSS files, ads, etc. are served over HTTPS. If the URI of a resource is relative, e.g. /images/logo.png, then the resource is fetched over the same host, port, and protocol as the page itself, in your case HTTPS. I would use Fiddler to find which files get fetched over http:// when the page is loaded.
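To see how that resolution works, the standard URL API (available in browsers and Node) can show what a relative reference resolves to against a given page; the values here are just examples:

```js
// A relative URI inherits the scheme, host, and port of the page it is on:
console.log(new URL('/images/logo.png', 'https://example.com/shop/page').href);
// -> "https://example.com/images/logo.png"
```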
This is what I get when I go to Tools and then click on JavaScript Console:
Failed to load resource chrome://thumb/https://accounts.google.com/ServiceLogin?service=chromiumsyn...
s%3A%2F%2Fwww.google.com%2Fintl%2Fen-US%2Fchrome%2Fblank.html%3Fsource%3D1
Failed to load resource chrome://thumb/http://www.xe.com/
What do I do after this?

How can I prevent Amazon Cloudfront from hotlinking?

I use Amazon CloudFront to host all my site's images and videos, to serve them faster to my users, who are scattered across the globe. I also apply pretty aggressive forward caching to the elements hosted on CloudFront, setting Cache-Control to public, max-age=7776000.
I've recently discovered, to my annoyance, that third-party sites are hotlinking to my CloudFront server to display images on their own pages, without authorization.
I've configured .htaccess to prevent hotlinking on my own server, but haven't found a way of doing this on CloudFront, which doesn't seem to support the feature natively. And, annoyingly, Amazon's bucket policies, which could be used to prevent hotlinking, only apply to S3; they have no effect on CloudFront distributions [link]. If you want to take advantage of the policies, you have to serve your content from S3 directly.
Scouring my server logs for hotlinkers and manually changing the file names isn't really a realistic option, although I've been doing this to stop the most blatant offenders.
You can forward the Referer header to your origin:
Go to CloudFront settings
Edit Distributions settings for a distribution
Go to the Behaviors tab and edit or create a behavior
Set Forward Headers to Whitelist
Add Referer as a whitelisted header
Save the settings in the bottom right corner
Make sure to handle the Referer header on your origin as well; a sketch follows below.
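On the origin, the check itself can be as simple as comparing the forwarded header against your own domain. A sketch in Node (the domain, port, and file extensions are placeholders; empty referers are allowed here so that direct visits and privacy-conscious browsers still work):

```js
const http = require('http');

http.createServer((req, res) => {
  const referer = req.headers['referer'] || '';
  const allowed = referer === '' || referer.startsWith('https://www.example.com/');
  if (!allowed && /\.(jpe?g|png|gif|mp4)(\?|$)/.test(req.url)) {
    res.writeHead(403); // block hotlinked media requests
    res.end('Hotlinking not allowed');
    return;
  }
  // ...serve the requested file as usual...
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end('file contents here');
}).listen(8080);
```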
We had numerous hotlinking issues. In the end we created CSS sprites for many of our images, either adding white space to the bottom/sides or combining images together.
We displayed them correctly on our own pages using CSS, but any hotlinks would show the images incorrectly unless the hotlinkers copied the CSS/HTML as well.
We've found that they don't bother (or don't know how).
The official approach is to use signed URLs for your media. For each media file you want to distribute, you can generate a specially crafted URL that only works within a given constraint of time and source IPs.
One approach for static pages is to generate temporary URLs for the media included in that page that are valid for 2x the page's caching time. Say your page's caching time is 1 day: every 2 days the links are invalidated, which forces the hotlinkers to update their URLs. It's not foolproof, as they can build tools to fetch the new URLs automatically, but it should deter most people.
If your page is dynamic, you don't need to worry about trashing your page's cache, so you can simply generate URLs that only work for the requester's IP.
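As a sketch of what generating such a URL can look like with the @aws-sdk/cloudfront-signer package (the distribution domain, key pair ID, and key path are placeholders):

```js
const fs = require('fs');
const { getSignedUrl } = require('@aws-sdk/cloudfront-signer');

// Valid for 2 days - i.e. 2x a 1-day page cache, per the approach above.
const signedUrl = getSignedUrl({
  url: 'https://d111111abcdef8.cloudfront.net/images/photo.jpg',
  keyPairId: 'K2JCJMDEHXQW5F', // placeholder CloudFront key pair ID
  privateKey: fs.readFileSync('cloudfront_private_key.pem', 'utf8'),
  dateLessThan: new Date(Date.now() + 2 * 24 * 60 * 60 * 1000).toISOString(),
});
console.log(signedUrl); // hand this URL to the page instead of the raw one
```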
As of Oct. 2015, you can use AWS WAF to restrict access to Cloudfront files. Here's an article from AWS that announces WAF and explains what you can do with it. Here's an article that helped me setup my first ACL to restrict access based on the referrer.
Basically, I created a new ACL with a default action of DENY, then added a rule that checks the end of the Referer header string for my domain name (lowercase). If a request passes that rule, it is ALLOWED.
After assigning my ACL to my Cloudfront distribution, I tried to load one of my data files directly in Chrome and I got this error:
As far as I know, there is currently no solution, but I have a few possibly relevant, possibly irrelevant suggestions...
First: Numerous people have asked this on the Cloudfront support forums. See here and here, for example.
Clearly AWS benefits from hotlinking: the more hits, the more they charge us! I think we (CloudFront users) need to start some sort of heavily orchestrated campaign to get them to offer referer checking as a feature.
Another temporary solution I've thought of is changing the CNAME you use to send traffic to CloudFront/S3. Say you currently send all your images to:
cdn.blahblahblah.com (a CNAME pointing at some CloudFront/S3 bucket)
You could change it to cdn2.blahblahblah.com and delete the DNS entry for cdn.blahblahblah.com.
As a DNS change, that would cut off everyone currently hotlinking before their traffic got anywhere near your server: the DNS entry would simply fail to resolve. You'd have to keep changing the CDN CNAME to keep this effective (say, once a month?), but it would work.
It's actually a bigger problem than it seems, because it means people can scrape entire copies of your website's pages (including the images) much more easily; so it's not just the images you lose, and not just that you're paying to serve those images. Search engines sometimes conclude that your pages are the copies and the copies are the originals... and bang goes your traffic.
I am thinking of abandoning Cloudfront in favor of a strategically positioned, super-fast dedicated server (serving all content to the entire world from one place) to give me much more control over such things.
Anyway, I hope someone else has a better answer!
This question mentioned image and video files.
Referer checking cannot be used to protect multimedia resources from hotlinking, because some mobile browsers do not send a Referer header when requesting an audio or video file played via HTML5.
I am sure of this for Safari and Chrome on iPhone, and for Safari on Android.
Too bad! Thank you, Apple and Google.
How about using signed cookies? Create a signed cookie using a custom policy, which supports the various kinds of restrictions you want to set, and also supports wildcards; a sketch follows.
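A sketch with the @aws-sdk/cloudfront-signer package, using a custom policy with a wildcard resource (the domain, key pair ID, and key path are placeholders):

```js
const fs = require('fs');
const { getSignedCookies } = require('@aws-sdk/cloudfront-signer');

// Custom policy: wildcard resource plus an expiry one day from now.
const policy = JSON.stringify({
  Statement: [{
    Resource: 'https://cdn.example.com/media/*',
    Condition: {
      DateLessThan: { 'AWS:EpochTime': Math.floor(Date.now() / 1000) + 86400 },
    },
  }],
});

const cookies = getSignedCookies({
  keyPairId: 'K2JCJMDEHXQW5F', // placeholder CloudFront key pair ID
  privateKey: fs.readFileSync('cloudfront_private_key.pem', 'utf8'),
  policy,
});
// cookies now holds CloudFront-Policy, CloudFront-Signature and
// CloudFront-Key-Pair-Id values to set as cookies on your own domain.
```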

Why do CDNs always use a separate host instead of a subdomain?

A mirroring CDN can't have the same hostname as your application server, because you need a way for the CDN to explicitly reference the application.
Why, in general, do sites like Facebook run their CDN on a totally separate host, not just a subdomain like cdn.facebook.com? Example: http://profile.ak.fbcdn.net/hprofile-ak-snc4/173706_6103645_790537_q.jpg
Is the reason that they can construct resource URLs with many different hostnames, to avoid the 4-connections-per-host limit in some browsers?
If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies.
From: http://developer.yahoo.com/performance/rules.html#cookie_free
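For illustration, this is the cookie-scoping behavior that rule describes (hostnames are examples): a cookie whose Domain attribute covers the registrable domain rides along on every subdomain request, including the static one.

```js
// Minimal Node illustration: scope the session cookie to www.example.org
// so it is NOT sent with requests to static.example.org.
const http = require('http');

http.createServer((req, res) => {
  // 'Domain=example.org' would be sent to *every* subdomain, including
  // static.example.org; 'Domain=www.example.org' stays off the static host.
  res.setHeader('Set-Cookie',
    'session=abc123; Domain=www.example.org; Path=/; Secure; HttpOnly');
  res.end('ok');
}).listen(8080);
```

A CDN on an entirely separate registrable domain (like fbcdn.net for facebook.com) makes this guaranteed: the application's cookies can never be sent there at all.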
Because user-generated content can contain nasties that may be able to access data hosted on the primary domain.
It also stops things like cookies and authentication being sent in requests for CDN content.
Preventing users from inserting scripts, and at the same time allowing user submitted html is extremely difficult to do on the server side - ergo we must have sandboxing.
Borrowed from a fairly old whatwg post
