Server push (preload) with CloudFront - amazon-cloudfront

We are serving our Angular 6-based website through the CloudFront CDN. We have enabled HTTP/2 in the CloudFront settings. How do we use the HTTP/2 server push (preload) feature to boost the performance of the website, as in Cloudflare?
In Cloudflare one can set a Link header on the response, e.g. Link: </path/to/script.js>; rel=preload; as=script

As of this writing, CloudFront supports HTTP/2 but does not implement server push by sniffing Link: ...; rel=preload response headers the way Cloudflare does.
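Even without server push, you can still emit the Link preload header from your origin application so that browsers start fetching critical assets early; CloudFront will pass the header through to the viewer. A minimal sketch of building that header value, assuming an Express-style app (the asset list and paths are hypothetical examples):

```javascript
// Sketch: build a Link preload header value for a list of assets.
// Each entry becomes "<url>; rel=preload; as=type" per the standard syntax.
function buildPreloadHeader(assets) {
  return assets
    .map(({ url, as }) => `<${url}>; rel=preload; as=${as}`)
    .join(', ');
}

const header = buildPreloadHeader([
  { url: '/main.js', as: 'script' },
  { url: '/styles.css', as: 'style' },
]);
console.log(header);
// </main.js>; rel=preload; as=script, </styles.css>; rel=preload; as=style

// In an Express handler you would attach it with: res.set('Link', header);
```

The browser then preloads the listed resources itself as soon as it sees the response headers, which recovers much of the benefit of server push without CDN support.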

Related

Hosting Issue on Win 10 with NodeJS React App, IIS, Namecheap DNS

So I have a few issues with hosting a Node/React app on my computer. I've completed the following steps, but I still seem to have a few problems. Here are the steps I've taken.
Namecheap DNS
- added A records for @ and www under Advanced DNS, with my IP as the value and TTL set to automatic
Router
- assigned a static IP to the computer hosting the site
- added port forwarding from external port 80 to internal port 3000 for the host computer
IIS
- enabled IIS in Windows
- added Application Request Routing
- added URL Rewrite
- added a binding (type: http, port: 3000, IP address: *) pointed at my project's build folder
- updated the process model identity to my login and password
URL Rewrite
- added a reverse proxy rewrite rule to http://localhost:3000/{R:1}
HTTP Response Headers
- X-Frame-Options: SAMEORIGIN
- Content-Security-Policy: default-src https: data: 'unsafe-inline' 'unsafe-eval'
- X-Xss-Protection: 1; mode=block
- X-Content-Type-Options: nosniff
Node/React Application
- using the GitHub Basir/Amazona example with some small modifications for my needs
- installed all dependencies
- ran npm run build
My Issues
When I run npm start, I can see the website by typing my IP address into the browser. However, I can't see the page when I type in mywebsite.com; I get ERR_CONNECTION_REFUSED (desktop, Chrome).
When I start the website in IIS, the same issue persists, and the IP address doesn't show the website either. Additionally, a 404 error occurs when pulling information from the backend server hosted on localhost:5000.
I do use Bitdefender and would prefer not to disable it, as I've seen dozens of exploit attempts since setting this up.
If anyone has suggestions, or would like to walk me through some steps, it would be appreciated. I would like to get this up and running, the sooner the better.
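One thing worth checking in a setup like this: IIS with ARR only proxies whatever the Node server answers, so the Node side still has to serve the React build output itself, including the single-page-app fallback to index.html for client-side routes. A sketch of that routing rule (the "build" folder name matches create-react-app's default output; the example paths are hypothetical):

```javascript
// Sketch: map an incoming URL path to a file in the React build folder,
// falling back to index.html for client-side routes (SPA behavior).
function resolveSpaPath(urlPath) {
  const clean = urlPath.split('?')[0];
  // Paths ending in a file extension (e.g. /static/js/main.js) map to real
  // files; everything else is a client-side route served by index.html.
  const hasExtension = /\.[a-z0-9]+$/i.test(clean);
  return hasExtension ? 'build' + clean : 'build/index.html';
}

console.log(resolveSpaPath('/products/42'));       // build/index.html
console.log(resolveSpaPath('/static/js/main.js')); // build/static/js/main.js
```

If the Node server (or an IIS static-file binding) doesn't implement this fallback, direct navigation to client-side routes will 404 even when the home page works.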

Compression behavior in Verizon CDN in Azure

We're using Standard offering of Verizon CDN in Azure. From the documentation it's clear that Verizon gives priority to other compression schemes over Brotli if the client supports multiple ones (https://learn.microsoft.com/en-us/azure/cdn/cdn-improve-performance#azure-cdn-from-verizon-profiles):
If the request supports more than one compression type, those compression types take precedence over brotli compression.
The problem is that our origin gives priority to Brotli. So for a request with an Accept-Encoding: gzip, deflate, br header made directly to the origin, the response comes back with a Content-Encoding: br header. However, the same request going through the CDN comes back with Content-Encoding: gzip.
Azure's documentation isn't clear on what happens here. Does the POP node decompress the resource and re-compress it with gzip before caching? Or does it decompress and cache, then compress on the fly based on the request's header? I posed the question to Azure support and sadly didn't get a definitive answer.
I finally got a conclusive answer from Verizon. The Via header added on requests from the CDN's POP node to the origin was effectively disabling compression (this page explains it well: https://community.akamai.com/customers/s/article/Beware-the-Via-header-and-its-bandwidth-impact-on-your-origin?language=en_US). Handling that on our web server (either stripping the header or configuring the server to compress regardless) solved the issue. In other words, if the client supports Brotli and the origin prefers Brotli, Verizon's CDN caches and serves the content compressed with Brotli.
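The fix above boils down to the origin negotiating Content-Encoding from Accept-Encoding alone, regardless of whether a Via header marks the request as coming through a proxy. A sketch of that negotiation, with the Brotli-first preference order this origin uses (the function name is hypothetical):

```javascript
// Sketch: origin-side content-encoding negotiation that prefers Brotli.
// Crucially, it does NOT consult the Via header, so requests arriving via
// the CDN's POP nodes get compressed exactly like direct client requests.
function pickEncoding(acceptEncoding) {
  const offered = (acceptEncoding || '')
    .split(',')
    .map((e) => e.trim().split(';')[0]); // drop any ";q=..." weights
  for (const preferred of ['br', 'gzip', 'deflate']) {
    if (offered.includes(preferred)) return preferred;
  }
  return 'identity';
}

console.log(pickEncoding('gzip, deflate, br')); // br
console.log(pickEncoding('gzip'));              // gzip
```

A web server that instead disables compression when it sees Via (a common default, per the Akamai article) would hand the CDN an uncompressed body, which the CDN then gzips itself, producing exactly the Content-Encoding: gzip behavior observed.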
In other words, Microsoft's documentation is misleading and incomplete.

Cloudfront not serving content over http2

I have a website hosted on S3 and served through CloudFront. The requests coming from my domain are all served over HTTP/1.1, not HTTP/2, even though HTTP/2 is enabled (the default). Are there additional tasks I need to do to see my content served over HTTP/2?
I can see in Chrome's network tab that some assets are loaded via HTTP/2 (resources that do not come from my CloudFront distribution), but everything loaded from my CloudFront distribution is HTTP/1.1.
Update 2
It seems other users see my site load over HTTP/2 correctly, and when I tried Firefox it loaded over HTTP/2 as well. So this is a Chrome issue, not a CloudFront issue.
This was not an issue with CloudFront; I think it was a combination of antivirus, a network firewall/VPN, and Chrome caching. I turned off the VPN and antivirus, cleared Chrome's cache, restarted my computer, and bam! The page loads over HTTP/2.

Amazon Cloudfront can't connect after moving to new server

I just moved my Magento site from one server to another host/server. Everything works except for Cloudfront. The new server DOES have SSL, just like the last server did.
But now when I try to view anything from Cloudfront I get the error:
"CloudFront wasn't able to connect to the origin."
Is the DNS cached at Amazon, and is it taking forever to update? Is there something you need to do when moving a site to a new server to keep CloudFront working?
Making CloudFront work with SSL can be tricky, specifically when the hostname of the origin is different from the hostname of the CNAME.
For example, if your hostname is www.example.com, and the origin is www-example.us-west-2.elasticbeanstalk.com, the request from the cloudfront server will contain a Host header of the origin :
> GET /index.html HTTP/1.1
> Host: www-example.us-west-2.elasticbeanstalk.com
> User-Agent: CloudFront/2.3
> Accept: */*
The origin host needs to be able to handle authenticated SSL requests for www-example.us-west-2.elasticbeanstalk.com, but usually you set it up so that it can handle SSL requests for the original hostname, www.example.com. In that case you have two options:
Whitelist the Host header. This causes CloudFront to send the same Host header (Host: www.example.com) to the origin, which should be able to handle it correctly.
Alternatively, set your origin to the same hostname under a different subdomain: for example, set the origin to origin.example.com and create a CNAME from origin.example.com to www-example.us-west-2.elasticbeanstalk.com.
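The two options both come down to which Host header value reaches the origin. A sketch of that behavior (hostnames are the example values from the answer; the function models CloudFront's default of rewriting Host to the origin domain versus forwarding the viewer's Host when whitelisted):

```javascript
// Sketch: which Host header the origin sees, depending on whether the
// distribution forwards (whitelists) the viewer's Host header.
function originHostHeader(viewerHost, originDomain, forwardHost) {
  // Default: CloudFront rewrites Host to the origin's domain name.
  // Whitelisted: the viewer's original Host value is passed through.
  return forwardHost ? viewerHost : originDomain;
}

console.log(originHostHeader(
  'www.example.com',
  'www-example.us-west-2.elasticbeanstalk.com',
  false
)); // www-example.us-west-2.elasticbeanstalk.com

console.log(originHostHeader(
  'www.example.com',
  'www-example.us-west-2.elasticbeanstalk.com',
  true
)); // www.example.com
```

Whichever value the origin receives is the hostname its SSL certificate and virtual-host configuration must cover, which is why mismatches here surface as "CloudFront wasn't able to connect to the origin."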

Heroku piggyback SSL with node and express - any config needed?

I'm using Express / Node.js for a simple server.
There's no need for HTTPS everywhere, but I do want to secure some upload form posts and the responses that come back to phones.
So far I've setup a standard nodejs server on http with express.js.
I have an app.post('/upload'...)
I'm deployed on heroku, so I changed the app I'm testing to post the form data to https://myapp.herokuapp.com/upload
Is it now posting over https? And will the response be over https?
Or do I need to reconfigure the express server in some way to specifically handle that?
These uploads/responses are the only secure part, and they're not visible to users (they're made by the phone app), so there's no need to do full SSL endpoint configuration for the whole domain/subdomain if the piggyback approach above works.
On Heroku, SSL is terminated at the routing layer, and an X-Forwarded-Proto: https header is added to the request so your app can know it came in over SSL. In other words, from your app's perspective the request is plain HTTP and nothing special is needed, but you can check for the X-Forwarded-Proto: https header if you want to make sure the request was made securely. If the request was made over SSL, the response will also be over SSL, since they are both part of the same connection.
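That check can be sketched as a small helper; the header handling follows the behavior described above, while the redirect shown in the comment is a common pattern, not something Heroku requires:

```javascript
// Sketch: decide whether the original request came in over HTTPS by
// inspecting the X-Forwarded-Proto header that Heroku's router sets.
function isSecureRequest(headers) {
  // Take the first value in case intermediate proxies appended more protocols.
  const proto = (headers['x-forwarded-proto'] || 'http').split(',')[0].trim();
  return proto === 'https';
}

console.log(isSecureRequest({ 'x-forwarded-proto': 'https' })); // true
console.log(isSecureRequest({}));                               // false

// In Express, this could back a middleware that forces HTTPS for /upload:
// app.use('/upload', (req, res, next) =>
//   isSecureRequest(req.headers)
//     ? next()
//     : res.redirect('https://' + req.headers.host + req.url));
```

Note that express's built-in req.secure only reports this correctly when trust proxy is enabled, which is why inspecting the header directly is a reliable fallback on Heroku.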