Compression behavior in Verizon CDN in Azure

We're using the Standard offering of Verizon CDN in Azure. From the documentation it's clear that Verizon gives priority to other compression schemes over Brotli when the client supports more than one (https://learn.microsoft.com/en-us/azure/cdn/cdn-improve-performance#azure-cdn-from-verizon-profiles):
If the request supports more than one compression type, those compression types take precedence over brotli compression.
The problem is that our origin gives priority to Brotli. So for a request with an Accept-Encoding: gzip, deflate, br header made directly to the origin, the response comes back with a Content-Encoding: br header. However, the same request going through the CDN comes back with Content-Encoding: gzip.
Azure's documentation isn't clear on what happens here. Does the POP node decompress the resource, re-compress it with gzip, and cache that? Does it decompress and cache, then compress on the fly based on the request's header? I posed the question to Azure support and sadly didn't get a definitive answer.

I finally got a conclusive answer from Verizon. The Via header sent from the CDN's POP node to the origin was effectively disabling compression at the origin (this page explains it well: https://community.akamai.com/customers/s/article/Beware-the-Via-header-and-its-bandwidth-impact-on-your-origin?language=en_US). Handling that in our web server (either stripping the header or configuring the server to compress regardless) solved the issue. In other words, if the client supports Brotli and the origin prefers Brotli, Verizon's CDN caches and serves the Brotli-compressed content.
In short, Microsoft's documentation is misleading and incomplete.
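To confirm this kind of behavior yourself, a quick check against the origin is enough. Below is a minimal sketch in Python using the requests package; the origin URL and the Via value are placeholders for illustration, not what Verizon's edge actually sends. It compares the origin's Content-Encoding for the same request with and without a Via header.

```python
# Sketch: compare how the origin negotiates compression with and without a
# Via header, which is what a CDN POP adds on the request it forwards.
# Requires the third-party "requests" package; the URL is a placeholder.
import requests

ORIGIN_URL = "https://origin.example.com/static/app.js"  # hypothetical origin resource
common = {"Accept-Encoding": "gzip, deflate, br"}

# 1) Direct request, as a browser would send it.
direct = requests.get(ORIGIN_URL, headers=common)

# 2) Same request with a Via header, roughly imitating a proxying edge node.
proxied = requests.get(ORIGIN_URL, headers={**common, "Via": "1.1 edge-pop.example"})

print("direct  Content-Encoding:", direct.headers.get("Content-Encoding"))
print("proxied Content-Encoding:", proxied.headers.get("Content-Encoding"))
```

If the second request comes back uncompressed (or falls back to gzip) while the first returns br, the origin is treating proxied requests differently, and the CDN can only cache whatever the origin actually handed it.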

Related

IIS 8.5 Static Compression missing Content-Encoding response header, yet Failed Request Tracing shows compression with gzip

I observed that the Content-Encoding response header was missing, notably Content-Encoding: gzip. I'm using static content compression; the dynamic content compression feature was never installed. I installed it, enabled it, and tested again. This time, Content-Encoding: gzip appeared in the response. The question is why the response header appears for dynamic content compression but not for static content compression. I'm fairly certain that IIS is applying gzip to static content. Here's why:
I have an IIS URL Rewrite outbound rule which modifies the response on an HTML page. The outbound rule yielded Error 500.52, URL Rewrite Module error -- Outbound rewrite rules cannot be applied when the content of the HTTP response is encoded ("gzip"). The rule is not the issue, just evidence that gzip is reportedly being applied. I disabled the rule. That's clue #1.
Clue #2 is I enabled Failed Request Tracing and observed that not only static compression was being applied but the StaticFileModule was storing the compressed file in the following location: C:\INETPUB\TEMP\IIS TEMPORARY COMPRESSED FILES\MY WEBSITE\$^_GZIP_D^\INETPUB\WWWROOT\TEST.HTML.
I read the Microsoft document on IIS HTTP Compression and--I could be wrong--I didn't see any language that suggests gzip can be employed with static compression. Based on the two clues above, gzip is being employed with static compression.
So I go back to the original problem: the Content-Encoding response header is missing for static content compression, yet the evidence suggests that IIS is not only compressing static content but compressing it with gzip. Is this simply a bug? Is this by design?
Static compression will add the Content-Encoding header when it actually runs.
If you enable Failed Request Tracing and trace the static compression module, you will see the reason: static compression is skipped when a static file isn't hit frequently enough (the frequent-hit behavior is controlled by the frequentHitThreshold and frequentHitTimePeriod settings under system.webServer/serverRuntime).
If you replay the request dozens of times, you will then see the header.
Also be aware that there is a minimum file size for compression. You can change that value in IIS Manager -> server node -> Configuration Editor -> system.webServer/httpCompression -> minFileSizeForComp.
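If you want to watch the frequent-hit behavior without digging through traces, replaying the same request in a loop makes it visible. This is only a sketch, assuming Python with the requests package and a placeholder URL for a static file served by IIS.

```python
# Sketch: replay a request to a static file and watch for Content-Encoding
# to appear once IIS considers the file "frequently hit".
# Requires the "requests" package; the URL is a placeholder.
import requests

URL = "http://localhost/test.html"  # hypothetical static file

for attempt in range(1, 11):
    r = requests.get(URL, headers={"Accept-Encoding": "gzip"})
    encoding = r.headers.get("Content-Encoding", "<none>")
    print(f"request {attempt}: status={r.status_code} Content-Encoding={encoding}")
```

Typically the first hit or two come back uncompressed; once the threshold is crossed and the compressed copy lands in the temporary compressed files folder, subsequent responses should show Content-Encoding: gzip.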

Handling of requests with the Accept-Encoding header in CloudFront vs. requests without it

I have a CloudFront distribution to cache my images. My origin server is NOT S3; it's a server I run.
I use these images on my website (taking advantage of CF caching). To explain the problem, let's assume my home page uses an image called banner.png.
I visit my home page, say from Chrome, for the first time. For banner.png it's a cache miss, so it gets fetched from the origin and cached in CF.
After this I visit my page from Firefox, Opera, and Chromium, and GET "banner.png" using Postman. All of these get me the file from the CF cache.
Now I GET "banner.png" using Insomnia (another REST client). This time CF doesn't serve me from cache; it goes back to the origin to get the image and replies with "x-cache: RefreshHit from cloudfront".
The difference between these two sets of clients is that the first set sends an "Accept-Encoding: gzip" header in the request and the second client does not.
In my CF behavior:
"Cache Based on Selected Request Headers" = None
"Compress Objects Automatically" = No
Any pointers?
CloudFront keeps two different cache copies based on Accept-Encoding:
one if the header contains Accept-Encoding: gzip,
one for any other value or for requests without the header.
You can test this using curl: make a first request without Accept-Encoding and a second one with Accept-Encoding: gzip, and you'll see a Miss from CloudFront on the second one. This is expected with CloudFront.
The reason is that CloudFront supports only gzip compression, and it takes this header into consideration to know whether it needs to compress the response or not.
However, your problem seems different. You're seeing RefreshHit from CloudFront, which happens when the CloudFront TTL/max-age expires and CloudFront makes a conditional GET to the origin to check whether the content has been modified.
Ideally, it should be a Miss from CloudFront if no Accept-Encoding header is present.
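A quick way to see the two cache entries in action is to request the same object with and without the header and compare x-cache. Below is a sketch in Python with the requests package; the distribution URL is a placeholder.

```python
# Sketch: show that CloudFront keys its cache on whether the request
# advertises gzip support. Requires "requests"; the URL is a placeholder.
import requests

URL = "https://dxxxxxxxxxxxx.cloudfront.net/banner.png"  # hypothetical distribution

def fetch(label, headers):
    r = requests.get(URL, headers=headers)
    print(f"{label:>8}: x-cache={r.headers.get('x-cache')} "
          f"content-encoding={r.headers.get('content-encoding', '<none>')}")

# Prime and re-read the cache entry for gzip-capable clients.
fetch("gzip #1", {"Accept-Encoding": "gzip"})
fetch("gzip #2", {"Accept-Encoding": "gzip"})

# Passing None removes requests' default Accept-Encoding header entirely,
# so these requests map to the other cache entry.
fetch("none #1", {"Accept-Encoding": None})
fetch("none #2", {"Accept-Encoding": None})
```

Even right after a Hit on the gzip side, the first request without the header is expected to be a Miss, because it maps to the second cache copy.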

Azure Verizon CDN - 100% Cache CONFIG_NOCACHE

I set up an Azure Verizon Premium CDN a few days ago as follows:
Origin: An Azure web app (.NET MVC 5 website)
Settings: Custom Domain, no geo-filtering
Caching Rules: standard-cache (doesn't care about parameters)
Compression: Enabled
Optimized for: Dynamic site acceleration
Protocols: HTTP, HTTPS, custom domain HTTPS
Rules: Force HTTPS via Rules Engine (if request scheme = http, 301 redirect to https://{customdomain}/$1)
So, this CDN has been running for a few days now, but the ADN reports say that nearly 100% (99.36%) of requests have the cache status "CONFIG_NOCACHE" (description: "The object was configured to never be cached in accordance with customer-specific configurations residing on the edge servers, so the response was served via the origin server."). A few (0.64%) of them are "NONE" (description: "The cache was bypassed entirely for this request. For instance, the request was immediately rejected by the token auth module, or the client request method used an uncacheable request method such as 'PUT'."). Also, the "Cache Hit" report says "0 hits, 0 misses" for every day. Nothing is coming through the "HTTP Large" side, only "ADN".
I couldn't find these exact messages while searching around, but I've tried:
Updating cache-control header to max-age, public (ie: cache-control: public,max-age=1209600)
Updating the cache-control header to max-age (cache-control: max-age=1209600)
Updating the expires header to a date way in the future (expires: Tue, 19 Jan 2038 03:14:07 GMT)
Using different browsers so the request cache info is different. In Chrome, the request is "cache-control: no-cache" in my browser. In Firefox, it'll say "Cache-Control: max-age=0". In any case, I'd assume the users on the website wouldn't have these same settings, right?
Refreshing the page a bunch of times, and looking at the real time report to see hits/misses/cache statuses, and it shows the same thing - CONFIG_NOCACHE for almost everything.
Tried running a "worldwide" speed test on https://www.dotcom-tools.com/website-speed-test.aspx, but that had the same result - a bunch of "NOCACHE" hits.
Tried adding ADN rules to set the internal and external max age to 864000 sec (10 days).
Tried adding an ADN rule to ignore "no-cache" requests and just return the cached result.
So, the message for "NOCACHE" says it's a node configuration issue... but I haven't really even configured it! I'm so confused. It could also be an application issue, but I feel like I've tried all the different permutations of "cache-control" that I can. Here's an example of one file that I'd expect to be cached:
Ultimately, I would hope that most of the requests are being cached, so I'd see most of the requests be "TCP Hit". Maybe that's incorrect? Thanks in advance for your help!
So, I eventually figured out this issue. Apparently the Azure Verizon Premium CDN ADN platform has "bypass cache" enabled by default.
To disable this behavior, you need to add additional features to your caching rules.
Example:
IF Always
Features:
Bypass Cache: Disabled
Force Internal Max-Age: Response 200, 864000 seconds
Ignore Origin No-Cache: 200
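To verify the rules actually took effect, one option is to fetch the same object twice and look at the caching-related response headers. This is a sketch, assuming Python with the requests package and a placeholder endpoint; which diagnostic headers the Verizon edge exposes depends on your profile, so this just dumps the likely candidates.

```python
# Sketch: request the same object twice and print caching-related headers.
# Requires "requests"; the URL is a placeholder endpoint.
import requests

URL = "https://mycdnendpoint.azureedge.net/content/site.css"  # hypothetical

for attempt in (1, 2):
    r = requests.get(URL, headers={"Accept-Encoding": "gzip"})
    wanted = ("cache-control", "age", "expires", "x-cache", "server")
    shown = {k: v for k, v in r.headers.items() if k.lower() in wanted}
    print(f"request {attempt}: {r.status_code} {shown}")
```

A growing Age header (or a hit indicator, if your profile exposes one) on the second request suggests the edge is now caching instead of bypassing.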

Why is compression not working in ServiceStack

I'm having trouble getting compression to work with ServiceStack. I return .ToOptimizedResult from my server, and I get a log entry telling me that the header is added:
ServiceStack.WebHost.Endpoints.Extensions.HttpResponseExtensions:
DEBUG: Setting Custom HTTP Header: Content-Encoding: deflate
However the content returned is not compressed. I've checked using both Fiddler and Network inspector in Chrome.
Sorry to all.
It seems that my antivirus (BitDefender) decompresses the data to scan for viruses, even though I had disabled the AV. When testing on other computers, the output is compressed.
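One way to rule out the HTTP client, or anything sitting between it and the server, silently inflating the body is to read the raw bytes with a low-level client and try to inflate them yourself. This is a sketch using only Python's standard library; the host and path are placeholders, and it assumes the deflate output is raw deflate (as .NET's DeflateStream produces).

```python
# Sketch: check on the wire whether the body is actually deflate-compressed.
# http.client does not transparently decode response bodies.
import http.client
import zlib

HOST, PATH = "myservice.example.com", "/api/customers"  # hypothetical endpoint

conn = http.client.HTTPConnection(HOST)
conn.request("GET", PATH, headers={"Accept-Encoding": "deflate"})
resp = conn.getresponse()
body = resp.read()
conn.close()

print("Content-Encoding:", resp.getheader("Content-Encoding"))
print("Raw body length :", len(body))

try:
    # -zlib.MAX_WBITS handles raw deflate; a zlib-wrapped stream would need
    # the default wbits instead.
    inflated = zlib.decompress(body, -zlib.MAX_WBITS)
    print("Body inflates to", len(inflated), "bytes, so it really was compressed")
except zlib.error:
    print("Body is not deflate data, so something decompressed it in transit")
```

If the header says deflate but the body does not inflate, something (an antivirus proxy, a debugging proxy, the client library) has already decompressed it.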

Azure CDN - Enabling HTTP 304 Caching with ETag - Hosted Web Role

We are trying to enable HTTP compression (gzip) and HTTP 304 Caching via ETags on Azure CDN. We already discovered an issue with enabling Azure CDN Compression, but now we can't get compression and ETag caching (304s) working simultaneously. This issue has been posted to Azure forums here.
Here is an example of the compressed, but not HTTP cacheable (304) link:
https://xxxx.vo.msecnd.net/resourceManager.axd?token=HL80vX5hf3lIAAA&group=core.js
Here is an example of the cacheable (304), but not compressible (gzip) link:
https://xxxx.vo.msecnd.net/resourceManager.axd?token=HL80vX5hf3lIAAA&group=core.png
Does anyone know how to get HTTP Caching (304s) and HTTP Compression working together on the Azure CDN?
It is important to know whether you are specifying If-None-Match or If-Match. In my experience, most users rely on the modification date and a GET with If-Modified-Since.
An ETag is stronger if you need a cache validator for a given entity with multiple encodings, etc.
For your requirement, use Last-Modified/If-Modified-Since; you don't need caching that varies by encoding, and this should work.
More info is here: HttpWebResponse LastModified
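To check whether the two behaviors can coexist on a given URL, a conditional GET that also advertises gzip is a quick test. Below is a sketch in Python with the requests package; the URL is a placeholder mirroring the examples above, not the real endpoint.

```python
# Sketch: fetch once to capture the validators, then replay a conditional GET
# with Accept-Encoding: gzip and look for a 304. Requires "requests".
import requests

URL = "https://xxxx.vo.msecnd.net/resourceManager.axd?token=...&group=core.js"  # placeholder

first = requests.get(URL, headers={"Accept-Encoding": "gzip"})
print("First GET:", first.status_code,
      "Content-Encoding:", first.headers.get("Content-Encoding"),
      "Last-Modified:", first.headers.get("Last-Modified"),
      "ETag:", first.headers.get("ETag"))

conditional = {"Accept-Encoding": "gzip"}
if first.headers.get("Last-Modified"):
    conditional["If-Modified-Since"] = first.headers["Last-Modified"]
elif first.headers.get("ETag"):
    conditional["If-None-Match"] = first.headers["ETag"]

second = requests.get(URL, headers=conditional)
print("Conditional GET:", second.status_code)  # 304 means caching and compression coexist
```

If the conditional request returns 304 while the first response was gzip-encoded, that URL is getting both behaviors; if it returns 200 with a full body, the validator is being dropped somewhere along the path.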
