CloudFront Cache-Control headers are different from origin headers

I'm seeing a situation where requests through CloudFront return a different Cache-Control header than my origin does. I have Object Caching set to "Use Origin Cache Headers" and (I don't think this is relevant) Compress Objects Automatically set to "No".
I've found that if I change Object Caching to "Customize" and change the values around, that does in fact change the headers returned from the CDN. That's fine and all... but I'm curious why, with my existing settings, this header isn't being passed through.
Thanks!
Compressed Request from Origin - shows Cache-Control of 'max-age=31536000'
$ curl -I https://staging.testing.com/assets/application-0d5691ba401c3f5a305fda52745a831376545a605a6c16e50fc838fdaa567e57.css --compressed
HTTP/1.1 200 OK
Server: Cowboy
Date: Wed, 16 Aug 2017 21:34:22 GMT
Connection: keep-alive
Last-Modified: Wed, 16 Aug 2017 05:05:25 GMT
Content-Type: text/css
Cache-Control: public, max-age=31536000
Content-Encoding: gzip
Vary: Accept-Encoding, Origin
Content-Length: 33563
Via: 1.1 vegur
Compressed Request from CDN - shows Cache-Control of 'max-age=86400'
$ curl -I https://staging-cdn.testing.com/assets/application-0d5691ba401c3f5a305fda52745a831376545a605a6c16e50fc838fdaa567e57.css --compressed
HTTP/1.1 200 OK
Content-Type: text/css
Content-Length: 33563
Connection: keep-alive
Server: Cowboy
Date: Wed, 16 Aug 2017 05:07:12 GMT
Last-Modified: Wed, 16 Aug 2017 05:05:25 GMT
Cache-Control: public, max-age=86400
Content-Encoding: gzip
Via: 1.1 vegur, 1.1 7d327ef7e21429ba6a44eb6374c976f3.cloudfront.net (CloudFront)
Vary: Accept-Encoding
Age: 59233
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: TEqKbQ5ZYySY7m8rDft_MAlygEiam6gYvzrXBpS7D2DrBNbVUZ1y3Q==
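If it helps with diagnosis, one way to double-check what the distribution is actually configured to do is the AWS CLI; this is just a sketch, with E2EXAMPLE standing in for the real distribution ID:
# Inspect the TTL settings on the default cache behavior (E2EXAMPLE is a placeholder).
aws cloudfront get-distribution-config --id E2EXAMPLE \
  --query "DistributionConfig.DefaultCacheBehavior.{MinTTL:MinTTL,DefaultTTL:DefaultTTL,MaxTTL:MaxTTL}"
A DefaultTTL of 86400 would line up with the max-age you're seeing from the CDN, which would be worth comparing against what the console shows for Object Caching.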

Related

Varnish not making an origin call for an infrequently requested cached object

I'm noticing behavior on Varnish 6.5 where it doesn't make a backend call once the origin response's max-age has expired, if the object isn't requested frequently by clients.
Below is the expected behavior I see for an object requested every second; the origin sends a Cache-Control header with a max-age of 20 seconds:
Request 1:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:02 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1183681 1512819
age: 17
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 2:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:04 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 891620 1512819
age: 19
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 3:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:05 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1183687 1512819
age: 20
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
Request 4:
HTTP/2 200
date: Tue, 20 Jul 2021 02:02:06 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 854039 1183688
age: 1
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
You can see that Request 4 above reflects a fresh origin fetch: the second x-varnish ID changes to 1183688 and the age resets to 1.
Now if I wait a long while and make that same request, the cached object's age is well past max-age, yet Varnish does not make an origin request to cache a fresh object:
Request 5 after a while:
HTTP/2 200
date: Tue, 20 Jul 2021 02:10:08 GMT
content-type: application/json
content-length: 33692
server: Apache/2.4.25 (Debian)
x-ua-compatible: IE=edge;chrome=1
pragma:
cache-control: public, max-age=20
x-varnish: 1512998 1183688
age: 482
via: 1.1 varnish (Varnish/6.5)
vary: Accept-Encoding
x-cache: HIT
accept-ranges: bytes
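For reference, a loop along these lines reproduces the once-per-second polling above (the URL is a placeholder):
# Poll once per second, printing only the cache-related response headers.
while true; do
  curl -s -D - -o /dev/null https://example.com/api/data | grep -iE '^(age|cache-control|x-varnish|x-cache):'
  echo '---'
  sleep 1
done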
I suppose I could start adding an Expires header at the origin, but I'm looking for an explanation of why Varnish behaves this way when the request is infrequent. Thanks.
TTL header precedence in Varnish
Varnish does check the max-age directive, but other factors can cause the TTL to end up with an unexpected value.
Here's the TTL precedence:
1. The Cache-Control header's s-maxage directive is checked.
2. When there's no s-maxage, Varnish will look for max-age to set its TTL.
3. When there's no Cache-Control header being returned, Varnish will use the Expires header to set its TTL.
4. When none of the above apply, Varnish will use the default_ttl runtime parameter as the TTL value. Its default value is 120 seconds.
Only then will Varnish enter vcl_backend_response, letting you change the TTL.
Any TTL being set in VCL using set beresp.ttl will get the upper hand, regardless of any other value being set via response headers.
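If you want to verify the runtime defaults on your machine, varnishadm can show them (assuming it can reach the management interface on that host):
# default_ttl applies when no TTL can be derived from response headers;
# default_grace is the window in which a stale object may still be served.
varnishadm param.show default_ttl
varnishadm param.show default_grace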
Your specific situation
The best way to figure out what's going on is by running varnishlog and adding a filter for the URL you want to track.
Here's an example for the homepage:
varnishlog -g request -q "ReqUrl eq '/'"
The output will be extremely verbose, but will contain all the info you need.
Tags that are of particular interest (the command can be narrowed down to just these, as shown below):
TTL (see https://varnish-cache.org/docs/6.5/reference/vsl.html#varnish-shared-memory-logging)
BerespHeader (specifically the Cache-Control backend response header)
RespHeader (specifically the Cache-Control response header)
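A sketch of the same varnishlog command restricted to those tags:
# Keep request grouping and the URL filter, but emit only the three tags above.
varnishlog -g request -q "ReqUrl eq '/'" -i TTL -i BerespHeader -i RespHeader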
Please also have a look at your VCL and check whether or not the TTL is changed by set beresp.ttl =.
What I need to help you
In summary, if you want further assistance, please provide your full VCL, as well as a varnishlog extract for the transactions that are giving you the unexpected behavior.
Based on that information, we'll have a pretty good idea what's going on.

How can I enable GZIP compression on Azure Functions (v2)?

I checked with Postman; it is not showing GZIP:
POST /api/values HTTP/1.1
Host: HOSTNAMEHERE.azurewebsites.net
Content-Type: application/json
Cache-Control: no-cache
Postman-Token: e2c3967b-f562-df35-a3d1-01bd56cb4b76
It was my mistake.
GZIP seems to be enabled by default when the function is reached on Azure :)
In my case I was hitting the local function host, which does not come with GZIP:
Content-Type: application/json; charset=utf-8
Date: Mon, 19 Mar 2018 00:55:06 GMT
Server: Kestrel
Transfer-Encoding: chunked
I noticed that when I call the correct URL on Azure, the header does come through:
Content-Encoding: gzip
Content-Type: application/json; charset=utf-8
Date: Sat, 17 Mar 2018 17:29:46 GMT
Server: Kestrel
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Powered-By: ASP.NET
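To double-check outside of Postman, a curl request along these lines (reusing the HOSTNAMEHERE placeholder from the question) should show Content-Encoding: gzip when compression kicks in:
# POST a small JSON body, ask for gzip, and dump only the response headers.
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" \
  -X POST -H "Content-Type: application/json" -d '{}' \
  https://HOSTNAMEHERE.azurewebsites.net/api/values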

Why is CloudFront not forwarding my Content-Type header for an svg served from S3?

I'm trying to load a page from CloudFront, and an SVG is showing up as a missing image.
When I look at the response headers, I see that when I load the object from the S3 bucket directly, the response contains the proper content type, image/svg+xml:
$ curl -I https://s3-eu-west-1.amazonaws.com/pages.ivizone.com/1/19/1509969889/images/kenzo-logo-v2.svg
HTTP/1.1 200 OK
x-amz-id-2: k3+bRpJLp+avBaUWO4VSgB+Djxb+nebnGJs3u6kQ0rMeX95h3XeLHA03XYaWioat+JqNG6x61x8=
x-amz-request-id: 43D8ED0E9EB4490C
Date: Mon, 06 Nov 2017 15:06:13 GMT
Last-Modified: Mon, 06 Nov 2017 14:08:00 GMT
ETag: "4b8f9e399ec9bc166040a2641cf33fb3"
Accept-Ranges: bytes
Content-Type: image/svg+xml
Content-Length: 9484
Server: AmazonS3
However, when I request it through CloudFront, the header is missing:
$ curl -I https://pages.ivizone.com/1/19/1509969889/images/kenzo-logo-v2.svg
HTTP/1.1 200 OK
Content-Length: 9484
Connection: keep-alive
Date: Mon, 06 Nov 2017 14:01:01 GMT
Last-Modified: Mon, 06 Nov 2017 12:04:52 GMT
ETag: "4b8f9e399ec9bc166040a2641cf33fb3"
Server: AmazonS3
X-Cache: RefreshHit from cloudfront
Via: 1.1 ed9babcd75a95b818a6df1694ba95225.cloudfront.net (CloudFront)
X-Amz-Cf-Id: va4AIkAzw7-tNZ-qQo4KA_czM29tFQAzmNH_P0wjYd_TiboSBAyohA==
As a result, this is causing problems rendering my images.
Would anyone know why CloudFront strips the header, and how to fix it?
Thanks!
OK, it looks like I screwed up somewhere. When uploading the SVG image to S3, I had to add the content type string to the S3 object metadata:
"image/svg+xml"
(no spaces)
Once I added this on upload, the image was served properly.
S3 doesn't detect the content type automatically, so my browser was probably interpreting the SVG as the wrong format. By specifying the header, the browser knew how to handle it.
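For reference, the content type can also be set from the AWS CLI, both at upload time and for an object that's already in the bucket (a sketch using the bucket and key from the question):
# Set the content type at upload time.
aws s3 cp kenzo-logo-v2.svg s3://pages.ivizone.com/1/19/1509969889/images/kenzo-logo-v2.svg \
  --content-type "image/svg+xml"
# Or fix an existing object in place by copying it onto itself with replaced metadata.
aws s3api copy-object --bucket pages.ivizone.com \
  --key 1/19/1509969889/images/kenzo-logo-v2.svg \
  --copy-source pages.ivizone.com/1/19/1509969889/images/kenzo-logo-v2.svg \
  --metadata-directive REPLACE --content-type "image/svg+xml"
Keep in mind CloudFront may keep serving the old cached copy until it expires or is invalidated.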

Sails Not Caching Compressed Files

Any idea why Chrome is not caching gzipped files served by Sails or Express?
Files not cached: js, css
Files cached: png, woff
Response headers for JavaScript
HTTP/1.1 200 OK
Access-Control-Allow-Origin:
Access-Control-Allow-Credentials:
Access-Control-Allow-Methods:
Access-Control-Allow-Headers:
Accept-Ranges: bytes
Date: Thu, 29 Oct 2015 17:37:01 GMT
Cache-Control: public, max-age=31536000
Last-Modified: Thu, 29 Oct 2015 17:10:18 GMT
ETag: W/"fKReHkHilgulZYl81EvdUg=="
Content-Type: application/javascript
Vary: Accept-Encoding
Content-Encoding: gzip
Connection: keep-alive
Transfer-Encoding: chunked
Response headers for image
HTTP/1.1 200 OK
Access-Control-Allow-Origin:
Access-Control-Allow-Credentials:
Access-Control-Allow-Methods:
Access-Control-Allow-Headers:
Accept-Ranges: bytes
Date: Thu, 29 Oct 2015 17:37:02 GMT
Cache-Control: public, max-age=31536000
Last-Modified: Thu, 29 Oct 2015 17:10:18 GMT
ETag: W/"d+496iEH7ze9Df3G7Jytiw=="
Content-Type: image/png
Content-Length: 4121
Connection: keep-alive
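For anyone comparing, it can help to check what the server actually sends for each asset type outside the browser; a sketch, assuming Sails on its default port 1337 and placeholder asset paths:
# Compare response headers for a script (gzipped, chunked) vs. an image (Content-Length).
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost:1337/js/app.js
curl -s -D - -o /dev/null http://localhost:1337/images/logo.png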

Foursquare venue photos API only occasionally working with client_id/client_secret?

I've found that some venues will only return photos if I use a signed-in user instead of a client_id / client_secret. Is this intentional?
curl -i https://api.foursquare.com/v2/venues/4c36476d93db0f47f6cc1d92/photos?client_id=xxx\&client_secret=xxx\&group=venue\&v=20120304
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-cache, private, no-store
Content-Type: application/json; charset=utf-8
Date: Mon, 05 Mar 2012 00:28:34 GMT
Expires: Mon, 5 Mar 2012 00:28:34 GMT
Pragma: no-cache
Server: nginx/0.8.52
X-RateLimit-Limit: 5000
X-RateLimit-Remaining: 4999
Content-Length: 66
Connection: keep-alive
{"meta":{"code":200},"response":{"photos":{"count":0,"items":[]}}}
curl -i https://api.foursquare.com/v2/venues/4c36476d93db0f47f6cc1d92/photos?group=venue\&v=20120304\&oauth_token=xxx\&v=20120304
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-cache, private, no-store
Content-Type: application/json; charset=utf-8
Date: Mon, 05 Mar 2012 00:29:19 GMT
Expires: Mon, 5 Mar 2012 00:29:19 GMT
Pragma: no-cache
Server: nginx/0.8.52
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 1000
Content-Length: 15311
Connection: keep-alive
{"meta":{"code":200},"notifications":[{"type":"notificationTray","item":{"unreadCount":0}}],"response":{"photos":{"count":14,"items":[lots of images here]}}}
I want to fetch a photo to associate with a given place as a background process, not tied to a specific user. Is it intended that this API only functions correctly for signed-in users?
Looks like there's a bug in userless access to /venues/photos. The team is investigating. The intended behavior is that userless access of that endpoint returns all public photos attached to that venue.
