Varnish responds with Transfer-Encoding: chunked for ESI-processed pages, but this does not work with some proxies (e.g. Squid). I want to disable chunked transfer encoding for ESI.
The use of chunked encoding is a requirement for Varnish ESI.
Each ESI fragment will be a fresh chunk; that way Varnish doesn't have to buffer anything to build the page.
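For illustration, here is a minimal sketch (the fragment URLs and VCL are hypothetical) of where the chunk boundaries come from. In VCL you enable ESI parsing:

    sub vcl_backend_response {
        # Ask Varnish to parse this response for <esi:...> tags.
        set beresp.do_esi = true;
    }

The backend page then carries the include tags, and the static parts plus each fetched fragment are streamed out as separate chunks:

    <html>
      <body>
        <esi:include src="/header" />
        Main page content here.
        <esi:include src="/footer" />
      </body>
    </html>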
There are loads and loads of big web pages that use ESI with Varnish. Are you sure the proxy argument is valid?
I'm working on a web app and I'm having the following problem:
When I go on some page my server sends a response with cache-control: no-cache header.
Then I do some changes (graphql mutations) on that page.
When I go to another page and then click browser back, my browser reads the outdated response from the disk cache instead of sending a request to the server to get the changed data. So the browser loads the response from the cache although the no-cache header is set.
I am wondering if there is something missing in my headers telling the browser not to use the disk cache?
Some info:
The browser does not send a request to my server. (So it is not cached somewhere else.)
It is not the back-forward cache. (There is already some logic handling the bfcache.)
I can reproduce it in all my browsers. (e.g. Firefox, Chrome, ...)
When I disable the disk cache in the Firefox settings then it is working correctly. (Now, the bfcache kicks in.)
I also found the following thread. Is there a better solution?
Chrome is caching even with HTTP no-cache headers
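For reference, what the linked thread comes down to: no-cache lets the browser store a response and reuse it after revalidation, while no-store forbids writing it to the disk cache at all. A sketch of response headers for a page that must never be served from the disk cache (not your actual headers):

    Cache-Control: no-store, no-cache, must-revalidate
    Pragma: no-cache
    Expires: 0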
I have a CloudFront distribution to cache my images. My origin server is NOT S3; it's a server I run.
I use these images on my website (taking advantage of CF caching). To explain the problem, let's assume my home page uses an image called banner.png.
I visit my home page, say from Chrome, for the first time. For banner.png it's a cache miss, so it gets fetched from the origin and cached in CF.
After this I visit my page from Firefox, Opera and Chromium, and GET "banner.png" using Postman: all of these get me the file from the CF cache.
Now I GET "banner.png" using Insomnia (another REST client): CF doesn't serve it from the cache, it goes back to the origin to get the image, and replies with "x-cache: RefreshHit from cloudfront".
The difference between these two sets of clients is that the first set sends an "Accept-Encoding: gzip" header in the request and the second does not.
In my CF behaviour:
"Cache Based on Selected Request Headers" = None
"Compress Objects Automatically" = No
Any pointers?
CloudFront keeps two different copies of the cache based on Accept-Encoding:
one if the header contains Accept-Encoding: gzip,
one for any other value, or for requests without the header.
You can test it using curl: first without Accept-Encoding, then a second request with Accept-Encoding: gzip, and you'll see Miss from cloudfront both times; this is expected with CloudFront.
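A sketch of that test (the distribution domain is made up):

    curl -sI https://d111111abcdef8.cloudfront.net/banner.png | grep -i x-cache
    # x-cache: Miss from cloudfront

    curl -sI -H "Accept-Encoding: gzip" https://d111111abcdef8.cloudfront.net/banner.png | grep -i x-cache
    # x-cache: Miss from cloudfront (the gzip variant is a separate cache entry)

Repeating either request then yields a Hit from cloudfront for that variant.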
The reason is that CloudFront supports only gzip compression, and it takes this header into consideration to know whether it needs to compress the response or not.
However, your problem seems different: you're seeing a Refresh from CloudFront, which happens when the CloudFront TTL/max-age expires and CloudFront makes a conditional GET to the origin to learn whether the content has been modified.
Ideally, it should be a Miss from cloudfront if no Accept-Encoding header is present.
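For illustration, the conditional GET CloudFront makes to the origin on an expired object looks roughly like this (host and dates are made up):

    GET /banner.png HTTP/1.1
    Host: origin.example.com
    If-Modified-Since: Tue, 01 May 2018 10:00:00 GMT

    HTTP/1.1 304 Not Modified

A 304 answer tells CloudFront its stored copy is still valid, so it refreshes the TTL and serves the cached object, reporting x-cache: RefreshHit from cloudfront.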
Maybe this sounds like a novice question in the Varnish Cache world, but why does WordPress seem to need an external cache plugin installed in order to be fully cached?
Websites load correctly via Varnish; a curl -I command returns:
HTTP/1.1 200 OK
Server: nginx/1.11.12
Date: Thu, 11 Oct 2018 09:39:07 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
Cache-Control: max-age=0, public
Expires: Thu, 11 Oct 2018 09:39:07 GMT
Vary: Accept-Encoding
X-Varnish: 19575855
Age: 0
Via: 1.1 varnish-v4
X-Cache: MISS
Accept-Ranges: bytes
Pragma: public
Cache-Control: public
Vary: Accept-Encoding
With this configuration, WordPress installations are not cached by default.
After testing multiple cache plugins (some not working, or not working without complex configuration) I found Swift Performance. In its Lite version, simply activating the Cache option really takes full advantage: I can see Varnish working fully, with very good results in stress tests.
This could be OK for a single site in a single environment, but in shared hosting, where every customer can have their own WP (or other CMS) installation, it could be a problem.
So the key question is: is there no way to take full caching advantage of Varnish without installing complex third-party caching plugins? Why not cache everything by default?
Any kind of suggestion or help will be highly welcome; thanks in advance.
With this configuration, by default WordPress installations are not being cached
By default, if you don't change anything in either the Wordpress or the Varnish configuration, the two work together in a way that Wordpress pages are cached for 120 seconds (Varnish's default TTL). So real caching is possible, but it will be a short-lived and highly ineffective cache.
Your specific headers indicate that no caching should happen. They are sent either by Varnish itself (we're all guilty of copy-pasting stuff without thinking about what it does) or by a Wordpress plugin (more often a bad one than a good one). Without knowing your specific configuration, it's hard to decipher anything.
Varnish is a transparent HTTP caching proxy. That means that, by default, it uses the HTTP headers sent by the backend (Wordpress), like Cache-Control, to decide whether a resource can be cached and for how long.
Wordpress, in fact, does not send cache-related headers other than in a few specific areas (error pages, login POST submissions, etc.).
The standard approach outlined here is configuring Varnish with a high TTL, overriding whatever the backend sends; a minimal sketch follows.
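A minimal sketch, assuming Varnish 4 VCL (the one-year TTL is an arbitrary choice, and responses with Set-Cookie still need extra handling):

    sub vcl_backend_response {
        # Cache everything for up to a year, regardless of backend headers.
        set beresp.ttl = 52w;
    }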
With that in place, Varnish has no idea when you update an article's contents or change the theme. The typical solution lies in a cache invalidation plugin like Varnish HTTP Purge.
The plugin requirement comes from the necessity to purge the cache when content is changed.
Suppose that you update a Wordpress page's text. That same page was previously visited and went into the Varnish cache for storage. What happens on the next visit is that Varnish serves the same, now stale, content to all subsequent visitors.
The Wordpress plugins for Varnish, like Varnish HTTP Purge, hook into Wordpress in such a way that they instruct Varnish to clear the cache when pages are updated. This is their primary purpose.
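Under the hood this is plain HTTP: the plugin sends a PURGE request for the updated URL, and your VCL has to accept it. A minimal sketch (Varnish 4 syntax; production setups should restrict this with an ACL):

    sub vcl_recv {
        if (req.method == "PURGE") {
            # Check client.ip against a purger ACL here in real deployments.
            return (purge);
        }
    }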
That kind of approach (high TTL plus cache purging) is the de facto standard with Varnish. Since Varnish has no information about when you update content, the responsibility for purging the cache lies with the application itself. The cache purging feature is either bundled into the CMS code (Magento 2, for example, has it out of the box, without any extra plugins) or provided by a plugin, as with Wordpress.
What is the correct way to handle Content-Length when writing body content in an Owin middleware?
Currently we're using
context.Response.Write(data); // data is a string.
In most cases it works, but in some (IIS host on Win 2012 R2) the Content-Length gets a value of 0.
I thought that setting the Content-Length header, or choosing to use Transfer-Encoding: chunked, was up to the host to handle, but it looks like I'm missing something.
What is the correct way to handle the Content-Length when writing body content to an OWIN Response stream?
We've also discussed the bug in a github issue.
It is primarily a host concern. The application may choose to set Content-Length if it knows exactly what it will be, but otherwise the app should let the server choose between setting Content-Length and using chunked encoding.
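A sketch of that division of labor (the helper name is mine, using the Microsoft.Owin wrapper): set Content-Length only when the exact byte count is known, otherwise leave it unset so the host can pick Content-Length or chunked.

    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Owin;

    public static class BodyWriter
    {
        // The full body is known here, so the app may set Content-Length itself.
        public static Task WriteKnownLength(IOwinContext context, string data)
        {
            byte[] body = Encoding.UTF8.GetBytes(data);
            context.Response.ContentLength = body.Length; // exact size, in bytes
            return context.Response.WriteAsync(body);
        }
    }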
I am making a request to an image, and the response headers that I get back are:
Accept-Ranges:bytes
Content-Length:4499
Content-Type:image/png
Date:Tue, 24 May 2011 20:09:39 GMT
ETag:"0cfe867f5b8cb1:0"
Last-Modified:Thu, 20 Jan 2011 22:57:26 GMT
Server:Microsoft-IIS/7.5
X-Powered-By:ASP.NET
Note the absence of the Cache-Control header.
On subsequent requests on Chrome, Chrome knows to go to the cache to retrieve the image. How does it know to use the cache? I was under the impression that I would have to tell it with the Cache-Control header.
You have both an ETag and a Last-Modified header, and it probably uses those. But for that to happen, it still needs to make a conditional request with If-None-Match or If-Modified-Since respectively. (Browsers may also apply heuristic freshness when no explicit lifetime is given, serving straight from the cache based on the Last-Modified age without any request at all.)
To set Cache-Control you have to specify it yourself. You can do it in web.config, in IIS Manager for selected folders (static, images, ...), or in code. The HTTP 1.1 standard recommends one year in the future as the maximum expiration time.
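For IIS static content, the far-future expiry can be set in web.config; a sketch using the one-year maximum discussed above:

    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
    </system.webServer>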
Setting the expiration date one year in the future is considered good practice for all static content on your site. Not having it in the headers results in If-Modified-Since requests, which can take longer than first-time requests for small static files. In those calls the ETag header is used.
When you have Cache-Control: max-age=31536000 (one year), plain HTTP responses will outnumber If-Modified-Since calls, and because of that it is good to remove the ETag header, which results in smaller response headers for static files. IIS doesn't have a setting for that, so you have to do response.Headers.Remove("ETag"); in a PreSendRequestHeaders handler, as sketched below.
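A sketch of that removal in Global.asax (the IIS integrated pipeline is required for Response.Headers to be writable):

    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        // Strip ETag so clients rely on max-age alone instead of revalidating.
        HttpContext.Current.Response.Headers.Remove("ETag");
    }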
And if you want to optimize your headers further, you can remove X-Powered-By: ASP.NET in the IIS settings, and the X-AspNet-Version header (although I don't see it in your response) via enableVersionHeader="false" on the system.web/httpRuntime element in web.config.
For more tips I suggest a great book: http://www.amazon.com/Ultra-Fast-ASP-NET-Build-Ultra-Scalable-Server/dp/1430223839