I want to push a specific URL to Varnish so it is cached after processing. The flow:
process the image
when finished, "push" the URL referencing the image to Varnish so it gets cached
I do not want to wait for client requests to populate the cache. The object should be ready to serve with high performance as soon as processing finishes. Is that possible?
I could send an internal GET request like a standard client to get it cached, but I would prefer to define e.g. a PUT request in the Varnish config and have the object cached without the body being returned in that process.
Your only option is an internal HEAD request (better than GET; Varnish internally converts it to a GET when submitting the request to the backend side). The PUT approach is not possible, at least not without implementing a VMOD for it, and it probably won't be a simple one.
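As a sketch, the "push" from the processing job could be a plain HEAD request sent with curl (the hostname, port and path here are placeholders):

```shell
# Warm the cache right after image processing finishes.
# -I sends a HEAD request; on a miss Varnish converts it to a GET
# toward the backend, so the full object is stored in cache without
# the processing job having to download the body itself.
curl -s -I "http://varnish.example.internal:6081/images/processed-1234.jpg"
```

Any subsequent client GET for that URL should then be a cache hit.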
I want to send a cached page back to the user. The problem is that I need to generate a unique VISITOR_ID for every new user and send it back to the user through headers, so I need to make an API call from the Varnish proxy server to my backend servers to fetch the VISITOR_ID and then append it to the response.
We were previously using Akamai and were able to implement this with the edge workers available there.
I want to know whether such a thing is possible in Varnish or not.
Thanks in advance
Open source solution for HTTP calls
You can use https://github.com/varnish/libvmod-curl to perform HTTP calls from within VCL. The API for this VMOD can be found here: https://github.com/varnish/libvmod-curl/blob/master/src/vmod_curl.vcc
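As a rough sketch of what that could look like (the endpoint URL and header names below are made up for illustration):

```vcl
vcl 4.0;

import curl;

sub vcl_recv {
    # Ask the origin for a visitor ID; hypothetical endpoint.
    curl.get("http://backend.example.com/api/visitor-id");
    if (curl.status() == 200) {
        # Stash the body so vcl_deliver can copy it to the response.
        set req.http.X-Visitor-ID = curl.body();
    }
    curl.free();
}

sub vcl_deliver {
    if (req.http.X-Visitor-ID) {
        set resp.http.VISITOR_ID = req.http.X-Visitor-ID;
    }
}
```

Note that this performs one blocking HTTP call per request, which is exactly the overhead discussed below.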
Commercial solution for HTTP calls
Although there is an open source solution, I want to mention a more stable alternative that is actually supported and receives frequent updates: the HTTP VMOD that is part of Varnish Enterprise. See https://docs.varnish-software.com/varnish-cache-plus/vmods/http/.
Generate the VISITOR_ID in VCL
Your solution implies that an HTTP call is needed for every single response. While that is possible through various VMODs, it will result in a lot of extra HTTP calls, unless you cache every variation that includes the VISITOR_ID.
You could also consider generating the unique ID yourself in VCL.
See https://github.com/otto-de/libvmod-uuid for a VMOD that generates UUIDs or https://github.com/varnish/libvmod-digest for a VMOD that generates hashes.
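A minimal sketch with libvmod-uuid, assuming the ID travels in a cookie called VISITOR_ID (the cookie name and attributes are examples, not requirements):

```vcl
vcl 4.0;

import uuid;

sub vcl_deliver {
    # Only mint an ID for clients that don't present one yet.
    if (req.http.Cookie !~ "VISITOR_ID=") {
        set resp.http.Set-Cookie =
            "VISITOR_ID=" + uuid.uuid_v4() + "; Path=/; HttpOnly";
    }
}
```

This avoids any per-request round trip to the origin entirely.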
Fetch VISITOR_ID from Redis
If you prefer to generate the VISITOR_ID in your origin application, you could use a Key/Value store like Redis to store or generate values.
You can generate the ID in your application and store it in Redis. You could also generate and store it using LUA scripting in Redis.
Varnish can then fetch the key from Redis and inject it in the response.
While this is a similar approach to the HTTP calls, at least we know Redis is capable of keeping up with Varnish in terms of performance.
See https://github.com/carlosabalde/libvmod-redis to learn how to interface with Redis from Varnish.
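A sketch of the Redis approach with libvmod-redis (the Redis location, the key naming scheme and the X-Session lookup header are all assumptions):

```vcl
vcl 4.0;

import redis;

sub vcl_init {
    # One Redis database object; adjust location and timeouts to your setup.
    new db = redis.db(
        location="192.0.2.50:6379",
        type=master,
        connection_timeout=500,
        shared_connections=false,
        max_connections=2);
}

sub vcl_deliver {
    # Look up a visitor ID previously stored by the application,
    # keyed here (hypothetically) on a session header.
    db.command("GET");
    db.push("visitor:" + req.http.X-Session);
    db.execute();
    if (db.reply_is_string()) {
        set resp.http.VISITOR_ID = db.get_string_reply();
    }
}
```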
How can I make Varnish work like a switch?
I need to consult an authentication service, passing along the original client request. That authentication service checks the original request to decide whether access is permitted and replies simply with a status code and probably some more information in the headers. Based on that status code and header information from the auth service, I would like Varnish to serve content from different backends. Depending on the status code, the backend can vary, and I would like to add some additional headers before Varnish fetches the content.
Finally, Varnish should cache the response and reply to the client.
Yes, that's doable using some VCL and VMODs. For example, you could use the curl VMOD during vcl_recv to trigger the HTTP request against the authentication service, check the response, and then use that information for backend selection and other caching decisions (that part is just simple VCL). A much better alternative would be the http VMOD, but that one is only available in Varnish Enterprise. In fact, an example similar to what you want to achieve is available in the linked documentation; see the 'HTTP Request' section.
In any case, it would be a good idea to minimise interactions with the authentication service using some high performance caching mechanism. For example, you could use the redis VMOD for that (or even Varnish itself!).
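A sketch of that flow with the curl VMOD (the backend addresses, the auth endpoint and the header names are all placeholders):

```vcl
vcl 4.0;

import curl;

backend app_main  { .host = "192.0.2.10"; .port = "8080"; }
backend app_guest { .host = "192.0.2.20"; .port = "8080"; }

sub vcl_recv {
    # Forward the original URL (and, if needed, credentials) to the
    # authentication service.
    curl.header_add("X-Original-URL: " + req.url);
    curl.get("http://auth.example.com/check");

    if (curl.status() == 200) {
        set req.backend_hint = app_main;
        # Propagate extra info from the auth service to the backend.
        set req.http.X-Auth-Role = curl.header("X-Role");
    } else {
        set req.backend_hint = app_guest;
    }
    curl.free();
}
```

From here on, normal VCL caching applies: the fetched response is cached and delivered to the client.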
I'm trying to use Varnish to cache RPMs and other giant binaries. What I would have expected is that when an object in the cache expires, Varnish would send a conditional request (If-Modified-Since) to the backend and then, assuming the object didn't change, refresh the TTL on the locally cached object without downloading a new one.
I wrote a test backend to generate specific responses (setting a small max-age and so on, as well as inspecting the headers Varnish sends), but I never get anything other than a full fetch; If-Modified-Since is never sent. My VCL is basically the default VCL. I tried playing around with small ttl/grace settings but never got any interesting behavior.
Is Varnish even able to do what I want? If so, has anyone done anything similar and can give tips?
The request sent to the backend when an object has expired is the one that Varnish receives from the client.
So when testing your setup, are you sending an If-Modified-Since header in your requests to Varnish?
Have a look at https://www.varnish-software.com/wiki/content/tutorials/varnish/builtin_vcl.html to see what the built-in VCL is.
Under vcl_backend_fetch, which is called when there is no object in the cache, you can see there is no special logic around stale objects; the request is just passed on as is.
First of all, quite a bit has happened in Varnish-Cache since this question was posted. I am answering for Varnish-Cache 6.0 and later:
The behavior the OP expects is how Varnish should behave now, provided the backend returns Last-Modified and/or ETag headers.
Obviously, an object can only be revalidated if it still exists in the cache. This is what beresp.keep is for: it extends the time an object is kept in the cache after ttl and grace have expired. Note that objects are also LRU-evicted if the cache is too small to keep all objects for their maximum lifetime.
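A minimal sketch of such a configuration (the durations are arbitrary examples):

```vcl
sub vcl_backend_response {
    # Fresh for an hour, briefly serve stale while revalidating, and
    # keep the stale body for a week so Varnish can send conditional
    # requests (If-Modified-Since / If-None-Match) instead of doing
    # full fetches after expiry.
    set beresp.ttl = 1h;
    set beresp.grace = 10s;
    set beresp.keep = 7d;
}
```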
Regarding the comment by @maxschlepzig, it might be based on a misunderstanding:
When an object is not in the cache but is to be cached, Varnish cannot forward the client request's conditional headers (If-Modified-Since, If-None-Match), because a 304 response would not be good for caching (it has no body and is relevant only to a particular request). Instead, Varnish strips the conditional headers in this case in order to (potentially) get a 200 response with an object to put into the cache.
As explained above, for a subsequent backend request after the ttl has expired, the conditional headers are constructed based on the cached response. The conditional headers from the client are not used in this case either.
All of the above applies when an object is to be cached at all (a regular fetch, or Hit-for-Miss as created by setting beresp.uncacheable).
For Pass and Hit-for-Pass (as created by return (pass(duration)) in vcl_backend_response), the client's conditional headers are passed to the backend.
I'm wondering if my (possibly strange) use case is possible to implement in Varnish with VCL. My application depends on receiving responses from a cacheable API server with very low latencies (i.e. sub-millisecond if possible). The application is written in such a way that an "empty" response is handled appropriately (and is a valid response in some cases), and the API is designed in such a way that non-empty responses are valid for a long time (i.e. days).
So, what I would like to do is configure varnish so that it:
Attempts to look up (and return) a cached response for the given API call
On a cache miss, immediately return an "empty" response, and queue the request for the backend
On a future call to a URL which was a cache miss in #2, return the now-cached response
Is it possible to make Varnish act in this way using VCL alone? If not, is it possible to write a VMOD to do this (and if so, pointers, tips, etc, would be greatly appreciated!)
I don't think you can do it with VCL alone, but with VCL and some client logic I think you could manage it quite easily.
In vcl_miss, return an empty document using error 200 and set a response header called X-Try-Again in the default case.
In the client app, when receiving an empty response with X-Try-Again set, request the same resource asynchronously but add a header called X-Always-Fetch to the request. Your app does not wait for the response or do anything with it once it arrives.
Also in vcl_miss, check for the presence of the same X-Always-Fetch header. If present, return (fetch) instead of the empty document. This will request the content from the back end and cache it for future requests.
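Put together, the VCL side of those steps could look like this (Varnish 3-era syntax, to match the error 200 approach above; the header names are as described):

```vcl
sub vcl_miss {
    # Background refresh triggered by the client: actually fetch
    # from the backend and cache the result.
    if (req.http.X-Always-Fetch) {
        return (fetch);
    }
    # Default case: answer immediately with an empty document.
    error 200 "";
}

sub vcl_error {
    if (obj.status == 200) {
        # Tell the client it should retry asynchronously.
        set obj.http.X-Try-Again = "1";
        synthetic {""};
        return (deliver);
    }
}
```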
I also found this article which may provide some help though the implementation is a bit clunky to me compared to just using your client code: http://lassekarstensen.wordpress.com/2012/10/11/varnish-trick-serve-stale-content-while-refetching/
I'm afraid I am fairly new to Varnish, but I have a problem to which I cannot find a solution anywhere (yet): Varnish is set up to cache GET requests. We have some requests with so many parameters that we decided to pass them in the body of the request. This works fine when we bypass Varnish, but when we go through Varnish (for caching), the request is passed on without the body, so the service behind Varnish fails.
I know we could use POST, but we want to GET data. I also know that Varnish can pass the request body on if we use pass mode, but as far as I can see, requests made in pass mode aren't cached. I've already put a hash into the URL, so that when things work we will actually get the correct data from the cache (as far as the URL goes, the calls would otherwise all look the same).
The problem now is "just" how to rewrite vcl_fetch to pass the request body on to the web server. Any hints and tips welcome!
Thanks in advance
Jon
I don't think you can, but even if you could, it would be very dangerous: Varnish doesn't store the request body in its cache or hash table, so it cannot see any difference between two requests with the same URI and different bodies.
I haven't heard of a VCL variable for reading the request body but, if one exists, you could add it to req.hash to differentiate requests.
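In Varnish 3 syntax that would look roughly like this, assuming the application copies a hash of the body into a (hypothetical) X-Body-Hash header:

```vcl
sub vcl_hash {
    # Default hash input...
    hash_data(req.url);
    hash_data(req.http.host);
    # ...plus an extra component so requests that differ only in
    # their body are cached as separate objects.
    if (req.http.X-Body-Hash) {
        hash_data(req.http.X-Body-Hash);
    }
    return (hash);
}
```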
Anyway, a request body should only be used with POST or PUT, and POST/PUT requests should not be cached.
A request body is supposed to send data to the server; a cache is used to get data.
I don't know the details, but I think there's a design issue in your process.
I am not sure I got your question right, but if you are trying to interact with the request body in some way, that is not possible with VCL. There is no VCL variable or subroutine for it.
You can find the list of variables available in VCL here (or in man vcl):
https://github.com/varnish/Varnish-Cache/blob/master/lib/libvcl/generate.py#L105
I agree with Gauthier, you seem to have a design issue in your system.
Hope that helps.