I want to know how to cache separately for my mobile and desktop sites. I have a mobile and a desktop site, both configured in nginx, and mobile or desktop content is served based on the User-Agent whenever a user visits the site. In this scenario, how do I cache the mobile and desktop versions separately so that every visitor gets the right content from the cache?
Please help me write the VCL for mobile and desktop caching in Varnish.
You can download https://github.com/varnishcache/varnish-devicedetect/blob/master/devicedetect.vcl, include this file in your main VCL file, and call the devicedetect subroutine from your main VCL.
This subroutine sets an X-UA-Device header containing the device type, which you can then vary the cache on.
Here's an example:
vcl 4.1;

include "devicedetect.vcl";

backend default {
    .host = "127.0.0.1";    # adjust to your nginx backend
    .port = "8080";
}

sub vcl_recv {
    call devicedetect;

    # Normalize the detected device class to just two cache variations.
    if (req.http.X-UA-Device ~ "^(mobile|tablet)\-.+$") {
        set req.http.X-UA-Device = "mobile";
    } else {
        set req.http.X-UA-Device = "desktop";
    }
}

sub vcl_hash {
    # Add the device class to the hash so mobile and desktop are cached separately.
    hash_data(req.http.X-UA-Device);
}
Related
Is there any way to cache requests with auth headers in Varnish?
I want to ignore the auth headers while caching the request.
There are various ways to approach this, depending on the importance of auth headers.
1. You don't care about auth
If you don't care about the auth part and are willing to risk serving cached content to unauthorized users, you can just use the following VCL code:
sub vcl_recv {
    unset req.http.Authorization;
}
2. Ignore authorization to some extent
It is also possible to care about auth a bit, but not too much.
The following VCL snippet will allow caching even if there is an Authorization header:
sub vcl_recv {
    if (req.http.Authorization) {
        return (hash);
    }
}
The consequence of this is that the initial cache miss passes through to the backend and is processed there; any unauthorized access is handled by the backend.
But as soon as that first request has been dealt with, the object is stored in the cache and subsequent requests receive cached content regardless of their authorization status.
3. Perform auth on the edge
It is also possible to handle the auth part in Varnish while caching the content.
The following VCL code will handle this:
sub vcl_recv {
    if (req.http.Authorization != "Basic YWRtaW46c2VjcmV0") {
        return (synth(401, "Restricted"));
    }
    unset req.http.Authorization;
}

sub vcl_synth {
    if (resp.status == 401) {
        set resp.http.WWW-Authenticate = {"Basic realm="Restricted area""};
    }
}
This code actively inspects the content of the Authorization header and ensures that the username admin is used with the password secret.
The YWRtaW46c2VjcmV0 string is nothing more than the base64 encoding of admin:secret.
4. Use vmod_basicauth
A more advanced and flexible way to terminate auth on the edge is by using https://git.gnu.org.ua/vmod-basicauth.git/. This VMOD can be compiled from source and can be downloaded from ftp://download.gnu.org.ua/release/vmod-basicauth.
Assuming the credentials are stored in /var/www/.htpasswd, you can leverage this VMOD to match the Authorization header to the content of the .htpasswd file.
Here's the VCL:
vcl 4.1;

import basicauth;

sub vcl_recv {
    if (!basicauth.match("/var/www/.htpasswd", req.http.Authorization)) {
        return (synth(401, "Restricted"));
    }
    unset req.http.Authorization;
}

sub vcl_synth {
    if (resp.status == 401) {
        set resp.http.WWW-Authenticate = {"Basic realm="Restricted area""};
    }
}
This is entirely possible but also extremely dangerous: Varnish would return the same cached (authorized) content to all requests.
Example:
User A requests resource Z with proper authentication. Varnish relays the request to backend, caches the response and returns the resource.
User B requests resource Z with proper authentication. They will get the cached resource Z even if Z contains user A's content.
User X requests resource Z with invalid authentication. They too will get the cached resource, since the backend is bypassed.
Having said that, you can override Varnish's built-in VCL. Details are documented but the main idea is:
Copy the default vcl_recv (for your Varnish version) from the source and add it to the end of your own vcl_recv.
Remove the safeguards from it: just drop the vcl_req_authorization logic, which is what disables caching:
sub vcl_req_authorization {
    if (req.http.Authorization) {
        # Not cacheable by default.
        return (pass);
    }
}
In your own vcl_recv, issue a return statement at the end so the built-in VCL is not executed.
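For example, a vcl_recv along these lines would do it. This is only a sketch modelled on the classic built-in logic; copy the exact built-in VCL that ships with your Varnish version and adapt it:

sub vcl_recv {
    if (req.method == "PRI") {
        # HTTP/2 preface should never reach VCL.
        return (synth(405));
    }
    if (req.method != "GET" && req.method != "HEAD" &&
        req.method != "PUT" && req.method != "POST" &&
        req.method != "TRACE" && req.method != "OPTIONS" &&
        req.method != "DELETE" && req.method != "PATCH") {
        # Non-RFC2616 or CONNECT: pipe it through.
        return (pipe);
    }
    if (req.method != "GET" && req.method != "HEAD") {
        # Only GET and HEAD are cacheable by default.
        return (pass);
    }
    if (req.http.Cookie) {
        # Requests with cookies are still not cached.
        return (pass);
    }
    # The built-in "if (req.http.Authorization) { return (pass); }" safeguard
    # is deliberately left out, and the explicit return prevents the built-in
    # vcl_recv from running afterwards.
    return (hash);
}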
We're currently using the s-maxage directive in the Cache-Control header from our origin to control the TTL in Varnish. However, I'd like to remove it from the response before delivery, so that no other caches in the request chain act on it.
I'm currently looking at the header VMOD, to remove s-maxage from the header, but leave the rest of it intact. I believe this could be achieved with something like this:
sub vcl_deliver {
    header.regsub(resp, "s-maxage=[0-9]+,?\s?", "");
}
As a newcomer to Varnish, I wanted to sanity-check this approach and make sure there isn't a better way to tackle it.
I'd appreciate any support or advice.
Replace header at delivery time
The following VCL snippet will strip off the s-maxage attribute from the Cache-Control header before it is sent to the client.
sub vcl_deliver {
    set resp.http.Cache-Control = regsub(resp.http.Cache-Control,
        "(,\s*s-maxage=[0-9]+\s*$)|(\s*s-maxage=[0-9]+\s*,)", "");
}
Replace header at storage time
It is also possible to strip this attribute from the Cache-Control header before the object gets stored in the cache. In that case, you'll use the beresp.http.Cache-Control variable inside vcl_backend_response. Note that Varnish has already derived beresp.ttl from s-maxage by the time vcl_backend_response runs, so removing the attribute there does not change the TTL you rely on.
sub vcl_backend_response {
    set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control,
        "(,\s*s-maxage=[0-9]+\s*$)|(\s*s-maxage=[0-9]+\s*,)", "");
}
Using vmod_headerplus
If you're using Varnish Enterprise, you can use the vmod_headerplus module to easily delete header attributes:
vcl 4.1;

import headerplus;

sub vcl_deliver {
    headerplus.init(resp);
    headerplus.attr_delete("Cache-Control", "s-maxage", ",");
    headerplus.write();
}
And the equivalent at storage time, operating on beresp:

vcl 4.1;

import headerplus;

sub vcl_backend_response {
    headerplus.init(beresp);
    headerplus.attr_delete("Cache-Control", "s-maxage", ",");
    headerplus.write();
}
Although Varnish Enterprise is the commercial version of Varnish Cache, you can still use it without upfront license payments if you use it on AWS, Azure or GCP.
Varnish Enterprise on AWS
Varnish Enterprise on Azure
Varnish Enterprise on GCP
TL;DR: How to detect local website users?
I have a self-hosted website running in the student building I live in. On this website I would like a page for, and links to, certain local applications: the web remote of the RPi running Kodi, an FTP server, a page of instructions, etc.
I don't want those to be visible to random internet users, so is there any way for a website to detect whether the user is accessing it from inside the local network? Preferably in JavaScript, but PHP would also be fine.
In PHP it can be done in several ways. The simplest is a plain IP check:
if ($_SERVER['REMOTE_ADDR'] === "10.1.0.25") { // internal IP here
    // Show links for internal users
} else {
    // Show stuff for all other users
}
or, for a range of IPs, assuming 192.168.1.x addresses:
$ip = ip2long($_SERVER['REMOTE_ADDR']);
// Match any address in the 192.168.1.x range
if ($ip !== false && $ip >= ip2long("192.168.1.0") && $ip <= ip2long("192.168.1.255")) {
    // Internal info
} else {
    // External info
}
I'm running a standalone instance of Varnish on a DigitalOcean Ubuntu VM, which basically works fine. The setup is used to take load off an older WordPress server that sits elsewhere. That works quite well, but I'm having a hard time getting content purged. And when talking about purge, I mean invalidating the cache for a URL to force Varnish to fetch a fresh version from the backend (just to make sure, as I've seen some confusion about purge/ban).
I have set up an ACL for purge, and as far as I can see with varnishlog the purges get accepted, both from the WordPress blog (where W3TC handles the purges) and from the local console where I tried to purge with curl -X PURGE http://url.to.purge
The problem is that I still get the old versions of the URL in the browser, no matter what I do locally.
This is how I handle purge in vcl_recv:
if (req.method == "PURGE") {
    if (!client.ip ~ purge) {
        return (synth(405, "Not allowed."));
    }
    return (purge);
}
and I get VCL_error(200, Purged) on every purge, so I guess that part is probably OK.
It looks like I'm still doing something wrong. After restarting Varnish (service varnish restart) the full cache refreshes and the pages refresh too; until then, Varnish keeps everything for ages, no matter how much I purge.
My Varnish version is 4.0.3.
Any idea?
Thanks,
Frank
I got the same behavior on Varnish 6 with vcl 4.1.
The only way I found to solve it was to explicitly define sub vcl_purge like this:
sub vcl_purge {
    set req.method = "GET";
    set req.http.X-Purger = "Purged";
    return (restart);
}
I didn't find the reason, and this may not be exactly what you want, because after the purge Varnish fetches the content from the backend without waiting for a client request.
But I still haven't found another way, and it is good enough for me.
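For reference, combined with the PURGE handling from the question, the whole flow looks roughly like this. This is only a sketch; the acl entries are placeholders for your own purging hosts:

vcl 4.1;

# Hosts that are allowed to purge (placeholders, use your own).
acl purge {
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        # Remove the object from the cache, then continue in vcl_purge.
        return (purge);
    }
}

sub vcl_purge {
    # Turn the PURGE into a regular GET and restart it, so the freshly
    # purged URL is fetched from the backend right away.
    set req.method = "GET";
    set req.http.X-Purger = "Purged";
    return (restart);
}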
My software:
Liferay 6.0.6 with Tomcat 6.0.29, OpenSSO 9.5.2_RC1 Build 563 with Tomcat 6.0.35, CentOS 6.2 operating system.
Setup:
I have set up both Liferay and OpenSSO on the same CentOS machine, making sure that their Tomcat instances run on different ports. I have installed and configured OpenSSO with Liferay as per the guidelines available on the Liferay forums.
Problem:
When I hit my application URL I get redirected to the OpenSSO login page, which is what I want. When I log in with proper credentials it tries to redirect to my application, which is exactly how it should behave; however, this redirect goes into a loop and I never see my application dashboard. The conclusion I have come to is that the redirect tries to authenticate in Liferay but somehow does not get what it is looking for, goes back to OpenSSO, and this repeats infinitely. I found similar issues reported here; unfortunately, the suggested fixes did not work.
Later I decided to debug the Liferay code and put breakpoints in com.liferay.portal.servlet.filters.sso.opensso.OpenSSOUtil and com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter. As I understand the code, it first goes to the OpenSSOUtil.processFilter() method, which reads the OpenSSO settings I configured in Liferay, and then checks whether the request is authenticated by calling OpenSSOUtil.isAuthenticated(). That implementation reads the cookies sent with the request and tries to set the cookie property in Liferay by calling OpenSSOUtil._setCookieProperty(). This is where it fails: it tries to read the cookie named iPlanetDirectoryPro via the Liferay class com.liferay.util.CookieUtil, using the HttpServletRequest object, but all it gets is null. That sets the authenticated status to false, and hence the loop.
Following is the code from class com.liferay.util.CookieUtil
public static String get(HttpServletRequest request, String name) {
    Cookie[] cookies = request.getCookies();

    if (cookies == null) {
        return null;
    }

    for (int i = 0; i < cookies.length; i++) {
        Cookie cookie = cookies[i];

        String cookieName = GetterUtil.getString(cookie.getName());

        if (cookieName.equalsIgnoreCase(name)) {
            return cookie.getValue();
        }
    }

    return null;
}
Can anyone please let me know why Liferay is not able to find the cookie that OpenSSO sent? If it is related to the OpenSSO setting for encoding the cookie value, I have already done that, as described here:
In OpenSSO go to: Configuration -> Servers and Sites -> [your server] -> Security -> Cookie -> check Encode Cookie Value (set to Yes)
What works:
While this redirect loop is executing, if I open another tab and log in to my application explicitly, then when I sign out of my application I also get signed out of OpenSSO. This is strange to me.
For more information: while this redirect loop happens, the following URLs give me this information:
http://opensso.ple.com:9090/openam/identity/getCookieNameForToken
string=iPlanetDirectoryPro
http://opensso.ple.com:9090/openam/identity/isTokenValid
boolean=true