Varnish: Purge says it works but doesn't remove old content

I'm running a standalone instance of Varnish on a DigitalOcean Ubuntu VM, which basically works fine. The setup is used to take load off an older WordPress server that sits elsewhere. That works quite well, but I'm having a hard time getting content purged. By purge I mean invalidating the cache for a URL to force Varnish to fetch a fresh version from the backend (just to be clear, as I've seen some confusion about purge vs. ban).
I have set up an ACL for purge, and as far as I can see with varnishlog the purges get accepted, both from the WordPress blog (where W3TC handles the purges) and from the local console, where I tried to purge with curl -X PURGE http://url.to.purge
The problem is that I still get the old versions of the URL in the browser, no matter what I do locally.
This is how I handle PURGE in vcl_recv:
if (req.method == "PURGE") {
    if (!client.ip ~ purge) {
        return (synth(405, "Not allowed."));
    }
    return (purge);
}
and I get VCL_error(200, Purged) on every purge, so I guess that part is probably OK.
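For reference, the ACL it matches against is defined along these lines (the addresses here are placeholders for my actual setup):
acl purge {
    "localhost";
    "127.0.0.1";
    # plus the WordPress server's address
}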
Still, it looks like I'm doing something wrong. After restarting Varnish (service varnish restart), the full cache is flushed and the pages refresh too; until then, Varnish keeps everything for ages, no matter how much I purge.
My Varnish version is 4.0.3.
Any idea?
Thanks,
Frank

I got the same behavior on Varnish 6 with VCL 4.1.
The only way I could solve it was to explicitly define sub vcl_purge like this:
sub vcl_purge {
    set req.method = "GET";
    set req.http.X-Purger = "Purged";
    return (restart);
}
I didn't find the reason, and this may not be exactly what you want, because after the purge Varnish fetches the content from the backend without waiting for a client request.
But I still haven't found another way, and this is good enough for me.

Related

No 'Access-Control-Allow-Origin' header is present on the requested resource

Getting this error:
Access to fetch at 'https://myurl.azurewebsites.net/Player/1' from origin 'http://localhost:19006' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
I'm not the first with this error, but I feel like I have tried everything one can find by searching for the problem. I'm developing a web API with ASP.NET Core that's supposed to communicate with my react-native frontend. The problem is, I cannot for the life of me get the connection to work.
In Program.cs I have added
var MyAllowSpecificOrigins = "_myAllowSpecificOrigins";

builder.Services.AddCors(options =>
{
    options.AddPolicy(name: MyAllowSpecificOrigins,
        policy =>
        {
            policy.AllowAnyMethod();
            policy.AllowAnyHeader();
            policy.AllowAnyOrigin();
        });
});
and
app.UseCors(MyAllowSpecificOrigins);
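For completeness, the middleware ordering in Program.cs is roughly as below; from what I've read, UseCors has to run before the endpoints are mapped (this is a sketch, not my exact file):
var app = builder.Build();

// CORS must be registered before the endpoints it should apply to
app.UseCors(MyAllowSpecificOrigins);
app.UseAuthorization();
app.MapControllers();

app.Run();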
I have also tried disabling CORS on the method itself:
[DisableCors]
[HttpGet("{id}")]
public List<Player> GetPlayers(int id)
{
    return (from c in _context.Player.Take(1)
            where c.PlayerId == id
            select c).ToList();
}
I even deployed the server and database to Azure (I was going to sooner or later anyway), hoping that would get it to work. The API runs fine if I visit the URL and use it directly. It also works great if I host it locally and go through the web.
On Azure I've changed my CORS settings to allow everything.
I can even access the API through Expo web if I run it locally at the same time. But I need to be able to do it through my phone as well, or at least an Android emulator. Neither of those works, whether the server is hosted locally or on Azure.
How can I solve this?
Actually, shortly after changing my Azure CORS settings, it did indeed start to work. Finally, I can at least demo it. Unfortunately, I still have no solution for when I host locally.

Database context not allowed

We have a cluster of 3 servers with a load balancer in front (Cloudflare). Things worked well when we had 2 servers (A and B) in the cluster, but after we added a third server (C) we noticed a few odd things.
One of them is quite important, and I do not understand how it happens at all.
Our web application makes AJAX requests to itself in order to get some JSON data back, and if a request hits the new server (C), the response looks like this:
{
    code: 404,
    text: "Not Found",
    message: "Database context not allowed."
}
Our application does not throw such an error, so I searched Google a bit and noticed that it's mentioned in the OpenNTF XPagesExtensionLibrary.
However, we do not use XPages at all, so I wonder how our AJAX requests could involve that logic.
Any suggestions and tips would be appreciated.
UPDATE
The backend code of my agent is not important (it could also be an empty agent, I checked), because the request never reaches my agent.
The AJAX call is triggered by jQuery:
let url = "domain.tld/api/key";
let params = {"a": 1};
$.post(url, params, function (data) {
    // some code
}, "json").always(function () {
    // some code
});
The URL starts with /api/key, and I suspect that is the issue, because all the other AJAX calls whose endpoints do not start with /api/ work well.
Thanks.
Figured that out with help from the comments (which you can see under my original post).
Apparently there is a DAS servlet that handles all requests starting with /api/*, and it runs if the XPages engine is loaded.
In my case 2 of the 3 servers have XPages shut down, so the issue happened only on the one server.
The solution would be:
Shut down XPages (or find a way to shut down DAS).
Alternatively, change the URL from /api/... to something else; this is what we will do (see the sketch below).
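For example, the call above would become something like this (the new path name is just a placeholder; anything that does not start with /api/ avoids the DAS servlet):
let url = "domain.tld/rest/key";  // was domain.tld/api/key
let params = {"a": 1};
$.post(url, params, function (data) {
    // some code
}, "json");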

Why am I getting an error when making an HTTP request to one Docker container running a Sanic application, but not the other?

I am trying to develop a distributed system using Docker containers and the Sanic framework for Python. I am starting out by creating a single "view", or network of redundant servers, that share the same data store. Any one of them should be accessible at any time by the client, and they should all back up each other's data storage. However, I am running into what is (probably) a very simple problem, but I don't know how to fix it.
When I spin up one server, it works fine. I can perform delete operations, put operations, etc, and everything works.
(Screenshot: the way things should work when requests are made, i.e. the desired functionality.)
However, once I spin up the second server (so two servers are running at once), I get an error when trying to perform "view" operations (i.e. deleting a host address from the view or putting a host address into the view) on it. It doesn't matter which server I spin up first; the first one still responds correctly, but the second one to be spun up always throws the error.
(Screenshot: using curl to make a DELETE request to the second server, with the error being thrown. This is the problem.)
I am new to all of this, including Docker (which I really like, though I find the documentation a bit confusing), so any help would be greatly appreciated. Thanks so much in advance.
Below is the code for my "view ops" function. The response.json() calls seem to be causing the problem (although I don't know why, and I suspect the problem is deeper than that, because, as I said, the function works fine as long as it's called on the first server to be spun up). Essentially, I'm using a single function to handle all PUT, DELETE, and GET requests: PUT adds a server address to the "view" (the list of redundant servers), GET returns the list of active server addresses to the client, and DELETE removes a server address from the list.
# assumes elsewhere in the module: app = Sanic(...), plus the module-level
# lists viewList and viewListCopy
from sanic import response

@app.route('/key-value-store-view', methods=["GET", "PUT", "DELETE"])
async def index(request):
    if request.method == "GET":
        returnString = ", ".join(viewList)
        return response.json({"message": "View retrieved successfully", "view": returnString})
    if request.method == "DELETE":
        addrToBeDeletedFromView = request.json['socket-address']
        print("This is from DELETE: " + str(addrToBeDeletedFromView))
        try:
            viewList.remove(addrToBeDeletedFromView)
        except ValueError:  # a bare except would also swallow unrelated errors
            return response.json({"error": "Socket address does not exist in the view", "message": "Error in DELETE"}, status=404)
        return response.json({"message": "Replica deleted successfully from the view"})
    if request.method == "PUT":
        print("PUT has been hit.")
        addrToBeAppendedToView = request.json['socket-address']
        if addrToBeAppendedToView in viewList:
            return response.json({"error": "Socket address already exists in the view", "message": "Error in PUT"}, status=404)
        elif addrToBeAppendedToView in viewListCopy:
            viewList.append(addrToBeAppendedToView)
            return response.json({"message": "Replica added successfully to the view"}, status=201)
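As a side note, this is roughly how I exercise the endpoint from the host while debugging (a sketch; the port and the socket address are placeholders for my Docker setup):
import requests  # pip install requests

# hypothetical host port mapping for one of the containers
base = "http://localhost:8085/key-value-store-view"

r = requests.put(base, json={"socket-address": "10.10.0.3:8085"})
print(r.status_code, r.json())

r = requests.get(base)
print(r.status_code, r.json())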

Service Worker Causing a Blank Page After Refresh in React Application

I have a Node application deployed to Azure, with a test branch deployed to a staging instance and the master branch pointed at the prod deployment. The application works fine locally in all configurations, but production and staging have an issue: if the page is already cached it will not load, it appears blank after a refresh, and it only works properly after a cache reset.
Whenever I refresh the page in production, it is just blank. The service worker is running (I can see it in Chrome's serviceworker-internals page), but the page just never loads. The generated file references are correct. You can see an example of what is happening at the Live Site, and the testing site, which fails in the same way with the exact same code deployed, at the Test Site.
The ServiceWorker implementation is entirely out of the box from create-react-app. I've spent several hours trying to track this bug down across a variety of GitHub issues under the react-boilerplate and create-react-app repos, and none of them really get anywhere beyond restricting page caching, which I tried to no avail with:
Index.html
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate"/>
<meta http-equiv="Pragma" content="no-cache"/>
<meta http-equiv="Expires" content="0"/>
You can find any code you have questions about at the repo that hosts all of this.
I'm just getting my feet wet with React/Node, so I'm at a wall and don't really see how to get around this without completely ripping out the ServiceWorker registration.
EDIT: I completely removed the ServiceWorker code from the index.js file and the site now reloads without any issues. Is there a step I need to complete to get the ServiceWorker to properly reload the page from cache, or something?
In case someone else finds this question like I did: the answer was to disable the browser cache for the service worker file (service-worker.js in my case) and index.html, and cache everything else forever.
It appears that when a change is deployed, the old version of the app continues to run, and on startup it then detects that there is a new version. If, like me, you don't cache the static resources, then when it tries to fetch the old version it gets a 404, resulting in a blank page. If you refresh a second time, you get the new version of the app.
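As a sketch of that caching policy for an Express server (Express and the 'build' directory are assumptions here; adapt it to whatever serves your static files):
const express = require('express');
const app = express();

app.use(express.static('build', {
    setHeaders(res, filePath) {
        // never cache the two files that bootstrap an update...
        if (filePath.endsWith('service-worker.js') || filePath.endsWith('index.html')) {
            res.setHeader('Cache-Control', 'no-cache');
        } else {
            // ...and cache the hashed static assets forever
            res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
        }
    }
}));

app.listen(3000);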
I added some code to mine to automatically reload after 5 seconds by modifying registerServiceWorker.js:
function registerValidSW(swUrl) {
    console.debug('Registering serviceWorker URL [' + swUrl + ']');
    navigator.serviceWorker
        .register(swUrl)
        .then(registration => {
            registration.onupdatefound = () => {
                const installingWorker = registration.installing;
                installingWorker.onstatechange = () => {
                    if (installingWorker.state === 'installed') {
                        if (navigator.serviceWorker.controller) {
                            // At this point, the old content will have been purged and
                            // the fresh content will have been added to the cache.
                            // It's the perfect time to display a "New content is
                            // available; please refresh." message in your web app.
                            // (reportInfo/reportError are my own logging helpers.)
                            reportInfo('App has been updated. This page will refresh in 5 seconds.');
                            // setTimeout rather than setInterval: we only want one reload
                            setTimeout(() => window.location.reload(), 5000);
                        } else {
                            // At this point, everything has been precached.
                            // It's the perfect time to display a
                            // "Content is cached for offline use." message.
                            reportInfo('App is cached for offline use.');
                        }
                    }
                };
            };
        })
        .catch(error => {
            reportError('Error during service worker registration:', error);
        });
}
For anyone having the same issue as me using React and react-router-dom for a single-page application: I got only a blank page in my installed application after calling window.location.reload() (on an Android OnePlus), because my server was not redirecting from my route '/user' to the home '/' or '/index.html'.
If you have a single-page application, please see this answer for different ways to fix it: https://stackoverflow.com/a/36623117/17434198
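With an Express server, for instance, the fallback can look like this (a sketch; the 'build' directory comes from create-react-app defaults):
const path = require('path');

// after the static middleware: serve index.html for any unmatched route,
// so deep links like /user load the app instead of returning a 404
app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, 'build', 'index.html'));
});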

Remove Etag Header in express

I have searched a lot and still couldn't find a solution. I am using Node.js with Express, which sets the ETag header by default. I tried all of the solutions I found online and it is still set. Examples:
res.set('etag', false);
res.removeHeader('ETag');
app.disable('etag');
app.use(express.static(__dirname + '/public'), { etag: false });
And still it is set. Is there something I am missing here? I am not really that experienced in Node or Express.
My question is, obviously, how to disable this header. I have a page with a lot of images (a lot), all of them static, and the ETag is causing a lot of blocking: the browser keeps sending validation requests instead of relying on Cache-Control, which hugely increases page load time.
Thanks for the help
Refer to: http://expressjs.com/4x/api.html#app.set
You can do it in ExpressJS 4 using:
app.set('etag', false);
Setting it to false disables the ETag header altogether; the default is true.
Possible option values are:
Boolean (true,false)
String ('strong', 'weak')
Function
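One detail worth pointing out about the attempts in the question: the options object was passed to app.use() instead of express.static(). A sketch of the combination that fits the static-images case (the maxAge value is just an example):
app.set('etag', false);  // disable ETag generation for app-level responses

app.use(express.static(__dirname + '/public', {
    etag: false,    // the options belong to express.static, not app.use
    maxAge: '30d'   // let the browser rely on Cache-Control instead of revalidating
}));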
This is not a full answer but I am adding it just in case anyone faces the same issue.
It turned out that what I was missing is that the browser forces a cache validity check on a first load (including a page refresh), and that's why I kept seeing the ETag header.
To properly test whether the header is removed, you have to navigate to the URL from within the site (e.g. via a link) rather than going to it directly.
I hope this helps someone, because it took me a while to figure this out.
