I believe I set up my service worker incorrectly from the get-go, and now I'm having trouble resetting everything. My problem is that the only way for users to fix the issue is to manually unregister the service worker and clear the cached files, which isn't very helpful. I've added a script in webpack to unregister the service worker and delete the cached storage files; however, since the bundle file itself is cached by the user's browser, users never even see the changes I make to that file, so they are stuck with old code.
I've tried everything I could think of and am starting to exhaust my options, but it seems like there isn't much I can do for the user agent, since it keeps receiving old files and never sees these new changes.
Also, I added cache busting, which I thought would fix this issue, but it did not.
Any suggestions?
Unregistering your service worker should be unnecessary. You should be able to just deploy a new SW and your users will start getting that.
See this answer about SW updates. Basically, any time a user visits your site, the SW is fetched fresh if the copy the browser has is more than 24 hours old.
In the new version of your SW, change the cache name prefix or suffix and Workbox will switch to a fresh cache and ignore all of the existing cached assets.
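If you are using Workbox's precaching, the bump can be as small as this sketch (the CDN import, version number and 'v2' suffix are illustrative, not taken from the question):

importScripts('https://storage.googleapis.com/workbox-cdn/releases/5.1.2/workbox-sw.js');

// Changing the suffix makes Workbox precache into a brand-new cache,
// so everything in the old caches is simply ignored from now on.
workbox.core.setCacheNameDetails({ suffix: 'v2' });

// self.__WB_MANIFEST is the manifest injection point used by Workbox 5+ build tooling;
// older versions inject into precacheAndRoute([]) instead.
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);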
You'll probably also want to clean up the old caches with something like this:
const suffix = 'v1';

self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys()
      // keep only the caches that do NOT belong to the current version...
      .then(keys => keys.filter(key => !key.endsWith(suffix)))
      // ...and delete each of those stale caches
      .then(keys => Promise.all(keys.map(key => caches.delete(key))))
  );
});
I have a Blazor app that I'm deploying to Azure for some alpha testing. I've managed to do this and I can run the app from the website just fine.
The problem comes when I make changes to the client and server projects and republish them. Whichever browser is running the client keeps running whatever is already in the browser cache until the browser history is cleared. That means that until the history is cleared the app appears broken, because the old client's requests don't match the new server API, not to mention that my client-side changes don't get tested.
How can I force a reload of the client when I publish my changes? Do I have to tell the browser not to cache my app (not sure how on blazor) and take the performance hit until my app stabilizes? Or is there a way to force a client reload after the first API call using some middleware or something?
Or am I missing something?
Edit: It may be relevant that I used the PWA template provided in Blazor WebAssembly 3.2.0 Preview 2. I'm still running the app from a browser, but it seems possible that enabling the PWA option changed the behavior of the app even when running it as a regular website.
Since your app is a PWA, it registers a service worker script via the navigator.serviceWorker object. That script (my.js in this example) can contain a constant such as const CACHE_VERSION = 1.0; updating this value should force the client to download the latest files. See Jeremy Likness' blog post for more info.
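A minimal sketch of that idea (my.js is just this answer's placeholder name, and the asset list is illustrative):

// index.html: <script>navigator.serviceWorker.register('my.js');</script>

// my.js
const CACHE_VERSION = 1.0;                           // bump this on every deployment
const CACHE_NAME = `offline-cache-${CACHE_VERSION}`;

self.addEventListener('install', event => {
  // Any byte change in this file (such as the version bump) makes the browser
  // treat it as a new service worker and re-run the install step.
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(['/', '/index.html']))
  );
});

self.addEventListener('activate', event => {
  // Drop caches left over from previous versions.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(key => key !== CACHE_NAME).map(key => caches.delete(key)))
    )
  );
});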
If you are working on a .NET Core 3.x PWA app, you can simply add a comment in service-worker.published.js; when the browser compares its cached service worker byte-for-byte with the updated one, it will detect the change and load the new one.
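For example, something as small as a version comment at the top of the published worker is enough to make that byte-for-byte comparison fail and trigger an update (the date is arbitrary):

// deployment 2021-06-01 - changing this line forces the browser to install the new worker
self.importScripts('./service-worker-assets.js');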
If anyone is looking for a solution for Azure Pipelines and Azure Static Web Apps deployment, here is what worked for me:
Added the following line at the top of the service-worker.published.js file:
const CACHE_VERSION = '{#CACHE_VERSION#}'
Added a Bash step to the pipeline *.yaml file; just change your app location accordingly:
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- checkout: self
  submodules: true

- bash: 'sed -i ''s/{#CACHE_VERSION#}/$(Build.BuildId)/'' MathApp/wwwroot/service-worker.published.js'

- task: AzureStaticWebApp@0
  inputs:
    app_location: 'MathApp'
    output_location: 'wwwroot'
    azure_static_web_apps_api_token: $(deployment_token)
This is an extension to the above solution if you are working with GitHub Actions.
Add the following step in .github/workflows/whatever-file-name-you-have.yml:
- name: UpdateVersion
  uses: datamonsters/replace-action@v2
  id: sub
  with:
    files: 'Your.App/wwwroot/service-worker.published.js'
    replacements: '%%CACHE_VERSION%%=${{ github.run_id }}'
I have used ${{ github.run_id }} to get a unique-ish value, but you could opt for any other random string, either from https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context or bring your own version.
And the main part of the solution lies in editing service-worker.published.js:
const CACHE_VERSION = '%%CACHE_VERSION%%'
const cacheName = `${cacheNamePrefix}${CACHE_VERSION}`;
This answer is intended to complement Ryan Hill's answer:
Blazor automatically updates to the newer version (at least in .NET 5, possibly earlier too). This happens once the application has been closed in ALL browser tabs, including the installed app (which is just another browser tab), and is then reopened.
The following article helped me a lot:
https://learn.microsoft.com/en-us/aspnet/core/blazor/progressive-web-app?view=aspnetcore-5.0&tabs=visual-studio#installation-and-app-manifest-1
Also helpful:
https://www.eugenechiang.com/2021/05/22/forcing-reload-of-blazor-client-after-publishing-changes/
I finally found an answer that helped me. If you do not need PWA support, you can remove it, and Chrome (and probably other browsers) should stop clinging to the old client. Even after deleting the cache and fully reloading, my Chrome still loaded the old version, and for my app I don't need the caching feature or the ability to install it as an app.
I found the answer here:
Blazor Chrome caching issues
How to remove PWA:
Delete the following files from wwwroot:
/wwwroot/manifest.json
/wwwroot/service-worker.js
/wwwroot/service-worker.published.js
Delete this line from /wwwroot/index.html
<script>navigator.serviceWorker.register('service-worker.js');</script>
Delete these lines from your .csproj file:
<ServiceWorkerAssetsManifest>service-worker-assets.js</ServiceWorkerAssetsManifest>
<ServiceWorker Include="wwwroot\service-worker.js" PublishedContent="wwwroot\service-worker.published.js" />
Also, to actually get rid of the app in Chrome (because I had already loaded it this way), I installed it as an app, then uninstalled that app and removed it from Chrome. That finally got me out of the loop of Chrome keeping the app and refusing to fully update it from the browser cache / memory / wherever it kept the app.
I'm working on an app that, instead of a database, uses the file system in the server's root directory. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored under the following structure: \Data\Notes\MyUsername\Title.txt
When I'm testing this on localhost through IIS Express everything works fine and I can easily go step by step there.
However, once I publish the app to Azure, the folder structure is still there (I made a test controller that uses Directory.GetFiles() and .GetDirectories() to simulate folder browsing, so I'm sure the files are there), but the file simply doesn't get loaded.
The loading method that's being called:
public T Load<T>(string filePath) where T : new()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader(filePath);
        var RawDB = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(RawDB);
    }
    catch
    {
        return default(T);
    }
    finally
    {
        if (reader != null)
            reader.Dispose();
    }
}
Since I can't normally debug the app on Azure, I tried to dump as much info as I could through ViewData, and even there everything looks okay and the paths match, but the deserialized object is still null. This only happens when opening an existing note WITHOUT creating a new one first (more on that later).
Additionally, like I said, new notes do get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure, this works normally and if I were to delete one manually it'd be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all views display only that new note, regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // Works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // Works fine, reads the right directory on both local and Azure
};
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If a new note is not created but an existing note is opened, the correct note (based on the URL param) gets loaded on localhost; on Azure it breaks and loads a default Note object (not null, just the default constructor data, since that's required by JsonConvert).
If a new note is created, on localhost you'll see it and still be able to open all other notes; on Azure you will see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure requests being handled differently, so maybe the controller pushes the view before the model is fully initialized, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't even exist on the server is even more absurd, and I have no explanation for that.
Additionally this issue is not linked with a session. I logged in through my phone and it showed the fake note there as well right away.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, a simple lowest-tier App Service plus a 5 DTU SQL server, and 99% of the rest is locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.
We have an Azure WebApp with WEBSITE_LOCAL_CACHE_OPTION = Always set in order to reduce static file latency.
We have made a file change there and would like to force the instances to reload the cache.
How do I force a file cache refresh?
The only way to refresh the cache is to restart the website.
Azure Web apps WEBSITE_LOCAL_CACHE_OPTION=Always requires stop and start of the site
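If you want to script that restart (for example from a release pipeline), something like this Azure CLI call should do it; the app and resource group names are placeholders:

az webapp restart --name <app-name> --resource-group <resource-group>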
I know it's an old subject, but maybe it will be helpful for someone:
If you are using C#, you can add a button inside your application to remove the cache:
IDictionaryEnumerator enumerator = HttpContext.Current.Cache.GetEnumerator();
while (enumerator.MoveNext())
{
    HttpContext.Current.Cache.Remove((string)enumerator.Key);
}
From this question, Sails js using models outside web server, I learned how to run a command from the terminal to update records. However, when I do this the changes don't show up until I restart the server. I'm using the sails-disk adapter and v0.9.
According to the source code, an application using the sails-disk adapter loads the data from file only once, when the corresponding Waterline collection is created. After that, all updates and destroys happen in memory; the data is then dumped to the file, but never re-read.
That said, what's happening in your case is that once your server is running, it doesn't matter that you change the DB file (.tmp/disk.db) from your CLI instance, because the lifted Sails server won't know about the changes until it's restarted.
Long story short, the solution is simple: use another adapter. I would suggest checking out sails-mongo or sails-redis (though the latter is still in development), since both Mongo and Redis have data auto-expiry functionality (http://docs.mongodb.org/manual/tutorial/expire-data/, http://redis.io/commands/expire). Besides, sails-disk is not production-suitable anyway, so sooner or later you would need something else.
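For completeness, switching adapters in Sails 0.9 happens in config/adapters.js; this is just a sketch, with placeholder host and database values:

module.exports.adapters = {
  // use sails-mongo instead of sails-disk as the default adapter
  'default': 'mongo',

  mongo: {
    module: 'sails-mongo',
    host: 'localhost',   // placeholder
    port: 27017,
    database: 'mydb'     // placeholder
  }
};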
One way to delete "expired records" over time is to roll your own cron-like job in /config/bootstrap.js. In pseudo code it would look something like this:
module.exports.bootstrap = function (cb) {
  setInterval(function () { /* <insert model delete code here> */ }, 300000);
  cb();
};
The downside to this approach is that if it throws, it will stop the server. You might also take a look at kue.
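As a rough sketch of what that delete code might look like with a Waterline model (the model name Record and the 24-hour cutoff are purely illustrative):

module.exports.bootstrap = function (cb) {
  // Every 5 minutes, destroy records older than 24 hours.
  setInterval(function () {
    var cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000);
    Record.destroy({ createdAt: { '<': cutoff } }, function (err) {
      // Log instead of throwing so an error doesn't take the server down.
      if (err) sails.log.error(err);
    });
  }, 300000);
  cb();
};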
Is there a service that creates basically a one-time download of a file, preferably something I can use from NodeJS?
I've done some research on FilePicker and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with NodeJS, but I'm using Meteor at the same time, so many Node things would probably conflict.
You could build it with Meteor, using meteor-router with Meteorite and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// create a link
downloads.insert({ url: "/mydownload.zip", downloaded: false });

Meteor.Router.add('/file/:id', 'GET', function (id) {
  var download = downloads.findOne(id);
  if (download) {
    if (download.downloaded) {
      this.response.send("You've already downloaded me");
    } else {
      // mark the record as used so the link only works once
      downloads.update(download._id, { $set: { downloaded: true } });
      // I guess you could just redirect or stream the file for an extra layer of surety
      this.response.redirect(download.url);
    }
  }
});
On the client you can use /file/{{_id}}, with the _id of the file from downloads, as the link the person follows.
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded) and respond accordingly. The closest you could get with Filepicker.io would be using its security policies to restrict downloading the file to a specific time interval.
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require( 'fs' );
The fs variable should be declared on the server only. The fs package is used by Meteor itself and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess. It is also possible to delete the file after a certain amount of time by using Meteor.setTimeout, as in the sketch below.
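A rough sketch of that idea (the /tmp path, .zip extension and 10-minute lifetime are just placeholders):

var filePath = '/tmp/' + Meteor.uuid() + '.zip';   // unique, hard-to-guess file name
fs.writeFileSync(filePath, fileContents);          // fileContents comes from wherever the download originates

// remove the file after 10 minutes so the link stops working
Meteor.setTimeout(function () {
  if (fs.existsSync(filePath)) fs.unlinkSync(filePath);
}, 10 * 60 * 1000);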
The remaining question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: it offers a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
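A minimal sketch of that serve-once-then-delete idea in plain Node (the route shape, /tmp directory and the missing auth and path sanitisation are all simplifications):

const fs = require('fs');
const http = require('http');
const path = require('path');

const TMP_DIR = '/tmp'; // Heroku's ephemeral filesystem is fine for throwaway files

http.createServer((req, res) => {
  // e.g. GET /download/<token>, where <token> is the temporary file's name
  const token = req.url.split('/').pop();
  const filePath = path.join(TMP_DIR, token);

  if (!fs.existsSync(filePath)) {
    res.writeHead(404);
    res.end('Link expired');
    return;
  }

  res.writeHead(200, { 'Content-Disposition': 'attachment' });
  fs.createReadStream(filePath)
    .pipe(res)
    .on('finish', () => fs.unlink(filePath, () => {})); // delete after the single download
}).listen(process.env.PORT || 3000);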
This, in combination with something like $.download(), will make for a seamless experience, which in turn prevents unauthorized downloads.