Manifest v3 background service worker seems to go stale. How to verify this is the case?

My goal is to make sure I have a stable process for handling extension updates. I'm about to release a new extension and have noticed that the background service worker goes stale. Here are the details:
My method of testing this lifecycle is to generate a new build artifact and drag-and-drop it onto the chrome://extensions page. I hope this is the right way to test the update lifecycle, but if not, I would be happy to hear the alternative.
Once I drag and drop, I open the background console to check for the correct version, but it seems to lag behind. The only way to remedy this is to remove the extension completely and load it back in, which gives me the impression a regular user would experience the same issue.
The screenshot below describes the situation:
version 0.3.33 is currently installed
drag and drop new build version 0.3.34
after a successful drag and drop, notice the version is lagging behind, showing 0.3.31
But the chrome.runtime.onInstalled listener accurately outputs the previous version.
Furthermore, I took a look at the unpacked extension directory, and the compiled code matches the expected version.
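For reference, here is roughly how the versions can be checked from the worker (a minimal sketch; the log wording is illustrative, the APIs are the standard chrome.runtime ones):

    // background service worker: log the update transition and the running version
    chrome.runtime.onInstalled.addListener((details) => {
      // details.previousVersion is only populated when details.reason === 'update'
      console.log('onInstalled:', details.reason, 'previous version:', details.previousVersion);
      console.log('running manifest version:', chrome.runtime.getManifest().version);
    });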
What I've done so far to troubleshoot is to remove the extension completely. It does successfully update after a series of manifest version bumps, but then it goes stale again. To be clear, all I do is update the version property of the manifest.json file, and that seems to trigger the issue.
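For context, the only thing changing between builds is the version field of an otherwise ordinary MV3 manifest; a stripped-down sketch (name and file path are illustrative):

    {
      "manifest_version": 3,
      "name": "My Extension",
      "version": "0.3.34",
      "background": { "service_worker": "background.js" }
    }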
Any help would be greatly appreciated. In the meantime, I'll brush up on the architecture for service workers as it relates to chrome manifest v3. Thanks!
EDIT - Sunday, August 22 2021:
The latest development:
First of all, thanks to woxxom for the additional debugging details.
I've created an example project that demonstrates the problem with reproducible steps. Hopefully this helps identify the problem, but it does seem to point to a Chrome bug, so I'll also report it on crbug.com.
Github project: https://github.com/miguelespinoza/-v3-service-worker-update
Ultimately, what I've identified is that loading an iframe causes the service worker lifecycle to go "stale".
Looking at chrome://serviceworker-internals/?devtools, I've noticed that the extension has two workers: one "Active worker" and one "Waiting worker". That seems odd, because I'd imagine only one should exist, the one in the background. This could mean nothing, but I wanted to share it as well.
Important Note: service worker background events stop working. In my case, chrome.commands.onCommand and chrome.action.onClicked stop firing.
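For completeness, a minimal sketch of the listeners that stop firing; in line with the MV3 guidance they are registered synchronously at the top level of the worker (handler bodies here are placeholders):

    // registered at the top level so Chrome can wake the worker for these events
    chrome.commands.onCommand.addListener((command) => {
      console.log('command received:', command);
    });
    chrome.action.onClicked.addListener((tab) => {
      console.log('action clicked on tab', tab.id);
    });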

Related

DNN 9.01.01 Export/Import Stuck on Submitted Status

We are transitioning to the DNN 9.01.01 build, but it seems that the import/export feature is not working properly. I submitted an import, but it has been sitting in Submitted status for the last 8 hours.
Is this a known issue, or is there configuration on the server that is preventing the import/export from working?
Our instance is installed on Azure
Thanks
This thread hasn't been touched in a long time, but I dug around and found the problem. I fixed it by directly editing the DNN database. I'm on version 9.4, although I'm sure this would work with any version, as this issue is apparently caused by some wonky code in the Azure App Service deployment packages.
To resolve, I just had to manually edit the dbo.Schedule table. I use Azure Data Studio because I'm on a Mac, but SSMS or any other manager will work as well. I'm sure you can even use the DNN built-in editor, although I'm not very familiar with it.
While digging through the dependencies, I noticed that all the working jobs had a NULL value in the "Server" field, whereas the non-operational Export/Import job had the Azure server name written to it. I manually changed the value of this field to NULL, and the Site Import job that had been perpetually spinning started immediately.
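For reference, the edit boils down to something like the following T-SQL. The column name ("Server", per the description above) and the job's ScheduleID are assumptions to confirm with the SELECT before running the UPDATE:

    -- list the scheduled jobs and their server bindings first
    SELECT ScheduleID, TypeFullName, [Server] FROM dbo.Schedule;

    -- clear the binding on the stuck Export/Import job (replace 42 with its ScheduleID)
    UPDATE dbo.Schedule SET [Server] = NULL WHERE ScheduleID = 42;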
Also, for posterity: make sure you don't have 15 different import jobs queued up before you do this, because they will ALL begin processing once you commit the new value to the DB. If it took you a few attempts to figure out why they were spinning, you will probably want to go to the scheduler and delete anything you don't want to run prior to the DB edit.
Hope this helps save someone else some time. Cheers!
We contacted support as well, and it looks like it was an issue with installing DNN as an Azure web app.
We had to delete all the unused servers, set the task to run on the currently active server, and start the import/export feature manually on the scheduler tab.
I had this issue. When checking the other tasks scheduled for execution, I noticed the server field was empty, while on the import/export task there were comma-separated entries. When I cleared the import/export task's field, the task ran correctly.
I believe they left out code for this in DNN 9. I tried using it for a customer and it was useless.
I inquired and got a response that said it was an oversight.
To add to the possible issues that can cause this: we had renamed our server, and the scheduled task still had the original name of the server. Once we changed the name to the new one under the task, it started running as scheduled.
HTH
Dave

IndexNotReadyException when building the application

I'm writing an Android application. It has no errors when I run it on an emulator, but when I try to build it into an APK file to test on a device, I get the following error.
com.intellij.openapi.project.IndexNotReadyException: Please change caller according to com.intellij.openapi.project.IndexNotReadyException documentation
at com.intellij.util.indexing.FileBasedIndexImpl.handleDumbMode(FileBasedIndexImpl.java:853)
at com.intellij.util.indexing.FileBasedIndexImpl.ensureUpToDate(FileBasedIndexImpl.java:802)
at com.intellij.util.indexing.FileBasedIndexImpl.ensureUpToDate(FileBasedIndexImpl.java:786)
at com.intellij.psi.stubs.StubIndexImpl.processElements(StubIndexImpl.java:250)
at com.intellij.psi.stubs.StubIndexImpl.processElements(StubIndexImpl.java:238).......
I've seen one similar case raised here which mentions to "Wait till indexing is complete." But I am not sure what I am doing to even interfere with any indexing.
I've tried on different computers; same problem. The only thing I did differently before I got this error was to add a dependency in the app-level Gradle file for a RingButton and add that RingButton class to the project to modify it to what I wanted.
As I said, it works fine in the emulator.
Just to test whether that was indeed the issue, I deleted the class and removed the dependency, but the problem is not resolved.
I think this can be solved by creating a new project and moving the files into the new one. Maybe the index is corrupt. But before that, can you clear the cache? In Android Studio, if you're prepared to lose some local history, go to File > Invalidate Caches / Restart and then choose Invalidate and Restart.
I was using Dropbox to constantly back up my project every now and then, so I would click my project and send it to Dropbox. I was also instantly clicking Run to launch the app on my emulator at the same time. During this period, when I moused over my Dropbox icon, there was an indication that it was "Indexing".
Based on the previous answer, "Wait till indexing is complete", I can only assume this Dropbox indexing interfered somehow.
Not exactly elegant, but I copied each and every file in this project into a new project, and everything is working fine now; I'm able to install the APK file. I didn't change anything, so I believe there is some form of corruption in the old project.

How can I get more information about a failed CodeDeploy deployment?

I've just started working with AWS CodeDeploy.
My first few deployments have failed, which is fine. With new tools comes new learning, and I expected to have to iterate a bit initially. Each of my first few deployments has failed in a useful way.
In the AWS Console I see something like this:
Here I can see some useful details. I can click the View Events link to see even more details, and from there I can view logs on the target EC2 instance.
In contrast, my most recent failed deployment shows this:
As you can see, this is missing much of the detail from the previous screenshot. The missing View Events link is particularly unfortunate. It might be significant that this deployment took longer to fail, but not long enough for one of my hook scripts to have reached its timeout.
Re-deploying resulted in the same thing.
How should I go about troubleshooting this?
After trying this one more time while keeping an eye on /var/log/aws/codedeploy-agent/codedeploy-agent.log, I realized that no new log activity was being generated.
Restarting the agent with sudo /etc/init.d/codedeploy-agent restart and deploying again generated the output I expected.
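In other words, something along these lines (the log path and restart command are the ones above; tail/status are just the obvious way to watch the agent):

    # watch the agent log while re-deploying
    sudo tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log

    # if the log stays silent, restart the agent and check it came back up
    sudo /etc/init.d/codedeploy-agent restart
    sudo /etc/init.d/codedeploy-agent status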

SiteMapPath empty after upgrade to MVCSiteMap 4

I just attempted to update the site map provider from 3.3.6.0 to 4.0.14. I followed the instructions on the wiki; however, my SiteMapPath now renders empty. I made no changes to the mvc.sitemap file other than updating the schema to 4.0. When I debug into the SiteMapPathHelperModel, I find that the model has no nodes defined. I am using the internal DI container (I would like to get this working before switching over to the application container).
When I check the sitemap.xml file, it is well populated, which makes me think that the mvc.sitemap is being read.
I'm out of ideas on this one and would be happy to provide any additional information which may be useful. I'm not even sure where I can hook into debug this problem. Literally the only thing I changed between a working 3.3.6.0 and a not working 4.0.14 was what was prescribed on the wiki.
OK, since your /sitemap.xml endpoint appears to be working, you are correct that the sitemap is being populated and loaded correctly.
There are a couple of things I know of that can cause this to happen:
If you are using Dynamic Node Providers, they must be added to a node that is not otherwise part of the sitemap. See my question here - I am trying to figure out why this is the case as well.
Your routes don't match your nodes - please read Routing Basics and/or post your routes and Mvc.sitemap XML.
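As a minimal illustration of this second point, a node only produces breadcrumbs when a matching controller/action route actually exists (names here are hypothetical):

    <!-- Mvc.sitemap: each node must resolve to an existing route -->
    <mvcSiteMapNode title="Home" controller="Home" action="Index">
      <mvcSiteMapNode title="Products" controller="Products" action="Index" />
    </mvcSiteMapNode>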
If you check the above and everything appears to be correct, please make a small demo project showing a sample of your configuration and open an issue on GitHub, as it is highly likely your specific configuration has something to do with the problem.
BTW - you can debug by cloning the current repo on your system (or downloading the solution as a zip), enabling NuGet package restore on your solution (right-click the solution > Enable NuGet Package Restore), adding the MvcSiteMapProvider project to your solution, and then, in your MVC project, removing the reference to the MvcSiteMapProvider package and adding a reference to the newly added MvcSiteMapProvider project. Then you can add breakpoints and step through the code. I suggest making a backup of your solution (or ensuring you can roll back another way) before doing this, and reverting back to your current state when done.
I have documented the whole procedure here: http://www.shiningtreasures.com/post/2013/08/21/debugging-an-mvcsitemapprovider-configuration

SharePoint 2010 GAC deployment doesn't update

The following issue just crept up on me. The steps mentioned below had worked just fine until about 2 days ago.
When I deploy an update to a solution (of web parts) to a SharePoint 2010 server, I don't see the update. The solution does get installed, but from what I can tell the installed web parts are over a month old (nothing new is installed).
I do the following steps through PowerShell (a sketch of the equivalent cmdlets follows the list):
retract the solution from the web app
remove the solution
add the solution
install the solution to the web app
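Here is what those four steps look like with the standard SharePoint cmdlets; the .wsp name, path, and web application URL are placeholders:

    # placeholders: adjust the .wsp name, path, and web application URL
    Uninstall-SPSolution -Identity "MyWebParts.wsp" -WebApplication "http://sharepoint" -Confirm:$false
    Remove-SPSolution -Identity "MyWebParts.wsp" -Confirm:$false
    Add-SPSolution -LiteralPath "C:\Deploy\MyWebParts.wsp"
    Install-SPSolution -Identity "MyWebParts.wsp" -WebApplication "http://sharepoint" -GACDeployment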
I have tried restarting the Web App, restarting IIS and also restarting the server. Nothing seems to work.
I notice that after I remove the solution, it does get removed from the GAC. After I add/install it, the solution does reappear in the GAC.
Am I missing something? Am I overlooking a step that I should be doing? Something to try?
I never deactivated/reactivated the Feature.
After following the same steps I mentioned in my question, I just deactivated, then reactivated, the Feature, and everything started working fine.
This is an easy step I can start to implement with my solution updates. However, why did I never have to do this step before?
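For reference, the deactivate/reactivate cycle is just two cmdlets (the feature name and site URL here are placeholders):

    # placeholders: adjust the feature name and the site/web URL it is scoped to
    Disable-SPFeature -Identity "MyWebParts_Feature" -Url "http://sharepoint/sites/mysite" -Confirm:$false
    Enable-SPFeature -Identity "MyWebParts_Feature" -Url "http://sharepoint/sites/mysite"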
In general, you should check your ULS log to see which version of your solution is running. If you see the old one, then you can be sure that your activated site feature is still bound to the old version. In that case you do indeed have to deactivate the site feature to lose that tie, and then activate it again to bind to the new one (it appears activation always ties the site feature to the newest version of the solution).
Maybe you did not have to do this earlier because you did not change the version number of your solution, so it appeared as the same version in the GAC on the server. In that case your site feature was already pointing to the correct version of your solution, and therefore you didn't have to reset the feature.
You have probably checked, but just in case: make sure that the PowerShell script is not adding a month-old package.
Is the problem in the web part code or the configuration? The configuration usually unghosts itself sooner or later and refuses to update from the solution - you can update the file in the gallery manually if anything has changed there. For most updates there won't be any changes because existing web parts won't get updates applied anyway - they will use new code but old configuration.
If the problem is the code itself, does the assembly appear to the system to be unchanged? All the hardcoded full name references in SharePoint config files mean that usually you are deploying a new assembly but with the same version numbers. This can mean that the system doesn't bother making the update. I have found it very useful to update AssemblyFileVersion (which does not affect binding) on every build and have a page in _layouts that displays the file versions of all the loaded assemblies so I know exactly what is running.
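A small sketch of that convention (version numbers are illustrative): AssemblyVersion stays fixed so the hardcoded full-name references keep resolving, while AssemblyFileVersion is bumped on every build.

    // AssemblyInfo.cs
    using System.Reflection;

    // stays fixed so existing SharePoint full-name references keep binding
    [assembly: AssemblyVersion("1.0.0.0")]

    // bumped on every build; does not affect binding, but shows which build is running
    [assembly: AssemblyFileVersion("1.0.0.127")]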
