I am in the process of converting a Chrome extension from manifest version 2 (MV2) to manifest version 3 (MV3). In the MV2 version, the background page script has a prominent role in the extension: at startup, the background script reads a large amount of data from IndexedDB into RAM, and then (during operation) handles queries from content scripts injected into the pages that the user visits. To move to MV3, the functionality provided by the MV2 background script now needs to be performed by an MV3 service worker instead.
Because reading the data from IndexedDB into RAM is slow (relatively speaking), it is not feasible to do this dynamically upon each query from a content script; rather, it must be performed by the service worker only once, and the data then kept around in order to serve the incoming requests from the content scripts. Thus, the service worker should be persistent (and not be terminated by the browser); for this there exists at least one solution: https://stackoverflow.com/a/66618269/4432991
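To make this concrete, here is a minimal sketch of the pattern I have in mind, assuming a keepalive of the kind described in that answer (periodically calling a trivial extension API); loadAllRecords() and lookup() are my own helpers, and the "query" message shape is illustrative:

// service_worker.js (MV3)
let dataPromise = null;  // the data is cached in RAM for the lifetime of this worker

function loadDataOnce() {
  // start the expensive IndexedDB-to-RAM load only once per worker instance
  if (!dataPromise) {
    dataPromise = loadAllRecords();  // helper that reads everything from IndexedDB
  }
  return dataPromise;
}

// keepalive: touch a trivial chrome.* API well inside the 30 s idle timeout
setInterval(() => chrome.runtime.getPlatformInfo(() => {}), 25000);

chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
  if (msg.type === 'query') {
    loadDataOnce().then((data) => sendResponse(lookup(data, msg.key)));
    return true;  // keep the message channel open for the asynchronous response
  }
});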
However, in some cases (I have not been able to pinpoint exactly under what circumstances), when a content script sends a message to the service worker, Chrome starts up a second copy of the service worker even while the first one is still alive, and the two service workers co-exist for some time. Is there some way to avoid this, and ensure that as long as a service worker is still alive (in memory), no activity by content scripts will cause a second copy of the service worker to be created, so that all requests from the content scripts are handled by the already existing service worker?
Thanks for any help!
Related
I am using an IIS web garden for long-running requests, with 15 worker processes.
With, for example, 3 browsers, multiple worker processes are typically used.
With Apache JMeter, all requests use the same worker process.
Is there a way to force the use of multiple worker processes?
This may have at least 2 explanations:
You have some hard-coded ID or session ID in your test plan. Check for their presence and remove them, and add a Cookie Manager to your test
You have a load balancer that works in Source IP mode; in this case you need to either change the policy to Round Robin or add 2 other machines
If you are using 1 thread with X iterations and expecting different workers, then check that:
The Cookie Manager is configured to clear cookies on each iteration.
And the Thread Group is configured with "Same user on each iteration" unchecked.
If the issue persists, then please share your plan and check that you don't have a hard-coded id somewhere in a Header Manager that leads to only 1 worker being used
A well-behaved JMeter script should produce the same network footprint as the real browser does, so if you're observing inconsistencies, most probably your JMeter configuration does not match the requests which are being sent by the real browser.
Make sure that your JMeter test is doing what it is supposed to do by inspecting request/response details using the View Results Tree listener
Use a 3rd-party tool like Wireshark or Fiddler to capture the requests originating from browser/JMeter, detect the differences and amend your JMeter configuration to eliminate them
More information: How to make JMeter behave more like a real browser
In the vast majority of cases, a JMeter script does not work as expected due to missing or improperly implemented correlation of dynamic values
I have a site that makes the standard data-bound calls, but it also has a few CPU-intensive tasks which are run a few times per day, mainly by the admin.
These tasks include grabbing data from the db, running a few different time-consuming algorithms, then re-uploading the data. What would be the best method for making these calls and having them run without blocking the event loop?
I definitely want to keep the calculations on the server so web workers wouldn't work here. Would a child process be enough here? Or should I have a separate thread running in the background handling all /api/admin calls?
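To make the question concrete, this is roughly what I imagine the child-process variant would look like (the heavy-task.js module, the /api/admin/recalculate route, the Express-style app object and the message shape are all made up for illustration):

// server.js - hand the heavy work to a forked child so the main event loop stays free
const { fork } = require('child_process');

app.post('/api/admin/recalculate', (req, res) => {
  const child = fork('./heavy-task.js');                     // hypothetical worker script
  child.once('message', (result) => res.json(result));
  child.once('error', (err) => res.status(500).json({ error: err.message }));
  child.send({ task: 'recalculate' });                       // illustrative message shape
});

// heavy-task.js - grab data from the db, run the algorithms, reupload, then exit
process.on('message', async (msg) => {
  const result = await runAlgorithms(msg);                   // placeholder for the real work
  process.send(result);
  process.exit(0);
});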
The basic answer to this scenario in Node.js land is to use the core cluster module - https://nodejs.org/docs/latest/api/cluster.html
It is an accessible API to:
easily launch worker Node.js instances on the same machine (each instance will have its own event loop)
keep a live communication channel for short messages between instances
This way, any work done in a child instance will not block your master event loop.
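For illustration, a minimal sketch of that pattern (doHeavyWork() and the message shape are placeholders, not part of the cluster API):

const cluster = require('cluster');

if (cluster.isMaster) {                         // cluster.isPrimary in newer Node.js versions
  const worker = cluster.fork();                // separate instance with its own event loop
  worker.on('message', (msg) => {
    console.log('result from worker:', msg);    // short messages over the built-in channel
  });
  worker.send({ task: 'recalculate' });         // illustrative message shape
} else {
  process.on('message', (msg) => {
    const result = doHeavyWork(msg);            // CPU-bound work blocks only this instance
    process.send(result);
  });
}

The master keeps serving ordinary requests while the forked instance grinds through the CPU-heavy work on its own event loop.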
In my current webapp the display can contain multiple editable objects - the data for which is either fetched from the server (and then stored for future use) or picked up from the local IndexedDB object store.
This I have implemented and it works perfectly. However, now I want to go a step further - fetching data that are not available locally at the moment the user wants to work with them is liable to break the user's rhythm of work.
So I am thinking of implementing a lookahead that gets server-side data before the user wants to work with them. The way this would work (a rough sketch of the worker side follows the list below):
When the app is launched I spawn a web worker that watches an entry, call it PreFetch, in an IndexedDB that it shares with the main app.
The user hovers over an editable item bearing, say, the HTML id, abcd1234.
In the app I add this id to the IndexedDB PreFetch key value - which is a comma-separated list of ids.
The web worker periodically picks up the PreFetch CSV list, then resets it, and fetches those data that are not locally available and stores them in the objectstore.
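The sketch I have in mind for the worker side (the database/store names, the /api/item endpoint and the 5-second polling interval are illustrative, and the main app is assumed to have already created the database and its object stores):

// prefetch-worker.js
function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('appdb', 1);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// read the PreFetch CSV list and reset it within a single readwrite transaction
function takePreFetchList(db) {
  return new Promise((resolve, reject) => {
    const store = db.transaction('control', 'readwrite').objectStore('control');
    const get = store.get('PreFetch');
    get.onsuccess = () => { store.put('', 'PreFetch'); resolve(get.result || ''); };
    get.onerror = () => reject(get.error);
  });
}

function getItem(db, id) {
  return new Promise((resolve, reject) => {
    const req = db.transaction('items').objectStore('items').get(id);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

function putItem(db, id, data) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('items', 'readwrite');
    tx.objectStore('items').put(data, id);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

const dbPromise = openDb();

setInterval(async () => {
  const db = await dbPromise;
  const ids = (await takePreFetchList(db)).split(',').filter(Boolean);
  for (const id of ids) {
    if (await getItem(db, id)) continue;                     // already available locally
    const data = await (await fetch('/api/item/' + id)).json();
    await putItem(db, id, data);
  }
}, 5000);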
IndexedDB is nice - no doubts about that. However, it is not clear to me that what I am planning - having two threads updating the same object store - will not create a deadlock (or worse, bring the whole house crashing down around my ears).
Given the asynchronous nature of IndexedDB operations, I am concerned about two kinds of issues:
a. The main thread is writing the PreFetch key whilst the worker is deleting its contents.
b. The main thread attempts to fetch data from IndexedDB and decides "it is not there" while at the same time the worker has just fetched those data and is storing them.
The former is liable to defeat the purpose of doing worker-driven data prefetches, whilst the latter is liable to trigger unnecessary server traffic to fetch information that has already been fetched.
The former I can probably avoid by using localStorage to share the PreFetch list. The latter I cannot control.
My question then - are IndexedDB methods thread-safe? Googling for IndexedDB and thread safety has not yielded anything terribly useful other than one or two posts on this forum.
I have thought of a way to avoid this issue - the main thread and the worker both check a flag variable in localStorage prior to attempting to read/write the object store. However, it is not clear to me that I need to do this.
JavaScript runs each context (page or worker) on a single thread, so it is thread-safe.
Use a transaction as a synchronization lock by creating an object store with the task id as the key and a status value of 'pending', 'working' or 'done'.
The producer thread creates a task with the 'pending' value if it does not already exist in the object store. The consumer thread moves a pending task to 'working' and changes it to 'done' after finishing. It should work.
You can use the Web Storage change event as a sync lock among pages, but not with IndexedDB.
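A rough sketch of that pattern, assuming an object store named 'tasks' with the task id as an out-of-line key and the status string as the value (error handling omitted):

// producer: create the task only if it does not exist yet
function createTask(db, taskId) {
  return new Promise((resolve, reject) => {
    const store = db.transaction('tasks', 'readwrite').objectStore('tasks');
    const get = store.get(taskId);
    get.onsuccess = () => {
      if (get.result === undefined) store.put('pending', taskId);
      resolve();
    };
    get.onerror = () => reject(get.error);
  });
}

// consumer: atomically move a task from 'pending' to 'working' inside one transaction
function claimTask(db, taskId) {
  return new Promise((resolve, reject) => {
    const store = db.transaction('tasks', 'readwrite').objectStore('tasks');
    const get = store.get(taskId);
    get.onsuccess = () => {
      if (get.result === 'pending') {
        store.put('working', taskId);
        resolve(true);               // this context won the task
      } else {
        resolve(false);              // missing, already being worked on, or done
      }
    };
    get.onerror = () => reject(get.error);
  });
}

// after the work is finished, mark it done:
//   db.transaction('tasks', 'readwrite').objectStore('tasks').put('done', taskId);

Because IndexedDB queues overlapping readwrite transactions on the same store rather than running them concurrently, the get-then-put check inside claimTask() cannot interleave with a claim made from another page or worker.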
On cloudControl, I can either run a local task via a worker or I can run a cronjob.
What if I want to perform a local task on a regular basis (I don't want to call a publicly accessible website)?
I see two possible solutions:
According to the documentation,
"cronjobs on cloudControl are periodical calls to a URL you specify."
So calling the file locally is not possible(?). I would have to create a page I can call via URL, and I would have to check whether the client is on localhost (i.e. the server) -- I would like to avoid this approach.
I make the worker sleep() for the desired amount of time and then make it re-run.
// do some arbitrary action
Foo::doSomeAction();
// e.g. sleep 1 day
sleep(86400);
// restart worker
exit(2);
Which one is recommended?
(Or: Can I simply call a local file via cron?)
The first option is not possible, because the URL request is made from a separate web service.
You could use HTTP authentication in the cron task, but the worker solution is also completely valid.
Just keep in mind that the worker can get migrated to a different server (in case of software updates or hardware failure), so doSomeAction() may occasionally get executed more than once per day.
In my application, I have a multiple file upload AJAX client. I noticed (using a stub file processing class) that Spring usually opens 6 threads at once, and the rest of the file upload requests are blocked until any of those 6 threads finishes its job. It is then assigned a new request, as in a thread pool.
I haven't done anything specific to reach this behavior. Is this something that Spring does by default behind the scenes?
While uploading, I haven't had any problems browsing the other parts of the application, with pretty much no significant overhead in performance.
I noticed however that one of my "behind the scenes" calls to the server (I poll for new notifications every 20 secs) gets blocked as well. On the server side, my app calls a Redis-based key-value store which should always return even if there are no new notifications. The requests to it start getting processed normally only after the uploads finish. Any explanation for this kind of blocking?
Edit: I think it has to do with a maximum number of concurrent requests per session
I believe this type of threading is handled by the servlet container, not by Spring.