Chrome Extension: Alarms not being fired - google-chrome-extension

I am working on a Chrome extension that uses the chrome.alarms API. The alarms are not being triggered consistently: most of the time they fire and everything works fine, but sometimes they don't get triggered at all, even though they have been created.
In the console I can see that the alarms have been created and that it's already past their scheduledTime.
Also, if one of the alarms does not get triggered, the remaining alarms don't get triggered either. If I restart Chrome, those alarms fire right away when it reopens.
What might be causing this inconsistent behavior? And how can I ensure that the alarms get triggered every time?
chrome.alarms.getAll((alarms) => console.log(alarms));
Alarms exist. The first alarm is 20 min past the scheduled time.
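
For reference, a minimal sketch of the kind of setup involved (the alarm name and period below are placeholders, not the actual values from my extension); the onAlarm listener is registered at the top level of the MV3 service worker:

// background.js (MV3 service worker): sketch only; 'poll' and the 1-minute period are placeholder values
chrome.alarms.create('poll', { periodInMinutes: 1 });

// Registered at the top level so the listener is re-attached every time the worker starts
chrome.alarms.onAlarm.addListener((alarm) => {
  console.log('alarm fired:', alarm.name, new Date(alarm.scheduledTime));
});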

Related

Trying to use ports to keep service worker persistent in Chrome extension doesn't work (Manifest v3)

I'm working on an extension that should have a persistent service worker (while using Manifest V3).
I've tried the solution proposed in this answer (https://stackoverflow.com/a/66618269/10364842), and it works when tested by itself (I used the files from here to verify: https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25).
However, when I put the code in the target extension, it doesn't prevent the worker from unloading.
My extension uses worker_wrapper.js and calls injectScripts from inside it. I added the code that keeps the service worker alive at the top of worker_wrapper.js (I tried other locations as well).
I've verified that this code gets injected into one of the tabs:
chrome.runtime.connect({ name: 'keepAlive' });
console.log('keepAlive');
I can see 'keepAlive' printed in the console for that tab.
However, the service worker still gets unloaded.
Last time I tested, it unloaded ~1 minute after the last 'keepAlive' was printed to the console.
So it seems to work sometimes, but often the service worker still unloads ~1 minute after the keepAlive function is called.
Unfortunately I can't attach a minimal reproducible example, as I'm not sure what causes the problem. And the code from https://bugs.chromium.org/p/chromium/issues/detail?id=1152255#c25 works when run by itself.
I've also tested with DevTools open, and it still unloads (with the message 'DevTools was disconnected from the page. Once page is reloaded, DevTools will automatically reconnect.')
Tested Chrome versions: 99.0.4844.82, 101.0.4947.0
Tested OS: Ubuntu 20.04
Could there be any other bugs causing this? Should I try injecting the chrome.runtime.connect call into every tab? Or run the keepAlive function every 55 seconds instead of every 4 minutes 55 seconds?
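
For reference, the port-based workaround I'm trying to reproduce boils down to roughly the following (a sketch only; the 4 min 55 s delay is the value mentioned above, everything else is illustrative):

// content script: keep a named port open and reconnect whenever the worker drops it
function keepAlive() {
  const port = chrome.runtime.connect({ name: 'keepAlive' });
  port.onDisconnect.addListener(keepAlive); // reconnect so the worker's idle timer is reset again
}
keepAlive();

// worker_wrapper.js (service worker): drop the port shortly before the ~5-minute idle
// timeout so the content script reconnects; 295e3 ms = 4 min 55 s
chrome.runtime.onConnect.addListener((port) => {
  if (port.name === 'keepAlive') {
    setTimeout(() => port.disconnect(), 295e3);
  }
});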

Azure Function Queue Trigger: Queue message dropping even before it got picked by Azure function Intermittently

I have a queue-triggered Azure Function which is triggered whenever a queue message appears in Azure Queue Storage.
My workflow is:
A user may schedule a task which needs to run after a few days at a particular time (the execute-at time).
So I put a message in the Azure queue with a visibility timeout equal to the difference between the current time and the execute-at time of that task (the enqueue step is sketched below).
When the message becomes visible in the queue, it gets picked up by the Azure Function and executed.
I'm facing an intermittent issue when the queue message is supposed to become visible after a few days (< 7 days): somehow it gets dropped/removed from the queue, so it is never picked up by the function and the task still shows as pending.
I've gone through all the articles I could find on the internet and didn't find a solution to my problem.
The worst part is that it works fine for a few weeks, but every now and then the queue messages (the invisible ones) suddenly disappear. (I use Azure Storage Explorer to check the number of invisible messages.)
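
For reference, the enqueue step looks roughly like this (a sketch assuming the @azure/storage-queue SDK for Node.js; the queue name, connection-string variable and scheduleTask helper are placeholders, not my actual code):

const { QueueClient } = require("@azure/storage-queue");

// Sketch of the scheduling step: keep the message invisible until the task's execute-at time
async function scheduleTask(task, executeAtMs) {
  const queueClient = new QueueClient(process.env.AZURE_STORAGE_CONNECTION_STRING, "scheduled-tasks");

  // Visibility timeout (seconds) = time between now and the execute-at time; the message only
  // becomes visible, and so only triggers the function, once this interval has elapsed.
  // The storage service caps the visibility timeout at 7 days.
  const visibilityTimeout = Math.max(0, Math.floor((executeAtMs - Date.now()) / 1000));

  await queueClient.sendMessage(JSON.stringify(task), {
    visibilityTimeout,
    // messageTimeToLive (in seconds) can also be set here; the service default is 7 days
  });
}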

Monitoring Timer Triggered Azure Functions For Inactivity

We have a couple of functions in a function app. Two of them are triggered by a timer, do some processing and write to queues to trigger other functions.
They normally worked very well until recently, when the timer trigger just stopped firing. We fixed this by restarting the function app, which resolved the issue. The problem is that we were completely unaware the trigger had stopped, as there were no failures and the function app is not constantly 'looked at' by our people.
I'd like to configure automatic monitoring and alerting for this special case. I configured Application Insights for the function app and tried to write an alert that watches the count metric of the timer-triggered functions. If the metric falls below the set threshold (below 1 in the last 5 minutes), the alert should fire.
I tested this by simply stopping the function app. My reasoning was that a function app that does not run should fulfill this condition and trigger an alert within a reasonable time frame. Unfortunately this was not the case: apparently a non-existent count is not measured, so the alert is never triggered.
Did someone else experience a similar problem and has a way to work around this?
I've added an Application Insights alert:
Type: Custom log search
Search query:
requests | where cloud_RoleName =~ '<FUNCTION_APP_NAME_HERE>' and name == '<FUNCTION_NAME_HERE>'
Alert logic: Number of results less than 1
Evaluated based on: Over last N hours, Run every M hours
The alert fires if there are no executions over the last N hours.

Azure Function app periodically not firing on trigger or timer

I have an Azure Function app with 4 functions:
one triggered on a timer every 24 hours
one triggered on events from IoT Hub
two others triggered on events from Service Bus as a result of the previous function
All functions work as expected when first deployed but after a period of time the functions stop running and the app appears to be scaled down with no servers online. At this point the functions are never triggered again unless I either restart the app, or drill into a function and view details of it (supposedly, forcing the function to start up).
I have the exact same code deployed to a different environment and it runs perfectly and has never encountered this issue. I've checked all the settings and configuration and can't see any material differences between the two.
This is really frustrating and is becoming a big issue. Any help would be much appreciated.
Function App is hosted in Australia Southeast.
This is the last execution (as of now)
10:45 PM UTC - Function started (Id=4d29555b-d3af-43d7-95e9-1a4a2d43dc46)
The event triggered function should run every few minutes as the IoT Hub it's triggering from has a steady stream of events coming in. When I prod the function (or restart it) and it comes to life it quickly churns through a backlog of messages queued in the IoT Hub.
I see the problem: you have comments in your host.json, which makes it invalid and throws off the parser at the scale controller level.
Admittedly, the error handling is quite poor here. But anyway, remove the commented-out logger and it should all work.
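
To illustrate (a hypothetical host.json, not the one from this app): JSON has no comment syntax, so a leftover commented-out line like the one below makes the whole file unparseable for the scale controller; deleting that line leaves a valid file.

{
  "version": "2.0",
  "logging": {
    // "logLevel": { "default": "Information" },
    "applicationInsights": {
      "samplingSettings": { "isEnabled": true }
    }
  }
}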

Logic apps trigger history

I have created a Logic App which triggers when a message is received in the Service Bus queue. It works fine, but when I look at the trigger history it shows a lot of records with the status "Skipped", even when the Logic App is idle. What are these records?
A status of Skipped just indicates that nothing new was found to fire the logic. If the trigger runs every minute over an hour, for example, and you only ever had 20 messages, then the logic would only fire 20 times; every other time you should see Skipped appear in the trigger history.
(Screenshot: Logic App workflow configuration)
This is the expected behavior. Based on this configuration, the Logic App is scheduled to trigger every 3 minutes and continues with the workflow only if a new message has arrived in the queue. If no messages are received in the queue within that 3-minute window (i.e. between trigger instances), the run is marked as Skipped and the further actions in the workflow are not executed.
