What's the purpose of eventTime in NotificationOptions of the chrome.notifications API?

When creating a notification from an extension using the chrome.notifications API, NotificationOptions.eventTime seems to be ignored: the notification is created immediately, rather than being delayed by the milliseconds set in eventTime. Per the documentation:
eventTime
A timestamp associated with the notification, in milliseconds past the epoch (e.g. Date.now() + n).
Granted, the documentation never clearly states that this will delay creation of the notification by the milliseconds set in eventTime; so what is the purpose of this option?
I found a similar question asked some years ago, Chrome notification ignoring eventTime, but this point was not answered there. Instead, the solution discussed other approaches (setTimeout, chrome.alarms) to delay creation of the notification.

According to the source code, eventTime does not delay the notification.
It's displayed in the title of the notification (source).
For example, setting it to Date.now() + (5 * 60 + 1) * 1000 shows "In 5m", i.e. five minutes.
Note that one extra second is added; otherwise the API shows "4m", because it subtracts the small amount of time spent creating the notification internally.
It's also used to sort notifications within the same priority group (source).
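For illustration, a minimal sketch (assuming a basic extension with an icon.png in its package): the notification appears immediately, while the title area displays the relative time.

const fiveMinutes = (5 * 60 + 1) * 1000; // +1 s so the label reads "In 5m", not "4m"
chrome.notifications.create('eventTimeDemo', {
  type: 'basic',
  iconUrl: 'icon.png',            // assumed to exist in the extension package
  title: 'eventTime demo',
  message: 'Created immediately, not delayed',
  eventTime: Date.now() + fiveMinutes
});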

Azure Monitor Custom log search Query - understanding Period and Frequency

UPDATE:
The actual problem is different from what I've described. I'll provide an update/edit to this question once we resolve the issue. More details may be found in this thread - https://techcommunity.microsoft.com/t5/Azure-Log-Analytics/Reliably-trigger-alerts-for-Log-Analytics-log-entries/m-p/319315/highlight/false#M1224
Original question:
We use Azure Monitor to create alerts based on logs in Log Analytics. For this we choose our Log Analytics account as the "RESOURCE", then choose the "Custom log search" signal for "CONDITION". Alert logic: "Number of results greater than 0".
Sample query:
search *
| where ResourceProvider == "MICROSOFT.DATAFACTORY" and status_s == "Failed"
For Period and Frequency, let's set 15 minutes. All looks simple, but...
The issue: the setup described above does not work reliably. Alerts are fired only sometimes; a lot of them are missed, which is completely unacceptable behavior.
If we set Period = Frequency = 5 minutes, we miss almost every event. Period = Frequency = 15 minutes works better, but still a lot of events are missed. Period = Frequency = 30 minutes works even better, but all this looks weird.
Important notice: logs are collected from Data Factory V2 into Log Analytics. I suspect that the missed alerts are due to the fact that logs are delivered to Log Analytics with some delay (up to several minutes). So when Azure Monitor evaluates the alert query for the last 15 minutes (Period = 15), the most recent log entries might not yet be in Log Analytics. When the next evaluation runs 15 minutes later, it will miss the logs that were ingested with a delay for the previous 15-minute interval. Is this assumption correct?
If so, this is very weird - how then are we supposed to configure the Period and Frequency values? If I set Period > Frequency (e.g. Period = 30 and Frequency = 5, which means "evaluate the expression every 5 minutes, taking data for the last 30 minutes from the current time"), then we get multiple duplicated alerts, because Period is larger than Frequency and there is a big chance of the log search query returning the same log entries every 5 minutes - this is highly undesirable behavior.
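To illustrate that assumption with a toy sketch (hypothetical numbers, not the actual alert engine): an entry ingested with delay can fall outside every evaluation window when Period = Frequency.

// Period = Frequency = 15 min; event occurs at t = 14, ingested at t = 17 (hypothetical).
const PERIOD = 15, FREQUENCY = 15;       // minutes
const eventTime = 14, visibleAt = 17;
for (let t = FREQUENCY; t <= 45; t += FREQUENCY) {
  const inWindow = eventTime > t - PERIOD && eventTime <= t; // query window (t-15, t]
  const visible = visibleAt <= t;                            // already ingested?
  console.log(`t=${t}m fires=${inWindow && visible}`);       // false at every t
}
// t=15: in the window but not yet ingested; t=30: ingested but outside the window.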
The issue turned out to be buggy behavior of the ARM template creating the alerts. Thanks to Stanislav Zhelyazkov it has been nailed down and resolved - I use an alternative ARM API now and it seems to work fine. More details on the topic may be found here - https://techcommunity.microsoft.com/t5/Azure-Log-Analytics/Reliably-trigger-alerts-for-Log-Analytics-log-entries/m-p/309610.

Does IoTHub delay messages by a batching interval in a custom endpoint to Azure Storage?

I am sending some messages in a pipeline using Azure IoT Edge. There is a custom endpoint (say, GenericEndpoint) that I have set up, which will send/put the messages to Azure Blob storage. I am using a route to push the device messages to the specific endpoint GenericEndpoint.
The batch frequency of GenericEndpoint is set at 60 seconds. So 1 batch creates 1 single file with some messages, in the container specified.
Let's say there are N messages in a single blob batch file (say, blobX) in the specified container. If I take the average of the difference between the IoTHub.EnqueuedTime(i) of each message i in blobX and the 'Creation Time' of blobX, and call it AVG, I get:
AVG = ( Σ over i = 1..N of [ CreationTime(blobX) − IoTHub.EnqueuedTime(i) ] ) / N
I think this essentially gives me the average time those N messages spent in IoT Hub before being written to blob storage. Now what I observe here is that, if p and q are respectively the first and last messages written in blobX, then IoTHub.EnqueuedTime(q) − IoTHub.EnqueuedTime(p) ≈ 60 seconds, i.e. each batch spans roughly one whole batching interval of enqueue times.
But since the batching interval was set to 60 seconds, I would expect this average AVG to be near 30 seconds, because if the messages are written as soon as they arrive, the average for each batch file would be near 30 seconds.
But in my case AVG ≈ 90 seconds, which suggests the messages wait at least approximately one batching interval (60 seconds in this case) before being considered for a particular batch.
Assumption: when a batch of messages is written to a blob file, they are written all at once.
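To make the arithmetic concrete, a toy sketch with hypothetical timestamps (the real values would come from IoTHub.EnqueuedTime and the blob's Creation Time):

const creationTime = Date.parse('2021-01-01T00:02:30Z');  // blobX creation (hypothetical)
const enqueuedTimes = [                                   // IoTHub.EnqueuedTime(i), hypothetical
  Date.parse('2021-01-01T00:00:30Z'),
  Date.parse('2021-01-01T00:01:00Z'),
  Date.parse('2021-01-01T00:01:30Z')
];
const avgSeconds = enqueuedTimes
  .map(t => creationTime - t)                             // per-message delay in ms
  .reduce((a, b) => a + b, 0) / enqueuedTimes.length / 1000;
console.log(`AVG = ${avgSeconds} s`);                     // 90 s for these numbers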
My question:
Is this delay of one batch interval, or 60 seconds, intentional? If yes, then I assume it will change when the batching interval is changed to, say, 100 seconds.
If no, does it usually take 60 seconds to process a message in IoT Hub and send it through a route to a custom endpoint? Or am I looking at this from a completely wrong angle?
I apologize beforehand if my question seems confusing.

Solution for delaying events for N days

We're currently writing an application in Microsoft Azure and we're planning to use Event Hubs to handle processing of real-time events.
However, after initial processing we will have to delay further processing of the events for N days. The process will work like this:
Event triggered -> Place event in Event Hub -> Event gets fetched from Event Hub and processed -> Event is delayed for N days -> Event gets further processed (the last two steps might be a loop)
How can we achieve this delay of further event processing without using polling or similar strategies? One idea is to use Azure Queues and their visibility timeout, but 7 days is the supported maximum according to the documentation, and our business demands are in the 1-3 month range. The number of events in our system should be at most 10k per day.
Any ideas would be appreciated, thanks!
As you already mentioned, Event Hubs supports only a 7-day window of retained data.
Event Hubs are typically used as real-time telemetry data pipelines where data-seek performance is critical. For 99.9% of use cases/scenarios, our users typically require the last couple of hours, if not seconds.
However, if you still need to re-analyze the data after the real-time processing is over, for example to run a Hadoop job on last month's data, our seek pattern and store are not optimized for it. We recommend forwarding the messages to other data archival stores that are specialized for big-data queries.
Since data archival is an ask that most of our customers naturally look for, we are releasing a new feature which automatically archives the data in Avro format into Azure Storage.
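For the delay itself (which the answer above does not cover), one commonly suggested alternative is Azure Service Bus scheduled messages, which are not bound by the 7-day visibility-timeout cap of Storage queues. A sketch under assumptions using the @azure/service-bus package; the connection string and queue name are placeholders:

const { ServiceBusClient } = require('@azure/service-bus');

async function delayEvent(event, days) {
  const client = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION); // placeholder
  const sender = client.createSender('delayed-events');                   // hypothetical queue
  const deliverAt = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
  await sender.scheduleMessages({ body: event }, deliverAt); // not visible until deliverAt
  await client.close();
}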

How to set message time to live unlimited in azure service bus queue?

I am trying to create an Azure Service Bus queue using azure-sdk-for-node, but I am not able to find how to set the time to live to unlimited.
Here is my sample code:
var queueOptions = {
  MaxSizeInMegabytes: '5120',
  DefaultMessageTimeToLive: 'PT1M'
};
serviceBusService.createQueueIfNotExists('myqueue', queueOptions, function (error) {
  if (!error) {
    // Queue exists
  }
});
What should DefaultMessageTimeToLive be for an unlimited time?
Your code sets the message TTL to 1 minute only.
You can't set the TTL to unlimited, as it requires a TimeSpan value, so you have to assign something. It could be a fairly large value, but I'd recommend avoiding this practice for a few reasons:
It's a hosted service. TTL is not constrained today, but could be.
For messaging, having a very long TTL is an indication of something that should not be done (messages should be small and processed fast).
That said, as of today you could set the TTL to TimeSpan.MaxValue, which is 10675199 days, 2 hours, 48 minutes, 5 seconds and 477 milliseconds, or P10675199DT2H48M5.4775807S in ISO 8601 format.
Realistically, 365 days (P365D) or even 30 days (P30D) is way too much for messaging.
"The default time-to-live value for a brokered message is the largest possible value for a signed 64-bit integer if not otherwise specified." (From Microsoft docs)

How Gmail API's Quota Units Work?

According to gmail's API docs, the limits are as follows:
API Limit Type        Limit
Daily Usage           1,000,000,000 quota units per day
Per User Rate Limit   250 quota units per user per second, moving average (allows short bursts)
In the table further below, the docs say that a messages.get costs 5 quota units.
In my case, I am interested in polling my inbox every second to check for new messages, and getting the contents of those messages if there are any.
Question 1: Does this mean that I'd be spending 5 quota units each second, and that I'd be well under my quota?
Question 2: How should I check for only "new" messages, i.e. messages that have arrived since the last time I made the API call? Would I need to add "read" labels to the messages after each API call (spending extra quota units on the "modify" call), or is there an easier way?
Question 1:
That's right. You would spend (5 * 60 * 60 * 24 =) 432,000 quota units per day on the polling, which is nowhere near the limit. You could also implement push notifications if you want Google to notify you of new messages rather than polling yourself.
Question 2:
Listing messages has an undocumented feature of querying for messages after a certain timestamp, given in seconds since the epoch.
If you would like to get messages after Sun, 29 May 2016 07:00:00 GMT, you would just give the value after:1464505200 in the q query parameter.
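A sketch using the official googleapis Node.js client (assumes an already-authorized OAuth2 client; the auth setup is omitted):

const { google } = require('googleapis');

async function newMessagesSince(auth, epochSeconds) {
  const gmail = google.gmail({ version: 'v1', auth });
  const res = await gmail.users.messages.list({
    userId: 'me',
    q: `after:${epochSeconds}`   // e.g. 1464505200 for 2016-05-29 07:00:00 GMT
  });
  return res.data.messages || []; // each entry has an id and threadId
}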
Question 1:
You're right about that, as also detailed in the documentation you linked.
Question 2:
An easier way, as you've asked for, and also the encouraged approach, is to use batch requests. As discussed in the documentation, the Gmail API supports batching to allow your client to put several API calls into a single HTTP request.
This related SO post, Gmail API limitations for getting mails, can provide further helpful ideas on the usage of batching.
