I am trying to cast just a snippet of a file (say, only from 00:00:30 to 00:00:40) from a Chrome sender to the default receiver. Reading the API reference documentation for LoadRequest, MediaInfo, and QueueItem, it seemed like I should be able to do this with some combination of these. In particular, the first queued item (loaded with CastSession#loadMedia) would need LoadRequest#currentTime set to the offset (30 seconds in my example above) and MediaInfo#duration set to the duration (10 seconds in my example), while subsequently queued items would set QueueItem#startTime and QueueItem#playbackDuration to the offset and duration (respectively).
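Here is a minimal sketch of the first load (Web Sender SDK; the media URL and content type are placeholders, error handling elided):

```js
// Load the first item starting at the 30-second offset.
const session = cast.framework.CastContext.getInstance().getCurrentSession();
const mediaInfo = new chrome.cast.media.MediaInfo(
    'https://example.com/track.mp3', 'audio/mp3'); // placeholder media
mediaInfo.duration = 10; // what I hoped would cap playback at 10 s

const request = new chrome.cast.media.LoadRequest(mediaInfo);
request.currentTime = 30; // start of the snippet

session.loadMedia(request).then(
    () => console.log('loaded'),
    (err) => console.error('load failed', err));
```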
However, this isn't happening in practice. I can confirm that the queue on the receiver has these fields set, but no matter how I go about this, I can't get the right snippet to play. When I add the first media item as described above, the receiver just plays the track from beginning to end, neither respecting the offset nor the duration. Since the combination of LoadRequest#currentTime and MediaInfo#duration is a bit odd, I tried using only the QueueItem method (add the first media item with autoplay = false, add another queue item, remove the first, and then start playing the queue). In this case, the offset was still not respected, and the duration ended up being (very strangely) the sum of startTime and playbackDuration. In addition, any subsequently queued items would load and then "finish" playing without ever starting, which I also can't figure out.
Does anyone else have experience with this part of the API? Is what I'm doing just not supported, or am I piecing things together incorrectly?
I am not sure I understand why you are attempting to use a queue with multiple items. First, the duration field is not what you think it is: it is not the duration of playback that you want, it is the total duration of the media being loaded, regardless of where you start or stop playback. In fact, in most cases you don't even need to set it; the receiver gets the total duration of the media when it loads the item.

The currentTime should work (if it is not working, please file a bug on our SDK issue tracker). Alternatively, you can load the media (with autoplay off), "seek" to the time you want, and then play. To stop at a certain point, you need to monitor the playback position and pause playback when it reaches that point.
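A minimal sketch of that load-seek-play-then-pause approach with the Web Sender SDK (the media URL, offsets, and polling interval are placeholders; a real implementation should also handle errors and clear the timer when the session ends):

```js
const session = cast.framework.CastContext.getInstance().getCurrentSession();
const mediaInfo = new chrome.cast.media.MediaInfo(
    'https://example.com/track.mp3', 'audio/mp3'); // placeholder media
const request = new chrome.cast.media.LoadRequest(mediaInfo);
request.autoplay = false; // load paused

session.loadMedia(request).then(() => {
  const media = session.getMediaSession();

  // Seek to the start of the snippet, then start playback.
  const seek = new chrome.cast.media.SeekRequest();
  seek.currentTime = 30;
  media.seek(seek, () => {
    media.play(new chrome.cast.media.PlayRequest(),
               () => {}, (err) => console.error(err));
  }, (err) => console.error(err));

  // Poll the position and pause once the snippet has played.
  const stopAt = 40;
  const poll = setInterval(() => {
    if (media.getEstimatedTime() >= stopAt) {
      clearInterval(poll);
      media.pause(new chrome.cast.media.PauseRequest(),
                  () => {}, (err) => console.error(err));
    }
  }, 250);
});
```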
My extension (Manifest V3) needs to track the number of times a set of websites is visited, either during the whole day or during certain time windows, and then perform an action if the visit count exceeds a limit.
There are two ways I could think of implementing this:
alarm + history: Create an alarm that runs every 5 minutes, search the history for the required websites, and count the visits. If the count exceeds the limit, perform the action.
storage + history: Add a listener to chrome.history.onVisited. If the visited site is on the required list, increment a visit count in storage. If the stored count exceeds the limit, perform the action (rough sketch below).
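For concreteness, here is roughly what I have in mind for the second approach; the site list, limit, and storage key are placeholders:

```js
// background.js (MV3 service worker); requires the "history" and "storage" permissions.
const SITES = ['example.com', 'example.org']; // placeholder list
const LIMIT = 10;                             // placeholder threshold

chrome.history.onVisited.addListener(async (item) => {
  const host = new URL(item.url).hostname;
  if (!SITES.some((site) => host === site || host.endsWith('.' + site))) return;

  const { visitCount = 0 } = await chrome.storage.local.get('visitCount');
  const next = visitCount + 1;
  await chrome.storage.local.set({ visitCount: next });

  if (next > LIMIT) {
    // perform the action, e.g. show a notification
  }
});
```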
Which of the above approaches has the least impact on Chrome's browsing performance? Or is there another API I could use to achieve the same thing?
I would like my extension to consume the least amount of the user's battery :)
In (1) the extension will do a lot of unnecessary work when the user isn't using the browser.
In (2) the extension's background script will restart more often if the user navigates a lot but pauses between navigations for longer than the lifetime of the service worker (30 seconds by default), which is a typical interaction scenario.
In both cases, the bigger inherent problem of ManifestV3 for an extension such as yours that observes user activity is not what the extension does itself, but the huge overhead of restarting the background worker, which is automatically terminated 30 seconds after the last observed event (or 5 minutes if you use waitUntil). Such pauses in user activity are typical when browsing/interacting, so for many users the worker will restart hundreds of times a day. Starting the worker takes 50-100 ms and stresses the CPU, memory, and disk for that entire duration, whereas the time actually spent in a simple observation extension's code is just 1-2 ms.
In other words, an extension that observes user activity, such as yours, is inherently 25-100 times less efficient in ManifestV3 than it would be in ManifestV2 with a persistent background script.
Solutions:
Prolong the service worker's lifetime to reduce the number of restarts, as shown here. To avoid wasting memory for users who keep the browser open without using it for hours, you can dynamically adjust the lifetime by measuring and averaging the intervals between events, or offer an option to set the duration in your extension's UI. Hopefully the browser will do this automatically in the future, but it may take years before such a feature is actually implemented, and even then it will likely still restart the background script far too often.
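For example, one widely used keep-alive trick (hedged: this relies on current Chrome behavior, where any extension API call resets the worker's 30-second idle timer, and it may change in future versions):

```js
// Run this at the top of the service worker script. As long as the worker
// is alive, calling any extension API more often than every 30 seconds
// resets the idle timer and prevents termination.
setInterval(() => chrome.runtime.getPlatformInfo(), 25000);
```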
Use chrome.webNavigation events with a URL filter for your sites so that the background script wakes up only when those specific URLs are visited. If the URLs are configured by the user, you will need to unregister the listener first (e.g. by making the listener a named global function) and then register it again with the new URL filter. You may still need to prolong the worker's lifetime if these URLs are visited a lot.
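A sketch, assuming the "webNavigation" permission and placeholder hostnames:

```js
function onVisit(details) {
  // count the visit, compare against the limit, act if exceeded
}

// The worker wakes up only for navigations matching this filter.
chrome.webNavigation.onCommitted.addListener(onVisit, {
  url: [{ hostSuffix: 'example.com' }, { hostSuffix: 'example.org' }],
});

// When the user edits the list, re-register with the new filter:
function setWatchedSites(hosts) {
  chrome.webNavigation.onCommitted.removeListener(onVisit);
  chrome.webNavigation.onCommitted.addListener(onVisit, {
    url: hosts.map((hostSuffix) => ({ hostSuffix })),
  });
}
```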
I am trying to understand change feeds in Azure. I see I can trigger an event when something changes in Cosmos DB. This is useful. However, in some situations I expect a document to be changed after a while: a question should get a status change showing it has been answered; after a while an order should get a status change to "confirmed"; and a problem should get a status change to "resolved" or have its priority changed (to "low"). It is useful to trigger an event when such a change happens for a certain document. However, it is even more useful to trigger an event when such a change does not happen within a specified time (like 1 hour): a problem needs to be resolved after a while, an order needs to be confirmed after a while, etc.

Can I use change feeds and Azure Functions for that too? Or do I need something different? It is great that I can visualize changes (for example in Power BI) once they happen, but I am also interested in visualizing changes that do not occur within the time window in which they are expected to occur.
Achieving that with the Change Feed doesn't sound possible, because, as you describe it, the Change Feed reacts to operations/events that actually happen.
In your case it sounds as if you need an agent that runs every X amount of time (maybe an Azure Function with a TimerTrigger?) and executes a query to find items in state X that have not been modified in the past pre-defined interval Y (possibly the time interval associated with the TimerTrigger). This could be done by checking the _ts field of the state documents or your own timestamp field; see https://stackoverflow.com/a/39214165/5641598.
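A minimal sketch of such a timer-triggered function, assuming the @azure/cosmos SDK; the database, container, and status names are placeholders:

```js
const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);

// Runs on the schedule defined in function.json (e.g. every 5 minutes).
module.exports = async function (context, timer) {
  const container = client.database('mydb').container('problems');
  const cutoff = Math.floor(Date.now() / 1000) - 3600; // _ts is in epoch seconds

  const { resources: stale } = await container.items
    .query({
      query: 'SELECT * FROM c WHERE c.status = @status AND c._ts < @cutoff',
      parameters: [
        { name: '@status', value: 'open' },
        { name: '@cutoff', value: cutoff },
      ],
    })
    .fetchAll();

  for (const doc of stale) {
    context.log(`No change within 1 hour for ${doc.id}`);
    // raise an alert, write to a report container, etc.
  }
};
```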
If your goal is to just deploy it on a dashboard, you could query using Power BI too.
As long as you don't need too much time precision (Change Feed notifications are usually delayed by a few seconds), the Azure Cosmos DB Change Feed could be easily used as a solution for this task, but it would first require some extra work from the Microsoft team to also support capturing TTL-expiration deletion events.
A potential solution, if the Change Feed were to capture such TTL expiration events, would be: whenever you insert (or, in your use case, change the priority of) a document for which you want to monitor the lack of changes, you also insert another document (possibly in another collection) that acts as a timer, with a TTL of 1 hour.
You would delete the timer document manually, or by consuming the Change Feed for changes, whenever a change actually happened.
You could then consume the TTL expiration event from the Change Feed and conclude that, if the TTL expired, there were no changes in the specified time window.
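The timer-document side of that pattern might look like this (hypothetical sketch: it assumes the Change Feed could surface TTL deletions, which it currently cannot; it also assumes a timers container with TTL enabled and partitioned on /id, and uses the @azure/cosmos SDK with placeholder names):

```js
const timers = client.database('mydb').container('timers');

// When the monitored document is created or its priority changes,
// (re)create a timer document that expires in one hour.
await timers.items.upsert({
  id: `timer-${problemId}`, // placeholder id scheme
  problemId,
  ttl: 3600, // seconds; per-item TTL (the container must have TTL enabled)
});

// If a real change shows up on the Change Feed before the hour is up,
// delete the timer so it never expires.
await timers.item(`timer-${problemId}`, `timer-${problemId}`).delete();
```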
If you'd like this feature, you should consider voting for issues such as this one: https://github.com/Azure/azure-cosmos-dotnet-v2/issues/402 and feature requests such as this one: https://feedback.azure.com/forums/263030-azure-cosmos-db/suggestions/14603412-execute-a-procedure-when-ttl-expires, which would make the Change Feed a perfect fit for scenarios such as yours. Sadly, it is not available yet :(
TL;DR No, the Change Feed as it stands would not be the right fit for your use case. It would need some extra functionality that is planned but not yet implemented.
PS. In case you'd like to know more about the Change Feed and its main use cases anyway, you can check out this article of mine :)
In the jukebox.c example of libspotify I count all frames of the current track in the music_delivery callback. When end_of_track is called, the frame count is different each time I play the same track. Also, end_of_track is called several seconds after the song is over, and this timespan differs for each playback.
How can I determine whether the song is really over? Do I have to take the duration of the song in seconds and multiply it by the sample rate to work out when the song ends?
Why are more frames delivered than necessary for the track? And why is end_of_track not called at the real end of it? Or am I missing something?
end_of_track is called when libspotify has finished delivering audio frames for that track. This is not information about playback: every playback implementation I've seen keeps an internal buffer between libspotify and the sound driver.
Depending on where you're counting, this will account for the difference you're seeing. Since the audio code is outside of libspotify, you need to keep track of what's actually going to the sound driver yourself, and stop playback, skip to the next track, or do whatever else you need accordingly. end_of_track is basically there to let you know that you can close any output streams you may have between the delivery callback and your audio code, or something along those lines.
I'd like to write an extension that displays a desktop notification every day at a specified time. Having a quick look through the Chrome APIs, it seems like the only way to do this would be to:
create a background page for my extension,
use setInterval() with a sufficiently coarse interval so as not to tax the CPU (even 5 min is fine),
when interval fires, check if the current time is after the desired time,
ensure that the user has not already been shown the notification today.
(The details of the last step are irrelevant to my question, just put in to show I realize I need to prevent "flapping" of the notice).
This seems rather indirect and potentially expensive though; is there any way around this? Is the background page needed?
I suppose I could just call setTimeout() and only fire the event once (by calculating the time between now and the desired time), then call it again after the notification is shown. For some reason that sounds more "brittle", though I'm not sure why...
I think you will want the background page to do this smoothly. You can't use a content script because you need to keep the "state"/timer.
So when the background page first loads (at browser start), you work out the current time and the offset to the next notification time, and set a timer for that exact interval. That way you won't need to poll every five minutes or work out whether you've already shown the message; you simply show it at the exact time required. This has to be far more efficient, effective, and cleaner than polling. After each notification you simply set the timer again.
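A minimal sketch of that approach (the notification time and the body of show() are placeholders):

```js
// Milliseconds from now until the next occurrence of hour:minute.
function msUntil(hour, minute) {
  const now = new Date();
  const next = new Date(now);
  next.setHours(hour, minute, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // already passed today
  return next - now;
}

// Fire show() at the given time every day, rescheduling after each firing.
function scheduleDaily(hour, minute, show) {
  setTimeout(() => {
    show();
    scheduleDaily(hour, minute, show);
  }, msUntil(hour, minute));
}

scheduleDaily(9, 0, () => {
  // display the desktop notification here
});
```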
Some sample functions here:
setTimeout but for a given time
From reading the above post and from a quick search on the net, it appears that you should have no problem calling setInterval with an interval as long as a day; Calvin suggests 25 days!
That is how I would approach it.
EDIT: Since posting, one thing that has sprung to mind is what happens if a PC is hibernated for n hours. I need to test this myself for a similar project, so I will update once I've had a chance to try it out.
I have built a standalone app version of a project that until now was just a VST/audiounit. I am providing audio support via rtaudio.
I would like to add MIDI support using rtmidi but it's not clear to me how to synchronise the audio and MIDI parts.
In VST/audiounit land, I am used to MIDI events that have a timestamp indicating their offset in samples from the start of the audio block.
rtmidi provides a delta time in seconds since the previous event, but I am not sure how I should grab those events and how I can work out their time in relation to the current sample in the audio thread.
How do plugin hosts do this?
I can understand how events can be sample accurate on playback, but it's not clear how they could be sample accurate when using realtime input.
rtaudio gives me a callback function, and I will run at a low block size (32 samples). I guess I will pass a pointer to an rtmidi instance as the userData part of the callback and then call midiin->getMessage(&message); inside the audio callback, but I am not sure whether that is sensible from a threading standpoint.
Many thanks for any tips you can give me.
In your case, you don't need to worry about it. Your program should send the MIDI events to the plugin with a timestamp of zero as soon as they arrive. I think you have perhaps misunderstood what it means to be "sample accurate".
As #Brad noted in his comment to your question, MIDI is indeed very slow. But that's only part of the problem: when you are working in a block-based environment, incoming MIDI events cannot be processed by the plugin until the start of a block. When computers were slower and block sizes of 512 (or, god forbid, >1024) were common, this introduced a non-trivial amount of latency which made the arrangement sound less "tight". Therefore sequencers came up with a clever way to get around this problem. Since the MIDI events are already known ahead of time, they can be sent to the instrument one block early with an offset in sample frames. The plugin then receives these events at the start of the block and knows not to start actually processing them until N samples have passed. This is what "sample accurate" means in sequencers.
However, if you are dealing with live input from a keyboard or some sort of other MIDI device, there is no way to "schedule" these events. In fact, by the time you receive them, the clock is already ticking! Therefore these events should just be sent to the plugin at the start of the very next block with an offset of 0. Sequencers such as Ableton Live, which allow a plugin to simultaneously receive both pre-sequenced and live events, simply send any live events with an offset of 0 frames.
Since you are using a very small block size, the worst-case scenario is a latency of one block, which at a 44.1 kHz sample rate is about 0.7 ms (32 / 44100 ≈ 0.73 ms), not bad at all. In the case of rtmidi, the timestamp does not represent an offset which you need to schedule around, but rather the time at which the event was captured. But since you only intend to receive live events (you aren't writing a sequencer, are you?), you can simply pass any incoming MIDI to the plugin right away.