Since iOS 9 / tvOS 9, AVPlayerItem instances have a new Boolean property, canUseNetworkResourcesForLiveStreamingWhilePaused.
The documentation states:
"Indicates whether the player item can use network resources to keep playback state up to date while paused
For live streaming content, the player item may need to use extra networking and power resources to keep playback state up to date when paused. For example, when this property is set to YES, the seekableTimeRanges property will be periodically updated to reflect the current state of the live stream."
I have been testing this property on Apple TV and observing seekableTimeRanges while the live video stream is paused, but it doesn't appear to matter whether I set it to true or false: the time range contained in seekableTimeRanges updates periodically in both cases.
Are you getting similar results?
I'm analyzing traffic flow at 3 different bus stops for a research project by watching recordings of them in VLC; I need to write down the real-world timestamp of every bus stop-and-go, along with the number of passengers boarding and exiting the bus.
I'm looking for a way to speed things up, perhaps with some kind of VLC plugin that would let me mark all of these events with a bound hotkey (or hotkeys) and later export them to Excel/Notepad. Since I know the start time of each recording, I can convert each marker's offset into a real-world timestamp. The timestamp format is hh:mm:ss. Is there any way to make this work?
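The VLC hotkey part aside, the offset-to-timestamp conversion and the Excel export described above are straightforward to post-process. Here is a minimal sketch, assuming the events have been logged as (media offset in seconds, label) pairs; the event names and the tab-separated output format are my own choices for illustration:

```python
import csv
from datetime import datetime, timedelta

def to_real_time(recording_start: str, offset_seconds: float) -> str:
    """Convert an offset into the recording to a real-world hh:mm:ss timestamp."""
    start = datetime.strptime(recording_start, "%H:%M:%S")
    return (start + timedelta(seconds=offset_seconds)).strftime("%H:%M:%S")

# Hypothetical logged events: (media offset in seconds, event label)
events = [(12.0, "bus_arrived"), (47.5, "passenger_boarded"), (90.0, "bus_departed")]

def export_events(path: str, recording_start: str, events) -> None:
    """Write real-world timestamps to a tab-separated file Excel opens cleanly."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f, delimiter="\t")
        w.writerow(["real_time", "event"])
        for offset, label in events:
            w.writerow([to_real_time(recording_start, offset), label])
```

A hotkey script would only need to append `(current_media_time, label)` tuples to `events`; the conversion and export stay the same.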
When scanning passive RFID tags, you can set the SESSION to '2' so that the tag's 'B' state persists for "an indefinite amount of time", even when the tag is not being energized by the scanner, according to the standard. The tag will then not be visible to the scanner until this indefinite amount of time expires.
My question is: does anyone have any idea what the maximum amount of time is for RFID tags? I'm sure it differs between tag manufacturers, etc., but are we talking seconds, minutes, hours, or even days? I don't want to keep seeing the same tags over and over while scanning the storeroom, but at the same time I don't want a tag to stay hidden if it needs to be scanned again later.
The answer is: it depends. Note that the standard says 'indefinite when powered'. When powered, it really is indefinite. When not powered, the standard only requires that the state persist longer than 5 seconds. For most modern tags it is typically less than 30 s, depending of course on environmental conditions.
About the definition of 'powered': note that this power can originate from any RFID reader, not only the one you are using to interrogate the tags, or indeed from any other radio device transmitting at the same frequency.
To circumvent this, you can use a SELECT command to revert the session flag from B back to A.
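The timing behaviour described above can be illustrated with a toy model. This is not reader SDK code; it is a sketch of how the session-2 inventoried flag behaves, with a 30-second persistence value assumed purely for illustration (the standard only guarantees more than 5 s unpowered):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session2Tag:
    """Toy model of an EPC Gen2 tag's session-2 inventoried flag.

    persistence_s is tag-dependent; 30 s is an assumed value for this model.
    """
    persistence_s: float = 30.0
    flag: str = "A"
    unpowered_since: Optional[float] = None

    def singulate(self) -> None:
        """Reader inventories the tag: the flag flips A -> B; the tag is powered."""
        self.flag = "B"
        self.unpowered_since = None

    def leave_field(self, now: float) -> None:
        """Tag leaves any energizing field: the persistence countdown starts."""
        self.unpowered_since = now

    def state(self, now: float) -> str:
        """Flag a reader would observe if it re-energized the tag at `now`."""
        if self.unpowered_since is not None and now - self.unpowered_since > self.persistence_s:
            self.flag = "A"  # persistence expired: tag answers inventory again
            self.unpowered_since = None
        return self.flag
```

The key point the model captures is the one from the answer: while any field keeps the tag powered, `unpowered_since` never starts counting, so the B state can last indefinitely.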
Once in a while, the seekbar values of ExpandedControllerActivity go out of sync after back-to-back playback. The max value is 1 and the current progress is 1, which puts the thumb at the end regardless of the actual playback duration. The receiver has the correct duration values and sends correct values (verified with Chrome inspect).
Are there any recommendations for debugging these issues on the client side, given that the ExpandedControllerActivity source code is obfuscated?
Similar issue tracked at https://issuetracker.google.com/issues/120069343.
In our case, this was resolved by having the receiver set the stream duration explicitly.
How can I detect a power outage on the bridge? I tried checking the CLIP daylight sensor's lastupdated field against "none", but that does not help. According to the Meethue description of the 'lastupdated' field, it should be "none" after a power cycle:
"Last time (based on /config/utc) the sensor send state data reflected in the state field. No value change is required to update the field. “none” (asof 1.x.0 null) when not initialized/no recent update has been received since the last bridge power cycle."
But it always returns a timestamp. Can somebody suggest a way out, please?
You can create a CLIPGenericStatus sensor and set it to a value that is not 0.
When the bridge restarts it will be 0 again.
You don't describe how you want to use this value (read it with an external process, or trigger a rule on the bridge), but it is an indicator that you can use.
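For the "external process" variant, a minimal sketch could look like the following. The bridge address, API username, and sensor naming are all hypothetical; the decision logic is just the rule from the answer (the bridge resets a CLIPGenericStatus sensor's status to 0 on restart):

```python
import json
from urllib import request

BRIDGE = "192.168.1.2"      # hypothetical bridge address
USER = "your-api-username"  # hypothetical whitelisted API username

# Payload for creating the marker sensor (POST /api/<user>/sensors)
marker_sensor = {
    "name": "power-cycle-marker",
    "type": "CLIPGenericStatus",
    "modelid": "PowerCycleMarker",
    "manufacturername": "diy",
    "swversion": "1.0",
    "uniqueid": "power-cycle-marker-1",
}

def bridge_rebooted(status: int) -> bool:
    """Status 0 means the bridge restarted since we last armed the marker."""
    return status == 0

def arm_marker(sensor_id: int) -> None:
    """Set the marker's status to a non-zero value (PUT .../sensors/<id>/state)."""
    url = f"http://{BRIDGE}/api/{USER}/sensors/{sensor_id}/state"
    req = request.Request(url, data=json.dumps({"status": 1}).encode(), method="PUT")
    request.urlopen(req)  # requires a reachable bridge
```

The external process would periodically GET the sensor, call `bridge_rebooted` on the returned status, and re-arm the marker after handling a detected power cycle.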
A Philips support developer recently came up with a solution on the meethue forums.
The idea is that schedules start running when the bridge boots, and a ClipGenericStatus sensor's status is reset to 0 after a reboot. This behaviour might be subject to change.
Create a ClipGenericStatus sensor.
Create a schedule that will change the status of the above ClipGenericStatus sensor to 1 every 10 to 15 seconds.
Create a rule that will do something with the lights when the above ClipGenericStatus sensor is equal to 1. The rule can, for example, turn off all lights if the time is between 23:00 and 07:00.
Some downsides are:
It will also trigger when there is a reboot after disconnecting and connecting the powercord manually.
It will also trigger when there is a reboot after bridge firmware update or internal crash.
This isn't a solution for configurable startup behaviour.
Going back to the last state, by saving all light states to a scene at a specific interval, is not recommended, as it will degrade the life expectancy of the lamps involved.
Link to original post: https://developers.meethue.com/comment/2918#comment-2918
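The schedule and rule from the steps above can be sketched as Hue REST payloads. The sensor id, group action, and 15-second period are assumptions for illustration; the recurring-timer syntax (`R/PT...`) and the condition/action shapes follow the Hue API, but check them against your bridge's API version:

```python
USER = "your-api-username"  # hypothetical whitelisted API username
SENSOR_ID = 10              # hypothetical id of the ClipGenericStatus sensor

# Recurring timer that keeps forcing the marker sensor back to 1.
# POST /api/<user>/schedules
schedule = {
    "name": "power-cycle-rearm",
    "command": {
        "address": f"/api/{USER}/sensors/{SENSOR_ID}/state",
        "method": "PUT",
        "body": {"status": 1},
    },
    "localtime": "R/PT00:00:15",  # repeat forever with a 15-second period
}

# Rule that reacts once the marker is back at 1, i.e. shortly after a reboot.
# POST /api/<user>/rules
rule = {
    "name": "after-power-cycle",
    "conditions": [
        {"address": f"/sensors/{SENSOR_ID}/state/status", "operator": "eq", "value": "1"}
    ],
    "actions": [
        # Example action: turn off all lights (group 0). Adjust to taste,
        # e.g. gate it with a time-of-day condition for the 23:00-07:00 case.
        {"address": "/groups/0/action", "method": "PUT", "body": {"on": False}}
    ],
}
```

The 0 → 1 transition driven by the schedule is what makes the rule fire after a reboot, which is why the interval should be short enough to catch outages promptly but not so short that it floods the bridge.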
Can anyone give me a concise explanation of how and why the DirectShow DirectSound Audio Renderer adjusts the rate when my custom capture filter does not expose a clock?
I cannot make any sense of it at all. When audio starts, I assign an rtStart of zero plus the duration of the sample (numbytes / m_wfx.nAvgBytesPerSec). The next sample's start time is the end time of the previous sample, and so on...
Some time later, the capture filter senses that DirectShow is consuming samples too rapidly and tries to set a timestamp some time in the future, which the audio renderer completely ignores. As a test, I can suddenly tell a sample it must not be rendered until 20 seconds in the future (StreamTime() + UNITS), and again the renderer just ignores it. However, the Null Audio Renderer does what it is told, and the whole graph freezes for 20 seconds, which is the expected behaviour.
In a nutshell, then, I want the audio renderer to use my capture clock (or its own, or the graph's, I don't care), but I need it to obey the timestamps I'm sending it. What I need it to do is squish or stretch samples, ever so subtly, to make up for the difference between the DirectSound rate and the rate of the incoming stream (which I cannot control).
MSDN explains the technology here: Live Sources. I suppose you are aware of this documentation topic.
Rate matching takes place only when your source is live; otherwise the audio renderer does not need to bother, and it expects the source to keep the input queue pre-loaded with data so that data is consumed at the rate it is needed.
It seems that your filter is capturing in real time (it is a capture filter, and you mention you don't control the rate of the data you obtain externally). So you need to make sure your capture filter is recognized as a live source, and then choose the clock for playback and, overall, the mode of operation. I suppose you want the behavior described there under AM_PUSHSOURCECAPS_PRIVATE_CLOCK:
the source filter is using a private clock to generate time stamps. In this case, the audio renderer matches rates against the time stamps.
This is what you write about above:
you time stamp according to external source
playback is using audio device clock
audio renderer does rate matching to match the rates
To see how exactly rate matching takes place, you need to open audio renderer property pages, Advanced page:
The data under Slaving Info will show the rate-matching details (48000/48300 matching in my example). The data is also available programmatically via IAMAudioRendererStats::GetStatParam.
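To make the 48000/48300 figure concrete, here is a small sketch of the arithmetic behind it. The function names are mine; the point is just what such a ratio means for the renderer and for unmatched clock drift:

```python
def rate_matching_ratio(nominal_hz: int, matched_hz: int) -> float:
    """Ratio by which the renderer stretches/squishes consumption so the
    incoming stream's time stamps line up with the audio device's clock."""
    return matched_hz / nominal_hz

def drift_seconds_per_minute(nominal_hz: int, matched_hz: int) -> float:
    """Without rate matching, the two clocks would drift apart by this
    many seconds every minute of playback."""
    return 60.0 * (matched_hz - nominal_hz) / nominal_hz

# 48000/48300 from the property page: the renderer consumes samples about
# 0.625% faster than nominal, which would otherwise be ~0.375 s of drift
# per minute between the source clock and the sound card clock.
```

This is why the adjustment can stay "ever so subtle": fractions of a percent are enough to absorb typical clock disagreement between a capture source and a sound card.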