Android Sender Seekbar status is inaccurate - google-cast

Once in a while, the seekbar values of ExpandedControllerActivity go out of sync after back-to-back playback. The max value is 1 and the current progress is 1, which causes the thumb to sit at the end regardless of the actual playback duration. The receiver has the correct duration values and sends them correctly (verified with Chrome inspect).
Are there any recommendations for debugging these issues on the client end, given that the ExpandedControllerActivity source code is obfuscated?

A similar issue is tracked at https://issuetracker.google.com/issues/120069343.
In our case, this was resolved by having the receiver set the stream duration explicitly.

Related

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (mp4, H.264 & stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time except when we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and, in turn, buffering. The behaviour appears to have changed across Android versions (no firm data here). I do believe that I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer, which I changed on the input side to 1000 and then 10000, but it does not make much difference. A simplified sketch of the dequeue loop is shown below.
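For illustration only, here is a minimal sketch of the synchronous MediaCodec dequeue loop with the timeouts mentioned above; extractor and codec setup are omitted, and the function name and constants are placeholders rather than the actual player code:

#include <cstdint>
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

// Placeholder timeout values: the input-side timeout was originally 0 and
// was changed to 1000 and then 10000 microseconds in the tests above.
constexpr int64_t kInputTimeoutUs = 1000;
constexpr int64_t kOutputTimeoutUs = 0;

void decodeLoop(AMediaCodec* codec, AMediaExtractor* extractor) {
    bool inputDone = false;
    bool outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, kInputTimeoutUs);
            if (inIdx >= 0) {
                size_t capacity = 0;
                uint8_t* buf = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
                ssize_t sampleSize = AMediaExtractor_readSampleData(extractor, buf, capacity);
                if (sampleSize < 0) {
                    AMediaCodec_queueInputBuffer(codec, inIdx, 0, 0, 0,
                                                 AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    int64_t ptsUs = AMediaExtractor_getSampleTime(extractor);
                    AMediaCodec_queueInputBuffer(codec, inIdx, 0, sampleSize, ptsUs, 0);
                    AMediaExtractor_advance(extractor);
                }
            }
        }
        AMediaCodecBufferInfo info;
        ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, kOutputTimeoutUs);
        if (outIdx >= 0) {
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
                outputDone = true;
            }
            // Frame pacing against a clock happens before this call in the real player.
            AMediaCodec_releaseOutputBuffer(codec, outIdx, true /* render */);
        }
    }
}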
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON" and -DANDROID_NATIVE_API_LEVEL="android-22", and uses c++_static.
Can anyone share what timeouts they have used with success, or anything else that would help avoid starvation and the resulting buffering?
This is solved for now. The starvation was not caused by decoding; images were being consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended way, but it always ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
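For anyone hitting something similar, below is a minimal sketch of the kind of pacing logic involved, with the clock id made explicit; the helper names are placeholders and not the actual player code:

#include <cstdint>
#include <time.h>

// Returns the current time of the given clock in microseconds.
static int64_t nowUs(clockid_t clockId) {
    timespec ts{};
    clock_gettime(clockId, &ts);
    return static_cast<int64_t>(ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;
}

// Hypothetical pacing helper: block until a frame's presentation time
// (relative to the start of playback) is due. Using CLOCK_MONOTONIC here
// caused the early-consumption problem described above on freshly booted
// devices; CLOCK_REALTIME behaved correctly.
static void waitForPresentation(int64_t framePtsUs, int64_t playbackStartUs,
                                clockid_t clockId = CLOCK_REALTIME) {
    const int64_t dueUs = playbackStartUs + framePtsUs;
    while (nowUs(clockId) < dueUs) {
        timespec nap{0, 500 * 1000};  // sleep 0.5 ms between checks
        nanosleep(&nap, nullptr);
    }
}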

AVPlayerItem's canUseNetworkResourcesForLiveStreamingWhilePaused doesn't have any effect

Since iOS 9 / tvOS 9, AVPlayerItem instances have the new Boolean property canUseNetworkResourcesForLiveStreamingWhilePaused.
The documentation states:
"Indicates whether the player item can use network resources to keep playback state up to date while paused
For live streaming content, the player item may need to use extra networking and power resources to keep playback state up to date when paused. For example, when this property is set to YES, the seekableTimeRanges property will be periodically updated to reflect the current state of the live stream."
I have been testing this property on Apple TV and observed seekableTimeRanges while the live video stream is paused, but it doesn't appear to matter whether I set it to YES or NO -- the time range contained in seekableTimeRanges updates periodically in both cases.
Are you getting similar results?

Mediafilesegmenter inserts timed metadata ID3 tags in HLS stream but at the wrong point in time

I am inserting timed metadata in an HLS (HTTP Live Stream) using id3taggenerator and mediafilesegmenter. I have followed the instructions from Jake's Blog.
First, I create the id3tag using id3taggenerator:
id3taggenerator -o text.id3 -t "video"
Then add the tag to the id3macro file:
0 id3 /path/to/file/text.id3
And segment the video and insert the id3 tags with mediafilesegmenter:
mediafilesegmenter -M /path/to/id3macro -I -B "my_video" video.mp4
However, the timed metadata is inserted at the wrong point in time. Instead of showing up at the beginning of the video (point in time 0), it is added with a delay of 10 s (give or take 0.05 seconds, sometimes more, sometimes less).
I've written a simple iOS player app that logs whenever it is notified of an ID3 tag in the video. The app is notified of the ID3 tag after playing the video for around 10 seconds. I've also tried another id3macro file, with multiple timed metadata entries inserted in the video (around 0s, 5s, 7s), all showing up with the same approximate delay. I have also changed the segment duration to 5s, but each time it's the same result.
The mediafilesegmenter I am using is Beta Version 1.1(140602).
Can anyone else confirm this problem, or pinpoint what I am doing wrong here?
Cheers!
I can confirm that I experience the same issue, using the same version of mediafilesegmenter:
mediafilesegmenter: Beta Version 1.1(140602)
Moreover, I can see that the packet with the ID3 tag is inserted at the right moment in the stream. E.g. if I specify a 10 second delay, I can see that my ID3 tag is inserted at the end of the first 10 second segment.
However, it appears 10 seconds later in iOS notifications.
I can see the following possible reasons:
mediafilesegmenter inserts the metadata packet in the right place, but the timestamp is delayed by 10 seconds for some reason. Therefore, clients (e.g. the iOS player) show the tag 10 seconds later. Apple's tools are not well documented, so this is hard to verify.
Maybe the iOS player receives the metadata in time (because I know the tag was included in the previous segment file) but issues the notification with a 10 second delay, for whatever reason.
I cannot dig further because I don't have any Flash/desktop HLS players that support in-stream ID3 tags. If I had one, I would check whether a desktop player displays/processes the ID3 tag in time, without delay. If so, it would mean the problem is iOS, not mediafilesegmenter.
Another useful thing to do would be extracting the MPEG-TS frame with the ID3 tag from the segment file and checking its headers, looking for anything strange there (e.g. a wrong timestamp).
Update:
I did some more research including reverse engineering of TS segments created with Apple tools, and it seems:
mediafilesegmenter starts PTS (presentation time stamps) from 10 seconds while, for example, ffmpeg starts from 0.
mediafilesegmenter adds the ID3 frame at the correct place in the TS file, but with a wrong PTS that is 10 seconds ahead of what was specified in the meta file.
While the first issue doesn't seem to affect playback (as far as I understand, it's more important that the PTS runs continuously than where it starts), the second is definitely an issue and the reason you/we are experiencing the problem.
Therefore, the iOS player receives the ID3 frame in time, but since its PTS is 10 seconds ahead, it waits 10 seconds before issuing the notification. As far as I can tell, some other players simply ignore this ID3 frame because it's in the wrong place.
As a workaround, you can shift all ID3 entries by 10 seconds in your meta file, but obviously you won't be able to put anything at the very beginning.
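For example, reusing the macro file format from the question, a tag meant to appear 15 seconds into playback would be written 10 seconds earlier:
5 id3 /path/to/file/text.id3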

uuid_record not recording audio on second record command

I have a setup where I open a connection to FreeSWITCH through the ESL and start exchanging commands.
In one specific scenario I want FreeSWITCH to call me and record a message, so I call a phone number with sofia and park the call:
originate {set some private variables and origination_caller_id_number}sofia/gateway// &park()
During the call I play a few messages:
uuid_broadcast playback::
And I listen to events, waiting specifically for DTMF tones so I can take action: play another message or start recording.
To stop a playback and start recording:
uuid_break uuid_record start
I also play back the recorded file to the user using the same playback command.
Now the issue: the first time a message is recorded it works fine and I can listen to it. But after I record a new message on the same call, nothing is recorded in the file. I can download the file and listen to it directly, and there is still no sound. I can see that the file is created and its size is consistent with the length recorded, but even looking at it in Audacity there is no audio in it.
What could be causing this, and does anyone have an idea how to fix it?
Thanks for the help!
This looks like a bug, probably worth submitting to Jira. Do you specify a different file name for the new recording?
I have reproduced this issue as well. I believe this is a bug in FreeSWITCH. I am not aware of any workaround.
If you want to hear audio in the second recording, you must make it LONGER than the first recording. I believe the reason is that the audio buffer from the first recording is still associated with the recording session and is full of silence. That stale buffer is saved at the beginning of the second recording, so if you want to hear anything from the second recording, make it longer than the first.

How can I determine the length of time since the last screen refresh on X11?

I'm trying to debug a laggy machine vision camera by writing text timestamps to a terminal window and then observing how long it takes for the camera to 'detect' the screen change. My monitor has a 60 Hz refresh rate, so the screen is updated every ~17 ms. Is there a way for an X11 application to determine at what point within that 17 ms window the refresh timer currently is?
EDIT: After wrestling with the problem for nearly a day, I think the real question I should have asked was how to generate a visual signal that was sufficiently fast to test the camera images against. My working hypothesis was that the camera was buffering frames before transmitting them, as the video stream seemed to lag behind other synchronised digital events (in this case, output signals to a robotic controller).
'xrefresh' is a tool which can trigger a refresh event on an X server. It does this by painting a global window of a specified color and then removing it, causing all the windows underneath to repaint. Even with this, I was still getting very inconsistent results when trying to correlate the captured frames against the monitor output; no matter what I tried, the video stream seemed to lag behind what I expected the monitor state to be. This could mean that either the camera was slow to capture or the monitor was slow to update. Fortunately, I eventually hit upon the idea of using the keyboard LEDs to verify the synchronicity of the camera frames ('xset led' and 'xset -led'). This showed me immediately that my computer monitor was in fact slow to update, rather than the camera lagging behind.
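For reference, the same LED toggling can be driven programmatically; here is a minimal sketch using Xlib (compile with -lX11) that toggles all keyboard LEDs and logs a wall-clock timestamp for each change, so captured frames can be matched against the log. The iteration count and delay are arbitrary:

#include <X11/Xlib.h>
#include <cstdio>
#include <time.h>
#include <unistd.h>

int main() {
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    XKeyboardControl ctrl{};
    for (int i = 0; i < 20; ++i) {
        // Equivalent of running 'xset led' / 'xset -led' from a shell.
        ctrl.led_mode = (i % 2 == 0) ? LedModeOn : LedModeOff;
        XChangeKeyboardControl(dpy, KBLedMode, &ctrl);
        XFlush(dpy);

        timespec ts{};
        clock_gettime(CLOCK_REALTIME, &ts);
        std::printf("LEDs %s at %ld.%09ld\n",
                    ctrl.led_mode == LedModeOn ? "on" : "off",
                    static_cast<long>(ts.tv_sec), ts.tv_nsec);

        usleep(500 * 1000);  // half a second between toggles
    }

    XCloseDisplay(dpy);
    return 0;
}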
