I am having issues with HLS playback using the media player library, at least since v0.3.0, and they persist in the current version (v0.5.0). I know the player library is in beta, so I am wondering if others see what I see.
Basically, the issue manifests itself like this: after some time, the Chromecast device becomes unresponsive. The debugger stops showing any output, and closing it and attempting to access it again results in a timeout error. Sometimes, after a while, the device just crashes to the home screen (no brainfreeze).
I tried looking at the profiles and timeline before this happens and I don't see any unusual spikes. I did notice some errors in the log (but they could be unrelated to this), saying something like:
An attempt was made to use an object that is not, or is no longer, usable
The only "unusual" thing I am doing is that I broadcast status on every video timeupdate event. This does not cause any such issues in normal playback though.
Hoping that this has already been fixed, here is a workaround for people having the same problem.
I have a receiver streaming HLS (correctly encoded, with CORS headers and AES encryption). I noticed that the Chromecast sometimes goes crazy with huge segments (>25 MB), driving it to crash (almost) randomly when appending such a segment.
Believing that I was probably asking too much of this small device, I found two ways to lower the device load:
Disabling AES encryption (not always acceptable)
Reducing the segment quality
For solution 2, this works well:
window.host = new cast.player.api.Host({'mediaElement': mediaElement, 'url': url});
window.protocol = cast.player.api.CreateHlsStreamingProtocol(host);

// Always return a quality level one step lower than the one the library asked for,
// unless it already picked the lowest one.
window.host.getQualityLevel = function (streamIndex, qualityLevel) {
    var lowestQuality = protocol.getStreamInfo()["bitrates"].length - 1;
    var plusOneQuality = (qualityLevel === lowestQuality) ? qualityLevel : qualityLevel + 1;
    console.log("original qualityLevel: " + qualityLevel, "returned qualityLevel:", plusOneQuality);
    return plusOneQuality;
};
I'd love some feedback on this. Has anyone else had to use such a trick to prevent HD HLS streaming from crashing the device?
I only began using icecast a few days ago, so if I stuffed something up somewhere, please let me know.
I have a weird problem with Icecast. Every time a track "finishes" on Icecast, a section of the end of the currently playing track (I think about 64 kB of it) is repeated two or three times before the next song plays, and the next song doesn't begin playing from the start, but a few seconds in. I also notice that the playback speed (and hence the pitch) sometimes differs from the original.
I consulted this post and this post (quoted below), which taught me what the <burst-on-connect> and <burst-size> tags are used for. It also taught me this:
What's happening here is that nothing is being added to the buffer, so clients connect, get the contents of that buffer, and then the stream ends. The client must be re-connecting repeatedly, and it keeps getting that same buffer.
Cheers to Brad for that post. A solution to this problem was provided in the comments section of that post: decrease the <source-timeout> of the Icecast server so that it closes the connection sooner and stops the repeating. But that assumes I want to close the mountpoint, and I don't, because I am actually using Icecast as a 24/7 radio player. If I did close my mountpoint, then VLC would just stop and not repeatedly attempt to reconnect. Unless this is wrong. I don't know.
I use VLC to listen to the Icecast streams, and I use nodeshout, which is a set of libshout bindings for Node.js. I use nodeshout to send data to a bunch of mounts on my Icecast server. In the future I plan to make a site that will listen to the Icecast streams, meaning it will replace VLC.
icecast.xml
<limits>
<clients>100</clients>
<sources>4</sources>
<queue-size>1008576</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>30</source-timeout>
<burst-on-connect>1</burst-on-connect>
<burst-size>252144</burst-size>
</limits>
This is a summary of the audio sending code on my node.js server.
nodejs
// These lines are a small part of a larger function that sets all the stream information.
// The variables name, description, etc. come from the function's arguments.
var nodeshout = require("nodeshout");
let shout = nodeshout.create();
shout.setHost('localhost');
shout.setPort(8000);
shout.setUser('source');
shout.setPassword(process.env.icecastPassword); //password in .env file
shout.setName(name);
shout.setDescription(description);
shout.setMount(mount);
shout.setGenre(genre);
shout.setFormat(1); // 0=ogg, 1=mp3
shout.setAudioInfo('bitrate', '128');
shout.setAudioInfo('samplerate', '44100');
shout.setAudioInfo('channels', '2');
return shout
// Meanwhile, somewhere lower in the file, this is a summary of how the audio is sent to the Icecast server.
var nodeshout = require("nodeshout")
var {FileReadStream, ShoutStream} = require("nodeshout") //here is where the FileReadStream and ShoutStream functions come from
const filecontent = new FileReadStream(pathToSong, 65536); // If I change 65536 to a higher value, more bytes are repeated at the end of the track. If I decrease it, playback starts sounding buggy and off.
var streamcontent = filecontent.pipe(new ShoutStream(shoutstream))
streamcontent.on('finish', () => {
next()
console.log("Track has finished on " + stream.name + ": " + chosenTrack)
})
I also notice weirder behaviour: only after the previous song has had its last chunk repeated a few times does the streamcontent.on('finish') event in the Node.js script fire, and only then am I told that the track is finished.
What I have tried
I tried messing around with the <source-timeout> tag, the number of bytes (or bits, I'm not sure) being sent from Node.js, and the burst size. I also tried turning bursting off completely, but that results in super strange behaviour.
I also thought creating a new stream for every song (the new ShoutStream(shoutstream) used when piping the file data) was a bad idea, but reusing the same stream meant the program threw an error, because it would try to write the next track to the shoutstream after that stream had already said it was closed.
If any more information is necessary to figure out what is going on, I can provide it. Thanks for your time.
Edit: I would like to add: do you think I should manually control how many bytes are sent to Icecast and reuse the same stream object, instead of creating a new one every time?
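To make the stream-reuse part of that question concrete, here is a rough sketch of keeping a single ShoutStream open for the whole mount, assuming Node's standard pipe(destination, { end: false }) option (which stops a source from closing the destination when it finishes) behaves normally with nodeshout's streams. myPlaylist and shout are placeholders for my actual playlist array and the configured nodeshout object:
var nodeshout = require("nodeshout");
var { FileReadStream, ShoutStream } = nodeshout;

// One ShoutStream per mount, created once and never recreated.
var shoutStream = new ShoutStream(shout);

function playNext(playlist, index) {
    var track = playlist[index % playlist.length];
    var file = new FileReadStream(track, 65536);
    // { end: false } keeps the ShoutStream open when this file finishes.
    file.pipe(shoutStream, { end: false });
    // Listen for the source's 'end' instead of the destination's 'finish',
    // since 'finish' no longer fires between tracks.
    file.on('end', function () {
        console.log("Track has finished: " + track);
        playNext(playlist, index + 1);
    });
}

playNext(myPlaylist, 0);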
I found out why the stream didn't play some tracks properly while it was fine with others.
How I got there
I could not switch to Ogg/Vorbis or Ogg/Opus for my stream, so I had to do something with my source client. I double-checked that everything was correct and that my audio files were at the correct bitrate. When I ran ffprobe audio.mp3, the reported bitrate sometimes did not adhere to the typical rates of 128 kbps, 192 kbps, 320 kbps and so on. It was always some strange value such as 129852, just to give an example.
I then downloaded the checkmate mp3 checker here and checked my audio files, and they were all encoded in a variable bitrate!!! VBR damnit!
TLDR
I fixed my problem by re-encoding all my tracks to a constant bitrate of 128kbps using ffmpeg.
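For anyone wanting to do the same from Node, something along these lines does the batch re-encode (a rough sketch only: the directory names are placeholders and ffmpeg has to be on the PATH):
var { execFileSync } = require("child_process");
var fs = require("fs");
var path = require("path");

var inputDir = "tracks-vbr";   // placeholder: folder with the original VBR files
var outputDir = "tracks-cbr";  // placeholder: folder for the re-encoded CBR files

fs.readdirSync(inputDir)
    .filter(function (f) { return f.endsWith(".mp3"); })
    .forEach(function (f) {
        // Re-encode to constant 128 kbps MP3.
        execFileSync("ffmpeg", [
            "-i", path.join(inputDir, f),
            "-codec:a", "libmp3lame",
            "-b:a", "128k",
            path.join(outputDir, f)
        ]);
    });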
Quick Edit: I am pretty sure that programs such as Darkice may already support sending variable-bitrate streams to Icecast servers, but it would be impractical for me to use Darkice, which is why I stuck with nodeshout.
I am working with an application that uses the OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like to have access to the audio that would be played during every tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = alcGetProcAddress(NULL,"alcLoopbackOpenDeviceSOFT");
replace your default device with this device
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = (LPALCRENDERSAMPLESSOFT) alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here the buffer will store 1024 sample frames. This code runs faster than real time, so you can render the samples for every tick of your loop.
Are you able to do the processing you need on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method of SourceDataLine, especially when saving to a file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of arrays that is managed. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)
I have a very interesting problem.
I am running a custom movie player, based on an NDK/C++/CMake toolchain, that opens a streaming URL (MP4, H.264 and stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and then follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and several devices (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then play only video instead of video+audio. This resulted in constant starvation and therefore buffering. The behaviour appears to have changed across Android versions (no hard data here). I do believe that I am running into decoder starvation. Previously, I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I changed the input side to 1000 and 10000, but it does not make much difference.
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22" and c++_static.
Can anyone share what timeouts they have used successfully, or anything else that would help avoid starvation and the resulting buffering?
This is solved for now. Starvation was not caused by the decoding side; rather, images were consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended way, but it always ran fast for the first 5-10 minutes after restarting the device (this device only had a Wi-Fi connection). Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
When engaging in a Hangout (or a custom WebRTC application) on a video and audio call, one caller will observe distortion in the affected party's audio. This distortion correlates directly with packetsSentPerSecond, as observed in chrome://webrtc-internals on the affected side.
I've observed this pattern on two separate machines, both Windows 7 (8 to 10), with Chrome 45.0.2454.93 m. The audio interface has varied, with both internal and USB interfaces tested. Once the problem occurs, it recurs more and more often, and the pattern also seems to reset: in the above figure, the valleys increase in frequency, then seemingly reset (~9:13), and the pattern repeats.
Wondering if anyone has seen a similar problem or has any thoughts on how this can be further diagnosed.
I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom half. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using GStreamer via a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by GStreamer. I tried creating two GStreamer instances to avoid the expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding and (perhaps) fewer extra context switches, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
However, the biggest problem you'll encounter with this scenario (with simultaneous video playback) is that the audio device might be configured with a large latency in PulseAudio (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra asking for a LOW_LATENCY flag, as it currently doesn't attempt to minimize the delay for sound events, as far as I know. That would be great to have.
GStreamer pulsesink could probably get low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...