Google Cast Video Player becomes unresponsive after network error

I am working on a Chromecast custom receiver app, built on top of the sample app provided by Google (sampleplayer.CastPlayer).
The app manages a playlist, and I would like the player to move on to the next item in the list after a video fails to play for whatever reason.
I am running into a situation where, after a video fails to load because of a network error, the player becomes unresponsive: in the onError_() handler, my custom code does the following
var queueLoadRequest = ...
var mediaManager = ...
setTimeout(function(){ mediaManager.queueLoad(queueLoadRequest); }, 5000);
...the player does receive the LOAD event according to the receiver logs, but nothing happens on the screen; the player's status remains IDLE and mediaManager.getMediaQueue().getItems() remains undefined. Same result when I use the client controller to load a different video.
I have tried to recover with mediaManager.resetMediaElement() and player.reset() in the onError_ handler, but no luck.
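For context, here is a minimal sketch of how that recovery attempt is wired together; this is not the sample app's actual code, and it assumes mediaManager, player and queueLoadRequest are already in scope:

function onError_(event) {
  // clear the failed media element state before retrying
  mediaManager.resetMediaElement();
  player.reset();
  // retry after a delay by loading the prepared queue request
  setTimeout(function() {
    mediaManager.queueLoad(queueLoadRequest);
  }, 5000);
}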
For reference, here is a screenshot of the logs (filtered for errors only) leading up to the player becoming unresponsive. Note that I am not interested in fixing the original error, what I need to figure out is how to recover from it:
My custom code is most likely responsible for the issue; however, after spending many hours stripping the custom code down to a bare minimum in an effort to isolate the responsible bit of code, I have not made any progress. I am not looking for a fix but rather for some guidance in troubleshooting the root cause: what could possibly cause the player to become unresponsive? Or, alternatively, how can one recover from an unresponsive player?

Related

How to simply set the default alarm tone to a custom tone

My app currently uses the default alarm ringtone for certain events, which is realized using the following two lines of code in onCreate():
Uri notification = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_ALARM);
ringtone = RingtoneManager.getRingtone(getApplicationContext(), notification);
At the start of the activity class, I have defined ringtone:
Ringtone ringtone;
At the places in the code where the alarm should actually sound, I use ringtone.play() and ringtone.stop(). Works fine.
Now I would like to replace the default alarm tone with a custom tone (alarmsound.mp3). To that purpose, I've placed the custom tone in the resources of the app (res/raw/alarmsound.mp3).
How do I change my code to play the custom tone? I've checked this question for duplicates, but the answers (and even the questions) seem incredibly complicated and long (imagine if accessing custom-made drawables required that much code). Is there an easy way to do this, for example by modifying or adding to the above code (and without having to ask the user for any extra permissions)?
I already tried
Uri notification = RingtoneManager.getActualDefaultRingtoneUri(MainActivity.this, R.raw.alarmsound);
ringtone = RingtoneManager.getRingtone(getApplicationContext(), notification);
but that gave a NullPointerException in ringtone.play(). I also tried:
RingtoneManager.setActualDefaultRingtoneUri(baseContext, RingtoneManager.TYPE_RINGTONE,
        Uri.parse("android.resource://com.apppackage/" + R.raw.alarmsound));
RingtoneManager.getRingtone(baseContext,
        Uri.parse("android.resource://com.apppackage/" + R.raw.alarmsound));

Does v3 Google Cast receiver parse alternative audio tracks from an hls master playlist automatically or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and the receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio track.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
      const audioTracksManager = playerManager.getAudioTracksManager();
      console.log(audioTracksManager.getTracks());
      console.log('Load request: ', loadRequestData);
      return loadRequestData;
    });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks). However, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end first. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
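For completeness, a minimal sketch of the working placement described above; the setActiveById() call is only an illustration of selecting a track programmatically, not code from the original receiver:

const audioTracksManager = playerManager.getAudioTracksManager();
playerManager.addEventListener(
    cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
      // tracks are available here, unlike in the LOAD interceptor
      const tracks = audioTracksManager.getTracks();
      console.log(tracks);
      // e.g. switch to the second audio track ("Audiodeskription") if present
      if (tracks.length > 1) {
        audioTracksManager.setActiveById(tracks[1].trackId);
      }
    });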

AUDIO_OUTPUT_FLAG_FAST denied by client OR E/MediaPlayer: Error (1,-19)

I am creating a game using Libgdx. I have a lot of small sound files in the MP3 format, and since there are so many I do not preload them. I only load the sound I want to play when it is to be used, like this:
actorSound = Gdx.audio.newSound(Gdx.files.internal(sound));
The code above works great, but the rest of my sounds do not unfortunately. The actor above has its own class and plays a different sound every time it is touched.
When I try to play sounds in my Gamescreen I get the following error:
AUDIO_OUTPUT_FLAG_FAST denied by client
All the audiofiles have the same properties and have been recorded using the same microphone & Audacity. The files are all 44100Hz and only a few kb in size each.
I wonder why the sounds the Actor plays work and the other sounds do not?
I decided to try to change the non-working sounds to music instead and now they play fine - for a little while that is. I can play a full game, return to the menu and start a new game again. The second time I start a game I only get 3 sounds from the Gamescreen and then it is silent except for the sounds from the actor. The error that appears looks like this:
E/MediaPlayer: Error (1,-19)
I load the Music just the same way as the Sound:
gameSound = Gdx.audio.newMusic(Gdx.files.internal(soundeffect));
I have looked into the two errors by reading posts like these:
AUDIO_OUTPUT_FLAG_FAST denied by client
Mediaplayer error (-19,0) after repeated plays
But I'm not sure what to do or change to solve my problem. I would prefer Sound if that is possible, but Music is an acceptable workaround...
When it comes to Music, the cause is probably what the linked post suggests: I do not release the media players. I am not sure how to do that, though.
When I leave the GameScreen for another screen, like the MenuScreen or RewardScreen, I dispose Music first. The other screens use Music as well and the sounds are loaded when needed. When I change back to the GameScreen I dispose again and then start a new game...
Any ideas or suggestions? Any help is greatly appreciated.
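On the releasing question: a minimal sketch of what disposing the Music when leaving a screen can look like, assuming gameSound is a field of the Screen; this is an illustration, not code from the game:

@Override
public void hide() {
    if (gameSound != null) {
        gameSound.stop();
        gameSound.dispose();   // releases the underlying Android MediaPlayer
        gameSound = null;      // avoid reusing a disposed instance by accident
    }
}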
I added AssetManager as suggested, and I now have a loading screen that loads all the sounds. I load them as sounds and not music, which is how I prefer it.
The sounds work well in the actual game but once I get to the reward screen, only the first sound plays and after that the app crashes with the following error:
06-02 07:47:09.774 6208-6282 E/AndroidRuntime: FATAL EXCEPTION: GLThread 2935
Process: PID: 6208
java.lang.NullPointerException: Attempt to read from field 'com.badlogic.gdx.assets.AssetManager
Assets.Assets.manager' on a null object reference
at DelayedSounds(RewardsScreen.java:538)
at RewardsScreen.Update(RewardsScreen.java:567)
at Screens.RewardsScreen.render(RewardsScreen.java:577)
at Game.render(Game.java:46)
at com.badlogic.gdx.backends.android.AndroidGraphics.onDrawFrame(AndroidGraphics.java:459)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1562)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1262)
06-02 07:47:13.823 6208-6208 E/AndroidGraphics: waiting for pause
synchronization took too long; assuming deadlock and killing
The DelayedSounds method looks like this:
public void DelayedSounds(){
    timer = timer + Gdx.graphics.getDeltaTime();
    if(playitem == true && timer > 2){
        itemsound = "vinster/" + item + ".mp3";
        assets.manager.get(itemsound, Sound.class).play(volume);
        //sound = Gdx.audio.newMusic(Gdx.files.internal(itemsound));
        //sound.setVolume(volume);
        //sound.play();
        playitem = false;
    }
    if(playkeep == true && time > 3){
        assets.manager.get("dialog/VINSTEN.mp3", Sound.class).play(volume);
        //sound = Gdx.audio.newMusic(Gdx.files.internal("prizes/prize.mp3"));
        //sound.setVolume(volume);
        //sound.play();
        playkeep = false;
    }
}
As can be seen, I use AssetManager now and have commented out my old code for testing purposes. I define the AssetManager in my main class and then pass it around to all other classes that use sounds.
The RewardsScreen will only play the first sound and then it crashes on a null object reference, as it seems.
If I change my code back to using Music in the RewardsScreen, it works fine (see the code that is commented out).
The sound I try to play is exactly the same in both cases. The Assets class that handles the loading of all my assets includes the sounds, and since I still get a null object I assume one or more items fail to load?
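Since the stack trace points at the assets field itself being null (rather than at a particular sound failing to load), it is worth double-checking that the RewardsScreen actually receives the shared instance. A hypothetical sketch of that hand-off; the class and field names are assumptions:

public class RewardsScreen implements Screen {
    private final Assets assets;   // the shared instance created in the main class

    public RewardsScreen(Assets assets) {
        this.assets = assets;      // if this is never assigned, assets.manager throws the NPE above
    }
    // ...render(), Update(), DelayedSounds(), etc.
}

// in the main class: setScreen(new RewardsScreen(assets));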
I searched the logs and found this where it loads the sounds:
06-02 08:05:09.360 9844-9888/E/WVMExtractor: Failed to open libwvm.so: dlopen failed: library "libwvm.so" not found
06-02 08:05:09.395 9844-9888/ E/NdkMediaExtractor: sf error code: -1010
06-02 08:05:09.395 9844-9888/ E/SoundPool: Unable to load sample
Maybe this is related to my problem?
I load all the sounds in the same manner and they most definitely seem to work; the loading is the regular:
manager.load("prizes/cash.mp3", Sound.class);
I still find the AUDIO_OUTPUT_FLAG_FAST denied by client in my logs but now the sounds are playing instead of being rejected.
Any more ideas about how to solve this?
The link you posted has enough information about your bug.
It is not a good idea to create a resource instance each time in the game. Create it once and use it all over your game; if possible, use AssetManager.
For example, create the Music instance in the create() method of your game.
gameSound = Gdx.audio.newMusic(Gdx.files.internal(soundeffect));
Music has play(), resume() and many other helpful methods.
You can also take a look at this; it may be helpful.
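To make the suggestion concrete, here is a minimal sketch of the AssetManager flow; the paths come from the question, and the blocking finishLoading() call is just the simplest variant (a loading screen would poll manager.update() each frame instead):

AssetManager manager = new AssetManager();

// queue everything once, e.g. in create() or behind a loading screen
manager.load("prizes/cash.mp3", Sound.class);
manager.load("dialog/VINSTEN.mp3", Sound.class);
manager.finishLoading();   // blocks until everything is loaded

// later, wherever a sound is needed
float volume = 1f;
manager.get("prizes/cash.mp3", Sound.class).play(volume);

// when the game shuts down
manager.dispose();         // releases every loaded sound in one call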

libspotify: logging out or releasing session causes crash

This is in response to dan's (dan^spotify on IRC) offer to take a look at my testcase, but I post it here in case anyone has encountered similar issues.
I'm experiencing a problem with libspotify where the application crashes (memory access violation) in both of these two scenarios:
the first sp_session_process_events() call (triggered by the notify_main_thread callback) after sp_session_logout() has been called crashes the application
skipping logout and calling sp_session_release() crashes the application
I've applied sufficient synchronization from the session callbacks, and I'm otherwise operating on a single thread.
I've made a small testcase that does the following:
Creates session
Logs in
Waits 10 seconds
Attempts to logout, upon which it crashes (when calling sp_session_process_events())
If it were successful in logging out (which it isn't), it would call sp_session_release()
I made a Gist for the testcase. It can be found here: https://gist.github.com/4496396
The test case is made using Qt (which is what I'm using for my project), so you'd need Qt 5 to compile it. I've also only written it with Windows and Linux in mind (I don't have a Mac). Assuming you have Qt 5 and Qt Creator installed, the instructions are as follows:
Download the gist
Copy the libspotify folder into the same folder as the .pro file
Copy your appkey.c file into the same folder
Edit main.cpp to login with your username and password
Edit lines 38-39 in sessiontest.cpp and set the cache and settings paths to your liking
Open up the .pro file and run from Qt Creator
I'd be very grateful if someone could tell me what I'm doing wrong, as I've spent so many hours trying anything I could think of or just staring at it, and I fear I've gone blind to my own mistakes by now.
I've tested it on both Windows 7 and Linux Ubuntu 12.10, and I've found some difference in behavior:
On Windows, the testcase crashes invariably regardless of settings and cache paths.
On Linux, if I set the settings and cache paths to "" (empty string), logging out and releasing the session works fine.
On Linux, if the paths are anything else, the first run (when the folder does not already exist) logs out and releases the session as it should, but on the next run (when the folder already exists), it crashes in the exact same way as it does on Windows.
Also, I can report that sp_session_flush_caches() does not cause a crash.
EDIT: Also, hugo___ on IRC was kind enough to test it on OSX for me. He reported no crashes despite running the application several times in a row.
While you very well may be looking at a bug in libspotify, I'd like to point out a possibly redundant call to sp_session_process_events(), from what I gathered from looking at your code.
void SessionTest::processSpotifyEvents()
{
    if (m_session == 0)
    {
        qDebug() << "Process: No session.";
        return;
    }
    int interval = 0;
    sp_session_process_events(m_session, &interval);
    qDebug() << interval;
    m_timerId = startTimer(interval);
}
It seems this code will pick up the interval value and start a timer on it to trigger a subsequent call to event(). However, this code will also call startTimer when interval is 0, which is strictly not necessary, or rather means that the app can go about doing other stuff until it gets a notify_main_thread callback. The docs on startTimer say: "If interval is 0, then the timer event occurs once every time there are no more window system events to process." I'm not sure what that means exactly, but it seems it can produce at least one redundant call to sp_session_process_events().
http://qt-project.org/doc/qt-4.8/qobject.html#startTimer
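A hypothetical variant of the slot above that only re-arms the timer when libspotify actually asked for a delayed call; the m_timerId bookkeeping is an assumption, not taken from the gist:

void SessionTest::processSpotifyEvents()
{
    if (m_session == 0)
        return;

    int interval = 0;
    sp_session_process_events(m_session, &interval);

    // interval == 0 means "nothing scheduled; wait for the next notify_main_thread
    // callback", so only start a timer when a delayed call was requested
    if (interval > 0)
        m_timerId = startTimer(interval);
}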
I notice that you will get a crash on sp_session_release if you have a track playing when you call it.
I have been chasing this issue today. Login/logout works just fine on Mac, but the issue was 100% repeatable as you described on Windows.
After I registered empty callbacks for offline_status_updated and credentials_blob_updated, the crash went away. That was a pretty unsatisfying fix, and I wonder if any libspotify developers want to comment on it.
Callbacks registered in my app are:
logged_in
logged_out
notify_main_thread
log_message
offline_status_updated
credentials_blob_updated
I should explicitly point out that I did not try this on the code you supplied. It would be interesting to know if adding those two extra callbacks works for you. Note that the functions I supply do absolutely nothing. They just have to be there and be registered when you create the session.
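A minimal sketch of what registering those two extra no-op callbacks can look like; the callback names come from the libspotify headers, while the helper function is purely illustrative:

#include <libspotify/api.h>

// no-op callbacks; registering them is the whole point
static void offline_status_updated(sp_session *session) { }
static void credentials_blob_updated(sp_session *session, const char *blob) { }

void addWorkaroundCallbacks(sp_session_callbacks &callbacks)
{
    // logged_in, logged_out, notify_main_thread and log_message stay as before;
    // just add the two extra entries before sp_session_create() is called
    callbacks.offline_status_updated = &offline_status_updated;
    callbacks.credentials_blob_updated = &credentials_blob_updated;
}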
Adding the following call in your "logged in" libspotify callback seems to fix this crash as detailed in this SO post:
sp_session_playlistcontainer(session);

AudioQueueStart fail -12985

I made a streaming music player, and it works fine in the foreground.
But in the background on iOS 4 it doesn't play the next song automatically (remote control works).
The reason is that AudioQueueStart returns -12985.
I already checked the audio session; it is fine. I use AudioQueueStart when starting to play the music.
How can I get rid of the AudioQueueStart failure?
- (void)play
{
    // calculate the size to use for each audio queue buffer, and calculate the
    // number of packets to read into each buffer
    [self setupAudioQueueBuffers];
    OSStatus status = AudioQueueStart(self.queueObject, NULL);
}
I read answers on the web about the AudioQueueStart failure.
One thing to check is that the AudioSession is active first.
In my case, I had previously set the session to inactive between song changes, before starting a new song: AudioSessionSetActive(false);
Once I removed this, AudioQueueStart works just fine from the background.
In my experience, the -12985 message occurs because another app already has an audio session active when you try to start playback in your app. Options are to 1) instruct the user to close the other app, or 2) set mix mode (see kAudioSessionProperty_OverrideCategoryMixWithOthers).
The disadvantage of mix mode is if you depend on lock screen art or remote controls, they won't work with mix mode set.
I also faced this problem a week ago. I spent two days finding a solution, and I found it. Maybe this link will help (it is an official answer): http://developer.apple.com/library/ios/#qa/qa1668/_index.html
Make sure that you activate the session from the applicationDidEnterBackground task handler. Now my application can play sound in the background.
See this.
You probably need to include the following:
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
Towards the bottom there is a reiteration of how important that line is. As it is not mentioned in any of the three main audio guides (AVFoundation, AudioSession, or AudioQueue), it can easily be missed.
I have the same problem.
I register an AudioSessionInterruptionListener, pause the audio when a phone call comes in, and resume it after the call ends, but I get the -12985 error code when calling AudioQueueStart to resume.
My solution is to call AudioQueueStart again after 0.02 s.
I don't know the reason.
On iOS 7, AudioQueueStart was returning '!int' ('tni!'), though I'm sure no one would be surprised to find that it's not documented in the docs or headers. It was the same issue, though, and the same fix (setting the audio session to active in the background task handler) worked for me.

Resources