What is the correct way to recover from the onStalled method on a Chromecast player when it is playing a DRM stream?
I can call the 'load' method if there is a currentTime in the mediaElement, but what about the DRM license?
I get:
Uncaught InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable.
This happens on line 105 (media_player.js) when I call:
this.mediaElement.load(); // this is called in onStalled method
Thanks in advance
I am trying to fetch recordings from a Vivotek camera using the ONVIF interface. I tried the exportRecordedData function, following the documentation at http://www.onvif.org/ver10/recording.wsdl, but there was no result.
I am getting this error:
2022-06-10T04:39:59.728Z error: exportRecordedData(): Error: Error: ONVIF SOAP Fault: Operation Action Not Implemented. The requested action operation is not implemented by the device.
I think ExportRecordedData() implementations are rare (I don't have them on TVT and Uniview, for example).
The other option is to use the RTSP stream returned by GetRecordings(). For TVT there is also an alternative URL: rtsp://IP:port/chID=1&date=2020-03-09&time=17:00:00&timelen=100&StreamType=main. Both TVT and Uniview honour Range in RTSP PLAY when using the URL from GetRecordings(), but both also have some problems: TVT plays all recordings as one stream (at least I cannot distinguish them at the moment), and Uniview plays only the first recording from the specified range.
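For reference, a rough sketch of such an RTSP PLAY request with a Range header (the URL path, CSeq, and Session values are illustrative; the clock range format follows RFC 2326):
PLAY rtsp://IP:port/recording RTSP/1.0
CSeq: 4
Session: 12345678
Range: clock=20200309T170000Z-20200309T170140Z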
I would like my Java Card applet to emulate our legacy non-Java Card (native OS) in our organization. The following is the targeted behaviour of the applet:
Select the applet (A4) and return 0x61XX.
Use GET RESPONSE (C0) to read the response.
The protocol is T=1.
My sample Java Card is from NXP and is compatible with JCRE 2.2.2. In my code:
// dataLen is 10 bytes
if (selectingApplet()) {
    apdu.setOutgoing();
    apdu.setOutgoingLength((short) dataLen);
    apdu.sendBytesLong(data, (short) 0, (short) dataLen);
    ISOException.throwIt((short) (ISO7816.SW_BYTES_REMAINING_00 + dataLen));
}
I loaded my applet into the test card. The following is the result:
Select applet
Result: 0x610A
GET RESPONSE
Result: 0x6982
What could be wrong here? What is the proper way of achieving this, if it is even possible with Java Card?
I don't think this is possible. The differences between T=0 and T=1 are handled by the Java Card framework, and GET RESPONSE is specific to T=0.
That means the 61XX would be generated automatically when using T=0. And of course the response to SELECT should be returned automatically, unless the applet throws an exception that generates a status word, in which case the response data is likely disregarded.
Similarly, I would expect the framework to catch the GET RESPONSE early, before you can do anything with it. The only thing you could try is to handle the GET RESPONSE yourself and hope the OS passes the APDU along, as sketched below.
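A minimal sketch of that experiment, inside the applet's process() override and reusing the data and dataLen fields from the question (whether the OS delivers the C0 APDU to the applet at all is exactly the uncertainty here):
public void process(APDU apdu) {
    if (selectingApplet()) {
        // Mimic the legacy card: signal dataLen waiting response bytes.
        ISOException.throwIt((short) (ISO7816.SW_BYTES_REMAINING_00 + dataLen));
    }
    byte[] buffer = apdu.getBuffer();
    if (buffer[ISO7816.OFFSET_INS] == (byte) 0xC0) { // GET RESPONSE
        apdu.setOutgoing();
        apdu.setOutgoingLength((short) dataLen);
        apdu.sendBytesLong(data, (short) 0, (short) dataLen);
        return;
    }
    ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
}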
But I think the best way of doing this is to configure the chip to use T=0. Then ISO case 4 commands (with both command and response data) should automatically use GET RESPONSE.
I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES,AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO,AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and the receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected one video and two audio tracks); however, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
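For reference, a minimal sketch of that event listener (which track ID to pass to setActiveById() is illustrative; pick the one you actually want from getTracks()):
playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const audioTracksManager = playerManager.getAudioTracksManager();
    // The tracks parsed from the HLS playlist are available at this point.
    const audioTracks = audioTracksManager.getTracks();
    console.log(audioTracks);
    // Activate a track by its ID, e.g. the second audio track.
    audioTracksManager.setActiveById(audioTracks[1].trackId);
  });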
I am creating a game using libGDX. I have a lot of small sound files in MP3 format, and since there are so many I do not preload them. I only load a sound when it is about to be used, like this:
actorSound = Gdx.audio.newSound(Gdx.files.internal(sound));
The code above works great, but unfortunately the rest of my sounds do not. The actor above has its own class and plays a different sound every time it is touched.
When I try to play sounds in my GameScreen I get the following error:
AUDIO_OUTPUT_FLAG_FAST denied by client
All the audio files have the same properties and have been recorded using the same microphone and Audacity. The files are all 44100 Hz and only a few kB in size each.
I wonder why the sounds the actor plays work and the other sounds do not.
I decided to try changing the non-working sounds to Music instead, and now they play fine, for a little while at least. I can play a full game, return to the menu and start a new game again. The second time I start a game I only get 3 sounds from the GameScreen, and then it is silent except for the sounds from the actor. The error that appears looks like this:
E/MediaPlayer: Error (1,-19)
I load the Music just the same way as the Sound:
gameSound = Gdx.audio.newMusic(Gdx.files.internal(soundeffect));
I have looked into the two errors by reading posts like these:
AUDIO_OUTPUT_FLAG_FAST denied by client
Mediaplayer error (-19,0) after repeated plays
But I'm not sure what to do or change to solve my problem. I would prefer Sound if that is possible, but Music is an acceptable workaround...
When it comes to Music, it is probably as suggested in the linked post: I do not release the media players. I am not sure how to do that, though.
When I leave the GameScreen for another screen, like the MenuScreen or RewardScreen, I dispose Music first. The other screens use Music as well and the sounds are loaded when needed. When I change back to the GameScreen I dispose again and then start a new game...
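For reference, releasing a Music instance in libGDX means calling dispose() on it, which frees the underlying Android MediaPlayer; a minimal sketch, using the gameSound field from above:
if (gameSound != null) {
    gameSound.stop();
    gameSound.dispose(); // releases the underlying MediaPlayer
    gameSound = null;    // guard against using or disposing it twice
}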
Any ideas or suggestions? Any help is greatly appreciated.
I added AssetManager as suggested, and I now have a loading screen that loads all the sounds. I load them as Sounds and not Music, which is how I prefer it.
The sounds work well in the actual game, but once I get to the reward screen, only the first sound plays and after that the app crashes with the following error:
06-02 07:47:09.774 6208-6282 E/AndroidRuntime: FATAL EXCEPTION: GLThread 2935
Process: PID: 6208
java.lang.NullPointerException: Attempt to read from field 'com.badlogic.gdx.assets.AssetManager Assets.Assets.manager' on a null object reference
at DelayedSounds(RewardsScreen.java:538)
at RewardsScreen.Update(RewardsScreen.java:567)
at Screens.RewardsScreen.render(RewardsScreen.java:577)
at Game.render(Game.java:46)
at com.badlogic.gdx.backends.android.AndroidGraphics.onDrawFrame(AndroidGraphics.java:459)
at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1562)
at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1262)
06-02 07:47:13.823 6208-6208 E/AndroidGraphics: waiting for pause synchronization took too long; assuming deadlock and killing
The DelayedSounds method looks like this:
public void DelayedSounds() {
    timer = timer + Gdx.graphics.getDeltaTime();
    if (playitem == true && timer > 2) {
        itemsound = "vinster/" + item + ".mp3";
        assets.manager.get(itemsound, Sound.class).play(volume);
        //sound = Gdx.audio.newMusic(Gdx.files.internal(itemsound));
        //sound.setVolume(volume);
        //sound.play();
        playitem = false;
    }
    if (playkeep == true && timer > 3) {
        assets.manager.get("dialog/VINSTEN.mp3", Sound.class).play(volume);
        //sound = Gdx.audio.newMusic(Gdx.files.internal("prizes/prize.mp3"));
        //sound.setVolume(volume);
        //sound.play();
        playkeep = false;
    }
}
As can be seen, I use AssetManager now and have commented out my old code for testing purposes. I define the AssetManager in my main class and then pass it around to all other classes that use sounds.
The RewardsScreen plays only the first sound and then crashes on what seems to be a null object reference.
If I change my code back to using Music in the RewardsScreen, it works fine (see the code that is commented out).
The sound I try to play is exactly the same in both cases. The Assets class that handles the loading of all my assets includes the sounds, and since I still get a null object I assume one or more items fail to load?
I searched the logs and found this where it loads the sounds:
06-02 08:05:09.360 9844-9888/E/WVMExtractor: Failed to open libwvm.so: dlopen failed: library "libwvm.so" not found
06-02 08:05:09.395 9844-9888/ E/NdkMediaExtractor: sf error code: -1010
06-02 08:05:09.395 9844-9888/ E/SoundPool: Unable to load sample
Maybe this is related to my problem?
I load all the sounds in the same manner and most of them definitely seem to work; the loading is the regular:
manager.load("prizes/cash.mp3", Sound.class);
I still find the AUDIO_OUTPUT_FLAG_FAST denied by client in my logs, but now the sounds are playing instead of being rejected.
Any more ideas about how to solve this?
The links you posted contain enough information about your bug.
It is not a good idea to create a resource instance each time in the game. Create it once and reuse it all over your game; if possible, use AssetManager.
For example, create the Music instance once in the create() method of your game:
gameSound = Gdx.audio.newMusic(Gdx.files.internal(soundeffect));
Music has play(), pause() and many other helpful methods.
You can also take a look at this; it may be helpful.
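A minimal sketch of the AssetManager approach (the file paths are taken from the question; the calls are standard libGDX API):
import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.audio.Sound;

// Create once, e.g. in create(), and share it across screens.
AssetManager manager = new AssetManager();
manager.load("prizes/cash.mp3", Sound.class);
manager.load("dialog/VINSTEN.mp3", Sound.class);
manager.finishLoading(); // or poll manager.update() from a loading screen

// Fetch and play a loaded sound.
Sound cash = manager.get("prizes/cash.mp3", Sound.class);
cash.play(1.0f);

// On shutdown, this disposes every asset the manager loaded.
manager.dispose();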
I am writing a component that reads data from a specific file type. Currently, it has a property for the file path; I would like this block to quit as hard as possible when passed an invalid file or when no file is found.
Throwing an exception stops execution, but it also deletes the block from the chalkboard while I am testing (?), which makes me think there is a more "approved" way to do it.
My current solution is something like:
LOG_ERROR(MyReader_i, "Unable to open file at " + Filepath);
return FINISH;
Is there another way to stop if something is wrong, that will hopefully stop all downstream processing as well?
Have you taken a look at the Data Reader component in the basic components? It also has a file path as an input. It deals with this during the onConfigure call as shown below:
def onconfigure_prop_InputFile(self, oldvalue, newvalue):
    self.InputFile = newvalue
    if not os.path.exists(self.InputFile):
        self._log.error("InputFile path provided can not be accessed")
And then again in the service function, by returning NOOP:
def process(self):
    if (self.Play == False):
        return NOOP
    if not (os.path.exists(self.InputFile)):
        return NOOP
This isn't the only way to deal with invalid input however. It's a design decision that is up to the developer.
If you'd like additional components downstream to know about an issue elsewhere in the chain, you have a few options. You could use the end-of-stream (EOS) bit, available in bulkio port implementations, to signal to downstream components that there is no additional data; they can then use this information to clean up and shut down. You could also use messaging to send a message out to an event channel, and anyone who has subscribed to that event channel can be made aware of the message. Again, it's a design decision.
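As a rough sketch of the end-of-stream option in a Python service function (the output port name port_dataFloat_out and the stream ID are illustrative assumptions, not taken from any particular component; NOOP, NORMAL, and FINISH come from the generated component scaffolding):
import os
import bulkio

def process(self):
    if not os.path.exists(self.InputFile):
        # Push an empty packet with the EOS flag set to True so that
        # downstream components know the stream is over and can shut down.
        self.port_dataFloat_out.pushPacket([], bulkio.timestamp.now(), True, "my_stream_id")
        return FINISH
    # ... normal file reading and pushPacket calls go here ...
    return NORMAL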