How to fetch a recording from ONVIF using JavaScript (Node.js)

I am trying to fetch a recording from a Vivotek camera using the ONVIF interface. I tried the exportRecordedData function following the documentation at http://www.onvif.org/ver10/recording.wsdl, but there was no result.
I am getting this error:
2022-06-10T04:39:59.728Z error: exportRecordedData(): Error: Error: ONVIF SOAP Fault: Operation Action Not Implemented. The requested action operation is not implemented by the device.

ExportRecordedData() implementations seem to be rare (at least my TVT and Uniview devices do not implement it).
The other option is using the RTSP stream returned by GetRecordings(). For TVT there is also an alternative URL: rtsp://IP:port/chID=1&date=2020-03-09&time=17:00:00&timelen=100&StreamType=main. Both TVT and Uniview work with Range in RTSP PLAY when using the URL from GetRecordings(), but both also have problems: TVT plays all recordings as one stream (at least I cannot distinguish them at the moment), and Uniview plays only the first recording from the specified range.
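For reference, here is a minimal Node.js sketch of the GetRecordings()/replay route, run as an ES module on Node 18+ so the built-in fetch and top-level await are available. The camera address, service path, and RecordingToken are placeholders, and the WS-Security UsernameToken header that most cameras require is omitted for brevity:

// Hypothetical endpoint and token; in practice the RecordingToken comes from
// a prior GetRecordings response and the request must carry WS-Security auth.
const REPLAY_SERVICE = 'http://192.168.1.64/onvif/replay_service';

const getReplayUri = `<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:trp="http://www.onvif.org/ver10/replay/wsdl"
            xmlns:tt="http://www.onvif.org/ver10/schema">
  <s:Body>
    <trp:GetReplayUri>
      <trp:StreamSetup>
        <tt:Stream>RTP-Unicast</tt:Stream>
        <tt:Transport><tt:Protocol>RTSP</tt:Protocol></tt:Transport>
      </trp:StreamSetup>
      <trp:RecordingToken>Recording_1</trp:RecordingToken>
    </trp:GetReplayUri>
  </s:Body>
</s:Envelope>`;

const res = await fetch(REPLAY_SERVICE, {
  method: 'POST',
  headers: { 'Content-Type': 'application/soap+xml; charset=utf-8' },
  body: getReplayUri,
});
// The rtsp:// replay URL is in <trp:Uri> of the response. Play it with an
// RTSP client and select the time window via the RTSP Range header, e.g.
// Range: clock=20200309T170000Z-20200309T170140Z
console.log(await res.text());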

Related

Does the v3 Google Cast receiver parse alternative audio tracks from an HLS master playlist automatically or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES,AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO,AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and receiver apps, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
      const audioTracksManager = playerManager.getAudioTracksManager();
      console.log(audioTracksManager.getTracks());
      console.log('Load request: ', loadRequestData);
      return loadRequestData;
    });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct, or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
Ok, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place - namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as it is properly documented here)
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
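For illustration, a minimal sketch of the working placement, using the event type and AudioTracksManager methods from the CAF receiver reference (the track name matches the playlist above):

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
    cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
      // Track info is available here, unlike in the LOAD interceptor.
      const audioTracksManager = playerManager.getAudioTracksManager();
      const tracks = audioTracksManager.getTracks();
      console.log(tracks);
      // Example: switch to the audio-description track by its playlist NAME.
      const desc = tracks.find(t => t.name === 'Audiodeskription');
      if (desc) {
        audioTracksManager.setActiveById(desc.trackId);
      }
    });

context.start();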
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.

Google Cast Video Player becomes unresponsive after network error

I am working on a Chromecast custom receiver app, built on top of the sample app provided by Google (sampleplayer.CastPlayer)
The app manages a playlist; I would like the player to move on to the next item in the list after a video fails to play for whatever reason.
I am running into a situation where, after a video fails to load because of a network error, the player becomes unresponsive: in the onError_() handler, my custom code does this
var queueLoadRequest = ...
var mediaManager = ...
setTimeout(function () { mediaManager.queueLoad(queueLoadRequest); }, 5000);
...the player does receive the LOAD event according to the receiver logs, but nothing happens on the screen; the player's status remains IDLE and mediaManager.getMediaQueue().getItems() remains undefined. Same result when trying to use the client controller to load a different video.
I have tried to recover with mediaManager.resetMediaElement() and player.reset() in the onError_ handler, but no luck.
For reference, here is a screenshot of the logs (filtered for errors only) leading up to the player becoming unresponsive. Note that I am not interested in fixing the original error; what I need to figure out is how to recover from it:
My custom code is most likely responsible for the issue; however, after spending many hours stripping the custom code down to a bare minimum in an effort to isolate the responsible bit, I have not made any progress. I am not looking for a fix but rather for some guidance in troubleshooting the root cause: what could possibly cause the player to become unresponsive, or alternatively, how can one recover from an unresponsive player?

ONVIF PullMessages Fault

I understand that cameras that do not have the WSBaseNotification feature do not support push-style notifications (Notify), so I have to use the pull style (CreatePullPointSubscription and PullMessages).
First I obtain the SubscriptionReference address from CreatePullPointSubscription and pass it to the "To" address in PullMessages. This succeeded with one of the three cameras I tested but failed with the others.
Here is a sample of response for CreatePullPointSubscription:
<SOAP-ENV:Header>
  <wsa5:MessageID>urn:uuid:18764990-3fd8-4175-b074-bfdd6816d5a2</wsa5:MessageID>
  <wsa5:RelatesTo>urn:uuid:1adbe268-c822-eb58-8560-b07639671351</wsa5:RelatesTo>
  <wsa5:To SOAP-ENV:mustUnderstand="true">http://www.w3.org/2005/08/addressing/anonymous</wsa5:To>
  <wsa5:Action SOAP-ENV:mustUnderstand="true">http://www.onvif.org/ver10/events/wsdl/EventPortType/CreatePullPointSubscriptionResponse</wsa5:Action>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
  <tev:CreatePullPointSubscriptionResponse>
    <tev:SubscriptionReference>
      <wsa5:Address>http://172.22.22.35:80/onvif/device_service?Idx=0</wsa5:Address>
    </tev:SubscriptionReference>
    <wsnt:CurrentTime>2015-11-26T17:05:55Z</wsnt:CurrentTime>
    <wsnt:TerminationTime>2038-01-19T03:14:07Z</wsnt:TerminationTime>
  </tev:CreatePullPointSubscriptionResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
And the PullMessagesRequest:
<s:Header>
  <wsa:To>http://172.22.22.35:80/onvif/device_service?Idx=0</wsa:To>
  <wsse:Security>
    <wsse:UsernameToken>
      <wsse:Username>admin</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest">XWhDcuw3cztspGCLlpQfVaqM1mU=</wsse:Password>
      <wsse:Nonce>NTY1NmNiODFjYTk4MWZlNjFmNDA=</wsse:Nonce>
      <wsu:Created>2015-11-26T09:06:09Z</wsu:Created>
    </wsse:UsernameToken>
  </wsse:Security>
</s:Header>
<s:Body>
  <tev:PullMessages>
    <tev:Timeout>PT5S</tev:Timeout>
    <tev:MessageLimit>2</tev:MessageLimit>
  </tev:PullMessages>
</s:Body>
</s:Envelope>
And PullMessagesResponse:
<SOAP-ENV:Header>
  <wsa5:To SOAP-ENV:mustUnderstand="true">http://172.22.22.35:80/onvif/device_service?Idx=0</wsa5:To>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
  <SOAP-ENV:Fault>
    <SOAP-ENV:Code>
      <SOAP-ENV:Value>SOAP-ENV:Sender</SOAP-ENV:Value>
      <SOAP-ENV:Subcode><SOAP-ENV:Value>InvalidArgVal</SOAP-ENV:Value></SOAP-ENV:Subcode>
    </SOAP-ENV:Code>
    <SOAP-ENV:Reason>
      <SOAP-ENV:Text xml:lang="en">InvalidArgVal</SOAP-ENV:Text>
    </SOAP-ENV:Reason>
    <SOAP-ENV:Detail>There is no subscribe.</SOAP-ENV:Detail>
  </SOAP-ENV:Fault>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
From the ONVIF core specs:
9.1.2 Pull messages
The device shall provide the following PullMessages command for all SubscriptionManager endpoints returned by the CreatePullPointSubscription command.
Therefore you need to pull the messages from the address returned in the CreatePullPointSubscriptionResponse. Populating the wsa5:To field of the request with that address while still sending the request to the URL of the event service is in general not enough.
You posted only the SOAP messages and not the HTTP headers, so it's impossible to check which URL you're actually posting to.
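A minimal Node.js sketch of that flow (Node 18+ built-in fetch, run as an ES module; the WS-Security header from the capture above is omitted for brevity): POST the PullMessages envelope to the wsa5:Address returned in CreatePullPointSubscriptionResponse, and put that same URL into wsa:To.

// subscriptionAddress comes from <wsa5:Address> in the subscription response,
// e.g. 'http://172.22.22.35:80/onvif/device_service?Idx=0' in the capture above.
async function pullMessages(subscriptionAddress) {
  const envelope = `<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:wsa="http://www.w3.org/2005/08/addressing"
            xmlns:tev="http://www.onvif.org/ver10/events/wsdl">
  <s:Header>
    <wsa:To s:mustUnderstand="true">${subscriptionAddress}</wsa:To>
  </s:Header>
  <s:Body>
    <tev:PullMessages>
      <tev:Timeout>PT5S</tev:Timeout>
      <tev:MessageLimit>2</tev:MessageLimit>
    </tev:PullMessages>
  </s:Body>
</s:Envelope>`;
  // Send to the SubscriptionReference address itself, not the event service URL.
  const res = await fetch(subscriptionAddress, {
    method: 'POST',
    headers: { 'Content-Type': 'application/soap+xml; charset=utf-8' },
    body: envelope,
  });
  return res.text();
}

console.log(await pullMessages('http://172.22.22.35:80/onvif/device_service?Idx=0'));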

libvlc / vlcj, Get video metadata (num of audio tracks) without playing the video

I have an EmbeddedMediaPlayerComponent and I want to check, before playing, whether the video has an audio track.
The getMediaPlayer().getAudioTrackCount() method works fine, but only while the video is playing and I am inside the public void playing(MediaPlayer mp) event.
I also tried
getMediaPlayer().prepareMedia("/path/to/media", null);
getMediaPlayer().play();
System.out.println("TRACKS: "+getMediaPlayer().getAudioTrackCount());
But it does not work; it reports 0.
I also tried:
MediaPlayerFactory factory = new MediaPlayerFactory();
HeadlessMediaPlayer p = factory.newHeadlessMediaPlayer();
p.prepareMedia("/path/to/video", null);
p.parseMedia();
System.out.println("TRACKS: "+p.getAudioTrackCount());
But that reports -1. Is there a way I can do this, perhaps using another technique?
The track count is not metadata, so using parseMedia() here is not going to help.
parseMedia() will work to get e.g. ID3 tag data, title, artist, album, and so on.
The track data is usually not available until after the media has started playing, since it is the particular decoder plugin that knows how many tracks there are. Even then, it is not always available immediately after the media has started playing, sometimes there's an indeterminate delay (and no LibVLC event).
In applications where I need the track information before playing the media, I usually use something like the native MediaInfo application and parse its output. It has a plain-text output format and an XML output format, and IIRC the newer versions have a JSON output format. The downside is that you have to launch a native process to do this; I use CommonsExec for things like this. It's pretty simple and does work, even though it's not a pure Java solution, but neither is vlcj!
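As a sketch of that same MediaInfo idea in this page's Node.js context rather than Java/CommonsExec (assuming a MediaInfo CLI installed and new enough to support --Output=JSON, per the note above):

// Shell out to the mediainfo CLI and count the audio tracks before playback.
const { execFile } = require('child_process');

function countAudioTracks(path, callback) {
  execFile('mediainfo', ['--Output=JSON', path], (err, stdout) => {
    if (err) return callback(err);
    // MediaInfo's JSON output lists one entry per track, tagged by @type.
    const tracks = JSON.parse(stdout).media.track;
    callback(null, tracks.filter(t => t['@type'] === 'Audio').length);
  });
}

countAudioTracks('/path/to/video', (err, n) => {
  if (err) throw err;
  console.log('Audio tracks:', n);
});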
A slight aside: if you did actually want the metadata, there is an easier way. Just use this method on the MediaPlayerFactory:
public MediaMeta getMediaMeta(String mediaPath, boolean parse);
This gives you the meta data without having to prepare, play or parse media.

How do I set the output of a submix voice in XAudio2? (Metro)

I tried using the SetOutputVoices function and the constructor parameter, but both return XAUDIO2_E_INVALID_CALL when used on a submix voice.
The docs say that you get this error when calling it from an audio callback, but I'm not doing that. I have even tried calling it before starting the audio engine.
The same approach works for source voices, so I'm pretty sure I'm not passing a bad XAUDIO2_VOICE_SENDS structure.
Submix voices have a processing order, specified by the ProcessingStage parameter of IXAudio2::CreateSubmixVoice.
A voice can only send output to a submix voice with a higher processing stage, and I had all of my submixes at the default processing stage (0), so none of them could send to another.
