Trying to stream text over RTSP using GStreamer

I'm trying to develop an application which transmits processed data over RTSP so that another application can receive the data and do other stuff with it. The trouble is, not many people seem to use RTSP and/or GStreamer to stream text (XML) data, so I'm having difficulty finding the right resources.
So at the server side (my application), I have this:
gst_rtsp_media_factory_set_launch(m_Factory, "( appsrc name=mysrc ! application/x-rtp,pt=98 ! rtpgstpay name=pay0 pt=98 )");
And I'm testing the connection by running the following pipeline:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test use-pipeline-clock=true debug=true ! application/x-rtp,payload=98 ! fakesink dump=true
When I run both of them, I confirmed that the callback function for the "need-data" signal on the server side was being called continuously, and the debug output from rtspsrc gave me 200 OK responses all the way from the OPTIONS request to the PLAY request. However, the fakesink didn't dump anything, and I don't see any data packets in Wireshark apart from the requests and ACKs. So it seems that the connection can be made between the two, but no actual data is being transferred.
A few seconds after the connection is made, I get this debug output from GStreamer on the server side (GST_DEBUG=3):
0:00:13.646758533 4844 0x7f3508003940 WARN rtspmedia rtsp-media.c:3068:gst_rtsp_media_set_state: media 0x1f4ada0 was not prepared
0:00:13.676328927 4844 0x7f34d8004590 FIXME default gstutils.c:3643:gst_pad_create_stream_id_internal:<mysrc:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:13.676967874 4844 0x7f34b8001540 FIXME default gstutils.c:3643:gst_pad_create_stream_id_internal:<appsrc3:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:13.677498868 4844 0x7f34b8001680 FIXME default gstutils.c:3643:gst_pad_create_stream_id_internal:<appsrc4:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
and after exactly 20 seconds, I get this:
0:00:33.676438806 4844 0x7f34c0001f70 WARN rtspmedia rtsp-media.c:2127:wait_preroll: failed to preroll pipeline
0:00:33.676619321 4844 0x7f34c0001f70 WARN rtspmedia rtsp-media.c:2384:gst_rtsp_media_prepare: failed to preroll pipeline
0:00:33.681136801 4844 0x7f34c0001f70 ERROR rtspclient rtsp-client.c:678:find_media: client 0x1ec83b0: can't prepare media
0:00:33.684604205 4844 0x7f34c0001f70 ERROR rtspclient rtsp-client.c:2210:handle_describe_request: client 0x1ec83b0: no media
It seems that only one tiny step is missing to get the data packets actually transmitted from my application, but so far I haven't had any luck identifying it. Any help would be much appreciated.
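For reference, a hedged sketch of an appsrc configuration that typically gets past preroll: appsrc needs caps and timestamped buffers before the pipeline can leave the preroll stage, and the application/x-rtp capsfilter in front of rtpgstpay looks misplaced, since application/x-rtp describes rtpgstpay's output rather than its input. The text/xml caps value is an assumption about the payload:
gst_rtsp_media_factory_set_launch(m_Factory, "( appsrc name=mysrc is-live=true format=time caps=text/xml ! rtpgstpay name=pay0 pt=98 )");
The need-data callback would then push timestamped buffers along these lines (a sketch, not the code from the question; the payload and duration are illustrative):
static GstClockTime timestamp = 0;

static void need_data(GstElement *appsrc, guint unused, gpointer user_data)
{
    const gchar *xml = "<data>...</data>";   /* illustrative payload */
    GstBuffer *buf = gst_buffer_new_wrapped(g_strdup(xml), strlen(xml));
    GST_BUFFER_PTS(buf) = timestamp;         /* must advance monotonically */
    GST_BUFFER_DURATION(buf) = GST_SECOND / 10;
    timestamp += GST_BUFFER_DURATION(buf);
    GstFlowReturn ret;
    g_signal_emit_by_name(appsrc, "push-buffer", buf, &ret);
    gst_buffer_unref(buf);                   /* push-buffer takes its own ref */
}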

Related

How to fetch a recording from ONVIF using JavaScript

I am trying to fetch the recording from a Vivotek camera using the ONVIF interface. I tried the ExportRecordedData operation following the documentation at http://www.onvif.org/ver10/recording.wsdl, but there was no result.
I am getting this error:
2022-06-10T04:39:59.728Z error: exportRecordedData(): Error: Error: ONVIF SOAP Fault: Operation Action Not Implemented. The requested action operation is not implemented by the device.
I think ExportRecordedData() implementations might be rare (I don't seem to have them on TVT and Uniview).
The other option is using the RTSP stream returned by GetRecordings(). For TVT there is also an alternative URL: rtsp://IP:port/chID=1&date=2020-03-09&time=17:00:00&timelen=100&StreamType=main. Both TVT and Uniview honor the Range header in RTSP PLAY when using the URL from GetRecordings(), but both also have problems: TVT plays all recordings as one stream (at least I cannot distinguish them at the moment), and Uniview plays only the first recording from the specified range.
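For reference, replaying a recording this way is just an RTSP PLAY request carrying an absolute-time Range header in the RFC 2326 "clock" format; the URI, session and times below are illustrative:
PLAY rtsp://IP:port/recording-uri RTSP/1.0
CSeq: 5
Session: 12345678
Range: clock=20200309T170000Z-20200309T170140Z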

Does the v3 Google Cast receiver parse alternative audio tracks from an HLS master playlist automatically, or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working in a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolutions and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and the receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio track.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property directly in the LOAD message interceptor, I get undefined.
I cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
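For completeness, a minimal sketch of the working placement (the choice of track at the end is illustrative):
playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const audioTracksManager = playerManager.getAudioTracksManager();
    const audioTracks = audioTracksManager.getTracks(); // populated at this point
    // e.g. switch to the second alternative audio track
    audioTracksManager.setActiveById(audioTracks[1].trackId);
  });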

Google Cast Video Player becomes unresponsive after network error

I am working on a Chromecast custom receiver app, built on top of the sample app provided by Google (sampleplayer.CastPlayer).
The app manages a playlist; I would like the player to move on to the next item in the list after a video fails to play for whatever reason.
I am running into a situation where, after a video fails to load because of a network error, the player becomes unresponsive: in the onError_() handler, my custom code does this:
var queueLoadRequest = ...
var mediaManager = ...
setTimeout(function() { mediaManager.queueLoad(queueLoadRequest); }, 5000);
...the player does receive the LOAD event according to the receiver logs, but nothing happens on the screen, the player's status remains IDLE, and mediaManager.getMediaQueue().getItems() remains undefined. I get the same result when trying to use the client controller to load a different video.
I have tried to recover with mediaManager.resetMediaElement() and player.reset() in the onError_ handler, but no luck.
For reference, here is a screenshot of the logs (filtered for errors only) leading up to the player becoming unresponsive. Note that I am not interested in fixing the original error, what I need to figure out is how to recover from it:
My custom code is most likely responsible for the issue; however, after spending many hours stripping the custom code to a bare minimum in an effort to isolate the responsible bit, I have not made any progress. I am not looking for a fix but rather for some guidance in troubleshooting the root cause: what could possibly cause the player to become unresponsive, and alternatively, how can one recover from an unresponsive player?

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which sends me binary payloads of individual files in a directory. I get these payloads streamed piece-by-piece as buffers, and I'm doing this serially (that is, file-by-file; there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver, and then just appending the buffers I get from the websocket to this read stream. Looking at various read streams, though, I'm not sure how to do this. The only way I could think of was creating a write stream, piping buffers to it, and having a read stream read from that write stream. Am I on the right track here?
As always, thanks for any help/direction you can offer SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based on my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (CoffeeScript syntax):
archiver = require 'archiver'
{Stream} = require 'stream'

archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ... # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    response.addTrailers
      'Content-Length': bytesOfArchive
    response.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
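As an aside, on newer Node versions a stream.PassThrough can play the proxy-stream role without hand-emitting 'data' and 'end' events; a sketch under the same assumptions as above:
{PassThrough} = require 'stream'

proxyStream = new PassThrough()
numBytesStreamed = 0
archive.append proxyStream, name: fileName
ws.on 'message', (dataBuffer) ->
  proxyStream.write dataBuffer
  numBytesStreamed += dataBuffer.length
  proxyStream.end() if numBytesStreamed is fileSize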

Can MessageChannel overflow?

I am working on an AS3 project in FDT 6. I am using the latest Flex 4.6 and AIR 3.7.
I have a worker.swf file that is embedded into the main application to do threading work with.
I am using the MessageChannel class to pass information between the two.
In my main class I have defined:
private var mainToWorker:MessageChannel;
private var workerToMain:MessageChannel;
mainToWorker = Worker.current.createMessageChannel(worker);
workerToMain = worker.createMessageChannel(Worker.current);
On the mainToWorker channel I only ever send messages. In these messages I send a byte array of information: an object that contains a 'command' property and a 'props' property, basically acting like a function call. The command is a function name and the props is an object that contains data for that function.
mainToWorkerMutex.lock();
mainToWorker.send(ByteArrayUtils.ObjectToByteArray({command:"DoSomething", props:{propA:1,propB:7}}));
mainToWorkerMutex.unlock();
The same occurs for the workerToMain var, except I only send byte data that contains the 'command' and 'props' parameters.
workerToMainMutex.lock();
workerToMain.send(ByteArrayUtils.ObjectToByteArray({command:"complete", props:{return:"result"}}));
workerToMainMutex.unlock();
As a sanity check I make sure that the message channels are getting what they should.
It works fine when I build it in FDT; however, when it is built using an Ant script through Flash Builder, I sometimes get the 'command' events coming back through the workerToMain channel.
I am sending quite a lot of data through the message channel. Is it possible that I am overloading it and causing a buffer overflow into the other message channel somehow? How could that only be happening in FB?
I have checked my code many times and I am sure there is nothing in my own code that is sending that message back.
I had a similar issue. When sending many byte arrays over channels, sometimes the things I received were not the things I had actually sent. I had 4 channels (message channel to worker, message channel to main, data channel to worker, data channel to main).
I noticed that the data channel to main was affecting the message channel to worker. When I turned off the data channel to main, the message channel to worker started working just fine :D...
It seems there is a real issue there with sending byte arrays.
What helped me was using a shareable ByteArray (at first it was not shareable) for communication via channels, but only for communication: as soon as I receive such a ByteArray, I copy it to another ByteArray and parse the copy.
This removed the problem (I ran quite hard stress tests on it)...
Cheers
P.S. I'm also using static functions (like your ByteArrayUtils) to create the ByteArrays used for communication, but that seems fine; I even ran tests using non-static functions.
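A minimal sketch of the copy-before-parse approach described above (assuming the payload was serialized with writeObject; the channel variable and handler wiring are illustrative):
// in the channelMessage handler on the receiving side
var received:ByteArray = channel.receive() as ByteArray;
var copy:ByteArray = new ByteArray();
copy.writeBytes(received, 0, received.length); // copy out of the shared buffer
copy.position = 0;
var message:Object = copy.readObject(); // parse the copy, never the shared ByteArray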
So, it looks like I have found the issue. It seems to be the ByteArray that is doing it.
ByteArray.toString() basically sometimes mangles your data, meaning you can't really trust it.
http://www.actionscript.org/forums/showthread.php3?t=155067
If you read the comment by "Jim Freer" he mentions how strings sometimes do this.
My solution was to switch to using a JSON-encoded string instead of ByteArray data in the message channel. The reason I was using ByteArray data to begin with is that I wanted to preserve class definition information, which JSON doesn't do.
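For illustration, the JSON variant looks roughly like this; restoring class information on the receiving side (e.g. by mapping the command name back to a class) is left out:
// sending side
mainToWorker.send(JSON.stringify({command: "DoSomething", props: {propA: 1, propB: 7}}));
// receiving side (in the worker)
var msg:Object = JSON.parse(String(mainToWorker.receive()));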
