I'm really stumped here. I'm trying to use the ONVIF protocol to enable/disable audio on RTSP streams from an IP camera. To disable audio on a profile, you first send 'RemoveAudioEncoderConfiguration' to remove the audio codec, then send 'RemoveAudioSourceConfiguration' on the same profile.
So I do this for ALL profiles on the camera (there happen to be 5). However, the RTSP stream still has audio in it.
This is happening on a Uniview camera. If I do the same sequence of messages on a Techwin device I have, then the audio is disabled.
Similarly, if I try to enable audio on a profile, I do the reverse operations. Again, the Techwin adds the audio, but the Uniview does not.
The responses I receive to my ONVIF messages are always 'OK'.
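For reference, the sequence I am sending is roughly equivalent to this sketch using the python-onvif-zeep library (the address, port and credentials are placeholders, not my actual setup):

from onvif import ONVIFCamera

cam = ONVIFCamera("192.168.1.64", 80, "admin", "password")  # placeholder address/credentials
media = cam.create_media_service()

for profile in media.GetProfiles():
    token = profile.token
    # Encoder configuration first, then the source configuration, as described above.
    media.RemoveAudioEncoderConfiguration({"ProfileToken": token})
    media.RemoveAudioSourceConfiguration({"ProfileToken": token})
    # Both calls come back without a SOAP fault ("OK") on the Uniview as well.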
Any idea what I'm doing wrong and how to fix it?
I don't know how to get started with this.
What I am trying to do is get a video + audio stream from the front end and host the live stream as MP4 that's accessible in a browser.
I was able to find information on WebRTC, socket.io and RTMP, but I'm not really sure which tool to use or what's best suited for something like this.
Also, a follow-up question: my front end is an iOS app, so in what format would I send the live stream to the server?
It depends on which live-streaming protocol you want the player to play. As @Brad said, HLS is the most common protocol for players.
Note: Besides HLS, a native iOS app can use fijkplayer or FFmpeg to play almost any live-streaming format, such as HLS, RTMP or HTTP-FLV, even MKV. However, the most straightforward solution is HLS, which only needs a video tag to play MP4 or HLS; MSE is also an option, using flv.js/hls.js to play live streams on iOS/Android/PC. This post is about these protocols.
The stream flow is like this:
FFmpeg/OBS ---RTMP--->--+
                        +--> Media Server---> HLS/HTTP-FLV---> Player
Browser ----WebRTC--->--+
The protocol used to push to the media server, or to receive in your Node server, depends on your encoder: RTMP or H5 (WebRTC):
For RTMP, you could use FFmpeg or OBS to push the stream to your media server.
If you want to push the stream from H5 (the browser), the only way is WebRTC.
The media server converts between the publisher's protocol and the player's protocol, which are different in live streaming right now (as of 2022.01); please read more in this post.
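For example, a rough sketch that drives FFmpeg from Python to push a local file to the media server's RTMP ingest (the input file name and ingest URL are placeholders; OBS would replace this step for camera or screen capture):

import subprocess

subprocess.run([
    "ffmpeg",
    "-re",              # pace the input in real time, like a live source
    "-i", "input.mp4",  # placeholder source file
    "-c", "copy",       # no re-encode; assumes the file is already H.264/AAC
    "-f", "flv",        # RTMP carries an FLV-muxed stream
    "rtmp://your-media-server/live/stream",  # placeholder ingest URL
], check=True)

The media server then turns this into HLS or HTTP-FLV for the player, as in the diagram above.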
I'm developing a web receiver for Chromecast.
According to the Chromecast documentation, HLS with AC-3 passthrough is supported. It is to be checked with canDisplayType, and following that, I assume it should be okay to use the codec.
I cannot seem to get the Chromecast to accept HLS manifests with AC-3 in the codec list. I have tried this on gen2 and gen3 Chromecasts and they both reject it. canDisplayType will return true or false depending on the settings made in the Google Home app. When I have the server actually send AC-3, it throws cast.player.api.ErrorCode.MANIFEST/411, aborts, and does not request anything else from the server.
I also cannot seem to get any detailed reason for this; it would be helpful if the logs told me why the codec presumably got rejected.
Here's a typical manifest.m3u8 file that I have experimented with:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5499178,AVERAGE-BANDWIDTH=5499178,CODECS="avc1.640028,mp4a.a5",RESOLUTION=720x576,FRAME-RATE=50
main.m3u8?<snip>&VideoCodec=h264&AudioCodec=ac3,aac,mp3&<snip>&TranscodingMaxAudioChannels=6&<snip>&h264-profile=high,main,baseline,constrainedbaseline&h264-level=41&<snip>
From my experimentation, mp4a.a5, mp4a.A5 and ac-3 all pass the canDisplayType test but fail at this stage.
For instance:
> context.canDisplayType("video/mp4", "avc1.640028,mp4a.a5", 1920, 1080, 50)
< true
I have also tried to trick the player by substituting the value with mp4a.40.2 or just pretending there is no audio. In those cases the results vary a bit, but generally it requests some .ts files before it gives up, or it does not play an audio track at all. In comparison, the same media transcoded to AAC-LC 2ch works fine.
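To illustrate that experiment, the codec substitutions above amount to generating master-playlist variants like this (Python is used only for illustration; the child playlist URI and its query string are reduced to a plain placeholder):

TEMPLATE = (
    "#EXTM3U\n"
    "#EXT-X-STREAM-INF:BANDWIDTH=5499178,AVERAGE-BANDWIDTH=5499178,"
    'CODECS="avc1.640028,{audio}",RESOLUTION=720x576,FRAME-RATE=50\n'
    "main.m3u8\n"
)

# Write one manifest per audio codec string under test.
for audio in ("mp4a.a5", "mp4a.A5", "ac-3", "mp4a.40.2"):
    with open("manifest_" + audio.replace(".", "_") + ".m3u8", "w") as f:
        f.write(TEMPLATE.format(audio=audio))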
Is there anything special one has to do to prepare the receiver to use passthrough audio?
Is it possible to capture all of the audio playing on my PC (from a web browser) and stream it over the LAN?
I use the Yandex Music (music.yandex.ru) service, so I'm logged into my Yandex account and have no audio files, just an online stream. I want to make something like a LAN radio: users will visit an HTML page hosted on our server and listen to my audio stream.
Can I use Icecast or similar software to stream audio that isn't coming from a file?
Or should I connect my PC's line out to its line in (or mic input) and read the audio stream via Java or Flash? Any ideas?
Have you tried looking at things like JACK and Soundflower? These allow you to reroute the audio from one program to another. You could then reroute the sound into Java or Flash and go from there.
https://rogueamoeba.com/freebies/soundflower/
http://jackaudio.org/
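Once the rerouted audio shows up as a capture device, one alternative to Java or Flash is to let FFmpeg encode it and push it to an Icecast mount. A rough sketch driven from Python, assuming a Linux box with PulseAudio (the monitor device name, mount point and password are placeholders):

import subprocess

subprocess.run([
    "ffmpeg",
    "-f", "pulse",
    "-i", "alsa_output.pci-0000_00_1b.0.analog-stereo.monitor",  # placeholder monitor source
    "-c:a", "libmp3lame", "-b:a", "128k",   # encode the captured audio as MP3
    "-f", "mp3",
    "icecast://source:hackme@localhost:8000/radio.mp3",  # placeholder mount and password
], check=True)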
You can try WebRTC and the MediaStream API for that. You can get audio from the user's audio device or from a stream they are playing in the browser. You can find documentation on those APIs on the MDN pages.
I have an RTSP server which re-streams the A/V stream from the camera to clients.
On the client side we are using Media Foundation (MF) to play the stream.
I can successfully play the video but am not able to play the audio from the server. However, when I use VLC, it can play both audio and video.
Currently I am implementing IMFMediaStream and have created my customized media stream. I have also created a separate IMFStreamDescriptor for audio and added all the required attributes. When I run it, everything goes fine, but my RequestSample method never gets called.
Please let me know if I am doing it wrong or if there is any other way to play the audio in MF.
Thanks,
Prateek
Media Foundation support for RTSP is limited to a small number of payload formats. VLC supports more (AFAIR through the Live555 library). Most likely, your payload is not supported in Media Foundation.
So I am trying to create an RTSP server that streams music.
I do not understand how the server plays music and how different requests get whatever is playing at that time.
So, to organize my questions:
1) How does the server play a music file?
2) What does the request to the server look like to get what's currently playing?
3) What does the response look like that gets the music playing in the client that requested it?
First: READ THIS (RTSP), and THEN READ THIS (SDP), and then READ THIS (RTP). Then you can ask more sensible questions.
It doesn't; the server streams little parts of the audio data to the client, telling it when each part is to be played.
There is no such request. If you want, you can have a URL for live streaming and, in response to the RTSP DESCRIBE request, tell the client what is currently on.
Read the first (RTSP) document; it's all there! The answer to your question is this:
RTSP/1.0 200 OK
CSeq: 3
Session: 123456
Range: npt=now-
RTP-Info: url=trackID=1;seq=987654
But to get the music playing, you will have to do a lot more to initiate a streaming session.
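To make the request side concrete, here is a minimal sketch of a client opening the control connection and asking for the description of what is currently on (the server address and path are made up, and it stops before the SETUP/PLAY steps mentioned above):

import socket

HOST, PORT, PATH = "192.168.1.50", 554, "/radio"  # placeholder server and stream path

def rtsp(sock, request):
    # RTSP requests are plain text over TCP, much like HTTP.
    sock.sendall(request.encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

with socket.create_connection((HOST, PORT)) as s:
    # Ask which methods the server supports.
    print(rtsp(s, f"OPTIONS rtsp://{HOST}{PATH} RTSP/1.0\r\nCSeq: 1\r\n\r\n"))
    # Ask for the SDP describing what is currently being streamed.
    print(rtsp(s, f"DESCRIBE rtsp://{HOST}{PATH} RTSP/1.0\r\nCSeq: 2\r\n"
                  f"Accept: application/sdp\r\n\r\n"))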
You should first be clear about what RTSP and RTP are. The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in communications systems to control streaming media servers, whereas most RTSP servers use the Real-time Transport Protocol (RTP) for media stream delivery. RTP uses UDP to deliver the packet stream. Try to understand these concepts.
Then have a look at this project.
http://sourceforge.net/projects/unvedu/
This is an open source project developed by our university, which is used to stream video (MKV) and audio files over UDP.
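To see what "RTP over UDP" means at the byte level, here is a minimal sketch that receives one packet and unpacks the fixed 12-byte RTP header from RFC 3550 (the listening port is arbitrary and would normally be agreed via RTSP SETUP/SDP):

import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5004))           # placeholder RTP port

packet, _addr = sock.recvfrom(2048)
b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
print("version:", b0 >> 6)             # always 2 for RTP
print("payload type:", b1 & 0x7F)      # identifies the codec, per the SDP
print("marker:", (b1 >> 7) & 0x1)
print("sequence:", seq, "timestamp:", timestamp, "ssrc:", hex(ssrc))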
You can also find a .NET implementation of RTP and RTSP at https://net7mma.codeplex.com/, which includes an RTSP client and server implementation and many other useful utilities, e.g. implementations of many popular digital media container formats.
The solution has a modular design and better performance than ffmpeg or libav at the current time.