I am working on a simple RTSP server to emulate an IP camera, but instead stream a JPEG image from a file. I have been working through the RTSP protocol and can't find any specific data on what payload I should set in my DESCRIBE response. Any good documentation would be appreciated.
Thanks
Matt
Here you go. It's the complete, official definition of RTSP: https://www.rfc-editor.org/rfc/rfc2326
Alternatively, look at the ffmpeg and/or VLC source code for a reference implementation.
Btw, all this information can be obtained here: http://en.wikipedia.org/wiki/Real_Time_Streaming_Protocol
...which is the first link on Google when you search for RTSP.
IANA lists RTP payload type 26 for JPEG. You'll want to specify this in the media description of the SDP message in your response. See Appendix C.1.2 of the RTSP RFC for information on media streams. For additional information, see section 8.1 of the SDP RFC.
An example would be:
m=video 0 RTP/AVP 26
Be sure to reference RFC 2435 for the RTP payload format that should be used for your RTP packets.
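To put that m-line in context, a complete (if minimal) SDP body for the DESCRIBE response could look like the sketch below; the origin line, session name, and control attribute values are illustrative placeholders, not values required by any spec:

```
v=0
o=- 0 0 IN IP4 192.168.1.10
s=JPEG camera emulation
c=IN IP4 0.0.0.0
t=0 0
m=video 0 RTP/AVP 26
a=control:track1
```

The body is carried in the DESCRIBE response with Content-Type: application/sdp.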
One of our in-house applications uses RTP to stream video and audio from a remote office to HO. Recently we have been facing an issue where only audio is reaching the destination (HO), but not video.
We have two leased lines, and this issue occurs only when our network traffic is on one particular leased line. We asked the service providers, and they confirmed that they don't block any ports/protocols and asked us to check the application.
We ran Wireshark at both source and destination and found that the source (remote office) is sending both payload types (96 and 111), while only one PT is being received at the destination.
Wireshark logs:
Captured at source: -Click Here
Captured at destination: -Click Here
We suspect this issue might be related to the browser or the video codec.
Is there any way to troubleshoot this issue?
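One concrete way to narrow this down is to compare the RTP payload types actually arriving on each leg of the path. A minimal sketch (plain Python, standard library only) that parses the fixed RTP header, so you can dump the PT of each captured UDP payload; the packet bytes in the example are synthetic:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("too short to be an RTP packet")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,         # should be 2 for RTP
        "payload_type": b1 & 0x7F,  # e.g. 96 or 111 (dynamic PTs)
        "marker": (b1 >> 7) & 1,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Synthetic example: version 2, PT 96, sequence 1234
pkt = struct.pack("!BBHII", 0x80, 96, 1234, 567890, 0xDEADBEEF)
print(parse_rtp_header(pkt)["payload_type"])  # 96
```

Running this over the UDP payloads captured at each end would tell you whether the video PT (96) disappears in transit or was never sent on that line.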
I've successfully set up Asterisk on my server using the res_pjsip Hello World configuration from their wiki, and I want to be able to forward the RTP data to a Node.JS app, which can interpret RTP. I've heard about directmedia and directrtpsetup (see this stackoverflow) but I'm not sure if that's what I want. So my question is this:
Should I use directmedia / directrtpsetup to send voice data to my Node.JS app, or should I use some sort of Asterisk functionality to forward RTP packets? If the latter, how can Asterisk forward just the voice data?
I can clarify if needed, but hopefully this is more specific than my last questions. Thanks!
UPDATE: Having poked around Asterisk docs and messing with Wireshark, I think I have two options.
Figure out if there's a channel driver for Asterisk that just sends RTP, without any signaling, or
Capture the RTP stream with Wireshark or something and send the packets to the Node.JS app, and inject the return packets into the RTP stream.
Asterisk is a PBX. It is not suitable for "redirecting RTP data".
No, there is no reason to have a channel driver "without signaling". What would anyone use it for? How would you determine that a call has started if there is "no signaling"? It would be useless.
You can write such an app in C/C++ or use other software designed for traffic capture: libpcap, tcpdump, etc.
You can also use audio stacks: libalsa, JACK.
The best option, however, is to create or find a full-featured SIP client and use that.
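If you do go the capture-and-forward route, the core of it is just relaying UDP datagrams. A minimal Python sketch of that idea (the ports are hypothetical, and this ignores RTCP and the return path entirely):

```python
import socket

def relay_once(sock_in, sock_out, dest):
    """Receive one RTP datagram and forward it unchanged to dest."""
    data, _addr = sock_in.recvfrom(2048)  # RTP packets fit well under 2048 bytes on a LAN
    sock_out.sendto(data, dest)
    return data

# Hypothetical wiring: Asterisk sends RTP to port 10000,
# and the Node.JS app listens on UDP port 20000.
# sock_in = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock_in.bind(("0.0.0.0", 10000))
# sock_out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# while True:
#     relay_once(sock_in, sock_out, ("127.0.0.1", 20000))
```

This only moves packets; as the answer says, without signaling you still have no way to know when a call starts or which codec is in use, which is why a full SIP client is the cleaner option.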
I'm implementing a solution for listening to ongoing calls inside a LAN network.
Is there a way to provide WebRTC the IP address and port from which an RTP stream is coming? All I want to do is get that RTP stream streamed directly to the possible listeners of the call through WebRTC.
I'm not sure if it's feasible, but I think it is, given how WebRTC has evolved over the past months.
I've been looking around but I've had no luck with this.
The WebRTC RTP stream is encrypted with keys that are exchanged through DTLS. You cannot get the raw RTP stream from a WebRTC peer, or even feed it a raw stream, without some intermediary system to handle the WebRTC peer connection, certificate exchange, and RTP encryption.
The only way to do what you want is to have a breaker or a gateway. An example of such a gateway is the janus-gateway though it is definitely not your only option.
I want to use an IP camera with WebRTC. However, WebRTC seems to support only webcams, so I am trying to convert the IP camera's stream to a virtual webcam.
I found software like IP Camera Adapter, but it doesn't work well (2-3 frames per second and a delay of 2 seconds), and it works only on Windows; I prefer to use Linux (if possible).
I tried ffmpeg/avconv:
firstly, I created a virtual device with v4l2loopback (the command was: sudo modprobe v4l2loopback). The virtual device is detected and can be fed a video (.avi) with a command like: ffmpeg -re -i testsrc.avi -f v4l2 /dev/video1
the stream from the IP camera is available with: rtsp://IP/play2.sdp for a Dlink DCS-5222L camera. This stream can be captured by ffmpeg.
My problem is making the link between these two steps (receiving the RTSP stream and writing it to the virtual webcam). I tried ffmpeg -re -i rtsp://192.168.1.16/play2.sdp -f video4linux2 -input_format mjpeg -i /dev/video0 but there is an error with v4l2 (v4l2 not found).
Does anyone have an idea how to use an IP camera with WebRTC?
Short answer is, no. RTSP is not mentioned in the IETF standard for WebRTC and no browser currently has plans to support it. Link to Chrome discussion.
Longer answer is that if you are truly set on this idea, you will have to build a WebRTC gateway/breaker utilizing the native WebRTC API:
Start a WebRTC session between your browser and your breaker
Grab the IP camera feed with your gateway/breaker
Encrypt and push the RTP stream from the RTSP feed, gathered by the breaker, to your WebRTC session through the WebRTC API.
This is how others have done it and how it will have to be done.
UPDATE 7/30/2014:
I have experimented with the janus-gateway, and I believe the streaming plugin does EXACTLY this, as it can grab an RTP stream and push it to a WebRTC peer. For RTSP, you could probably create an RTSP client (possibly using a library like GStreamer), then push the RTP and RTCP from the connection to the WebRTC peer.
Janus-gateway recently added simple RTSP support (based on libcurl) to its streaming plugin, as of this commit.
It is then possible to configure the gateway to negotiate RTSP with the camera and relay the RTP through WebRTC, by adding to the streaming plugin configuration <prefix>/etc/janus/janus.plugin.streaming.cfg:
[camera]
type = rtsp
id = 99
description = Dlink DCS-5222L camera
audio = no
video = yes
url = rtsp://192.168.1.16/play2.sdp
Next you will be able to access the WebRTC stream using the streaming demo page http://..../demos/streamingtest.html
I have created a simple example transforming an RTSP or HTTP video feed into a WebRTC stream. This example is based on Kurento Media Server (KMS), which must be installed for the example to work.
Install KMS and enjoy ...
https://github.com/lulop-k/kurento-rtsp2webrtc
UPDATE 22-09-2015.
Check this post for a technical explanation on why transcoding is just part of the solution to this problem.
If you have video4linux installed, the following command will create a virtual webcam from an rtsp stream:
gst-launch rtspsrc location=rtsp://192.168.2.18/play.sdp ! decodebin ! v4l2sink device=/dev/video1
You were on the right track, the "decodebin" was the missing link.
For those who would like to get their hands dirty with some native-WebRTC, read on...
You could try streaming an IP camera’s RTSP stream through a simple ffmpeg-webrtc wrapper: https://github.com/TekuConcept/WebRTCExamples .
It uses the VideoCaptureModule and AudioDeviceModule abstract classes to inject raw media. Under the hood, these abstract classes are extended for all platform-specific hardware like video4linux or alsa-audio.
The wrapper uses the ffmpeg CLI tools, but I don't feel it should be too difficult to use the ffmpeg C libraries themselves. (The wrapper relies on transcoding, that is, decoding the source media and then letting WebRTC re-encode it with respect to the ICE connection's requirements. Still working out pre-encoded media pass-through.)
Actually, our camera can support WebRTC. It is an IP camera with H5 (HTML5), using P2P transmission and two-way talk with a web browser. The delay is only 300 ms!
I'm trying to use open source Java SIP client Jitsi to do video chat.
To eliminate all network and proxy issues, I've setup my own SIP proxy Asterisk and both the clients are on the same LAN. I also configured Asterisk to either relay the RTP packets or do direct communications between the peers.
The above error is from Asterisk and on chan_sip.c:8915 (asterisk-10.0.0-beta). The Asterisk code checks against port 0.
I was stuck with the above problem. I can try to modify Jitsi code to not use port 0, but wondering if there's a better way and if port 0 is a legal value to start with.
BTW, I was successful in having two Xlite clients (commercial software from CounterPath) transmit H263 video between each other. I could not get Xlite to do so with Jitsi, nor have both Jitsi clients send video.
I want to use a Java client as I'm much more adept at Java. And I'm also hoping to be able to reuse the same codebase for Android in the future.
Port set to 0 is perfectly legal and part of the SDP offer/answer model. In fact, it probably means that there was something wrong with your SDP offer. For example, if you support the PCMA codec and the peer only supports PCMU, it will reject the SDP offer by setting the port to 0. There can be quite a few reasons why an offer is rejected, but codec incompatibility is probably the most common.
To really debug this if you want, you may need to look at the packets (with Wireshark for example).
You asked "... but wondering if there's a better way and if port 0 is a legal value to start with."
Port 0 is perfectly legal in SDP. In particular, SIP's offer/answer model in RFC 3264 section 5.1 says that
A port number of zero in the offer indicates that the stream is offered but MUST NOT be used.
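To make that concrete, a rejected stream in an offer/answer exchange could look like this (hypothetical port and codec: the offer proposes PCMA, payload type 8, and the answerer declines the stream by echoing the m-line with port 0):

```
Offer:  m=audio 49170 RTP/AVP 8
Answer: m=audio 0 RTP/AVP 8
```

Comparing the codecs in Jitsi's offer against what Asterisk or the peer supports, as suggested above with Wireshark, is the place to start.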