ONVIF vs RTSP - what is the difference?

I have just started to delve into streaming libraries and the underlying protocols. I understand RTSP/RTP streaming and what these two protocols are for. But if we only need the IP address, the codec, and the RTSP/RTP protocols to stream video and audio from any camera, why do we have the ONVIF standard, which essentially also aims to standardize communication between IP network devices? I have seen the definitions of ONVIF, so that's not what I am looking for. I want to know why we need ONVIF at all when we already have RTSP/RTP, and what additional benefits it can provide.

ONVIF is much more than just video streaming. It is an attempt to standardize all remote protocols for network communication between security devices. This includes things like PTZ control and video analytics, and covers much more than just digital camera devices.
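A concrete way to see how the two relate: ONVIF is a SOAP web-service layer that, among many other things, tells you *where* the RTSP stream is. The sketch below builds the SOAP body for the Media service's GetStreamUri operation and shows where you would POST it. The camera address, the `/onvif/media_service` endpoint path, and the profile token are placeholder assumptions - real values come from WS-Discovery and GetProfiles, and most cameras also require WS-Security authentication, omitted here.

```python
# Sketch: asking a camera for its RTSP URI via the ONVIF Media service.
# CAMERA, the endpoint path, and PROFILE_TOKEN are hypothetical values.
import urllib.request

CAMERA = "192.168.1.64"          # placeholder camera IP
PROFILE_TOKEN = "Profile_1"      # placeholder media profile token

def build_get_stream_uri(profile_token: str) -> bytes:
    """Build the SOAP envelope for the ONVIF Media GetStreamUri operation."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:trt="http://www.onvif.org/ver10/media/wsdl"
            xmlns:tt="http://www.onvif.org/ver10/schema">
  <s:Body>
    <trt:GetStreamUri>
      <trt:StreamSetup>
        <tt:Stream>RTP-Unicast</tt:Stream>
        <tt:Transport><tt:Protocol>RTSP</tt:Protocol></tt:Transport>
      </trt:StreamSetup>
      <trt:ProfileToken>{profile_token}</trt:ProfileToken>
    </trt:GetStreamUri>
  </s:Body>
</s:Envelope>""".encode()

def request_stream_uri() -> str:
    # POST the envelope to the camera's media service endpoint; the
    # response contains an rtsp:// URI that you then open with RTSP/RTP.
    req = urllib.request.Request(
        f"http://{CAMERA}/onvif/media_service",
        data=build_get_stream_uri(PROFILE_TOKEN),
        headers={"Content-Type": "application/soap+xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

So RTSP/RTP move the media; ONVIF is how you discover, configure, and control the device that produces it.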

Related

How can one play RTSP videos on browser with lowest possible latency?

I would like to know how surveillance camera companies stream cameras on their websites with low latency. So far I've found that most cameras stream over the RTSP protocol, which needs to be converted to enable browser streaming.
It seems that WebRTC is the best option, but there aren't many resources on how to convert RTSP to WebRTC.
There is also the option of sending raw images to the web page via WebSocket, but I couldn't find a way to implement that either.
Yes, for low-latency streaming WebRTC is the best choice in most cases.
You will need a server to convert RTSP to WebRTC. There are many open-source solutions; see RTSP to WEBRTC live video streaming

RTSP RTP-over-TCP H264 streaming

I am working on an RTSP RTP-over-TCP H264 streaming application fed from a live hardware-based encoder.
I am sure there is example code somewhere; any reference or website would be appreciated.
Most of the hardware appliances I have had to deal with (mostly IP cameras) use Live555. It is quite straightforward: you set up the needed tracks and then provide NAL units, and the framework takes care of all the rest. They have a nice set of test programs that demonstrate the API - at least I was able to make an RTSP server for a Texas Instruments Pandaboard SoC, and there is lots of info on Live555 on Stack Overflow.
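Live555 handles the wire format for you, but it helps to know what "RTP-over-TCP" actually looks like on the socket. Per RFC 2326 section 10.12, each RTP packet is interleaved on the same TCP connection as the RTSP dialogue, prefixed with a 4-byte header: a `$` magic byte, a channel number (chosen in the SETUP `Transport: RTP/AVP/TCP;interleaved=0-1` exchange), and a 16-bit big-endian payload length. A minimal sketch of that framing:

```python
# Sketch: RTSP interleaved framing for RTP-over-TCP (RFC 2326, sec. 10.12).
# Frame layout: b'$' | channel (1 byte) | length (2 bytes, big-endian) | RTP packet
import struct

def frame_interleaved(channel: int, rtp_packet: bytes) -> bytes:
    """Wrap an RTP packet for transmission on the RTSP TCP connection."""
    return b"$" + struct.pack(">BH", channel, len(rtp_packet)) + rtp_packet

def parse_interleaved(data: bytes):
    """Split one interleaved frame into (channel, payload, remaining bytes)."""
    assert data[0:1] == b"$", "not an interleaved frame"
    channel, length = struct.unpack(">BH", data[1:4])
    return channel, data[4:4 + length], data[4 + length:]
```

Channel 0 conventionally carries RTP and channel 1 the matching RTCP; with Live555 you never touch this layer directly, it is what the library emits underneath when TCP streaming is requested.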

Network Camera Streaming Capabilities via ONVIF

How can I know how many video streams a network camera is capable of transmitting simultaneously via ONVIF, and whether there are any restrictions? I've searched through all of the ONVIF specs and nothing shows up. So far I've only been able to find this through the manufacturer's web page or manuals.
The number of simultaneous unicast video streams is generally specified by the manufacturer, along with the resolutions the camera can stream at. If you mean the number of simultaneous unicast users, it is typically around 20, but you should ask the manufacturer - they are usually very helpful with this.
Secondly, note that RTSP (Real Time Streaming Protocol) is a control protocol: it sets up and controls the media session, while the actual RTP media it negotiates can be delivered as unicast or as multicast. Multicast is what enables streaming net radio or IP video/audio to many clients at once. Is there an upper limit with multicast? No, as long as the bandwidth can support it.

Onvif compatibility

I have been using ONVIF for one month, and I am able to receive the stream URI and control all the configuration from my own client program written in C#.
In my application I want to take videos (1- or 2-minute streams) from 10 IP cameras and then create a 10-minute video, i.e. combine the clips from all cameras into one.
My question is: can I use ONVIF for this application?
I am asking because I only found configuration-related operations in the ONVIF WSDL files, so I doubt whether this is possible. Could you tell me whether ONVIF is suitable for this application, and if so, provide some information on how to make it possible?
You can use ONVIF to configure the cameras for use with the application; however, you would not use ONVIF to actually acquire the video from the cameras.
You can use ONVIF to configure the streams (encoding format, multicast setup, network configuration, etc.) and get the URI for each stream (GetStreamUri), but you would then need to access the RTSP streams directly to get the actual video.
This can be done using something like ffdshow with DirectShow to grab the video from each camera and make a compilation.
ONVIF has a Streaming Specification that describes how compliant cameras must implement streaming, but it still results in the camera producing a video stream on the network. How clients end up acquiring the video is outside the scope of the specification.
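To make the division of labour concrete: ONVIF (GetStreamUri) gives you each camera's RTSP URI, something else records the clips, and a plain tool such as ffmpeg's concat demuxer joins them into the 10-minute compilation. The sketch below only builds the ffmpeg command; the clip file names are placeholders for recordings you would have made from the RTSP streams.

```python
# Sketch: joining per-camera recordings with ffmpeg's concat demuxer.
# The clip names passed in are placeholders; acquisition from RTSP
# happens outside ONVIF (e.g. with ffmpeg, DirectShow, or a library).
import tempfile

def build_concat_command(clips, output="compilation.mp4"):
    """Write an ffmpeg concat list file and return the command to run."""
    listing = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    for clip in clips:
        listing.write(f"file '{clip}'\n")
    listing.close()
    # -c copy avoids re-encoding; it works when all clips share the
    # same codec and resolution (which ONVIF lets you configure).
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", listing.name, "-c", "copy", output]

cmd = build_concat_command([f"cam{i}.mp4" for i in range(1, 11)])
```

You would run `cmd` with `subprocess.run(cmd)` once the ten clips exist; ONVIF's role ends at handing over the stream URIs and making sure the cameras encode compatibly.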

Two-way audio for software ip camera

I am trying to set up a Raspberry Pi box with a USB camera as an IP camera that can be viewed from a generic Android IP camera monitor app. I've found some examples of how to get the video stream working, but what I also need is two-way audio. This seems to come out of the box in standalone network cameras - any ideas how that works? I want to set it up in a way compatible with typical network cameras so that my cam can be used by any generic IP camera viewer app.
Most modern cameras implement the ONVIF protocol. It specifies that the camera runs an RTSP server that streams audio and video to the client, but it also mandates a so-called audio backchannel for audio in the other direction. It's a bit long to explain how it works here; check the specs.
ONVIF is the standard, but you could also install an existing SIP client and do a video/audio VoIP call rather than implementing ONVIF - it depends on the long-term goals of your project.
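The backchannel mentioned above is negotiated with a single RTSP header: per the ONVIF Streaming Specification, the client adds `Require: www.onvif.org/ver20/backchannel` to its DESCRIBE request, and a compliant camera then advertises an extra `sendonly` audio track in the SDP, which the client SETUPs and pushes its microphone audio to over RTP. A sketch of that opening request (the URL is a placeholder):

```python
# Sketch: RTSP DESCRIBE opting in to the ONVIF audio backchannel.
# A camera that honours the Require tag adds a "sendonly" audio
# media section to its SDP answer; the URL below is hypothetical.
def build_describe(url: str, cseq: int = 1) -> bytes:
    """Build an RTSP DESCRIBE request with the ONVIF backchannel tag."""
    return (
        f"DESCRIBE {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        "Require: www.onvif.org/ver20/backchannel\r\n"
        "Accept: application/sdp\r\n"
        "\r\n"
    ).encode()
```

So on the Raspberry Pi side, supporting two-way audio in a camera-compatible way means your RTSP server must recognize that Require tag and expose a receive-capable audio track, which is exactly what the standalone cameras do out of the box.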
