How can I know how many video streams a network camera is capable of transmitting simultaneously via ONVIF, and whether there are any restrictions? I've searched through the ONVIF specs and nothing shows up. So far I've only been able to find it on the manufacturer's web page or in the manuals.
The number of unicast video streams a camera supports is generally specified by the manufacturer, along with the resolutions it can stream at. If you mean the number of simultaneous unicast users, it is typically around 20. You should ask the manufacturer; they are usually very helpful with this.
Secondly, what is RTSP (Real Time Streaming Protocol)? It is the protocol used to set up and control the stream, and the media it delivers can also be sent over multicast. What is a multicast protocol? It is for streaming net radio or IP video/audio to multiple clients throughout the world. Is there an upper limit to this technology? No, as long as the bandwidth can support it.
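For what it's worth, the ONVIF Media Service does define a GetGuaranteedNumberOfVideoEncoderInstances operation, which reports how many simultaneous encoder instances the device guarantees for a given video source configuration; that is about as close as ONVIF gets to advertising a stream limit. Below is a minimal sketch of calling it as plain SOAP with libcurl; the service endpoint and configuration token are placeholders, and the WS-Security authentication that many cameras require is omitted:

```cpp
// Query an ONVIF camera for the number of encoder instances it guarantees.
// Endpoint URL and configuration token are placeholders; auth is omitted.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    const std::string endpoint = "http://192.168.1.64/onvif/media_service";  // assumption
    const std::string body =
        "<s:Envelope xmlns:s=\"http://www.w3.org/2003/05/soap-envelope\" "
        "xmlns:trt=\"http://www.onvif.org/ver10/media/wsdl\">"
        "<s:Body>"
        "<trt:GetGuaranteedNumberOfVideoEncoderInstances>"
        "<trt:ConfigurationToken>VideoSourceConfig0</trt:ConfigurationToken>"  // placeholder token
        "</trt:GetGuaranteedNumberOfVideoEncoderInstances>"
        "</s:Body></s:Envelope>";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    std::string response;
    curl_slist* headers = curl_slist_append(nullptr,
        "Content-Type: application/soap+xml; charset=utf-8");

    curl_easy_setopt(curl, CURLOPT_URL, endpoint.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    if (curl_easy_perform(curl) == CURLE_OK)
        std::cout << response << std::endl;  // inspect the SOAP reply

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
}
```

If the device supports the call, the reply should contain a TotalNumber element plus optional per-codec counts.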
My primary intention is to set up a VoIP session between two users, A and B, where the raw audio/video media bytes fetched from A's browser are played in B's browser and vice versa.
The reason is that, when users C and D are added to the call, we do not want to create a P2P mesh network, which limits performance.
I tried recording media with getUserMedia() and playing it back, but it is not real time and gives a bad user experience. (However, I haven't experimented yet with small chunks of around 200 ms.)
Is there any approach where I can get the raw bytes of the media and play them in the other browser? Currently I have a server in between which can connect to both peers if required.
Any online examples or libraries are welcome.
I have already asked two questions in this regard, each with a 100-reputation bounty, but they were not of much use:
How to use libsrtp or similar library to decrypt/encrypt the WebRTC data stream?
How to integrate part of WebRTC as a static / dynamic library with the existing C++ code?
Related: How to stream, live video playing on my browser to browser of another user?
If I understand you correctly, you're looking at how to have more than two users in the session, right? Without using a mesh topology.
That's possible and configurable: some participants may be active speakers, or everyone may be an active speaker rather than only a receiver, whatever configuration you choose. But to me it seems that you're asking about video conferencing.
There are a couple of tools for this; the best one I might recommend is mediasoup, which is an SFU (Selective Forwarding Unit).
I don't know if I understand correctly, but it is not likely that you will get raw video data and play it in the browser; it will just kill your bandwidth and performance because the raw data is huge.
You need to use the compressed data (a media codec, e.g. H.264), and you need a protocol to send and receive it. If you are looking for sub-second latency, then WebRTC is already your best choice here. If you have a server in between, distribute your media through that server instead of a mesh. Check this out for WebRTC network topologies:
https://antmedia.io/webrtc-servers/
I'm using WebRTC in a sort of non-conventional way.
I have multiple streams generated by several 'broadcasting' peers being sent to a collection of several 'receiving' peers.
I intend to use an SFU media server (maybe Jitsi or Kurento).
It is very critical that these streams are presented at the receiving peers in a synchronized fashion.
What are the methods I can use for synchronization? Usually this isn't attempted with WebRTC because there is no consistent clock between peers, but in my case there is a common clock for all the stream sources.
The only ways I can imagine doing it are:
Not worrying about it and hoping that WebRTC's low latency will keep everything in sync.
Somehow encoding timestamp metadata in the WebRTC stream frames, and somehow synchronizing the display with JavaScript in the browser.
Using a tool like GStreamer that can perform video synchronization, mixing the streams into a single stream and forwarding that to the media server (and thus to the receiving clients). I don't have a good idea of how I would actually perform the synchronization though (a rough sketch of this idea is below).
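A minimal sketch of what that third option could look like, assuming two RTSP sources (placeholder URIs) feeding GStreamer's compositor element so that receivers only ever see one pre-mixed stream; the exact timestamping (e.g. rtspsrc's latency and ntp-sync properties) would still need to be tuned against the common clock:

```cpp
// Rough sketch: mix two already-timestamped RTSP sources with GStreamer's
// compositor, re-encode, and send the single mixed stream onwards over RTP.
// Camera URIs, host and port are placeholders.
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "compositor name=mix ! videoconvert ! x264enc tune=zerolatency "
        "    ! rtph264pay ! udpsink host=127.0.0.1 port=5000 "
        "rtspsrc location=rtsp://camera-a/stream latency=200 ! decodebin "
        "    ! videoconvert ! mix.sink_0 "
        "rtspsrc location=rtsp://camera-b/stream latency=200 ! decodebin "
        "    ! videoconvert ! mix.sink_1",
        &err);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until an error or end-of-stream message is posted on the bus.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```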
Any thoughts and advice would be appreciated.
The only OTT system capable of synchronising low-latency streams available (at the time of writing) is the SYE system made by Net Insight. They are able to synchronise any device down to single-digit milliseconds in low-latency mode.
They do not provide any open source that I know of, but you can check it out by downloading an app that uses it.
Primetime
The game starts at 20:00 CET every day; download it on several phones/tablets to verify the sync part.
However, there are other systems I found that can synchronise playback.
HbbTV
HbbTV seems to focus more on IPTV replacement solutions, as I interpret it. They do not seem to target the wild west of the internet. I might be wrong; please correct me if so.
W3C MULTI-DEVICE TIMING COMMUNITY GROUP
I spoke to the researchers a while back. They can synchronise playback, but they target collaborative viewing. The low-latency part is not in scope, as I understand it.
Then, when it comes to WebRTC, LHLS, MPEG-DASH CMAF and all the other solutions, they have no sense of time, so it is not possible to render the same video frame on different devices that use various access technologies such as 4G, WiFi or cable, or even on devices using the same technology, because the rendering is buffer-controlled, not time-controlled.
/Anders
I have been using ONVIF for one month and I am able to receive the stream URI and control all of the configuration from my own client program written in C#.
In my application I want to take the video (1- or 2-minute streams) from 10 IP cameras and then create a 10-minute video, so it is like combining the videos from all the cameras.
My question is: can I use ONVIF for this application?
I am asking because I only found information about configuration in the ONVIF WSDL files, so I am doubtful whether I can use it or not. Could you please tell me whether ONVIF is compatible with my application? I would also be glad if you could provide some information on how to make it possible.
You can use ONVIF to configure the cameras for use with the application; however, you would not use ONVIF to actually acquire the video from the cameras.
You can use ONVIF to configure the streams (encoding format, multicast setup, network configuration, etc.) and get the URI for the stream (GetStreamUri), but you would then need to access the RTSP streams directly to get the actual video.
This can be done using something like ffdshow with DirectShow to grab the video from each camera and make a compilation.
ONVIF has a Streaming Specification which describes how compliant cameras must implement streaming, but it still results in the camera producing a video stream on the network. How clients end up acquiring that video is outside the scope of the specification.
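To illustrate the acquisition side (which, as noted, is outside ONVIF itself): once GetStreamUri has returned an RTSP URI for each camera, you could record a clip from each stream and join the clips with an external tool such as ffmpeg. A rough sketch, assuming ffmpeg is on the PATH, the URIs are placeholders, and all cameras produce compatible encodings (otherwise the clips would need re-encoding before concatenation):

```cpp
// Sketch: record a short clip from each camera's RTSP URI (as returned by
// GetStreamUri) and concatenate the clips into one file using ffmpeg.
// URIs, durations and output names are placeholders.
#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> uris = {
        "rtsp://192.168.1.101/stream1",   // hypothetical camera URIs
        "rtsp://192.168.1.102/stream1",
        // ... one entry per camera
    };

    std::ofstream list("clips.txt");      // concat list consumed by ffmpeg
    for (size_t i = 0; i < uris.size(); ++i) {
        std::string clip = "clip" + std::to_string(i) + ".mp4";
        // Record ~60 seconds from the RTSP stream without re-encoding.
        std::string cmd = "ffmpeg -rtsp_transport tcp -i \"" + uris[i] +
                          "\" -t 60 -c copy " + clip;
        if (std::system(cmd.c_str()) != 0) return 1;
        list << "file '" << clip << "'\n";
    }
    list.close();

    // Join the clips back-to-back into the final compilation.
    return std::system("ffmpeg -f concat -safe 0 -i clips.txt -c copy compilation.mp4");
}
```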
I have just started to delve into streaming libraries and the underlying protocols. I understand RTSP/RTP streaming and what these two protocols are for. But if we need the IP address, the codec and the RTSP/RTP protocols to stream the video and audio from any camera, why do we have the ONVIF standard, which essentially also aims to standardize communication between IP network devices? I have seen the definitions of ONVIF, so that's not what I am looking for. I want to know why we need ONVIF at all when we already have RTSP/RTP, and what additional benefits it provides.
ONVIF is much more than just video streaming. It's an attempt to standardize all remote protocols for network communication between security devices. This includes things like PTZ control and video analytics, and it covers much more than just digital camera devices.
I am working on FPGA firmware, in which I want very fast data transfer over Ethernet. I got help from an FPGA forum; they suggest designs that transfer data using the lightweight IP stack (lwIP).
How is this different from transferring the data using NDIS? I would be grateful if you could suggest a guide on how to interface my Visual C++ application with the network and transfer the data.
Many thanks in advance.
lwIP is a library for talking IP on a network.
NDIS is a specification for how an OS talks to network cards.
Neither is necessarily what you appear to want.
If you want to transfer data very simply and quickly point-to-point using Ethernet, you need to understand how Ethernet works at the packet level, and form your data into some Ethernet packets. You can make up your own protocol for this if you have control over both ends of the link.
If you want to transfer the data over an existing network topology, you would be better off using an existing protocol. UDP/IP might be one such protocol, depending on your requirements for data rate, latency, software complexity, reliability, etc. lwIP is one library which implements UDP, so it might be of use.
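For illustration, assuming the FPGA ends up sending its data as plain UDP datagrams to a fixed port, the Visual C++ application could receive them with an ordinary Winsock socket. The port number and buffer size below are arbitrary placeholders:

```cpp
// Minimal Winsock UDP receiver (sketch): listens on a port and reads
// datagrams sent by the FPGA. Port and buffer size are placeholders.
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock == INVALID_SOCKET) { WSACleanup(); return 1; }

    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5001);                 // hypothetical port
    if (bind(sock, reinterpret_cast<sockaddr*>(&local), sizeof(local)) == SOCKET_ERROR) {
        closesocket(sock); WSACleanup(); return 1;
    }

    char buf[2048];                               // one datagram at a time
    for (;;) {
        sockaddr_in from{};
        int fromLen = sizeof(from);
        int n = recvfrom(sock, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&from), &fromLen);
        if (n == SOCKET_ERROR) break;
        printf("received %d bytes\n", n);         // hand the data to the application here
    }

    closesocket(sock);
    WSACleanup();
    return 0;
}
```

On the FPGA side, lwIP's raw UDP API (udp_new, pbuf_alloc, udp_sendto) would be the matching sender.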