I developed an RTSP server camera for 640x480 resolution. It's working fine and shows video in VLC player. Now I am trying to implement custom data (user parameters, i.e. frame count, timestamp, etc.) by using a 640x481 resolution. What I am doing is feeding only the 640x480 region to the compression stage (RGB to MJPG), i.e. excluding the last line (which carries the custom data). I don't know how to add and send the custom data through RTSP. Please suggest any ideas to implement this.
Thanks.
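One way to handle that 481st line is to pack the user parameters into its raw bytes and keep that line out of the JPEG path (as you already do), since lossy compression would corrupt the values. A minimal sketch of such a packing, assuming numpy frames; the little-endian frame-count/timestamp layout is purely illustrative, not any standard:

    import struct
    import numpy as np

    FRAME_W, FRAME_H = 640, 480

    def append_metadata_row(frame_rgb, frame_count, timestamp_us):
        """Return a 481-line frame: 480 image lines plus 1 raw metadata line.

        Only the first 480 lines should go through the RGB->MJPG encoder;
        the 481st line is carried uncompressed so the receiver can parse it.
        """
        meta_row = np.zeros((1, FRAME_W, 3), dtype=np.uint8)
        payload = struct.pack('<IQ', frame_count, timestamp_us)  # 4 + 8 bytes
        meta_row.reshape(-1)[:len(payload)] = np.frombuffer(payload, dtype=np.uint8)
        return np.vstack([frame_rgb, meta_row])

    def parse_metadata_row(meta_row):
        """Inverse of append_metadata_row; returns (frame_count, timestamp_us)."""
        return struct.unpack_from('<IQ', meta_row.tobytes())

On the RTSP side the receiver just has to know that the payload is 640x481 with the last row being raw metadata bytes rather than pixels.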
Overview
Is it possible to use VNC (RFB) with WebRTC to implement remote screen control using Node.js? I get remote screen frames from RFB and I want to transform them into a MediaStream and then send it to the client side. I tried to search for a solution on the net but found nothing I can use.
Possible solutions I've found
ffmpeg frame encoding (I'm not sure I can encode frames into something suitable for a MediaStream; see the sketch further below)
put frames into a canvas element and then capture them into a MediaStream
Main question
How can I encode RFB frames so they are suitable for a MediaStream and WebRTC?
What I've been using until now
I currently transform RFB frames into PNG pictures, send them to the client, and render them using a canvas. Problem: poor FPS and quite high latency.
Are there any other solutions besides WebRTC?
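On option 1 above: piping raw framebuffer bytes into an ffmpeg subprocess and reading an encoded stream back does work. A rough sketch, assuming ffmpeg is on the PATH and the RFB frames arrive as raw RGB (the geometry and bitrate are placeholders):

    import subprocess

    WIDTH, HEIGHT, FPS = 1024, 768, 15  # placeholder geometry of the remote screen

    # ffmpeg reads raw RGB frames on stdin and writes VP8 (IVF container) to stdout.
    encoder = subprocess.Popen(
        ['ffmpeg',
         '-f', 'rawvideo', '-pix_fmt', 'rgb24',
         '-s', f'{WIDTH}x{HEIGHT}', '-r', str(FPS), '-i', 'pipe:0',
         '-c:v', 'libvpx', '-deadline', 'realtime', '-b:v', '2M',
         '-f', 'ivf', 'pipe:1'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def push_rfb_frame(rgb_bytes):
        # rgb_bytes must be exactly WIDTH * HEIGHT * 3 bytes per frame.
        encoder.stdin.write(rgb_bytes)

The encoded bytes still need a transport the browser can turn into a MediaStream, which is where WebRTC/RTP (see the answer below) comes in.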
I think WebRTC is a great solution for this; the open-source project neko already does it. It isn't using VNC (it uses GStreamer to capture X11 instead), but that is entirely possible to change.
Since PNG is lossless, you are wasting a lot of bandwidth on it; if possible I would encode to VPx or H264.
Are you transporting these PNGs via the DataChannel? I would also use RTP if possible. The browser will discard late frames (and apply other optimizations) to make sure you get the best experience.
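If you take the RTP route, one way to see the whole shape outside the browser is Python's aiortc, which does the VP8/H264 encoding and RTP pacing for you. A minimal sketch, where rfb_get_frame() is a hypothetical stand-in for your RFB client:

    import numpy as np
    from aiortc import RTCPeerConnection, VideoStreamTrack
    from av import VideoFrame

    def rfb_get_frame():
        # Hypothetical stand-in for the RFB client; returns the latest
        # remote screen as an HxWx3 RGB numpy array.
        return np.zeros((768, 1024, 3), dtype=np.uint8)

    class RFBTrack(VideoStreamTrack):
        """Feeds RFB screen grabs into WebRTC; aiortc encodes and sends RTP."""
        async def recv(self):
            pts, time_base = await self.next_timestamp()
            frame = VideoFrame.from_ndarray(rfb_get_frame(), format="rgb24")
            frame.pts = pts
            frame.time_base = time_base
            return frame

    pc = RTCPeerConnection()
    pc.addTrack(RFBTrack())
    # ...then exchange the SDP offer/answer with the browser over your signaling channel.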
I have a UWP app that captures a live video stream (webcam), encodes it in H264, and sends it through a TCP socket (on a local network; I need high performance) to a Linux device.
Is there a way to do this? I need the video not for playback but to extract single frames. I could do that with OpenCV, but it requires a local video file, whereas I'm using a live stream.
I would send photos instead of a video stream if the time needed to capture one were acceptable, but it takes about 250 ms.
Is RTP required? Does UWP (Windows) provide a way to achieve this?
Thank you
P.S.: The UWP app runs on a HoloLens.
You can use WebRTC to transmit live video from the HoloLens easily to any target. That's probably the easiest way to do it without going really low level.
For an introduction, just grab this repo and try the sample app, which runs perfectly on the HoloLens: https://github.com/webrtc-uwp/PeerCC/tree/e95f231e1dc9c248ca2ffa040276b8a1265da145/Client
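Also, on the frame-extraction point: OpenCV's VideoCapture is not limited to local files. With the FFmpeg backend (the common build on desktop Linux) it opens network URLs directly, so single frames can be pulled straight off the live stream; the URL below is a placeholder:

    import cv2

    # VideoCapture accepts rtsp://, tcp://, udp://, and http:// sources when
    # OpenCV is built with its FFmpeg backend.
    cap = cv2.VideoCapture('rtsp://192.168.1.50:8554/live')  # placeholder URL

    ok, frame = cap.read()  # grab a single frame from the live stream
    if ok:
        cv2.imwrite('frame.png', frame)
    cap.release()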
We recently built a demo application using Kurento Media Server to record applicant video interviews, but the audio quality is poor: some audio is not recognizable, and some of it has high-pitched noise. We've tested it on several models of PC and Mac, so this should not be a device problem.
We've been using RecorderEndpoint with media profile MediaProfileSpecType.WEBM, and all other settings remain at their defaults.
To fix this problem, we tried:
We upgraded to Kurento 6.2.1, which uses Opus as the audio encoder.
We tried using setMaxOutputBitrate on the recorder; we saw no improvement, and we don't know which bitrate range can be used.
We tried changing the SDP offer to set a higher audio bitrate for Opus, but we don't know where to modify it.
None of this has worked so far, so please tell us where to look.
Thanks.
Please check this recording tutorial. The audio should be fine. Just make sure you are only sending audio, and not video. That should help.
If the audio is not being recorded correctly, I would try and hear what's coming out of your box through your browser. Try and run the hello-world tutorial, with a pair of headphones connected to your box so you don't have echoes.
About #2: if you want to raise the bitrate exchanged between the WebRTC endpoint and the recorder, you need to invoke the setOutputBitrate command on the WebRTC endpoint.
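About #3 from the question: the Opus bitrate can be hinted in the SDP by extending the a=fmtp line with maxaveragebitrate (defined in RFC 7587), munging the SDP before handing it to the endpoint. A sketch, with deliberately simplistic payload-type detection:

    import re

    def boost_opus_bitrate(sdp, bitrate=128000):
        """Append maxaveragebitrate (and stereo) to the Opus fmtp line of an SDP."""
        # Find the dynamic payload type that rtpmap assigns to Opus.
        m = re.search(r'a=rtpmap:(\d+) opus/48000', sdp, re.IGNORECASE)
        if not m:
            return sdp
        pt = m.group(1)
        extra = ';maxaveragebitrate={};stereo=1'.format(bitrate)
        return re.sub(r'(a=fmtp:%s [^\r\n]*)' % pt, r'\1' + extra, sdp)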
I have been using Onvif for one month and I am able to retrieve the stream URI and control all the configuration from my own client program written in C#.
In my application I want to take videos (1 or 2 minute streams) from 10 IP cameras and then create a 10 minute video, i.e. combining the videos from all the cameras.
My question is: can I use Onvif for this application?
I am asking because I only found configuration-related operations in the Onvif WSDL files, so I am unsure whether it can be used for this. Could you tell me whether Onvif is compatible with my application? I would also be glad if you could provide some information on how to make it possible.
You can use Onvif to configure the cameras for use with the application; however, you would not use Onvif to actually acquire the video from the cameras.
You can use Onvif to configure the streams (encoding format, multicast setup, network configuration, etc.) and get the URI for the stream (GetStreamUri), but you would then need to access the RTSP streams directly to get the actual video.
This can be done using something like ffdshow with DirectShow to grab the video from each camera and make a compilation.
Onvif has a Streaming Specification which describes how compliant cameras must implement streaming but it still results in the camera producing a video stream on the network. How clients end up acquiring the video is outside the scope of the specification.
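To make that configure-with-Onvif, fetch-over-RTSP split concrete, here is a rough sketch in Python using the onvif-zeep package and OpenCV; the addresses, credentials, and the fixed 640x480/25fps output are placeholder assumptions:

    import cv2
    from onvif import ONVIFCamera  # pip install onvif-zeep

    def rtsp_uri_for(host, user, password):
        """Use Onvif only to ask the camera for its RTSP stream URI."""
        cam = ONVIFCamera(host, 80, user, password)
        media = cam.create_media_service()
        profile = media.GetProfiles()[0]
        req = media.create_type('GetStreamUri')
        req.ProfileToken = profile.token
        req.StreamSetup = {'Stream': 'RTP-Unicast',
                           'Transport': {'Protocol': 'RTSP'}}
        return media.GetStreamUri(req).Uri

    # Pull ~1 minute from each camera over plain RTSP and append it all
    # into a single compilation file.
    cameras = ['192.168.1.11', '192.168.1.12']  # placeholder addresses
    out = cv2.VideoWriter('compilation.avi',
                          cv2.VideoWriter_fourcc(*'MJPG'), 25, (640, 480))
    for host in cameras:
        cap = cv2.VideoCapture(rtsp_uri_for(host, 'admin', 'password'))
        for _ in range(25 * 60):  # roughly one minute at 25 fps
            ok, frame = cap.read()
            if not ok:
                break
            out.write(cv2.resize(frame, (640, 480)))
        cap.release()
    out.release()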
Is it possible to see the live stream of an IP camera using RTSP?
Example URL: rtsp://public ip:554/1363e66e.mp4
The encoding is mp4 h.264 baseline profile at 320 x 240 resolution.
I followed the Wiki link here.
But I get the error: Prefetch error -2
When I try to play it using RealPlayer on the Nokia E72, I get the error: 'General: System Error'.
Please let me know what I can do about this.
There are no video players in the Ovi Store that can play the stream either, but I am able to play the stream in VLC on the desktop.
You can stream it using RealPlayer if you don't have VLC in the Ovi Store. Check the port address range supported by your IP camera; try the range 1024-2000. RTSP is supported by VLC, QuickTime, and RealPlayer, and you can stream with any of these.
So I think this is the case:
There are a few different MP4 container variants. The standard one won't let you wrap real-time data into an MP4 container, because MP4 needs an index atom called MOOV, which holds information about the file and its sample sizes and is only written once the file is completely encoded (the MDAT atom holds the actual media data).
So you cannot stream live content in MP4 unless it is fragmented MP4, where that index is split across small, self-contained fragments.
Media Foundation will supposedly allow you to do this once Windows 8 is out (I got that intel from the MSDN forum, so I don't know how true it is).
I don't know exactly what ffmpeg/GStreamer are capable of here; see the sketch below. Again, if this is a commercial product you are working on, you might run into some licensing issues with ffmpeg.
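For the record, ffmpeg can already produce fragmented MP4 via its -movflags option; a sketch that re-wraps a live H.264 feed into streamable fMP4 (the input URL is a placeholder):

    import subprocess

    # frag_keyframe starts a new fragment at each keyframe, and empty_moov
    # writes a stub moov atom up front, so the output is playable while it
    # is still being produced.
    proc = subprocess.Popen(
        ['ffmpeg',
         '-i', 'rtsp://camera.local/stream',   # placeholder live source
         '-c', 'copy',                         # re-wrap only, no re-encode
         '-movflags', 'frag_keyframe+empty_moov',
         '-f', 'mp4', 'pipe:1'],
        stdout=subprocess.PIPE)
    # proc.stdout now yields a continuous fMP4 byte stream you can relay to clients.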
Look at WebRTC.
I am guessing your best bet is to use WebM or Ogg/Theora, but I am not sure Theora can do what you want. This is something I am also working on.
Please share your findings.
Thanks.