RTSP RTP-over-TCP H264 streaming

I am working on an RTSP RTP-over-TCP H264 streaming application fed by a live hardware-based encoder.
I am sure there is example code for this somewhere; any reference or link would be appreciated.

Most of the hardware appliances I have had to deal with (mostly IP cameras) use Live555. It is quite straightforward: you set up the needed tracks, then provide NAL units, and the framework takes care of all the rest. They have a nice set of test programs that demonstrate the API - at least I was able to make an RTSP server for a Texas Instruments PandaBoard SoC, and there is plenty of information on Live555 on Stack Overflow.
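Once a Live555-based server is up, it is easy to sanity-check the RTP-over-TCP path from the client side. A minimal sketch, assuming a server at rtsp://192.168.0.10:8554/live (placeholder URL): Live555's own openRTSP test client requests TCP interleaving with -t, and ffplay can force the same transport:

openRTSP -t rtsp://192.168.0.10:8554/live

ffplay -rtsp_transport tcp rtsp://192.168.0.10:8554/live

With either command, all RTP/RTCP traffic travels inside the single RTSP TCP connection instead of over separate UDP ports.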

Related

How to play RTSP stream from ip video camera and NVR on user web page

I want to play RTSP streams from IP video cameras (MP4, H264) on my intranet web page; I use React. I have 12 cameras and an NVR.
I have not found a way to do this without an intermediate server (WebRTC is not suitable) that spends resources on transcoding the H264 stream to MJPEG.
If I set a high resolution and quality for the stream, a lot of resources are spent on transcoding, and, most importantly, streaming the MJPEG images takes a lot of traffic.
Is there a way or solution to stream from the IP camera directly to the web page, so that decoding happens on the user's web browser side?
This would free the intermediate server from a heavy load for big streams.
It is necessary that playback works on mobile phones.
Thanks for the answer.
There is no way to stream an RTSP camera's H264 video directly to a web browser.
But cameras support outputting still JPEG images - you can create a web page that displays such an image from the camera every 200 ms or so.
If you are not happy with the above solution, you must use a media server in between, which will pull the RTSP stream from the camera and convert it to some protocol the browser understands. You are mistaken about one thing: no video transcoding is involved. I don't know why WebRTC is not an option for you, but most media servers will offer 4 types of output:
Low latency:
WebRTC
Websockets to MSE
High latency:
HLS
MPEG-Dash
All these methods do NOT require transcoding of your original H264 video, encoded by RTSP camera/NVR. Some media servers you can use: Unreal Media Server, Wowza, Janus.
Live demo: http://www.umediaserver.net/umediaserver/demos.html
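To make the no-transcoding point concrete: the media server only re-packages the camera's H264 bitstream into a container the browser can consume. A hedged sketch with ffmpeg (the camera URL and output path are placeholders) that pulls RTSP and re-muxes to HLS with -c copy, so no video encoding happens at all:

ffmpeg -rtsp_transport tcp -i rtsp://camera.local:554/stream1 -c copy -f hls -hls_time 2 -hls_list_size 5 /var/www/hls/cam1.m3u8

A media server does the equivalent continuously for all cameras, which is cheap because it is pure re-packaging.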
No browser has native RTSP support, so if you want decoding to happen on the end user side, then you'll have to write your own custom web player.
You can start by looking at an open-source solution like this one:
git://github.com/Streamedian/html5_rtsp_player.git
It works on PC and Android, but it didn't work on iPhone for me (you can try it yourself at https://streamedian.com/demonstration/ - maybe it's just my issue). Perhaps you can find a better alternative, or fork it and make it work on all devices.
It still requires a middle-man proxy server, because it uses WebSockets to work, but since it doesn't do any video converting or decoding, it shouldn't consume many resources at all.

What is the simplest way to implement small group, low latency, one-to-many audio broadcast

I have a Linode server and need to broadcast one-to-many audio (listeners can hear but cannot talk back) to a group of three to five people. I looked at WebRTC and the Janus server, but it seems like complete overkill. Using commercial applications like Skype, Discord, etc. results in low audio quality, and it is mono. The best possible audio quality and low latency (on a par with Skype, Discord, etc.) are essential.
Any pointers would be greatly appreciated.
I can recommend building such a system based on Icecast streaming. It's an old, proven technology with latency close to real time.
You could use any set of Icecast-enabled tools for that.
As an example, here's what you can do with tools from our company:
Larix Broadcaster mobile app allows streaming in audio-only mode.
Nimble Streamer software media server can take Larix's input and produce an Icecast stream. You can use any Icecast-enabled server here instead.
SLDP Player can play the Icecast stream produced by Nimble Streamer or any other Icecast-enabled server.
This can also be built with other companies' products, so you can pick the right tools yourself.
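If you would rather assemble this from free tooling, ffmpeg can push an audio source straight to an Icecast mount point. A minimal sketch, assuming an Icecast server at icecast.example.com with source password hackme (all placeholders):

ffmpeg -re -i input.wav -c:a libmp3lame -b:a 128k -content_type audio/mpeg -f mp3 icecast://source:hackme@icecast.example.com:8000/stream

Listeners then simply open http://icecast.example.com:8000/stream in any player or browser.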
A super simple setup would be to just use the command-line tool ffmpeg (it also has an API); see the doc at https://trac.ffmpeg.org/wiki/ffserver. Note that ffserver was removed from FFmpeg in 2018, so this approach requires an older FFmpeg build (or an alternative server such as Icecast, as suggested above).
On the machine where your source audio lives, launch ffserver:
ffserver -f /etc/ffserver.conf
In that config, put the location of the source audio and the output URL it will publish to. Your client receivers can then use ffplay:
ffplay <stream URL>
ffmpeg is a free, open-source industry workhorse for audio/video manipulation; it's the underlying technology that several more visible tools, like VLC, use under the covers.

Displaying mjpeg/h264 live streaming (with additional information) on a web page?

Right now my goal is to grab a streaming video from an IP surveillance camera and display it on a web page.
The camera can encode the stream either in H264 or MJPEG, and transmits it via the RTSP protocol.
The streaming has to be available for several kinds of devices (mainly computers, Android smartphones and iPhones).
According to my findings, it seems like the best option (in terms of latency) is to transmit the frames of the video through a websocket:
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets.
Almost all the implementations of this mechanism I've found are based on MJPEG, since it's easier to get the video frames.
There's also an h264 player: https://github.com/131/h264-live-player, based on https://github.com/mbebenita/Broadway, which I didn't manage to run (I would appreciate any help in that respect).
Now the first question is: is it worth trying to work with h264 (since it saves a lot of bandwidth), or would the h264 decoding process introduce too much latency?
I would also like to ask if anyone knows a better solution than the one I'm trying to implement.
Finally, where I say "additional information" I mean that I might want to include some additional data associated with certain video frames (something like subtitles or telemetry data).
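For reference, the pipeline described in the phoboslab article boils down to one ffmpeg command feeding a small WebSocket relay (the jsmpeg pattern). A hedged sketch, where the camera URL, relay port and secret path are all placeholders:

ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream -f mpegts -codec:v mpeg1video -b:v 1000k -r 25 http://localhost:8081/supersecret

The relay fans the MPEG-TS out to browsers over WebSockets, where the MPEG1 video is decoded in JavaScript; that same channel would also be a natural place to multiplex the additional telemetry/subtitle data alongside the frames.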

What's the best protocol for live audio (radio) streaming for mobile and web?

I am trying to build a website and mobile apps (iOS, Android) for an internet radio station.
Website users broadcast their music or radio, and mobile users just listen to radio stations and chat with other listeners.
I searched for a week and made a prototype with the Wowza engine (using HLS and RTMP) and a SHOUTcast server on Amazon EC2.
Using HLS has a delay of 5 seconds, but RTMP and SHOUTcast have a 2-second delay.
With this result, I think I should choose RTMP or SHOUTcast.
But I am not sure RTMP and SHOUTcast are the best protocols. :(
What protocol should I choose?
Do I need to provide a various protocol to cover all platform?
This is a very broad question. Let's start with the distribution protocol.
Streaming Protocol
HLS has the advantage of allowing users to get the stream in the bitrate that is best for their connection. Clients can scale up/down seamlessly without stopping playback. This is particularly important for video, but for audio even mobile clients are capable of playing 128kbit streams in most areas. If you intend to have a variety of bitrates available and want to change quality mid-stream, then HLS is a good protocol for you.
The downside of HLS is compatibility. iOS supports it, but that's about it. Android has HLS support but it is still buggy. (Maybe in another year or two once all the Android 3.0 folks are gone, this won't be as much of an issue.) JWPlayer has some hacks to make HLS work in Flash for desktop users.
I wouldn't bother with RTMP unless you're only concerned with Flash users.
Pure progressive streaming with HTTP is the route I almost always choose to go. Everything can play it. (Even my Palm Pilot's default media player from 12 years ago.) It's simple to implement and well understood.
SHOUTcast is effectively HTTP, but a poorly implemented version that has compatibility issues, particularly on mobile devices. It has a non-standard status line in its response which breaks a lot of clients. Icecast is a good alternative, and is what I would recommend for production use today. As another option, I have created my own streaming service called AudioPump which is HTTP as well, and has been specifically built to fix compatibility with oddball mobile clients, such as native Android players on old hardware. It isn't generally available yet, but you can contact me at brad#audiopump.co if you want to try it.
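As a concrete illustration of the HLS option: a hedged sketch with ffmpeg (input and output paths are placeholders) that packages an audio source as a single-bitrate HLS stream; the hls muxer can also emit several renditions via -var_stream_map if you want the adaptive behavior described above:

ffmpeg -re -i input.mp3 -c:a aac -b:a 128k -f hls -hls_time 6 -hls_list_size 10 -hls_flags delete_segments /var/www/radio/stream.m3u8

iOS plays the resulting .m3u8 natively; for other clients you would front it with a player that implements HLS.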
Latency
You mentioned a latency of 2 seconds being desirable. If you're getting 2-second latency with SHOUTcast, something is wrong. You don't want latency that low, particularly if you're streaming to mobile clients. I usually start with a 20-second buffer at a minimum, which is flushed to the client as fast as it can receive it. This enables immediate starting of the stream playback (as it fills up the client-side buffer so it can begin decoding) while providing some protection against buffer underruns due to network conditions. It's not uncommon for mobile users to walk around the corner of a building and lose their nice signal quality. You want your stream to survive that as best as possible, so if you have already sent the data to cover the drop out, the user doesn't have to know or care that their connection became mediocre for a short period of time.
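In Icecast terms, that start-up buffer is the "burst". A hedged excerpt of the <limits> block in icecast.xml (values illustrative; burst-size is in bytes, so size it to roughly 20 seconds of your stream bitrate):

<limits>
  <queue-size>524288</queue-size>
  <burst-on-connect>1</burst-on-connect>
  <burst-size>327680</burst-size>
</limits>

At 128 kbit/s the stream produces 16 KB per second, so 20 seconds is about 320 KB, hence the burst-size above.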
If you do require low latency, you're looking at the wrong technology entirely. For low latency, check out WebRTC.
You certainly can tweak your traditional internet radio setup to reduce latency, but rarely is that a good idea.
Codec
Codec choice is what will dictate your compatibility more than anything else. MP3 is easily the most compatible, and AAC isn't far behind. If you go with AAC, you get better quality audio for a given bitrate. Most folks use this to reduce their bandwidth bill.
There are licensing fees with MP3, and there may be with AAC depending on what you're using for a codec. Check with a lawyer. I am not one, and the licensing is extremely complicated.
Other codecs include Vorbis and Opus. If you can use Opus, do so as the licensing is wide open and you get good quality for the bandwidth. Client compatibility here though is the killer of Opus. (Maybe in a few years it will be better.) Vorbis is a mediocre codec, but is free and clear.
On the extreme end, I have some stations doing their streaming in FLAC. This is lossless audio quality, but you're paying for roughly 8x the bandwidth of a medium-quality MP3 station. FLAC-over-HTTP streaming compatibility is not good at the moment, but it works alright in VLC.
It is very common to support multiple codecs for your streams. Depending on your budget, if you can't do that, you're best off with MP3.
Finally on encoding, don't go from a lossy codec to another lossy codec if you can help it. Try to get the output stream as close to the input as possible. If you re-encode audio, you lose quality every time.
Recording from Browser
You mentioned users streaming from a browser. I built something like this a couple years ago with the Web Audio API where the audio is captured and then encoded and sent off to Icecast/SHOUTcast servers. Check it out here: http://demo.audiopump.co:3000/ A brief explanation of how it works is here: https://stackoverflow.com/a/20850467/362536
Anyway, I hope this helps you get started.
Streaming straight audio/mpeg (mp3 packets) has worked everywhere I've tried.
If you are developing an app, go with AAC; if you are simply playing via a web browser, you need an HTML5 implementation, which means MP3. Custom protocols like RTMP or SHOUTcast require additional UI to be built. There are some third-party players available in the open-source world; you can either use them or stick to HTML5 MP3/OGG, as most people nowadays use Chrome or other HTML5-compliant browsers.

Requirement for Transport Stream streaming server

Hey everyone,
We are designing a television module. In the current architecture we have 2 independent devices, each running Linux on an Atom processor. We have a requirement to stream a live transport stream from one device to another over a network. I tried looking for streaming software running on Linux capable of live-streaming transport streams, but could not find any.
Any suggestions will be appreciated.
--
Sen
Try the following:
ffmpeg
GStreamer
VLC
Live555
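Since both boxes run Linux, a concrete version of the ffmpeg option is to re-mux the live TS to UDP on the sender and receive it with any of the tools above. A minimal sketch (addresses, port and file names are placeholders; -c copy means nothing is re-encoded):

ffmpeg -re -i input.ts -c copy -f mpegts "udp://239.255.0.1:1234?pkt_size=1316"

ffplay udp://239.255.0.1:1234

pkt_size=1316 keeps each UDP datagram at 7 x 188-byte TS packets, the usual framing for transport streams over IP.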
I'm using "Nginx" with "ffmpeg" on "linux" server.
There are many tools for this process. but newly I using srs.
Nginx
SRS
for send "RTMP" stream please use "Vmix" or "OBS".
For live streaming, both #dipan-mehta and #fakhredin-gholamizadeh list plenty of open-source software, which is stable and widely used.
I want to share more about live streaming use scenarios, starting with WebRTC. Although WebRTC was designed for video conferencing, it is also fine to use for live streaming, especially for realtime latency (<1s). Let's take a view of end-to-end latency:
HLS or DASH: about 3~10s latency; LLHLS may be lower, about 3~5s. Note that for all the file-based protocols (HLS, HDS, DASH, CMAF, etc.) it is impossible to get 1s latency.
RTMP or HTTP-FLV: about 1~3s latency; works quite well for most live streaming use scenarios.
SRT or WebRTC: about 0.2~1s latency, because they are UDP streaming protocols, without the cumulative latency caused by network jitter.
Note that SRT is used in publisher systems, to replace RTMP; it is not supported by players (H5) now.
In the future, the live streaming protocol may be QUIC or WebTransport, which is now an RFC, and SRS plans to support it.
As a live streaming server, SRS supports almost all of these protocols:
RTMP, HLS, HTTP-FLV: Stable.
WebRTC: Stable.
DASH, SRT: Experimental.
HDS: Deprecated.
LLHLS, CMAF, QUIC, WebTransport: In plan.
Note: Please see the latest Features.
Apart from protocols, it's also important to support DVR and clustering.
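If you want to try SRS quickly, the project publishes Docker images. A hedged quick start (the image tag and default application path may differ between releases):

docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 ossrs/srs:5

Then publish with OBS or ffmpeg to rtmp://localhost/live/livestream and play it back as HTTP-FLV at http://localhost:8080/live/livestream.flv.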
