Has anyone successfully run real-time streaming with ffserver? - rtsp

I want to stream my video camera and microphone audio using ffserver.
ffserver claims it can do this, but I just can't find any working example.
If someone knows how, could you please show me how it's done?

Please refer to the following links:
Simple video streaming with ffserver
Live video streaming from Ubuntu (the link is broken)
The following is the configuration of my test environment:
ffserver configuration [/etc/ffserver.conf]
HttpPort 8090
RtspPort 5554
HttpBindAddress 0.0.0.0
MaxClients 1000
MaxBandwidth 10000
NoDaemon
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 5M
</Feed>
<Stream test.mpeg4>
Feed feed1.ffm
Format rtp
VideoCodec mpeg4
VideoFrameRate 15
VideoBufferSize 80000
VideoBitRate 100
VideoQMin 1
VideoQMax 5
VideoSize 352x288
PreRoll 0
NoAudio
</Stream>
Run ffserver as follows:
ffserver -d
Start video capture from web camera:
ffmpeg -r 25 -s 352x288 -f video4linux2 -i /dev/video0 http://localhost:8090/feed1.ffm
Now you can play the stream using any RTSP client. In my example I use ffplay:
ffplay "rtsp://localhost:5554/test.mpeg4"
I just tested this configuration on my laptop, and it works fine!

I also tried this with both video and audio, but neither the video nor the sound could be established.
HttpPort 8090
RtspPort 8554
HttpBindAddress 0.0.0.0
MaxClients 1000
MaxBandwidth 10000
NoDefaults
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 5M
ACL allow 127.0.0.1
</Feed>
<Stream test>
Feed feed1.ffm
Format rtp
#VideoCodec libx264
#VideoFrameRate 25
#VideoBufferSize 80000
VideoBitRate 4000
VideoSize 1920x1080
AudioCodec aac
Strict -2
AudioBitRate 8000
AudioChannels 2
AudioSampleRate 44100
AVOptionAudio flags +global_header
</Stream>
But after adding "NoAudio", only the video plays. How can I fix this problem?
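One thing worth checking (my note, not from the original thread): the capture command shown above only grabs video, so an audio-enabled stream has nothing to encode. Feeding both video and audio into the feed would look something like this, assuming an ALSA default microphone (the device names here are assumptions):
ffmpeg -f video4linux2 -i /dev/video0 \
-f alsa -i default \
http://localhost:8090/feed1.ffm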

Related

How to change mjpeg to yuyv422 from a webcam to a v4l2loopback?

Backstory: One livestreaming site I use isn't smart enough to detect the capabilities of my webcam (Logitech Brio, 4k) and instead just uses the default frames-per-second setting, which is 5 fps.
(full solution walk-through in the answer)
The best solution I could think of (besides changing livestream providers) was to create a loopback virtual webcam using v4l2loopback that I could force to have the exact settings I wanted to use on that livestream site.
For the brio, the higher frame rates come with mjpeg, not the default yuyv.
Problem 1:
I could easily read mjpeg, but unfortunately kept banging my head against the wall because v4l2loopback evidently only wanted yuyv.
I tried things like:
ffmpeg -f v4l2 \
-input_format mjpeg \
-framerate 30 \
-video_size 1280x720 \
-i /dev/video0 \
-vcodec copy \
-f v4l2 /dev/video6
and this, with the -vcodec line changed (I even tried "copy"):
ffmpeg -f v4l2 \
-input_format mjpeg \
-framerate 30 \
-video_size 1280x720 \
-i /dev/video0 \
-vcodec yuyv422 \
-f v4l2 /dev/video6
But they wouldn't work. I got errors like:
Unknown V4L2 pixel format equivalent for yuvj422p
and
...deprecated pixel format used, make sure you did set range correctly...
...V4L2 output device supports only a single raw video stream...
Eventually I got this to work. The trick is that yuyv422 is a pixel format, not a codec: the V4L2 output device only accepts a single raw video stream, so you keep the default raw video output and convert the pixel format with -pix_fmt (the winning entry):
ffmpeg -f v4l2 \
-input_format mjpeg \
-framerate 30 \
-video_size 1280x720 \
-i /dev/video0 \
-pix_fmt yuyv422 \
-f v4l2 /dev/video6
Problem 2
The next problem was getting Chrome to see the virtual webcam. It worked correctly with guvcview, and on Firefox the webcam-testing sites picked the virtual camera up without a problem.
It turns out Google, in its overly-protective nature (while it's siphoning off all our data, btw), doesn't want to use webcams that can be both read from and written to.
So when starting v4l2loopback you have to tell it to announce itself as "read only" to consumers like Chrome.
Here's the exact modprobe I use that works:
sudo modprobe v4l2loopback devices=1 exclusive_caps=1
Exact solution.
1. Figure out which webcam is the correct input webcam
Use v4l2-ctl to list all the webcams:
v4l2-ctl --list-devices
My output is this (yours will vary, I'll use mine as an example as I go):
Logitech BRIO (usb-0000:00:14.0-5.2):
/dev/video0
/dev/video1
HP HD Camera: HP HD Camera (usb-0000:00:14.0-9):
/dev/video2
/dev/video3
/dev/video4
/dev/video5
In this case my brio is video0.
2. Start v4l2loopback:
sudo modprobe v4l2loopback devices=1 exclusive_caps=1
3. Confirm your loopback device:
v4l2-ctl --list-devices
Mine now shows this, indicating video6 is the loopback:
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video6
Logitech BRIO (usb-0000:00:14.0-5.2):
/dev/video0
/dev/video1
HP HD Camera: HP HD Camera (usb-0000:00:14.0-9):
/dev/video2
/dev/video3
/dev/video4
/dev/video5
4. Determine your optimal input settings
Use guvcview to figure out which codec gives you the resolution and framerate you're looking for (you may have to use the menu -> Video -> Video Codec -> Raw camera input).
I got 60 fps using mjpeg; I only needed 30. The default yuyv gave a miserable 5 fps.
Now use ffmpeg to list the capabilities of the camera and get the matching codec:
ffmpeg -f v4l2 -list_formats all -i /dev/video0 # use your camera here from step 1
In the output you'll see something like this:
[video4linux2,v4l2 @ 0x55f1a4e989c0] Raw : yuyv422 : YUYV 4:2:2 : 640x480 160x120 176x144 320x180 320x240 352x288 340x340 424x240 440x440 480x270 640x360 800x448 800x600 848x480 960x540 1024x576 1280x720 1600x896 1920x1080
[video4linux2,v4l2 @ 0x55f1a4e989c0] Compressed: mjpeg : Motion-JPEG : 640x480 160x120 176x144 320x180 320x240 352x288 424x240 480x270 640x360 800x448 800x600 848x480 960x540 1024x576 1280x720 1600x896 1920x1080
In my case it was the mjpeg that gave the best output in guvcview, and that was the exact name of the codec (as indicated above).
5. Start ffmpeg using that input codec and changing the pixel format to yuyv:
ffmpeg -f v4l2 \
-input_format mjpeg \
-framerate 30 \
-video_size 1280x720 \
-i /dev/video0 \
-pix_fmt yuyv422 \
-f v4l2 /dev/video6
Update the video size to the highest resolution your livestream/video recording supports, as long as your camera also supports it.
Now when you want to livestream, just use the camera labeled "Dummy".
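As a quick sanity check (my addition; this assumes the loopback is /dev/video6 as above and that the ffmpeg command from step 5 is running), you can confirm the loopback now advertises YUYV at the size and rate you set:
v4l2-ctl -d /dev/video6 --list-formats-ext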

Webrtc streaming issue with Wowza and FFMPEG

I am trying to stream video and audio from a camera to a browser using WebRTC and Wowza Media Server (version 4.7.3).
The camera stream (h264/aac) is first transcoded with FFmpeg (version N-89681-g2477bfe, built with gcc 4.8.5, the latest version available on the ffmpeg website) to VP8/Opus and then pushed to the Wowza server.
Using the small Wowza webpage, I ask for the Wowza stream to be displayed in the browser (Chrome version 66.0.3336.5, official canary build, 32-bit).
FFmpeg command used:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
When I click on "Play stream", I get very poor quality video and audio (jerky video and very bad audio).
If I use this FFmpeg command instead:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec copy -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
I get good video (flowing, smooth) but no audio (the camera mic is ON).
Thinking libopus might be the problem (as this first test suggests), I tried libvorbis, but then the Chrome console shows the error "Failed to set remote offer sdp: Session error code: ERROR_CONTENT". Weird, because libvorbis is one of the codecs available for WebRTC.
Has anyone experienced the same issue?
Thanks in advance.
You probably have no audio because Opus requires a sample rate of 48000 Hz.
You should add the flag:
"-ar 48000"
to the output settings.
I also experienced the "bad quality video and audio" issues.
I finally solved them by adding:
"-quality realtime"
to the output settings.
That worked well for me; I hope it helps you.
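Putting both suggestions together, the full command would look something like this (a sketch assembled from the commands above; I have not tested it against Wowza):
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> \
-vcodec libvpx -quality realtime -vb 600000 -crf 10 -qmin 0 -qmax 50 \
-acodec libopus -ab 32000 -ar 48000 -ac 2 \
-f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test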

FFmpeg RTP_Mpegts over RTP protocol

I'm trying to implement a client/server application based on FFmpeg. Unfortunately RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server side
(1) Capture mic audio as input, (2) encode it as PCM 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but at startup it warns "[mpegts @ 0x7fda13024600] frame size not set".
Client side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified it on another RTP stream where -acodec copy didn't work.
This hangs, and when I stop it with Ctrl+C it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
Duration: N/A, start: 8.956122, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I don't understand whether the client didn't receive any stream or whether it cannot write the RTP packets into the "output.wav" file. (Client or server problem?)
The old thread explains a workaround: the server could run two ffmpeg instances, one producing a "tmp.ts" file with the mpegts muxer, and the other taking "tmp.ts" as input and streaming it over RTP (sketched below). Is that possible?
Is there a better way to implement this client/server with the lowest latency possible?
Thanks for any help provided.
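For reference, that two-instance workaround might look roughly like this (an untested sketch; the avfoundation device and the tmp.ts name come from the question, and reading a file while it is still being written may need extra care):
# instance 1: capture the mic and write an MPEG-TS file
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts
# instance 2: read the growing file at native speed and stream it over RTP
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954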
I tested this with an .aac file and it worked:
Streaming:
(Notice that I use a multicast address.
If you test the streaming and the receiving on the same machine, you can use 127.0.0.1 as the loopback address to the local host.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer. I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I use -c:a pcm_u8 on the capturing side on purpose, because using it on the streaming side did not work for the capture.
The output is a low-quality 8-bit, 8 kHz mono audio file, but that is what you asked for.
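If you just want to hear the stream instead of writing it to a file, playing it back directly should also work (my addition, untested):
ffplay "rtp://239.1.1.9:1234"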

HTTP Live Stream stops playing after a while

I have a problem with streaming with ffserver. After I start ffserver and the desktop capture, everything seems to work fine.
Then I open the browser and access the output (http://localhost:8090/test1.mpeg). It plays fine for 6-7 seconds, then it stops and I have to refresh the page to get it to work again. Does anyone know why that happens and how I can fix it?
Here is my ffserver.conf
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 40000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 10000K
ACL allow 127.0.0.1
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream test1.mpeg>
Feed feed1.ffm
Format mpeg
AudioBitRate 32
AudioChannels 1
AudioSampleRate 44100
VideoBitRate 300
VideoFrameRate 30
VideoSize 1280x1024
VideoCodec mpeg1video
AudioCodec libvorbis
NoAudio
StartSendOnKey
</Stream>
My desktop-capture command:
ffmpeg -f x11grab -r 40 -s 800x600 -framerate 50 -i :0.0+4,529 -map 0 -codec:v mpeg1video -codec:a libvorbis http://localhost:8090/feed1.ffm
The problem was that the VideoBitRate was too low. (300 kbit/s for 1280x1024 at 30 fps works out to less than 0.01 bits per pixel, far too little for mpeg1video.) I changed it to 3000 and now it runs without problems.
Now my ffserver.conf looks like this:
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 40000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 10000K
ACL allow 127.0.0.1
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream test1.mpeg>
Feed feed1.ffm
Format mpeg
AudioBitRate 50
AudioChannels 1
AudioSampleRate 44100
# Bitrate for the video stream
VideoBitRate 3000
VideoFrameRate 30
VideoSize 1280x1024
VideoCodec mpeg1video
AudioCodec libvorbis
NoAudio
StartSendOnKey
</Stream>

h264 restream works when I have no audio in ffserver conf but does not work when I try to add audio

I am trying to restream an h264 video stream from a camera. Everything works well when I have NoAudio in my conf file. However, when I add audio, even the video stream stops working. Has anyone ever encountered this?
ffmpeg -i rtsp://*** -s 320x240 -vcodec copy -acodec copy -s 320x240 -ab 64k http://*:8091/feed1.ffm
