Streaming RTP audio over Ethernet with avconv

I have two laptops connected with an Ethernet cable, and I'm trying to transmit an audio stream between them.
The IPs are 192.168.137.93 for the sender and 192.168.137.1 for the receiver. The receiver runs the DHCP server and provides the Internet connection for the sender, and connectivity works well.
I'm running this command on the sender (Ubuntu Server) to capture audio from the internal mic and send it to the receiver over RTP:
avconv -f alsa -ac 1 -i default:0 -acodec mp2 -b 64k -f rtp rtp://192.168.137.1:8000
On the receiver (Windows 10) I open VLC and try to play
rtp://192.168.137.1:8000
but I get no sound.
If I open the network monitor on the receiver I can see that there's incoming traffic, and if I try to stream and play on the same machine (the sender) with
avconv -f alsa -ac 1 -i default:0 -acodec mp2 -b 64k -f rtp rtp://192.168.137.93:8000
avplay rtp://192.168.137.93:8000
it works flawlessly!
I can't really figure out where the problem is.
UPDATE:
OK, problem solved. Apparently VLC doesn't like mp2. I switched to -acodec libmp3lame and now it works!
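For reference, the working sender command is the same as before with just the codec swapped:
avconv -f alsa -ac 1 -i default:0 -acodec libmp3lame -b 64k -f rtp rtp://192.168.137.1:8000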
Next problem: latency. There's about one second of delay between the mic and the receiver's loudspeaker. I think that's a matter of codec, because there's a large delay even when running sender and receiver on the same machine.
What's the lightest codec that fits for low-latency audio transmission?
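A sketch worth trying, assuming libopus is available in your avconv/ffmpeg build: Opus is designed for interactive, low-delay audio and defaults to 20 ms frames, much smaller than the frames and buffering of MP2/MP3. The -application lowdelay flag is the libopus encoder's low-delay tuning in ffmpeg; whether your avconv build exposes it is an assumption.
avconv -f alsa -ac 1 -i default:0 -acodec libopus -b 64k -application lowdelay -f rtp rtp://192.168.137.1:8000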

Related

Webrtc streaming issue with Wowza and FFMPEG

I am trying to stream video and audio from a camera in a browser using WebRTC and Wowza Media Server (version 4.7.3).
The camera stream (h264/aac) is first transcoded with FFMPEG (version N-89681-g2477bfe built with gcc 4.8.5, the latest version available on the ffmpeg website) to VP8/OPUS and then pushed to the Wowza server.
Using the small Wowza webpage, I ask for the Wowza stream to be displayed in the browser (Chrome version 66.0.3336.5, official canary build, 32-bit).
FFMPEG command used:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
When I click on Play stream I get very bad quality video and audio (jerky video, very bad audio).
If I use this FFMPEG command:
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec copy -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test
I get good video (fluid, smooth) but no audio (the camera microphone is on).
If libopus is the problem (as this first test suggests), note that I also tried libvorbis, but then the Chrome console shows the error "Failed to set remote offer sdp: Session error code: ERROR_CONTENT". Weird, because libvorbis is one of the codecs available for WebRTC.
Is someone else experiencing the same issue?
Thanks in advance.
You probably have no audio because Opus requires a sample rate of 48000 Hz.
You should add the flag
"-ar 48000"
to the output settings.
I also experienced the "bad quality video and audio" issues.
I finally solved the issue by adding
"-quality realtime"
to the output settings.
That worked well for me; I hope this will help you.
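Putting both suggestions together, the asker's first command would become something like this (an untested sketch, keeping the original placeholders):
ffmpeg -rtsp_transport tcp -i rtsp://<camera_stream> -vcodec libvpx -quality realtime -vb 600000 -crf 10 -qmin 0 -qmax 50 -acodec libopus -ab 32000 -ar 48000 -ac 2 -f rtsp rtsp://<IP_Address_Wowza>:<port_no_ssl>/<application_name>/test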

FFmpeg RTP_Mpegts over RTP protocol

I'm trying to implement a client/server application based on FFmpeg. Unfortunately RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as PCM 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but on startup it warns: "[mpegts @ 0x7fda13024600] frame size not set"
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified it on another RTP stream where -acodec copy didn't work.
This hangs, and when I stop it with Ctrl+C it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
Duration: N/A, start: 8.956122, bitrate: N/A
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I don't understand whether the client isn't receiving any stream, or whether it just can't write the RTP packets into the "output.wav" file. (Client or server problem?)
The old thread explains a workaround: the server could run two ffmpeg instances, one producing a "tmp.ts" file with the mpegts muxer, and the other taking "tmp.ts" as input and streaming it over RTP. Is that possible?
Is there a better way to implement this client/server with the lowest possible latency?
Thanks for any help provided.
I tested this with an .aac file and it worked:
Streaming:
(Notice I use a multicast address.
But if you test the streaming and receiving on the same machine, you can use 127.0.0.1 as the loopback address of the local host.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer. I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I use -c:a pcm_u8 while capturing on purpose, because using it on the streaming side did not work on the capturing side.
The output is a low quality 8-bit, 8 kHz mono audio file, but that is what you asked for.
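As for the two-ffmpeg-instance workaround mentioned in the question, an untested sketch could look like this (following the answer above in giving the muxer a lavfi video track; note that reading tmp.ts while it is still being written adds buffering, so this is unlikely to be the low-latency route):
ffmpeg -f lavfi -i testsrc -f avfoundation -i none:2 -map 0:v -map 1:a -ar 8000 -ac 1 -f mpegts tmp.ts
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954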

How to stream on YouTube using a Raspberry Pi?

So I'm trying to stream on YouTube using a Raspberry Pi. The idea is for one Raspberry Pi to stream the connected webcam and for another to display the stream, sort of like a surveillance camera. Both Raspberry Pis are currently running Raspbian.
So, is it possible for me to stream directly to YouTube on a Raspberry Pi?
You can use any Pi-supported RTMP/Flash encoder to publish a YouTube live event. One example is ffmpeg, which can be compiled on Raspbian.
Create your YouTube live event using the guide. You can find the various encoder settings here.
When everything is ready you can start streaming. For a 640x480@25 700k video stream the command will be something like:
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -c:v libx264 -b:v 700k -maxrate 700k -bufsize 700k -an -f flv rtmp://<youtube_rtmp_server>/<youtube_live_stream_id>
"So is it possible for me to stream directly to YouTube on a Raspberry
Pi?"
Yes. But you're going to need to do a bit of configuring and get different hardware depending on your project needs.
For my project, a day and night doorway "security camera" that streams live to youtube, I chose a Raspberry Pi Zero W running raspbian (headless) and a camera module with auto IR switching capabilities and IR lights.
I have edited the Raspbian image so that all the configuration of the wifi and camera module interfaces, plus the code and dependencies I need, is pre-installed; I can just flash an SD card, slap it into a pi+camera+powersupply setup, and it does its thing.
So, for this answer to be helpful at all, you're going to need to do plenty of research on FFMPEG, know what it is, learn what it does, and get it installed on your board... You should be able to run a few tests getting FFMPEG to just spit out maybe a 10-second long video from your camera. I wouldn't bother reading any more of my ramblings if you have not got that far yet, because things are about to get specific.
So, your board is online, you can see it on the network, it's got internet, it's got ffmpeg, it's ready to go.
Here is the ffmpeg "stream command" I use to start the live stream:
raspivid -o - -t 0 -vf -hf -fps 60 -b 12000000 -rot 180 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -i - -vcodec copy -acodec aac -ab 384k -g 17 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/SESSION_ID
I arrived at this "stream command" above by tweaking each parameter you see, one by one and in different combinations, until I eventually got a really crisp 1080p stream with no buffering issues at all, except for the occasional bit of wifi lag that comes around on my setup. You are going to need to do a ton of research into what every parameter does to get things just right, and trust me, it's going to be a pain figuring out what does what in the beginning. I would lurk all around StackOverflow and other resources, just plug around, and see what you can get to come out of your setup with these FFMPEG commands.
To test if this "stream command" or any other you find works for you, just change SESSION_ID at the end to your stream key and run it in the console.
After you get an output you are happy with, figure out on your own how you want to trigger your camera to start streaming. If you want it to start streaming as soon as the board is ready to send data, put your "stream command" in /etc/rc.local and it will run as soon as it can; see the sketch below.
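A minimal sketch of that rc.local approach, assuming you have saved your "stream command" in an executable script at /home/pi/stream.sh (a hypothetical path); the trailing & keeps it from blocking the rest of the boot:
# in /etc/rc.local, before the final "exit 0":
/home/pi/stream.sh > /home/pi/stream.log 2>&1 &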
For my project, I use 18650 cells charged by solar panels as the power source, so I have to be conscious about the power I use, and I wrote a small NodeJS program to monitor just that.
Alright, that's enough talking into the wind for now. Hopefully some of this helped someone out there. Cheers.
Audio working! This worked for me from a Raspberry Pi 4 with an RPi v1.3 camera and a cheap USB audio interface. It also picks up the default audio device, which you can set in alsamixer:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | ffmpeg -f alsa -ac 1 -ar 44100 -i default -acodec pcm_s16le -f s16le -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 60 -strict -2 -f flv rtmp://<destination/streamkey>

Problems with point to point streaming using FFmpeg

I want to live stream video from a webcam and sound from a microphone from one computer to another, but there are some problems.
When I use this command line:
ffmpeg.exe -f dshow -rtbufsize 500M -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
The FFmpeg console starts filling with yellow warning messages and the stream becomes unstable: http://s16.postimg.org/qglcgr345/Untitled.png
To solve this problem I added a new parameter to the command line to set the frame rate, -r 25:
ffmpeg.exe -f dshow -rtbufsize 500M -r 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
After I added -r 25, the problem with the yellow messages disappeared, but another problem appeared. When I freshly start FFmpeg with this command line, video and sound look synchronous, but after one or two minutes a lag of roughly 25 seconds builds up between video and sound, with the sound trailing the video. I have tried this with different protocols (UDP, TCP, RTP), but the problems are the same. Please help me!
I found the answer to my problem with "-r" and the asynchronous audio and video. For anyone interested, the answer is here: https://trac.ffmpeg.org/wiki/DirectShow (in the paragraph "Specifying input framerate").
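In short, that wiki paragraph advises giving the dshow device its frame rate via the -framerate input option rather than -r, so the device itself captures at that rate instead of ffmpeg forcing it afterwards. A sketch based on the command above, assuming the camera natively supports 25 fps:
ffmpeg.exe -f dshow -rtbufsize 500M -framerate 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234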

How to record webcam video signal with video4linux2?

I need to record a video from my webcam with ffmpeg.
I tried with this command: ffmpeg -re -f video4linux2 -i /dev/video0 video.avi
And I received this error: The v4l2 frame is 24384 bytes, but 153600 bytes are expected.
When I try the same operation with avconv, using the command avconv -f video4linux2 -i /dev/video0 video.avi, I receive the same error.
But I can see the video from my webcam with gstreamer-properties.
How do I configure v4l2 to get the video signal from my webcam?
The problem came from VirtualBox.
It recognized my webcam the first time during my tests, but afterwards only as a generic USB device.
I attached the USB device as the webcam, and now I can stream or convert videos with ffmpeg.
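Independently of the VirtualBox fix: a "frame is X bytes, but Y bytes are expected" error generally means ffmpeg assumed a raw frame size that doesn't match what the device sends (153600 bytes is exactly 320x240 in YUYV). A hedged sketch is to pin the pixel format and resolution to a mode the camera actually offers, which you can list first:
ffmpeg -f video4linux2 -list_formats all -i /dev/video0
ffmpeg -f video4linux2 -input_format yuyv422 -video_size 320x240 -i /dev/video0 video.avi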
