GStreamer audio error on Linux

I am using GStreamer 0.10 on Ubuntu to stream a webcam video to an RTMP server. I get video output, but there is a problem with the audio. The command below is used for streaming:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' demux. alsasrc device="hw:0,0" ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! pulseaudiosink
Running the above command, I get this error:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 13920 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
As a result, the audio is not audible. Please help me solve this problem.
Thanks in advance,
Ameeth

I don't understand your pipeline. What is "demux." in the middle?
The problem you are facing is because you have not separated your elements with queues. Put a queue after your sources and before your sinks so that the rest of the pipeline runs in separate threads. That should get rid of the issue.
Since I don't have PulseAudio or an RTMP receiver on my system, I tested the following and it works:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900000 tune=zerolatency ! queue ! flvmux name=mux ! fakesink alsasrc ! queue ! audioresample ! audioconvert ! queue ! autoaudiosink
You can adapt it accordingly and use it. The only thing I had to do to make it work and remove the error you are facing was to add the queues.
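For reference, here is a sketch of the original pipeline with the queues added and the audio branch actually linked into the muxer (the pipeline defines no element named demux, so mux. is used instead). An AAC encoder is assumed before flvmux, since flvmux will not accept raw audio; faac is used here only as an example, so substitute whichever encoder your 0.10 build provides:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! queue ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' alsasrc device="hw:0,0" ! queue ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! faac ! queue ! mux.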

For me (Logitech C920 on a Raspberry Pi 3 with GStreamer 1.4.4), I was able to get rid of the "Dropped samples" warning by using audioresample to set the sampling rate of the alsasrc to something that flvmux liked. From gst-inspect-1.0 flvmux, it looks like flvmux only supports the 5512, 11025, 22050, and 44100 sample rates for x-raw, and 5512, 8000, 11025, 16000, 22050, and 44100 for MPEG-4 (AAC) audio. Here's my working pipeline:
gst-launch-1.0 -v -e \
uvch264src initial-bitrate=800000 average-bitrate=800000 iframe-period=2000 device=/dev/video0 name=src auto-start=true \
src.vidsrc ! video/x-h264,width=864,height=480,framerate=30/1 ! h264parse ! mux. \
alsasrc device=hw:1 ! 'audio/x-raw, rate=32000, format=S16LE, channels=2' ! queue ! audioresample ! "audio/x-raw,rate=44100" ! queue ! voaacenc bitrate=96000 ! mux. \
flvmux name=mux ! rtmpsink location="rtmp://live-sea.twitch.tv/app/MYSTREAMKEY"
I was surprised that flvmux didn't complain about getting an audio source that was at an unsupported sampling rate. Not sure if that's expected behavior.
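To check which rates your own flvmux build accepts, inspect its audio sink pad template; the supported caps are listed under Pad Templates:
gst-inspect-1.0 flvmux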

Related

gstreamer delay problems with udpsrc and tcpserversink

I'm using mediasoup to create a PlainTransport, then I forward from udpsrc to tcpserversink like this:
gst-launch-1.0 udpsrc port=57616 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,payload=(int)100,encoding-name=(string)OPUS,ssrc=(uint)613744965" ! rtpopusdepay ! opusdec ! tcpserversink port=23333 host=0.0.0.0
On the client:
gst-launch-1.0 tcpclientsrc port=23333 host=11.22.33.44 ! rawaudioparse ! decodebin ! audioconvert ! audioresample ! autoaudiosink
The problem is that the audio stream is always delayed by 2-3 seconds, and the delay increases over time. I also get warnings like this:
gstrtpbasedepayload.c(505): gst_rtp_base_depayload_handle_buffer (): /GstPipeline:pipeline0/GstRTPOpusDepay:rtpopusdepay0:
Received invalid RTP payload, dropping
WARNING: from element /GstPipeline:pipeline0/GstRTPOpusDepay:rtpopusdepay0: Could not decode stream.
Please help me solve this and improve the stream delay.
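One observation, offered as a sketch rather than a confirmed fix: opusdec outputs raw PCM, and a plain TCP byte stream carries no format information, so rawaudioparse on the client has to be told the format explicitly. The S16LE/48000/stereo values below are assumptions based on opusdec's typical output; adjust them to match your stream:
gst-launch-1.0 tcpclientsrc port=23333 host=11.22.33.44 ! rawaudioparse format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 ! audioconvert ! audioresample ! autoaudiosink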

GStreamer problem with separating audio and video

I'm new to GStreamer, basically a newbie.
I want to receive an RTMP video, process it, re-encode the video, merge it with the sound from the received video, and then send it out as a new RTMP video. Somehow I cannot get the sound working:
Receiver:
"rtmpsrc location=rtmp://xx.yy.10.40:1935/orig/1 do-timestamp=true ! queue ! flvdemux name=demux demux.video ! h264parse ! video/x-h264 ! nvh264dec ! videoconvert ! appsink"
"demux.audio ! aacparse ! queue ! mp4mux streamable=true ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=100000000"
Please note, I separated the two strings simply for better readability; both strings together form the receiver pipeline. I get no error or warning up to GST_DEBUG=3. I used mp4mux because some claim that I need a container.
Sender:
"appsrc ! videoconvert ! nvh264enc ! h264parse ! queue ! mux.video"
" shmsrc socket-path=/tmp/foo ! qtdemux ! aacparse ! queue ! mux.audio"
" flvmux name=mux ! rtmpsink location=rtmp://xx.yy.10.50:1935/result/1"
Please note, I separated the strings for better readability. Again I get no error, but reading the sound buffer from shared memory (shmsrc) simply stalls. If I remove that line, everything seems to work perfectly well, stable even for hours.
Any ideas, anyone? All the working solutions seem to use raw audio and caps. But actually I'm not interested in the audio at all, I just need it copied to the sender...
An update from our side:
We tried many things, but the answer is that this problem is inherent to GStreamer's shmsink and shmsrc elements.
When using shm communication between pipelines, you lose all metadata; the audio stream coming out of shmsrc is basically not an audio stream any more. You can test this with gst-launch:
gst-launch-1.0 --verbose rtmpsrc location=rtmp://xx.yy.10.40:1935/ai/1 do-timestamp=true timeout=10 ! flvdemux name=demux demux.video ! h264parse ! video/x-h264 ! fakesink demux.audio ! queue ! faad ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=100000
This produces a lot of info, in particular the audio and video formats:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, rate=(int)48000, channels=(int)2, codec_data=(buffer)1190 ...
/GstPipeline:pipeline0/GstShmSink:shmsink0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)avc, codec_data=(buffer)014d4029ffe10015674d402995900780227e5c04400000fa40002ee02101000468eb8f20, pixel-aspect-ratio=(fraction)1/1, width=(int)1920, height=(int)1080, framerate=(fraction)24000/1001, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, alignment=(string)au, profile=(string)main, level=(string)4.1
When you do the same on the shmsrc side (please note we only transfer the audio via shm):
gst-launch-1.0 --verbose shmsrc socket-path=/tmp/foo ! queue ! fakesink
you will get nothing; for GStreamer, "nothing" looks like this:
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
If you want to use shmsrc, you need to set the metadata via caps manually:
gst-launch-1.0 --verbose shmsrc socket-path=/tmp/foo ! 'audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2' ! queue ! fakesink
This will give you:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2
This is correct, but a totally useless solution for us, so we decided to move to libav instead of GStreamer.
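For completeness, the manual caps can be combined with a re-encode so that the sender's audio branch feeds flvmux again. A sketch under the same caps assumptions, with voaacenc assumed as the AAC encoder (any AAC encoder should do):
" shmsrc socket-path=/tmp/foo ! audio/x-raw,format=S16LE,layout=interleaved,rate=48000,channels=2 ! queue ! audioconvert ! voaacenc ! aacparse ! queue ! mux.audio"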

gstreamer rtsp stream, server runs but client crashes

I am following a video tutorial online to stream low latency video and audio using gstreamer.
Here is the video link: https://youtu.be/mNQTORvhQ6Q
I have installed all the GStreamer dependencies and plugins on both the client and server, and the rtsp package on the server as well. The server runs with no issues, but when I try to run the client it hits an error and ends the pipeline. I have tried some video-only examples and they do work, so it's something to do with the pipeline I am using.
Here is the server pipeline running from a Raspberry Pi 4:
Ran from inside the /gst-rtsp-server-1.14.4/examples folder:
./test-launch --gst-debug=0 "( alsasrc device=hw:2,0 ! "audio/x-raw,channels=1,rate=48000" ! audioconvert ! opusenc ! rtpopuspay name=pay1 pt=97 v4l2src device=/dev/video0 ! "image/jpeg,width=800,height=600,frame-rate=30/1" ! rtpjpegpay name=pay0 pt=96 )"
Here is the pipeline on the client, which is a Ubuntu PC:
gst-launch-1.0 rtspsrc latency=0 location=rtsp://192.168.127.219:8554/test name=src src. ! "application/x-rtp, channels=1, media=audio, rate=48000, encoding-name=OPUS" ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! jackaudiosink src. ! "application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)JPEG" ! rtpjitterbuffer ! rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink
It has these errors:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstJackAudioSink:jackaudiosink0: Jack server not found
Additional debug info:
gstjackaudiosink.c(355): gst_jack_ring_buffer_open_device (): /GstPipeline:pipeline0/GstJackAudioSink:jackaudiosink0:
Cannot connect to the Jack server (status 17)
Setting pipeline to NULL ...
Freeing pipeline ...
I have tested the output of jackaudiosink on its own with a test tone and it also works fine, so I assume it's specifically something about this pipeline that I haven't got quite right :(
Any help is much appreciated :)
Have you tried putting 'autoaudiosink' instead of 'jackaudiosink'? Like this:
gst-launch-1.0 rtspsrc latency=0 location=rtsp://192.168.127.219:8554/test name=src src. ! "application/x-rtp, channels=1, media=audio, rate=48000, encoding-name=OPUS" ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink src. ! "application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)JPEG" ! rtpjitterbuffer ! rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink
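Alternatively, the "Jack server not found" error just means jackaudiosink could not reach a running JACK server, so starting one before launching the pipeline should also work. For example, assuming the ALSA backend and device hw:0:
jackd -d alsa -d hw:0 &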

Using videobalance to adjust contrast and brightness in gstreamer pipeline saving camera stream to file

I have a working gstreamer pipeline, using videobalance to adjust the contrast and brightness of a camera stream, the output of which is displayed on screen:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvvidconv ! videobalance contrast=1.5 brightness=-0.3 ! nvoverlaysink
I want to do the same again, but this time record the camera stream to a file. I tried adding the videobalance element to the pipeline suggested by the authors of the drivers I'm using (which works fine otherwise):
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvv4l2h264enc ! videobalance contrast=1.5 brightness=-0.3 ! h264parse !
mp4mux ! filesink location=test.mp4 -e
But, I get the error:
WARNING: erroneous pipeline: could not link nvv4l2h264enc0 to videobalance0
Any suggestions for where I'm going wrong and/or possible solutions would be greatly appreciated.
NVIDIA's encoder outputs compressed H.264, while videobalance works on raw video in system memory, so the encoder can't be connected directly to videobalance. Instead, convert out of NVMM with nvvidconv, apply videobalance, and convert back before encoding:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvvidconv ! videobalance contrast=1.5 brightness=-0.3 ! nvvidconv !
nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=test.mp4 -e
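To verify the recording afterwards, something like this should play the file back (avdec_h264 is assumed here; on a Jetson you may prefer the platform decoder):
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink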

Gstreamer audio problem on embedded linux

I work on embedded Linux and want to play video with minimal CPU usage. After I finished compiling, I tried playing video with MPlayer and GStreamer. MPlayer uses 10-20% CPU on average, and I want to get the same performance from GStreamer, so I tried these commands:
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
These commands use 10-20% CPU on average, which is the number I want. But audio did not work with these commands; I tried adding audio elements but could not get it to work.
I also tried gst-launch-1.0 playbin uri=file:///video_path.mpeg. Audio works with this command, but the CPU usage is very high and I'd rather avoid that.
How can I get audio working with commands 1 or 2?
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
With the above two pipelines you are asking GStreamer to play only the video; as a result, you aren't getting any audio.
gst-launch filesrc location=video_path.mpeg ! mpegdemux name=demuxer demuxer. ! queue ! mpeg2dec ! autovideosink demuxer. ! queue ! mad ! audioconvert ! audioresample ! autoaudiosink
The above pipeline should play both audio and video.
Note: If you have support for hardware decoding that would reduce further CPU usage.
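The second pipeline can be extended the same way. A sketch, again assuming mad can decode the stream's MPEG audio:
gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux name=demuxer demuxer. ! queue ! mpegvideoparse ! mpeg2dec ! xvimagesink demuxer. ! queue ! mad ! audioconvert ! audioresample ! autoaudiosink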
