Encoding to AC3 with GStreamer

Is there an example for producing an AC3 stream? The only example I keep finding is:
gst-launch-1.0 -v audiotestsrc ! avenc_ac3
However, I get an "internal data flow error" every time, with the following right below it:
gstbasesrc.c(2809): gst_base_src_loop (): /GstPipeline:pipeline0/GstAudioTestSrc:audiotestsrc0:
streaming task paused, reason not-negotiated (-4)
I have GStreamer version 1.0.6.

It turns out that the bitrate parameter is optional, but the default value (0) is not valid, at least with an audiotestsrc source.
This works:
gst-launch-1.0 audiotestsrc ! audio/x-raw,channels=2 ! avenc_ac3 bitrate=192000 ! filesink location=/tmp/ac3test_20130630-0245
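You can confirm the valid bitrate range with gst-inspect-1.0 avenc_ac3. To verify the encode, a playback sketch (assuming ac3parse from gst-plugins-good and avdec_ac3 from gst-libav are installed; the filename matches the command above):
gst-launch-1.0 filesrc location=/tmp/ac3test_20130630-0245 ! ac3parse ! avdec_ac3 ! audioconvert ! autoaudiosink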

Related

Write, then read rtpopus to a file with GStreamer?

Is it possible to write rtpopus to a file and then read it back with GStreamer? It seems simple, but I'm getting nowhere and can't find any information online. Here is my attempt:
gst-launch-1.0.exe audiotestsrc ! opusenc ! rtpopuspay ! filesink location=test.opus
Then, close and run:
gst-launch-1.0.exe filesrc location="test.opus" ! rtpopusdepay ! fakesink dump=true
GStreamer fails with:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
I don't think that can work. RTP is a packetization format for network transport (typically UDP), not a storage format, so the depayloader has no way to recover packet boundaries from a plain file.
You'd be better off using a file container that supports Opus audio, such as Matroska via matroskamux:
gst-launch-1.0 -e audiotestsrc ! audioconvert ! opusenc ! matroskamux ! filesink location=test.mkv
# Let play for 5s and stop with Ctrl-C
# Replay:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! opusdec ! audioconvert ! autoaudiosink
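Ogg also carries Opus natively, so an alternative sketch (assuming oggmux and oggdemux from gst-plugins-base) is:
gst-launch-1.0 -e audiotestsrc ! audioconvert ! opusenc ! oggmux ! filesink location=test.ogg
# Replay:
gst-launch-1.0 filesrc location=test.ogg ! oggdemux ! opusdec ! audioconvert ! autoaudiosink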

GStreamer: Encode microphone audio with AAC into an MP4 container

I'm wondering if it is possible to encode audio with AAC into an MP4 container.
I have tried the following:
gst-launch-1.0 alsasrc device="hw:0,0" ! "audio/x-raw,rate=48000,channels=2,depth=16" ! queue ! audioconvert ! avenc_aac ! qtmux ! filesink location=audio.mp4
The program runs without a fault, but when I inspect the file, it has no content.
However, when I run it with avimux, the file does show the encoding details, such as the length of the audio:
gst-launch-1.0 alsasrc device="hw:0,0" ! "audio/x-raw,rate=48000,channels=2,depth=16" ! queue ! audioconvert ! avenc_aac ! avimux ! filesink location=audio.mp4
I wonder what is wrong, as I need AAC encoding (for later RTSP streaming) and need to use MP4 as the container with qtmux.
Thanks.
You don't really say what you are doing exactly, but most likely you are missing the -e option for gst-launch-1.0. With it, an EOS event is propagated through the pipeline to correctly finalize the MP4 file. Other file formats are not that picky, but MP4 needs a proper index written once all samples have been written.
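A minimal corrected sketch (assuming the same ALSA device; note that GStreamer 1.0 raw-audio caps use format=S16LE rather than the 0.10-style depth=16 field):
gst-launch-1.0 -e alsasrc device="hw:0,0" ! "audio/x-raw,format=S16LE,rate=48000,channels=2" ! queue ! audioconvert ! avenc_aac ! qtmux ! filesink location=audio.mp4
# Stop with Ctrl-C; -e sends EOS so qtmux can write the index before the file is closed.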

How to solve a RAW stream playback problem with GStreamer and VAAPI

I am currently experiencing a problem with GStreamer; here are the details:
Configuration:
Intel i7-6700
Intel HD Graphics 530
Ubuntu 18.04 LTS
GStreamer 1.0
VAAPI plugin
I receive a UDP stream from a video source; the stream is sent in raw UYVY format. Here is my command line to decode it:
gst-launch-1.0 -v udpsrc port="1234" caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! decodebin ! queue ! videoconvert ! xvimagesink
The problem: the CPU load is far too high for this kind of task, while the GPU load is almost zero.
To overcome this, I want to use VAAPI hardware acceleration, as I did in a previous project with H.264; here is that command line:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, profile-level-id=(string)640028, payload=(int)96, ssrc=(uint)2665415388, timestamp-offset=(uint)3571350145, seqnum-offset=(uint)18095, a-framerate=(string)25" ! rtph264depay ! queue ! vaapih264dec low-latency=1 ! autovideosink
The line above works perfectly and the CPU load almost disappears. So I adapted this command line for the RAW stream:
gst-launch-1.0 -v udpsrc port="1234" caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! vaapidecodebin ! videoconvert ! xvimagesink
It is the same line as the first one, but I swapped decodebin for vaapidecodebin, just as I had replaced avdec_h264 with vaapih264dec for the H.264 stream. Unfortunately it doesn't work, and I end up with this error:
WARNING: wrong pipeline: unable to connect rtpvrawdepay0 to vaapidecodebin0
How can I solve this problem? Do you have any leads?
What exactly are you trying to accelerate here? The CPU load is probably due either to the videoconvert, which runs in software to convert UYVY into a format your renderer supports (hopefully another YUV format and not RGB), or to the transfer of the uncompressed data from CPU memory to GPU memory.
Note that uncompressed image data is a much higher data rate than compressed H.264 video.
If you think the videoconvert is the expensive part, you may want to try using OpenGL for conversion and display: ... ! glupload ! glcolorconvert ! glimagesink
Maybe vaapipostproc can help with the color conversion if you don't want to go the OpenGL route.
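Note that vaapidecodebin cannot link here because rtpvrawdepay already outputs raw video/x-raw buffers, so there is nothing left to decode. A sketch of the OpenGL route, reusing the caps from the original pipeline (assuming the OpenGL elements from gst-plugins-base are installed):
gst-launch-1.0 udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, a-framerate=(string)25" ! rtpvrawdepay ! queue ! glupload ! glcolorconvert ! glimagesink
# Alternative, keeping conversion and display on the GPU via VAAPI:
gst-launch-1.0 udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, a-framerate=(string)25" ! rtpvrawdepay ! queue ! vaapipostproc ! vaapisink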

How to fix 'Losing Stream Before End of Stream' in GStreamer 0.10

I streamed video from VLC over RTSP and displayed it with GStreamer 0.10. However, while VLC was streaming, I suddenly lost the stream within the first minute, well before the end of the stream.
I used the following pipeline:
GST_DEBUG=2 gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test ! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! videorate ! xvimagesink sync=false
I got the following output:
rtpjitterbuffer.c:428:calculate_skew: delta - skew: 0:00:01.103711536 too big, reset skew
rtpjitterbuffer.c:387:calculate_skew: backward timestamps at server, taking new base time
Got EOS from element "pipeline0".
Execution ended after 59982680309 ns.
Setting pipeline to PAUSED ...
gst_rtspsrc_send: got NOT IMPLEMENTED, disable method PAUSE
How can I fix this problem?
I found a solution: use rtspt://... instead of rtsp://... to force TCP instead of UDP as the transport.
gst-launch-0.10 rtspsrc location=rtspt://127.0.0.1:8554/test ! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
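On GStreamer 1.0 the equivalent (a sketch, assuming the same local RTSP server) is to force TCP through the rtspsrc protocols property:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test protocols=tcp ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink sync=false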

GStreamer pipeline for streaming multiplexed H.264 and AAC audio between two Raspberry Pis

I have been stuck on this for days now. I am trying to come up with a GStreamer pipeline that will stream H.264 video and compressed audio (AAC, mu-law, whatever, I don't really care) over a single RTP stream. The problem always seems to be the multiplexer. I've tried the ASF, AVI, MPEG-TS, Matroska, and FLV multiplexers, and they all seem to be oriented towards files (not network streaming) and therefore require header information. Anyway, here's my latest attempt:
gst-launch-1.0 -e --gst-debug-level=4 \
flvmux name=flashmux streamable=true ! flvdemux name=flashdemux ! decodebin name=decode \
videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! flashmux. \
audiotestsrc ! 'audio/x-raw,format=S16LE,rate=22050,channels=2,layout=interleaved' ! flashmux. \
decode. ! queue ! autovideoconvert ! fpsdisplaysink sync=false \
decode. ! queue ! audioconvert ! alsasink device="hw:1,0"
This pipeline leaves out RTP and simply feeds the encoder output back into a decoder. Also, this attempt uses raw audio, not encoded. Any help will be greatly appreciated!
To stream video+audio you should use two different ports, using the rtpbin element to manage the RTP sessions.
Example: http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp/server-v4l2-H264-alsasrc-PCMA.sh
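A condensed two-port sketch along the lines of that example (assumptions: test sources instead of camera and microphone, a receiver at 192.168.1.100, and A-law audio as in the linked script):
# Sender:
gst-launch-1.0 rtpbin name=rtpbin \
videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! h264parse ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink host=192.168.1.100 port=5000 \
audiotestsrc ! 'audio/x-raw,rate=8000,channels=1' ! audioconvert ! alawenc ! rtppcmapay ! rtpbin.send_rtp_sink_1 \
rtpbin.send_rtp_src_1 ! udpsink host=192.168.1.100 port=5002
# Receiver:
gst-launch-1.0 rtpbin name=rtpbin \
udpsrc port=5000 caps='application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96' ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink \
udpsrc port=5002 caps='application/x-rtp,media=audio,clock-rate=8000,encoding-name=PCMA,payload=8' ! rtpbin.recv_rtp_sink_1 \
rtpbin. ! rtppcmadepay ! alawdec ! audioconvert ! autoaudiosink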