I am trying to play two channels, with audio in one channel and silence in the other:
$ gst-launch \
interleave name=i ! alsasink \
filesrc location=/home/test1.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
After 10 seconds I want to play silence in the first channel and some audio in the second:
$ gst-launch \
interleave name=i ! alsasink \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
filesrc location=/home/test2.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
This works on the PC side, running the two pipelines in two different terminals or putting one of them in the background. But when I play one pipeline on the am335x board and try to start the other one, I get this:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstAlsaSink:alsasink0: Could not open audio device for playback.
Device is being used by another application.
Additional debug info:
gstalsasink.c(697): gst_alsasink_open (): /GstPipeline:pipeline0/GstAlsaSink:alsasink0:
Device 'default' is busy
Setting pipeline to NULL ...
Freeing pipeline ...
When we check gstalsasink.c, it calls snd_pcm_open in non-blocking mode:
CHECK (snd_pcm_open (&alsa->handle, alsa->device, SND_PCM_STREAM_PLAYBACK,
SND_PCM_NONBLOCK), open_error);
Then why is it blocking other applications from using the audio device?
Can anyone suggest what to do on the target side, since alsasink on the PC side works perfectly?
Could there be a small delay in closing the ALSA device on your embedded hardware? Check with fuser which process still has it open. Also consider using gnonlin to build sequential playback of streams; this will reuse the existing audio sink.
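To see which process still holds the PCM device, fuser can be pointed at the ALSA device nodes; a minimal check, assuming the usual /dev/snd layout on the board:

$ fuser -v /dev/snd/pcm*

Also note that SND_PCM_NONBLOCK only makes the snd_pcm_open call itself return immediately with EBUSY instead of waiting for the device to become free; it does not make the device shareable. Concurrent playback on a plain hw device generally needs a software mixing layer such as the ALSA dmix plugin, which (or a sound server like PulseAudio) is typically what allows this on a desktop PC.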
BLUF: I'd like to fan out an RTSP video stream using gstreamer so multiple processes can use the gstreamer process as a source, and I'm having problems doing that with tcpserversink.
I have an IOT camera that serves the video over RTSP, so I can successfully capture video with e.g.
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
I'd like to be able to capture several videos simultaneously from the stream, with arbitrary start and stop times - for example, I might have a video that runs from 0-120, another from 40-80, another from 60-100. For reasons that are not clear, when I request too many simultaneous streams, the camera starts killing existing streams. My theory is that the camera's hardware can't handle multiple connections and is running into resource starvation issues. To get around this, I'd like my recording server to have a single process that is re-hosting the RTSP stream from the camera, and my asynchronous recorder processes can attach to that.
It would seem that the following would work for the server subprocess:
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
tcpserversink port=29000
and the following for the asynchronous recorder:
gst-launch-1.0 -e tcpclientsrc port=29000 \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
But it doesn't. The specific error I'm seeing on my client process is:
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data stream error.
The documentation for tcpserversink seems to indicate that you can just attach any pipeline end there and you're fine. It seems this isn't the case. What am I missing?
Try adding a ! after data: "data ! tcpserversink port=29000"
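Spelled out in full, the suggested server pipeline would read:

gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! tcpserversink port=29000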
I work on embedded Linux and I want to play video with minimal CPU usage. After I finished compiling, I tried playing video with MPlayer and GStreamer. MPlayer uses 10-20% CPU on average, and I want to get the same performance from GStreamer, so I tried these commands:
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
These commands also use 10-20% CPU on average, which is the number I want. But audio did not work with either command; I tried adding audio elements but could not get them to work.
I also tried gst-launch-1.0 playbin uri=file:///video_path.mpeg. Audio works with this command, but the CPU usage is so high that I'd rather not use it.
How can I get audio working with commands 1 or 2?
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
With the above two pipelines you are asking GStreamer to play just the video, so as a result you aren't getting any audio.
gst-launch filesrc location=video_path.mpeg ! mpegdemux name=demuxer \
demuxer. ! queue ! mpeg2dec ! autovideosink \
demuxer. ! queue ! mad ! audioconvert ! audioresample ! autoaudiosink
The above pipeline should play both audio and video.
Note: if you have support for hardware decoding, that would further reduce CPU usage.
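As a sketch only: on a GStreamer 1.x platform that ships gst-omx (a Raspberry Pi, for example), a hardware decoder could be dropped in where mpeg2dec sits. The element names omxmpeg2videodec and mpg123audiodec here are assumptions that depend on your platform and installed plugins:

gst-launch-1.0 filesrc location=video_path.mpeg ! mpegpsdemux name=demuxer \
demuxer. ! queue ! mpegvideoparse ! omxmpeg2videodec ! autovideosink \
demuxer. ! queue ! mpegaudioparse ! mpg123audiodec ! audioconvert ! audioresample ! autoaudiosink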
I'm trying to send audio over an RTP stream using GStreamer with the lowest latency possible, and I want to do it from a Pepper (GStreamer 0.10) to my computer (GStreamer 0.10 or 1.0).
I can send audio with little latency (20 ms) from the computer to the Pepper; however, it doesn't work as well from the Pepper to the computer. When I try to set the buffer-time under 200 ms, I get this type of error:
WARNING: Can't record audio fast enough
Dropped 318 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
I used the answers found here so far and worked with the following pipelines:
Sender
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Receiver
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! autoaudiosink buffer-time=80000 sync=false
I don't really know how to tackle this issue, as the CPU usage is not abnormal.
To be frank, I am quite new at this, so I don't get which parameters to play with to get low latency. I hope someone can help me! (and that it is not a hardware problem too ^^)
Thanks a lot!
I don't think gst-launch-0.10 is made to work in real time (RT).
Please consider writing your own program (even using GStreamer) to perform the streaming from an RT thread. NAOqi OS has the RT patches included and supports this.
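Short of writing a program, one rough way to test the RT idea is to launch the existing sender pipeline under a real-time scheduling policy with chrt (assuming util-linux is available on the robot and you have the privileges for SCHED_FIFO; the priority of 50 is an arbitrary pick):

chrt -f 50 gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false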
But with network involved in your pipeline, you may not be able to guarantee it is going to be processed in time.
So maybe the simplest solution could be to keep a queue before the audio processing of the sender:
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
queue ! audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Also consider that the receiver may cause a drop if it cannot process the audio in time, and that a queue might help too:
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! queue ! autoaudiosink buffer-time=80000 sync=false
I have been stuck on this for days now. I am trying to come up with a GStreamer pipeline that will stream H.264 video and compressed audio (AAC, mu-law, whatever, I don't really care) over a single RTP stream. The problem always seems to be with the multiplexer. I've tried the asf, avi, mpegts, Matroska and flv multiplexers, and it seems they are all oriented towards files (not network streaming) and therefore require header information. Anyway, here's my latest attempt:
gst-launch-1.0 -e --gst-debug-level=4 \
flvmux name=flashmux streamable=true ! flvdemux name=flashdemux ! decodebin name=decode \
videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! flashmux. \
audiotestsrc ! 'audio/x-raw,format=S16LE,rate=22050,channels=2,layout=interleaved' ! flashmux. \
decode. ! queue ! autovideoconvert ! fpsdisplaysink sync=false \
decode. ! queue ! audioconvert ! alsasink device="hw:1,0"
This pipeline removes RTP and simply feeds the decoder straight from the encoder. Also, this attempt uses raw audio, not encoded audio. Any help will be greatly appreciated!
To stream video+audio you should use two different ports.
Use the rtpbin element to manage the RTP sessions.
Example: http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp/server-v4l2-H264-alsasrc-PCMA.sh
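A minimal two-port sketch along the lines of that example (the test sources and localhost ports are placeholders; substitute your omxh264enc and real sources as needed):

gst-launch-1.0 rtpbin name=rtpbin \
videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
audiotestsrc ! alawenc ! rtppcmapay ! rtpbin.send_rtp_sink_1 \
rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002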
I am using GStreamer 0.10 on Ubuntu for streaming a webcam video to an RTMP server. I am getting video output, but there is a problem with the audio. The command below is used for streaming:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' demux. alsasrc device="hw:0,0" ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! pulseaudiosink
By running the above command I got an error:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 13920 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
So the audio is not audible.
Please help me solve this problem.
Thanks in advance
Ameeth
I don't understand your pipeline. What is "demux." in the middle?
The problem you are facing is because you have not separated your elements with queues. Keep a queue before your sinks and after your sources to give the rest of the pipeline separate threads to run in. That should get rid of the issue.
Since I don't have PulseAudio or an RTMP receiver on my system, I have tested the following and it works.
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900000 tune=zerolatency ! queue ! flvmux name=mux ! fakesink alsasrc ! queue ! audioresample ! audioconvert ! queue ! autoaudiosink
You can change it accordingly and use it. The only thing I had to do to make it work and remove the error you are facing was to add the queues.
For me (Logitech C920 on a Raspberry Pi 3 w/ GStreamer 1.4.4), I was able to get rid of the "Dropped samples" warning by using audioresample to set the sampling rate of the alsasrc to something that flvmux liked. From gst-inspect-1.0 flvmux, it looks like flvmux only supports the 5512, 11025, 22050 and 44100 sample rates for x-raw, and 5512, 8000, 11025, 16000, 22050 and 44100 for mp4. Here's my working pipeline:
gst-launch-1.0 -v -e \
uvch264src initial-bitrate=800000 average-bitrate=800000 iframe-period=2000 device=/dev/video0 name=src auto-start=true \
src.vidsrc ! video/x-h264,width=864,height=480,framerate=30/1 ! h264parse ! mux. \
alsasrc device=hw:1 ! 'audio/x-raw, rate=32000, format=S16LE, channels=2' ! queue ! audioresample ! "audio/x-raw,rate=44100" ! queue ! voaacenc bitrate=96000 ! mux. \
flvmux name=mux ! rtmpsink location="rtmp://live-sea.twitch.tv/app/MYSTREAMKEY"
I was surprised that flvmux didn't complain about getting an audio source that was at an unsupported sampling rate. Not sure if that's expected behavior.