I have two different pipelines, one for video and one for audio. They both work independently, but I'd like to merge them into a single one. I believe this is possible, but I have no idea how to do it :(
Here are my two pipelines:
Sender:
gst-launch v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=1280,height=720,framerate=10/1' ! ffmpegcolorspace ! vpuenc codec=6 ! rtph264pay ! udpsink host=192.168.20.26 port=5000
gst-launch alsasrc device=hw:2 ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! udpsink host=192.168.20.26 port=5001
Receiver:
gst-launch udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, payload=(int)96, encoding-name=(string)H264" ! rtph264depay ! ffdec_h264 ! xvimagesink
gst-launch udpsrc port=5001 caps="application/x-rtp" ! rtppcmadepay ! alawdec ! alsasink
Moreover, does anyone know what the resulting SDP file would be, so I can also open the stream in VLC if needed?
Any pointers would be of great help ;)
Thank you.
To merge the sender:
gst-launch v4l2src device=/dev/video0 ! \
'video/x-raw-yuv,width=1280,height=720,framerate=10/1' ! \
ffmpegcolorspace ! \
vpuenc codec=6 ! \
rtph264pay ! \
udpsink host=192.168.20.26 port=5000 alsasrc device=hw:2 ! \
audioconvert ! \
audioresample ! \
alawenc ! \
rtppcmapay ! \
udpsink host=192.168.20.26 port=5001
To merge the receiver:
gst-launch udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, payload=(int)96, encoding-name=(string)H264" ! \
rtph264depay ! \
ffdec_h264 ! \
xvimagesink udpsrc port=5001 caps="application/x-rtp" ! \
rtppcmadepay ! \
alawdec ! \
alsasink
The SDP file would be of the following form (in general terms; this is probably not exact):
v=0
c=IN IP4 <Receiver IP>
m=video 5000 RTP/AVP 96
a=recvonly
a=rtpmap:96 H264/90000
m=audio 5001 RTP/AVP 8
a=recvonly
a=rtpmap:8 PCMA/8000/1
You should change the channel count in the PCMA rtpmap line (e.g. PCMA/8000/2) if the audio has two channels.
Also, you MAY need to add an a=fmtp:96 sprop-parameter-sets=<your sprop-parameter-sets from the caps> line after the video rtpmap line.
You should be able to get the full caps for both pipelines by launching them verbosely (gst-launch -v). That way you can get the number of channels and clock rate for PCMA and your sprop-parameter-sets for H264.
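To actually open it in VLC, you can write the description above to a file and play that file. A minimal sketch: the IP and ports are taken from the sender pipelines above (192.168.20.26, 5000/5001), and the o=/s=/t= lines are standard SDP boilerplate I've added so VLC accepts the file; adjust everything for your setup.

```shell
# Write the SDP to a file; address/ports come from the sender pipelines above.
cat > stream.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 192.168.20.26
s=GStreamer A/V stream
c=IN IP4 192.168.20.26
t=0 0
m=video 5000 RTP/AVP 96
a=recvonly
a=rtpmap:96 H264/90000
m=audio 5001 RTP/AVP 8
a=recvonly
a=rtpmap:8 PCMA/8000/1
EOF
# Then, on the receiving machine:
#   vlc stream.sdp
```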
I have an assembled pipeline for use with a program in Python. So far, I'm testing this pipeline in the console, and I've come up with a strange result. If I try to play the resulting sound with a pipeline that includes the pitch element, I get only strange clicks, but if I remove the pitch part, I get clean sound.
Generator command:
gst-launch-1.0 -v filesrc location=morse.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=127.0.0.1 port=4000
Receive command:
gst-launch-1.0 audiomixer name=mixer udpsrc name=src0 uri=udp://127.0.0.1:4000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)4000, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! tee name=app ! queue ! rtpL16depay ! mixer.sink_0 udpsrc name=src1 uri=udp://127.0.0.1:5001 caps="application/x-rtp, media=(string)audio, clock-rate=(int)4000, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! queue ! rtpL16depay ! mixer.sink_1 mixer. ! tee name=t ! queue ! audioconvert ! audioresample ! audiorate ! pitch pitch=1.0 ! audiopanorama name=panorama panorama=-1.00 ! autoaudiosink name=audio_sink app. ! queue ! appsink name=asink emit-signals=True
When using GStreamer in my program, I would like to offload as many options as possible to it, since it is much more reliable in my opinion.
The question is: how do I adjust the pitch, and why does the pitch element keep the command from working?
You may need to set encoding-name=L16 in the udpsrc output caps:
Sender:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE ! rtpL16pay ! udpsink host=127.0.0.1 port=4000 -v
Receiver:
gst-launch-1.0 udpsrc port=4000 ! application/x-rtp,media=audio,encoding-name=L16,clock-rate=44100,format=S16BE ! rtpL16depay ! audioconvert ! autoaudiosink -v
[EDIT: This works fine on my side.
I created 2 live sources:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE,rate=44100 ! rtpL16pay ! application/x-rtp,encoding-name=L16 ! udpsink host=127.0.0.1 port=4000 -v
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE,rate=44100 ! rtpL16pay ! application/x-rtp,encoding-name=L16 ! udpsink host=127.0.0.1 port=5001 -v
Then used:
gst-launch-1.0 \
audiomixer name=mixer \
udpsrc name=src0 uri=udp://127.0.0.1:4000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! tee name=app ! queue ! rtpL16depay ! queue ! mixer.sink_0 \
udpsrc name=src1 uri=udp://127.0.0.1:5001 caps="application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! queue ! rtpL16depay ! queue ! mixer.sink_1 \
mixer. ! tee name=t ! queue ! audioconvert ! audioresample ! audiorate ! pitch pitch=1.0 ! audiopanorama name=panorama panorama=-1.00 ! autoaudiosink name=audio_sink \
app. ! queue ! appsink name=asink emit-signals=True
without problem.
What seems wrong is the low clock-rate in your pipeline. Try setting rate=44100 in the caps after audioresample in the senders.]
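Applied to the original wav sender, that fix would look something like the following. This is an untested sketch: the only change is the rate=44100 caps filter after audioresample (S16BE is what rtpL16pay expects), and the receiver caps must then use clock-rate=(int)44100 to match.

```shell
gst-launch-1.0 -v filesrc location=morse.wav ! wavparse ! audioconvert ! \
    audioresample ! audio/x-raw,format=S16BE,rate=44100 ! rtpL16pay ! \
    udpsink host=127.0.0.1 port=4000
```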
I am using GStreamer to stream live video / audio from a Pi 3B with a Pi camera module and a USB microphone. My end goal is to use the audio from the one USB microphone both in the live video / audio stream AND as the input to a Python script. I understand that this can be done with the ALSA dsnoop plugin, and I have been able to demonstrate it with this /etc/asound.conf config:
pcm.myTest {
type dsnoop
ipc_key 2241234
slave {
pcm "hw:1,0"
channels 1
}
}
pcm.!default {
type asym
playback.pcm {
type plug
slave.pcm "hw:0,0"
}
capture.pcm {
type plug
slave.pcm "myTest"
}
}
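As a side note (my own suggestion, not from the post): before wiring the dsnoop device into GStreamer, you can sanity-check it with arecord from alsa-utils. The point of dsnoop is shared capture, so running this in two terminals at once should work, which the raw hw:1,0 device would not allow:

```shell
# Capture 5 seconds from the dsnoop PCM defined in /etc/asound.conf above.
# Run simultaneously in two terminals to confirm shared access works.
arecord -D plug:myTest -f S16_LE -r 48000 -d 5 dsnoop-test.wav
```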
The video / audio stream works perfectly using the following GStreamer settings, but I am unable to use the microphone in other applications (note the "hw:1,0"):
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=hw:1,0 \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
The following (which uses dsnoop) causes an issue in the video stream that looks like some kind of synchronization problem: instead of a nice smooth 24 frames per second, I get one frame every ~2-3 seconds. The audio continues to work well, and I'm able to use the USB mic simultaneously in other applications.
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=plug:myTest \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
I've tried a few things that I found in some peripherally related forums, to no avail, and I'm feeling kind of stuck. Do any of you have suggestions on getting a stream to play nicely with dsnoop, so that I can avoid buying another microphone for this project?
Thank you!
For posterity, I received a great tip from the GStreamer developer forum.
Adding provide-clock=false to the alsasrc line did the trick! So the GStreamer call becomes:
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=plug:myTest provide-clock=false \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
One minor side effect of this approach is that the audio is out of sync with the video by about 0.5 seconds. I'm curious to know whether there is a way to sync the two up a little better, or whether this is just one of the inevitable tradeoffs when trying to use a dsnoop device with GStreamer.
I have been trying to stream a live video from a RPi to a browser using GStreamer, i.e. RPi -> MediaServer -> Browser.
However, the displayed video is corrupted:
[Image: corrupted video output]
I have isolated the problem to the GStreamer pipeline by streaming to a different port on the Pi and saving the stream to .mp4; however, this saved video does not play either.
Bash script to send the stream
PEER_A={KMS_AUDIO_PORT} PEER_V={KMS_VIDEO_PORT} PEER_IP={KMS_PUBLIC_IP} \
SELF_PATH="{PATH_TO_VIDEO_FILE}" \
SELF_A=5006 SELF_ASSRC=445566 \
SELF_V=5004 SELF_VSSRC=112233 \
bash -c 'gst-launch-1.0 -e \
rtpbin name=r sdes="application/x-rtp-source-sdes,cname=(string)\"user\#example.com\"" \
rpicamsrc ! video/x-raw,width=200,height=150,framerate=25/1 ! decodebin name=d \
d. ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
r.send_rtcp_src_1 ! udpsink host=$PEER_IP port=$((PEER_V+1)) bind-port=$((SELF_V+1)) sync=false async=false \
udpsrc port=$((SELF_V+1)) ! tee name=t \
t. ! queue ! r.recv_rtcp_sink_1 \
t. ! queue ! fakesink dump=true async=false'
Script to receive the stream and save to mp4
gst-launch-1.0 udpsrc port=23938 caps="application/x-rtp" ! rtph264depay ! h264parse ! mp4mux ! filesink location=~/Desktop/test.mp4
Any ideas about what is wrong with my pipeline setup would be greatly appreciated. Thanks.
I'm initiating RTP stream from my Raspberry camera using:
raspivid -n -vf -fl -t 0 -w 640 -h 480 -b 1200000 -fps 20 -pf baseline -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay pt=96 config-interval=10 ! udpsink host=192.168.2.3 port=5000
On the client side, I'm converting it to HLS and uploading it to a web server:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,payload=96 ! rtph264depay ! mpegtsmux ! hlssink max-files=5 target-duration=5 location=C:/xampp/htdocs/live/segment%%05d.ts playlist-location=C:/xampp/htdocs/live/playlist.m3u8
The above works for me. On the other hand, some players do not play the HLS stream, since it has no audio track. I'm trying to figure out how I can add a dummy audio track. I tried many things, but no luck, e.g.:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,payload=96 ! rtph264depay ! h264parse ! mux. audiotestsrc wave=4 freq=200 ! audioconvert ! queue ! mux. mpegtsmux name=mux ! hlssink max-files=5 target-duration=5 location=C:/xampp/htdocs/live/segment%%05d.ts playlist-location=C:/xampp/htdocs/live/playlist.m3u8
or
gst-launch-1.0 -e -v udpsrc port=5000 name=src ! application/x-rtp,payload=96 ! rtph264depay ! h264parse ! mpegtsmux name=mux ! audiotestsrc wave=silence src. ! audioconvert ! wavenc ! rtpmp4gdepay ! aacparse ! mux. ! hlssink max-files=5 target-duration=5 location=C:/xampp/htdocs/live/segment%%05d.ts playlist-location=C:/xampp/htdocs/live/playlist.m3u8
Any help is appreciated
What was your idea for these pipelines? It looks like you are trying to mux uncompressed audio data, and I don't think that is what you want. I expected something like this for the audio path:
audiotestsrc wave=silence ! voaacenc ! aacparse ! mux.
Note that there may be more specific requirements - like number of audio channels or specific sample rates that are supported by your HLS players.
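Putting that audio path into your first attempt, the full pipeline might look roughly like the following. This is an untested sketch: it assumes voaacenc is available in your GStreamer build, keeps your Windows paths and the %%05d batch-file escaping, and you may additionally need is-live=true on audiotestsrc so the silent track runs in real time alongside the live video.

```shell
gst-launch-1.0 -e udpsrc port=5000 ! application/x-rtp,payload=96 ! \
    rtph264depay ! h264parse ! mux. \
    audiotestsrc wave=silence ! voaacenc ! aacparse ! mux. \
    mpegtsmux name=mux ! hlssink max-files=5 target-duration=5 \
    location=C:/xampp/htdocs/live/segment%%05d.ts \
    playlist-location=C:/xampp/htdocs/live/playlist.m3u8
```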
I need to read a pcap file and convert it into an AVI file with audio and video using GStreamer.
If I try the following command, it only works for generating a video file.
Video Only
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
!"application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! avimux ! filesink location=testh264.avi
Audio Only
And if I use the following command, it only works for generating an audio file.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! avimux ! filesink location=test1audio.avi
Video + Audio
When I combine the two commands as follows, I encounter an error message:
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data flow error.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
!"application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! queue ! mux. \
filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! queue ! avimux name=mux ! filesink location=testVideoAudio.avi
Please kindly give me some solutions or suggestions regarding this issue.
Thank you in advance.
Eric
Instead of the second "filesrc ! pcapparse", give the first pcapparse a name=demux, drop the src-port argument, and start the second branch from demux.