I am trying to record audio/video screencasts using a gstreamer pipeline, by modifying the standard GNOME gstreamer pipeline to include pulsesrc audio.
With
pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | cut -d" " -f2
I query all available audio monitor sources, in my case:
alsa_output.pci-0000_01_00.1.hdmi-stereo.monitor
alsa_output.pci-0000_00_1b.0.analog-stereo.monitor
My complete gstreamer pipeline:
queue ! videorate ! \
vp8enc min_quantizer=13 max_quantizer=13 cpu-used=5 deadline=1000000 threads=%T ! \
queue ! muxout. \
pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! \
audioconvert ! vorbisenc ! queue ! muxout. \
webmmux name=muxout
This gives me synchronized audio/video output; however, I see two problems:
heavy frame drops due to high CPU load (without audio included it is fine)
the resulting WebM file seems broken in terms of keyframes or something
Any advice appreciated ;)
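To isolate whether the frame drops come from the encoder settings rather than from GNOME Shell itself, the same pipeline can be tested standalone with gst-launch-1.0; this is only a sketch, assuming ximagesrc as a stand-in screen source and with GNOME's %T placeholder replaced by a concrete thread count:

gst-launch-1.0 ximagesrc use-damage=0 ! videoconvert ! queue ! videorate ! \
  vp8enc min_quantizer=13 max_quantizer=13 cpu-used=5 deadline=1000000 threads=4 ! \
  queue ! muxout. \
  pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! \
  audioconvert ! vorbisenc ! queue ! muxout. \
  webmmux name=muxout ! filesink location=test.webm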
Related
I need to concat multiple mp3 files together, then adjust their volume, then play the result via aplay. I currently do this using the following 3 commands:
sox file1.mp3 file2.mp3 file3.mp3 out1.wav
sox -v 0.5 out1.wav out2.wav
aplay -D plughw:1,0 out2.wav
This works correctly; the only minor issue is that it creates temporary files. I know it can be done by piping all these commands together somehow, sort of like:
sox file1.mp3 file2.mp3 file3.mp3 | sox -v 0.5 | aplay -D plughw:1,0
But I can't seem to get the piping to work (I am not really a Linux user). Any help would be much appreciated :)
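A sketch of one way to do it in a single pipe: sox can concatenate the inputs and apply the volume change in one invocation via its vol effect, writing WAV to stdout (-t wav -), which aplay then reads from stdin:

sox file1.mp3 file2.mp3 file3.mp3 -t wav - vol 0.5 | aplay -D plughw:1,0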
Is it possible to mute a section of a video file (say 5 seconds) without having to re-encode the whole audio stream with ffmpeg? I know it's technically (though probably not easily) possible by reusing the majority of the existing audio stream and only re-encoding the changed section and possibly a short section before and after, but I'm not sure if ffmpeg supports this. If it doesn't, anyone know of any other library that does?
You can do the partial segmented encode, as you suggest, but if the source codec is DCT-based such as AAC/MP3, there will be glitches at the start and end of the re-encoded segment once you stitch it all back together.
You would use the segment muxer and concat demuxer to do this.
ffmpeg -i input -vn -c copy -f segment -segment_time 5 aud_%d.m4a
Re-encode the offending segment, say aud_2.m4a to noaud_2.m4a.
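For example, one way to silence that segment completely (assuming the audio is AAC, as the .m4a extension suggests) is to re-encode it with the volume filter:

ffmpeg -i aud_2.m4a -af "volume=0" -c:a aac noaud_2.m4a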
Now create a text file, list.txt:
file aud_0.m4a
file aud_1.m4a
file noaud_2.m4a
file aud_3.m4a
and run
ffmpeg -an -i input -f concat -safe 0 -i list.txt -c copy new.mp4
Download the small sample file.
Here is my plan visualized:
# original video
| video |
| audio |
# cut video into 3 parts. Mute the middle part.
| video | | video | | video |
| audio | | - | | audio |
# concatenate the 3 parts
| video | video | video |
| audio | - | audio |
# mux uncut original video with audio from concatenated video
| video |
| audio | - | audio |
Let's do this.
Store filename:
i="fridayafternext_http.mp4"
To mute the line "What the hell are you doing in my house!?", the silence should start at second 34 with a duration of 2 seconds.
Store all that for your convenience:
mute_starttime=34
mute_duration=2
bash supports simple math, so we can automatically calculate the time at which the audio starts again, which is of course 36:
rest_starttime=$(( mute_starttime + mute_duration ))
Create all 3 parts. Notice that for the 2nd part we use -an to mute the audio:
ffmpeg -i "$i" -c copy -t $mute_starttime start.mp4 && \
ffmpeg -i "$i" -ss $mute_starttime -c copy -an -t ${mute_duration} muted.mp4 && \
ffmpeg -i "$i" -ss $rest_starttime -c copy rest.mp4
Create concat_videos.txt with the following text:
file 'start.mp4'
file 'muted.mp4'
file 'rest.mp4'
Concatenate the videos with the concat demuxer:
ffmpeg -f concat -safe 0 -i concat_videos.txt -c copy muted_audio.mp4
Mux the original video with the new audio:
ffmpeg -i "$i" -i "muted_audio.mp4" -map 0:v -map 1:a -c copy "${i}_partly_muted.mp4"
Note:
I've learned from Gyan's answer that you can do the last 2 steps in 1 take, which is really cool.
ffmpeg -an -i "$i" -f concat -safe 0 -i concat_videos.txt -c copy "${i}_partly_muted.mp4"
I want to produce a Live audio/video stream from local file.
I tried the following:
ffmpeg -re -thread_queue_size 4 -i source_video_file.ts -strict -2 \
-vcodec copy -an -f rtp rtp://localhost:10000 -acodec copy -vn -sdp_file saved_sdp_file -f rtp rtp://localhost:20000
and then:
ffplay saved_sdp_file
It seems to work fine, but it behaves like Video on Demand: I can replay the file with ffplay whenever I want.
But I need ffplay to show video/audio only while the ffmpeg streaming instance (the first command above) is running.
How do I achieve this?
Thanks!
This code works for live video streaming:
proc liveStreaming {} {
    # run ffplay in the background to display the live stream (logFile is defined elsewhere in the script)
    exec ffplay -f dshow -i video="Integrated Webcam" >& $logFile &
}
liveStreaming
Making use of ffmpeg with the following code also works:
proc liveStreaming {} {
    # capture the webcam and render it live through ffmpeg's SDL output
    exec ffmpeg -f dshow -i video="Integrated Webcam" -f sdl2 -
}
liveStreaming
You can also make use of "sdl" if sdl2 doesn't work.
I have a program that receives raw image data of known width, height and format from a USB camera, then writes each frame to stdout. The format is BGR24.
I need to transfer it as an H.264 stream using gstreamer, but I cannot find out how to encode the raw video stream.
For example, using ffmpeg this is done like this:
my_video_reader | ffmpeg -f rawvideo -pix_fmt bgr24 -s:v 752x480 -i - -f h264 - | <send data here>
Try the pipeline below; it converts raw YUV frames into an H.264 stream.
gst-launch-1.0 videotestsrc ! "video/x-raw,format=I420,width=352,height=288,framerate=30/1" ! videoparse width=352 height=288 framerate=30/1 ! x264enc bitrate=1024 ref=4 key-int-max=20 ! video/x-h264,stream-format=byte-stream,profile=main ! filesink location=v1
If you want to convert a file instead, simply replace videotestsrc with filesrc location=filename.
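For the stdin/BGR24 case from the question, a sketch along the same lines; format=bgr is an assumption about how videoparse names BGR24 (on newer GStreamer the equivalent element is rawvideoparse), and -q keeps gst-launch's status messages off stdout:

my_video_reader | gst-launch-1.0 -q fdsrc ! videoparse format=bgr width=752 height=480 framerate=30/1 ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream ! fdsink fd=1 | <send data here>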
How do I get the aspect ratio from a video file (16:9 or 4:3, for example)?
Install the tool mediainfo. Run it with mediainfo -f --Output=XML <file> to examine it.
PS: In my case (openSUSE, mediainfo 0.7.34) the option --Output was ignored.
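To print just the ratio, mediainfo's template output should work; a sketch, assuming a build where --Inform is honored:

mediainfo --Inform="Video;%DisplayAspectRatio/String%" <file>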
You can use ffmpeg to do that:
my ($aspect) = `ffmpeg -i filename.mov 2>&1` =~ /DAR\s*(\d+:\d+)/;
Or ffprobe:
my ($aspect) = `ffprobe -i filename.mov -show_streams 2>&1`
=~ /display_aspect_ratio=(.+)/;
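If you don't need the value inside Perl, ffprobe can also print the ratio directly; a sketch:

ffprobe -v error -select_streams v:0 -show_entries stream=display_aspect_ratio -of default=noprint_wrappers=1:nokey=1 filename.mov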