Bad video quality when recording the desktop - Linux

Trying to record my desktop and also audio with RHEL6.
I'm using the command below, but the quality of the video output is not good.
It is very blurry and I can barely make out text on screen.
The audio is good, so no issues there.
Does anyone know how to make the video quality better?
ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -r 25 -i :0.0 -sameq -f mpeg -ar 48000 -s wvga -y sample.avi

I believe the -sameq option means 'same quantizer', not 'same quality', and is deprecated, see here.
Try -q 1 instead.
-q sets the quantizer scale, from 1 to 31 (1 being the highest quality).
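As a sketch, the question's command with -sameq swapped for -q 1 and everything else unchanged:
ffmpeg -f alsa -ac 2 -i hw:0,0 \
    -f x11grab -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -r 25 -i :0.0 \
    -q 1 -f mpeg -ar 48000 -s wvga -y sample.avi
# note: the trailing -s wvga rescales the output to 852x480, which itself blurs text;
# dropping it (an assumption, not part of the answer above) keeps the native capture size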

Related

Mute Volume with Minimal Re-encoding

Is it possible to mute a section of a video file (say 5 seconds) without having to re-encode the whole audio stream with ffmpeg? I know it's technically (though probably not easily) possible by reusing the majority of the existing audio stream and only re-encoding the changed section and possibly a short section before and after, but I'm not sure if ffmpeg supports this. If it doesn't, anyone know of any other library that does?
You can do the partial segmented encode, as you suggest, but if the source codec is DCT-based such as AAC/MP3, there will be glitches at the start and end of the re-encoded segment once you stitch it all back together.
You would use the segment muxer and concat demuxer to do this.
ffmpeg -i input -vn -c copy -f segment -segment_time 5 aud_%d.m4a
Re-encode the offending segment, say aud_2.m4a to noaud_2.m4a.
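For instance, a minimal sketch of that re-encode, assuming you simply want the segment silenced (the volume filter and the aac encoder choice are assumptions; any silencing filter and encoder would do):
ffmpeg -i aud_2.m4a -af volume=0 -c:a aac noaud_2.m4a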
Now create a text file, list.txt:
file aud_0.m4a
file aud_1.m4a
file noaud_2.m4a
file aud_3.m4a
and run
ffmpeg -an -i input -f concat -safe 0 -i list.txt -c copy new.mp4
Download the small sample file.
Here is my plan visualized:
# original video
| video |
| audio |

# cut video into 3 parts. Mute the middle part.
| video |   | video |   | video |
| audio |   |   -   |   | audio |

# concatenate the 3 parts
| video | video | video |
| audio |   -   | audio |

# mux uncut original video with audio from concatenated video
|         video          |
| audio  |   -   | audio |
Let's do this.
Store filename:
i="fridayafternext_http.mp4"
To mute the line "What the hell are you doing in my house!?", the silence should start at second 34 with a duration of 2 seconds.
Store all that for your convenience:
mute_starttime=34
mute_duration=2
bash supports simple math, so we can automatically calculate the start time where the audio resumes, which is 36 of course:
rest_starttime=$(( mute_starttime + mute_duration ))
Create all 3 parts. Notice that for the 2nd part we use -an to mute the audio:
ffmpeg -i "$i" -c copy -t $mute_starttime start.mp4 && \
ffmpeg -i "$i" -ss $mute_starttime -c copy -an -t ${mute_duration} muted.mp4 && \
ffmpeg -i "$i" -ss $rest_starttime -c copy rest.mp4
Create concat_videos.txt with the following text:
file 'start.mp4'
file 'muted.mp4'
file 'rest.mp4'
Concatenate the videos with the concat demuxer:
ffmpeg -f concat -safe 0 -i concat_videos.txt -c copy muted_audio.mp4
Mux the original video with the new audio:
ffmpeg -i "$i" -i "muted_audio.mp4" -map 0:v -map 1:a -c copy "${i}_partly_muted.mp4"
Note:
I've learned from Gyan's answer that you can do the last 2 steps in one take, which is really cool.
ffmpeg -an -i "$i" -f concat -safe 0 -i concat_videos.txt -c copy "${i}_partly_muted.mp4"

How to produce Live video and audio streaming (not VoD) with ffmpeg?

I want to produce a Live audio/video stream from local file.
I tried the following:
ffmpeg -re -thread_queue_size 4 -i source_video_file.ts -strict -2 \
    -vcodec copy -an -f rtp rtp://localhost:10000 \
    -acodec copy -vn -sdp_file saved_sdp_file -f rtp rtp://localhost:20000
and then:
ffplay saved_sdp_file
It seems to work fine, but it looks like Video on Demand, because I can replay this file with ffplay whenever I want.
But I need ffplay to show video/audio only while the ffmpeg streaming instance (the first command above) is running.
How do I achieve this?
Thanks!
This code works for live video streaming:
proc liveStreaming {} {
    # ffplay shows the live webcam stream; run it in the background, logging to $logFile
    exec ffplay -f dshow -i video="Integrated Webcam" >& $logFile &
}
liveStreaming
Making use of ffmpeg with the following code also works:
proc liveStreaming {} {
    # ffmpeg captures the live webcam stream and displays it via the sdl2 output device
    exec ffmpeg -f dshow -i video="Integrated Webcam" -f sdl2 -
}
liveStreaming
You can also make use of "sdl" if sdl2 doesn't work.
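Back on the RTP setup from the question: ffplay only renders frames while the sending ffmpeg instance is running, since the SDP file merely describes the session. A sketch of playing it during the live session (the protocol_whitelist values are the usual ones for local SDP files, but treat the exact list as an assumption):
# start the sender (the question's first command), then in another shell:
ffplay -protocol_whitelist file,rtp,udp saved_sdp_file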

pipe sox play command to stdout

So I'm currently trying to stream my microphone input from my Raspberry Pi (Raspbian)
to some sort of network stream in order to receive it later on my phone.
To do this I use arecord -D plughw:1,0 -f dat -r 44100 to pipe the sound stream from my USB microphone to stdout, which works fine as far as I can see, but I need it to be a bit louder so I can understand people standing far away from it.
So I piped it to the sox play command like this:
arecord -D plughw:1,0 -f dat -r 44100 | play -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - test.wav
(test.wav is just some random WAV file; it doesn't work without it, and there is whitespace between the - behind 44100 and test.wav because I think - is a separate parameter:
SPECIAL FILENAMES (infile, outfile):
-                     Pipe/redirect input/output (stdin/stdout); may need -t
-d, --default-device  Use the default audio device (where available))
I figured out that with the -v parameter I can increase the volume.
This plays the recorded stream through the speakers I connected to the Raspberry Pi 3.
Final goal: pipe the volume-increased sound stream to stdout (or some FIFO pipe file) so I can read it from stdin inside another script and send it to my phone.
However, I'm very confused by the manpage of the play command http://sox.sourceforge.net/sox.html
I need to select the output device to be a pipe or stdout or something.
If you know a better way to just increase the volume of the stream (arecord reports it as Recording WAVE 'stdin' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo), let me know.
As far as I'm aware you can't pipe the output from play; you'll have to use the regular sox command for that.
For example:
# example sound file
sox -n -r 48k -b 16 test16.wav synth 2 sine 200 gain -9 fade t 0 0 0.1
# redundant piping
sox test16.wav -t wav - | sox -t wav - gain 8 -t wav - | play -
In the case of the command in your question, it should be sufficient to change play to sox and add -t wav to let sox know in what format you want to pipe the sound.
arecord -D plughw:1,0 -f dat -r 44100 | \
sox -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - -t wav -
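To then reach the final goal of getting the boosted stream off the Pi, one hypothetical route (the receiving host, port, and the use of netcat are assumptions, not something the question or sox requires) is to pipe the same WAV stream into a network tool:
arecord -D plughw:1,0 -f dat -r 44100 | \
sox -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - -t wav - | \
nc 192.168.0.10 3000
# hypothetical receiver on the other machine:
# nc -l -p 3000 | play -t wav -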

ffmpeg - Having trouble syncing up audio and video together

I have a webcam and a separate mic. I want to record what is happening.
It almost works; however, the audio seems to play too quickly and parts go missing as it plays over the video.
This is the command I am currently using to get it partially working
ffmpeg -thread_queue_size 1024 -f alsa -ac 1 -i plughw:1,0 -f video4linux2 -thread_queue_size 1024 -re -s 1280x720 -i /dev/video0 -r 25 -f avi -q:a 2 -acodec libmp3lame -ab 96k out.mp4
I have tried other arguments, but unsure if it has to do with the formats I am using or incorrect parameter settings.
Also, the next part would be how to stream it. Every time I try going through RTP it complains about multiple streams. I tried doing html as well, but didn't like the format: html://localhost:50000/live_feed or rts://localhost:5000
edit:
I am running this on an RPi 3.
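Two details of that command are worth a sketch (untested, and only an assumption about the cause of the drift): -re throttles input reading to native speed, which is meant for file inputs rather than live capture devices, and -f avi forces an AVI container even though the output is named out.mp4. Also, -q:a 2 requests VBR audio while -ab 96k fixes the bitrate, so only one of the two is needed. A variant with those items reconciled:
ffmpeg -thread_queue_size 1024 -f alsa -ac 1 -i plughw:1,0 \
    -f video4linux2 -thread_queue_size 1024 -s 1280x720 -i /dev/video0 \
    -r 25 -f avi -acodec libmp3lame -ab 96k out.avi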

How to generate video screencaps of video files via linux commandline

Is there a command-line program for Linux (Ubuntu) which can generate a large image containing, say, 6 caps from a given video (e.g. WMV) laid out storyboard-style (I know that on Windows, Media Player Classic can do this)? I need this for part of a script I am writing.
I pulled the answer from this site: http://blog.prashanthellina.com/2008/03/29/creating-video-thumbnails-using-ffmpeg/
ffmpeg -itsoffset -4 -i test.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 test.jpg
Where -4 is the number of seconds into the file to grab the screenshot, 320x240 is the screenshot size, and test.jpg is the output file.
Hope this helps.
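The question asked for six caps laid out storyboard-style, while the command above grabs a single frame. A sketch that loops it, assuming ffprobe and ImageMagick's montage are available (the filenames and the 3x2 layout are illustrative):
i=test.avi
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$i")
for n in 1 2 3 4 5 6; do
    # grab one frame at n/7 of the duration, sized like the example above
    ffmpeg -y -ss "$(echo "$dur * $n / 7" | bc)" -i "$i" -vframes 1 -an -s 320x240 "cap_$n.jpg"
done
montage cap_*.jpg -tile 3x2 -geometry +2+2 storyboard.jpg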
Use SlickSlice
./slickslice.sh -x video.avi -s 5x3 -e
I've used MPlayer to save frames as images and ImageMagick to combine them:
mplayer -nosound -sstep 15 -vo png video.mkv
montage *.png -tile 3x3 -geometry 300x+0+0 screencaps.png
vcsi can do this. It is a command-line tool written in Python. Example:
vcsi video.mkv -o output.jpg
