ffmpeg with Popen (python) on Windows - python-3.x

I am trying to use ffmpeg with Popen. The ffmpeg command I am trying works in cmd, but gives me an error through Popen.
I am using the standalone ffmpeg .exe:
ffmpeg -f gdigrab -offset_x 10 -offset_y 20 -show_region 1 -i desktop -video_size 1536x864 -b:v 2M -maxrate 1M -bufsize 1M -tune fastdecode -crf 15 -preset ultrafast -pix_fmt yuv420p -r 25 <path>/video.mov -qp 1 -y -an
This gives me Invalid argument, but if I remove the trailing parameters so that the output path is the last thing in the string, I get a different error:
Output file #0 does not contain any stream
I tried to use -f dshow -i video="UScreenCapture" instead of gdigrab, but it gives me the same errors, with and without the trailing parameters.
Both commands work on command line.
On the command line, ffmpeg -list_devices true -f dshow -i dummy returns this:
[dshow @ 000001b24fa6a300] DirectShow video devices (some may be both video and audio devices)
[dshow @ 000001b24fa6a300] "Integrated Webcam"
[dshow @ 000001b24fa6a300] Alternative name "#device_pnp_\\?\usb#vid_1bcf&pid_2b8a&mi_00#6&2c03619a&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 000001b24fa6a300] "UScreenCapture"
[dshow @ 000001b24fa6a300] Alternative name "#device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\UScreenCapture"
[dshow @ 000001b24fa6a300] DirectShow audio devices
[dshow @ 000001b24fa6a300] "Microphone (Realtek Audio)"
[dshow @ 000001b24fa6a300] Alternative name "#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{35EBFC89-7B09-4557-8032-85AA0B688FE9}"
But through Popen I can't even run that check:
-list_devices true -f dshow -i dummy: Invalid argument
For the python part of the code I am using this:
p = subprocess.Popen([getPathForFile("windows/ffmpeg").replace('\\','/'), " -f gdigrab -offset_x 10 -offset_y 20 -show_region 1 -i desktop -video_size 1536x864 -b:v 2M -maxrate 1M -bufsize 1M -tune fastdecode -crf 15 -preset ultrafast -pix_fmt yuv420p -r 25 -qp 1 -y -an "+ path.replace('\\\\','/').replace('\\','/')+"video.mov"], shell=True)
getPathForFile is a custom function that returns the path. The path is correct; the errors I am getting come from ffmpeg itself, so the executable is clearly being found.
I am on a Windows 10. FFmpeg 4.0. Python 3.5.
Any ideas why I am getting these errors with Popen but not on the command line, and how to fix them? (Mainly the second error.)

Put each argument in its own string and turn off the shell.
Like this:
import subprocess
import os

# Note: -video_size is an input option for gdigrab, so it must come
# before -i desktop, and all output options must come before the
# output file name.
cmd = ["-f", "gdigrab", "-offset_x", "10", "-offset_y", "20",
       "-show_region", "1", "-video_size", "1536x864", "-i", "desktop",
       "-b:v", "2M", "-maxrate", "1M", "-bufsize", "1M", "-tune", "fastdecode",
       "-preset", "ultrafast", "-pix_fmt", "yuv420p",
       "-r", "25", "-qp", "1", "-y", "-an",
       os.path.join(path, "video.mov")]

p = subprocess.Popen([getPathForFile("windows/ffmpeg")] + cmd)
p.communicate()
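When an invocation like the one above still fails under Popen, capturing stderr shows ffmpeg's actual complaint instead of a bare exit code. A minimal sketch (run_ffmpeg is a hypothetical helper, not part of the question's code):

```python
import shutil
import subprocess

def run_ffmpeg(ffmpeg_path, args):
    """Run one ffmpeg invocation with each argument as its own list
    element and return (exit_code, stderr_text) for inspection."""
    proc = subprocess.Popen(
        [ffmpeg_path] + list(args),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,  # ffmpeg writes its log to stderr
    )
    _, err = proc.communicate()
    return proc.returncode, err.decode("utf-8", errors="replace")

# Only attempt a real call if ffmpeg is actually on PATH.
if shutil.which("ffmpeg"):
    code, log = run_ffmpeg("ffmpeg", ["-version"])
    print("exit code:", code)
```

With shell off and one argument per list element, the quoting problems from the question cannot occur, and the returned log makes "Invalid argument" errors easy to pin down.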

Related

How to convert video for web with ffmpeg

I am trying to rescale, subclip, and convert video for the web (the HTML5 video tag). Target browsers: Chrome, Safari, Firefox, Ya Browser.
I am using a command like this (changing some params):
ffmpeg -i C.mp4 -ss 00:00:00 -t 10 -vf scale=312x104 -vcodec libx264 -strict -2 -movflags faststart -pix_fmt yuv420p -profile:v high -level 3 -r 25 -an -sn -dn d.mp4 -y
But every time the video fails to play in some browser.
I would like to find a way to do this task fast (that's why I am using ffmpeg) and stable, so that any input yields a valid video for all target browsers.
I also tried to play with the setsar and setdar params, but still no success.
Thanks everyone. I guess I found something suitable for my case:
ffmpeg -i C.mp4 -ss 00:00:00 -t 10 -vf scale=dstw=312:dsth=104:flags=accurate_rnd,setdar=3/1 -vcodec libx264 -level 21 -refs 2 -pix_fmt yuv420p -profile:v high -level 3.1 -color_primaries 1 -color_trc 1 -colorspace 1 -movflags +faststart -r 30 -an -sn -dn d.mp4
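For reference, the same command expressed as an argument list, so it can be fed straight to subprocess without shell quoting issues. This is a sketch using the file names from the question; it drops the duplicate -level 21, since the later -level 3.1 overrides it anyway:

```python
import subprocess

# Browser-safe H.264 settings from the answer above, as a list so it
# can be passed to subprocess.run without shell quoting problems.
cmd = ["ffmpeg", "-i", "C.mp4", "-ss", "00:00:00", "-t", "10",
       "-vf", "scale=dstw=312:dsth=104:flags=accurate_rnd,setdar=3/1",
       "-vcodec", "libx264", "-refs", "2", "-pix_fmt", "yuv420p",
       "-profile:v", "high", "-level", "3.1",
       "-color_primaries", "1", "-color_trc", "1", "-colorspace", "1",
       "-movflags", "+faststart", "-r", "30", "-an", "-sn", "-dn",
       "d.mp4"]

# subprocess.run(cmd, check=True)  # uncomment when C.mp4 exists
```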

Creating Video and Streaming Command Line

Currently, using ffmpeg, I am using two commands on my Terminal to:
1) create a video from a bunch of images:
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4
2) stream the video to a udp address:
ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
I am trying to combine both these instructions into one command line instruction, using &&, as suggested in the answer to a previous question of mine:
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4 \
&& ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
but am encountering an error which prevents streaming:
[flv @ 0x7fa2ba800000] video stream discovered after head already parsed.
Thoughts on a different command-line syntax to join the two instructions, a different ffmpeg instruction (filters perhaps?), and why I am getting the error?

FFMPEG Combining images to video and streaming in one command line

Currently, using ffmpeg, I am using two commands on my Terminal to:
1) create a video from a bunch of images:
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4
2) stream the video to a udp address:
ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
How can I combine these two commands, into one single ffmpeg command?
A concern I have is that it takes a couple of minutes to generate the video from the images. Therefore, these commands have to run serially: the second command must wait those few minutes for the first command to complete before it starts.
Just add && between the two commands. This executes the second command only if the first one succeeds.
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4 && ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
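The same `&&` behaviour can be reproduced from Python when the commands are launched with subprocess. run_serially below is a hypothetical helper, not something ffmpeg ships:

```python
import subprocess

def run_serially(first_cmd, second_cmd):
    """Mimic the shell's `&&`: run second_cmd only if first_cmd
    exits with status 0. Returns the exit code of whichever
    command ran last."""
    result = subprocess.run(first_cmd)
    if result.returncode != 0:
        return result.returncode
    return subprocess.run(second_cmd).returncode
```

Because subprocess.run blocks until the process finishes, the encode-then-stream ordering from the question is preserved automatically.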

How to record audio with ffmpeg on linux?

I'd like to record audio from my microphone. My OS is ubuntu. I've tried the following and got errors
$ ffmpeg -f alsa -ac 2 -i hw:1,0 -itsoffset 00:00:00.5 -f video4linux2 -s 320x240 -r 25 /dev/video0 out.mpg
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav
developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library
libasound_module_conf_pulse.so
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM hw:1,0
[alsa @ 0xbda7a0] cannot open audio device hw:1,0 (No such file or directory)
hw:1,0: Input/output error
Then I tried
$ ffmpeg -f oss -i /dev/dsp audio.mp3
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav
developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
[oss @ 0x1ba57a0] /dev/dsp: No such file or directory
/dev/dsp: Input/output error
I haven't been able to get ffmpeg to find my microphone. How can I tell ffmpeg to record from my microphone?
It seems the 'Deprecated' message can be ignored, according to this topic
I realise this is a bit old. Just in case anyone else is looking:
ffmpeg -f alsa -ac 2 -i default -itsoffset 00:00:00.5 -f video4linux2 -s 320x240 -r 25 -i /dev/video0 out.mpg
This way it will use the default device to record from. You were also missing a -i before the video capture device, /dev/video0.
If you want to get more specific, you should take a look in /proc/asound.
Check the cards, devices, and pcm files and the card subdirectories. You should be able to glean enough information there to make an educated guess, e.g. hw:1,0 or hw:2,0.
The documentation may provide further clues:
http://www.alsa-project.org/main/index.php/DeviceNames
The same goes for the webcam: it may not be /dev/video0; perhaps you have an external webcam plugged in and it's at /dev/video1. Have a look in the /dev directory and see what's available.
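Those /proc/asound checks can also be scripted. A minimal sketch, assuming the standard /proc/asound/cards layout (list_alsa_cards is a hypothetical helper, and the parsing is a loose heuristic):

```python
from pathlib import Path

def list_alsa_cards(proc_file="/proc/asound/cards"):
    """Return (index, description) pairs parsed from /proc/asound/cards.
    Card lines look like: ' 0 [PCH  ]: HDA-Intel - HDA Intel PCH';
    continuation lines are skipped because they don't start with a digit."""
    cards = []
    path = Path(proc_file)
    if not path.exists():
        return cards
    for line in path.read_text().splitlines():
        parts = line.strip().split(maxsplit=1)
        if parts and parts[0].isdigit():
            cards.append((int(parts[0]), parts[1]))
    return cards

# Each index N corresponds to an ALSA device name such as hw:N,0.
```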
Solved!
ffmpeg -f pulse -ac 2 -i default -f x11grab -r 30 -s 1920x1080 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -threads 0 -y /media/t/TBVolume/desktop/output.mkv
First, list your AV devices using:
ffmpeg -list_devices true -f dshow -i dummy
Assuming your audio device is "Microphone Array", you can use:
ffmpeg -f dshow -i audio="Microphone Array" -c:a libmp3lame -b:a 128k OUTPUT.mp3
Here, 128k is the audio bitrate (CBR), not the sampling rate. You can check all the constant bitrate options here.
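The output of -list_devices can also be parsed programmatically, e.g. after capturing ffmpeg's stderr from Python. parse_dshow_devices below is a hypothetical helper, and the regex is a heuristic rather than an official log format:

```python
import re

def parse_dshow_devices(log_text):
    """Extract quoted DirectShow device names from ffmpeg's
    -list_devices stderr output, skipping 'Alternative name' lines."""
    devices = []
    for line in log_text.splitlines():
        if "Alternative name" in line:
            continue
        match = re.search(r'"([^"]+)"', line)
        if match:
            devices.append(match.group(1))
    return devices
```

Fed the log from the first question above, this would return names like "Integrated Webcam" and "UScreenCapture", ready to be spliced into -i audio="..." or -i video="...".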

ffmpeg audio and video sync error

./ffmpeg \
-f alsa -async 1 -ac 2 -i hw:2,0 \
-f video4linux2 -vsync 1 -s:v vga -i /dev/video0 \
-acodec aac -b:a 40k \
-r 25 -s:v vga -vcodec libx264 -strict -2 -crf 25 -preset fast -b:v 320K -pass 1 \
-f flv rtmp://192.168.2.105/live/testing
With the above command I am able to stream at 25 fps, but there is no audio/video synchronization: the audio runs ahead of the video. I am using ffmpeg 0.11.1 on a PandaBoard for RTMP streaming. Please help me solve this problem.
Thanks
Ameeth
Don't use -pass 1 if you're not actually doing two-pass encoding.
From the docs (emphasis added):
‘-pass[:stream_specifier] n (output,per-stream)’
Select the pass number (1 or 2). It is used to do two-pass video encoding. The statistics of the video are recorded in the first pass into a log file (see also the option -passlogfile), and in the second pass that log file is used to generate the video at the exact requested bitrate. On pass 1, you may just deactivate audio and set output to null, examples for Windows and Unix:
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
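Tying the quoted docs together: a full two-pass run is just the same command issued twice, with -pass 1 and -pass 2. A sketch that only builds the argument lists (two_pass_commands is a hypothetical helper, and it uses the -f null discard muxer in place of the docs' rawvideo example):

```python
import os

def two_pass_commands(ffmpeg, src, dst, vbitrate="320k"):
    """Build the two command lines for a two-pass x264 encode:
    pass 1 writes stats and discards the video; pass 2 reads the
    stats back to hit the requested bitrate."""
    null_sink = "NUL" if os.name == "nt" else "/dev/null"
    common = [ffmpeg, "-i", src, "-c:v", "libx264", "-b:v", vbitrate]
    pass1 = common + ["-pass", "1", "-an", "-f", "null", null_sink]
    pass2 = common + ["-pass", "2", "-y", dst]
    return pass1, pass2
```

Run the first list to completion, then the second; for live streaming like the question's RTMP case, simply dropping -pass 1 (single-pass CRF or bitrate encoding) is the usual fix.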
I was streaming to Twitch and, funnily enough, removing the -r option made the video sync with the audio. You might still want to limit the framerate in some way; unfortunately, I have no solution for that, but dropping -r does let the video sync with the audio very well.
