I am trying to add a watermark to a video using ffmpeg, executing this command in a terminal: ffmpeg -i pic.mp4 -i logo.png -filter_complex "overlay=20:20" pic1.mp4
and I get this error:
The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
If the video has no sound, the encoding and the overlay watermark succeed; the error only occurs when the video has sound.
Copy the audio stream instead of re-encoding it, so the experimental AAC encoder is never invoked:
ffmpeg -i pic.mp4 -i logo.png -filter_complex \
"overlay=20:20" -codec:a copy output.flv
I ran into a problem with my device-added sound: it didn't change after I replaced the .oga file.
Here's what I have done, for the whole process:
step 1:
$ sudo mv /usr/share/sounds/freedesktop/stereo/device-added.oga /usr/share/sounds/freedesktop/stereo/device-added_old.oga
$ sudo mv /usr/share/sounds/freedesktop/stereo/device-removed.oga /usr/share/sounds/freedesktop/stereo/device-removed_old.oga
step 2:
I downloaded a file, 4k-ehe-te-nandayo-paimon-green-screen-update.mp3, and renamed its extension to .oga
step 3:
sudo mv "the name file" "/usr/share/sounds/freedesktop/stereo/device-added.oga"
After that, the file works completely fine and the copy was done successfully, but when I plug in the USB it still plays the original sound.
It should work; I tested it on my Debian. But check whether
$ paplay device-added.oga
actually works or gives the error
"Failed to open audio file."
If so, you have to convert your MP3 to Ogg using ffmpeg:
$ ffmpeg -i in.mp3 out.ogg
then
$ mv out.ogg final.oga
Or, if you have an MP4 file, going straight from MP4 to .oga won't work; you first have to extract the audio:
$ ffmpeg -i Project.mp4 -b:a 192K -vn test.mp3
This is what worked for me after some trial and error; the full chain is sketched below.
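Putting the steps together for an MP4 source (filenames are taken from the posts above; adjust them to your own):
# extract the audio track from the video as MP3
ffmpeg -i Project.mp4 -b:a 192K -vn test.mp3
# re-encode the MP3 as Ogg/Vorbis, then rename to .oga, the extension the freedesktop sound theme uses
ffmpeg -i test.mp3 out.ogg
mv out.ogg device-added.oga
sudo mv device-added.oga /usr/share/sounds/freedesktop/stereo/device-added.oga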
I have a command that generates a video with a background and text on it with FFmpeg, and I would like to render it using Azure Batch Service. Locally my command works:
./ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
while the one generated programmatically with C# and added as a task in Batch Service returns failure:
cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg-3.4-win64-static\bin\ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
The ffmpeg configuration works, and so does the Pool, as I've already tested it with simpler ffmpeg commands that had input and output files. This command doesn't have an input file; maybe that is part of the problem?
Thank you
If your ffmpeg command requires an input file as you suggest, then that file needs to be a part of the CloudTask specification (or other means of ingressing it onto the Batch compute node such as Job Preparation or Start Task if it makes sense) as a ResourceFile.
These resource files should be placed into Azure Blob storage. Generate a SAS URL (with appropriate read permission and sufficient end time) and use that as the ResourceFile's BlobSource.
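For the ingress itself, here is a hedged sketch with the Azure CLI (the storage account, container, and blob names are placeholders, and you need to be authenticated against the account); the printed URL is what goes into the ResourceFile's BlobSource:
# upload the input file, then mint a read-only SAS URL for it
az storage blob upload --account-name mybatchstore --container-name inputs --name pic.mp4 --file ./pic.mp4
sas=$(az storage blob generate-sas --account-name mybatchstore --container-name inputs --name pic.mp4 --permissions r --expiry 2030-01-01T00:00Z --output tsv)
echo "https://mybatchstore.blob.core.windows.net/inputs/pic.mp4?$sas"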
How can one, with node.js, perform a custom command on all files in a directory, e.g. compress all media files into a given subdirectory? (ffmpeg is set up and works correctly.)
This does not work for me:
foreach -g "/.mp4" --no-c -x "ffmpeg -y -i #{path}.mp4 -r 15 -s 480x320 -ac 1 -c:a copy ./compacted/#{name}___compacted.mp4"
I think there might be something wrong with the spaces and the periods in the filenames, but could not figure out how to solve this.
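If a plain shell loop is acceptable in place of the foreach package, here is a minimal sketch that null-terminates the paths so spaces and periods in filenames survive (the ffmpeg flags are copied from the command above; the compacted subdirectory is created if missing):
#!/bin/bash
mkdir -p ./compacted
# -print0 / read -d '' pass filenames with NUL separators, so spaces never split an argument
find . -maxdepth 1 -name '*.mp4' -print0 | while IFS= read -r -d '' f; do
    name=$(basename "$f" .mp4)
    # -nostdin stops ffmpeg from consuming the file list arriving on stdin
    ffmpeg -nostdin -y -i "$f" -r 15 -s 480x320 -ac 1 -c:a copy "./compacted/${name}___compacted.mp4"
done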
There is a webm file that contains no audio. I want to merge an audio file with this video. I've tried the following command:
ffmpeg -i /home/test.mp3 -i /home/output.webm -vcodec copy -acodec copy /home/newtest.webm
And received the error:
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument.
ffmpeg -y -i vFilePath -i aFilePath -map 0:0 -map 1:0 -c copy oFilename
Where:
vFilePath: path of the video file
aFilePath: path of the audio file
oFilename: path of the output file
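One caveat for the question as asked: the WebM container only accepts Vorbis or Opus audio, so stream-copying MP3 audio into a .webm output will fail at the muxer, which is consistent with the "Could not write header" error. A variant that copies the video and re-encodes the audio to Opus, using the paths from the question:
ffmpeg -y -i /home/output.webm -i /home/test.mp3 -map 0:v -map 1:a -c:v copy -c:a libopus /home/newtest.webm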
If you're fine with using a UI, you can get VLC. Then under Media -> Save/Convert, add both files. In the following menus it will ask which audio and video streams to put into the new file, as well as the container format, which would probably be WebM again.
VLC also has a command-line interface, if you want automation.
I am using janus-gateway for recording in the web browser. Once the recording is completed, two files are generated: one audio and one video, both in the .mjr format. How can I combine these two files into a single file?
I was dealing with the same need.
If you did the default janus-gateway install, you are only missing these steps:
Run this in the folder where you downloaded the git sources:
./configure --enable-post-processing
then
make
(sudo) make install
Then run this for each file you want to convert to an audio/video format:
./janus-pp-rec /opt/janus/share/janus/recordings/video.mjr /opt/janus/share/janus/recordings/video.webm
./janus-pp-rec /opt/janus/share/janus/recordings/audio.mjr /opt/janus/share/janus/recordings/audio.opus
If you don't have ffmpeg installed, run this (I'm on Ubuntu; on other distros ffmpeg might already be in the standard repositories):
sudo add-apt-repository ppa:kirillshkrogalev/ffmpeg-next
sudo apt-get update
sudo apt-get install ffmpeg
and then finally to merge audio with video:
(sudo) ffmpeg -i audio.opus -i video.webm -c:v copy -c:a opus -strict experimental mergedoutput.webm
From there you can build a shell script to convert all mjr files automatically from a cron job, as sketched below.
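A minimal sketch of such a script, assuming the recordings live in the default folder, follow the -video.mjr / -audio.mjr naming used above, and janus-pp-rec is on your PATH after make install:
#!/bin/bash
# convert and merge every recording session found in the janus recordings folder
cd /opt/janus/share/janus/recordings || exit 1
for v in *-video.mjr; do
    base=${v%-video.mjr}
    janus-pp-rec "$base-video.mjr" "$base.webm"
    janus-pp-rec "$base-audio.mjr" "$base.opus"
    ffmpeg -i "$base.opus" -i "$base.webm" -c:v copy -c:a opus -strict experimental "$base-merged.webm"
done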
I have a very primitive example of doing this in C with GStreamer. Note: the code is very messy, but it should show you what you need to do.
Here is a list of what needs to be done to merge these files:
1. Build the list of RTP buffers so that you can iterate over them in each file. There are examples of this in the janus-gateway post-processing code.
2. Start iterating over both files at the same time. The timestamps should sync up OK, though I have run into issues where a packet was lost or corrupted on write, which will screw up the merge.
3. Decode the media and re-encode it, so that the framerate and size of the video can be set statically. I am sure there is a way to do this without transcoding the media.
4. Multiplex and write to a file.
I do step 1 exactly like the janus post-processor. For step 2, I push each RTP packet from the files to a GStreamer appsrc element. Steps 3 and 4 are done within the GStreamer pipelines.
sudo apt-get install libavutil-dev libavcodec-dev libavformat-dev
After installing the dependencies, reconfigure with post-processing enabled:
./configure --prefix=/opt/janus --enable-post-processing
Then use this bash file:
#!/bin/bash
# converter.sh
# Declare the binary path of the converter
januspprec_binary=/opt/janus/bin/janus-pp-rec
# Contains the prefix of the recording session of janus, e.g. ./room-1234-user-0001
session_prefix="$1"
output_file="$2"
# Create temporary files that will store the individual tracks (audio and video)
tmp_video=/tmp/mjr-$RANDOM.webm
tmp_audio=/tmp/mjr-$RANDOM.opus
echo "Converting mjr files to individual tracks ..."
$januspprec_binary $session_prefix-video.mjr $tmp_video
$januspprec_binary $session_prefix-audio.mjr $tmp_audio
echo "Merging audio track with video ..."
ffmpeg -i $tmp_audio -i $tmp_video -c:v copy -c:a opus -strict experimental $output_file
echo "Done !"
The following command should do the trick:
bash converter.sh ./room-1234-user-0001 ./output_merged_video.webm