I can use the moviepy library to add a watermark to a section of a video. However, doing so writes the watermarked segment out as a brand-new file. I am trying to figure out whether it is possible to splice the edited part back into the original video, because moviepy is EXTREMELY slow at writing to disk, so the smaller the segment the better.
I was thinking of maybe using shutil?
import moviepy.editor as mp

video = mp.VideoFileClip("C:\\Users\\admin\\Desktop\\Test\\demovideo.mp4").subclip(10, 20)

logo = (mp.ImageClip("C:\\Users\\admin\\Desktop\\Watermark\\watermarkpic.png")
        .set_duration(video.duration)          # match the 10 s subclip
        .resize(height=20)                     # if you need to resize...
        .margin(right=8, bottom=8, opacity=0)  # (optional) logo-border padding
        .set_pos(("right", "bottom")))

final = mp.CompositeVideoClip([video, logo])
final.write_videofile("C:\\Users\\admin\\Desktop\\output\\demovideo(watermarked).mp4",
                      audio=True, progress_bar=False)  # on moviepy >= 1.0, use logger=None instead
Is there a way to copy the 10-second watermarked snippet back into the original video file? Or is there another library that lets me do this?
What is slow in your use case is that moviepy needs to decode and re-encode every frame of the movie. If you want speed, I believe there are ways to ask FFmpeg to copy video segments without re-encoding.
So you could use ffmpeg to cut the video into 3 subclips (before.mp4 / fragment.mp4 / after.mp4), process only fragment.mp4, then concatenate all the clips back together with ffmpeg.
Cutting the video into 3 clips with ffmpeg can be done from moviepy:
https://github.com/Zulko/moviepy/blob/master/moviepy/video/io/ffmpeg_tools.py#L27
However, for concatenating everything back together you may need to call ffmpeg directly, as in the sketch below.
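For concreteness, here is a minimal sketch of that split / watermark / re-join workflow. It uses moviepy's ffmpeg_extract_subclip helper (the function linked above) and ffmpeg's concat demuxer; the file names, timestamps, and the 60-second total length are illustrative only, and the watermarking step itself is the code from the question.

import subprocess
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip

SOURCE = "demovideo.mp4"

# 1. Split without re-encoding (the helper wraps "ffmpeg -ss ... -t ... -vcodec copy").
ffmpeg_extract_subclip(SOURCE, 0, 10, targetname="before.mp4")
ffmpeg_extract_subclip(SOURCE, 10, 20, targetname="fragment.mp4")
ffmpeg_extract_subclip(SOURCE, 20, 60, targetname="after.mp4")  # 60 = total length, adjust

# 2. Watermark only fragment.mp4 with moviepy, writing fragment_wm.mp4 (as in the question).

# 3. Stitch the three parts back together with ffmpeg's concat demuxer.
with open("list.txt", "w") as f:
    for name in ("before.mp4", "fragment_wm.mp4", "after.mp4"):
        f.write("file '%s'\n" % name)
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "list.txt", "-c", "copy", "joined.mp4"], check=True)

Note that the concat demuxer only copies streams losslessly when all parts share the same codec parameters, so encode the watermarked fragment with settings that match the source.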
I'm trying to crop a .mp4 video using an ffmpeg binary (within the context of an Electron/React app).
(The binary is run in a child process using execFile() and outputs to a temp folder which is later deleted.)
ffmpeg varies considerably in the time it takes to finish creating the cropped video file (1 s to 18 s) depending on the computer (Mac vs. Windows).
I need to read the cropped video file once it is complete.
I've set up an event listener in the main process of Electron:

if (!monitorCroppedFile) {
  console.log(`${croppedFilePath} doesn't exist`);
} else {
  console.log(`${croppedFilePath} exists!`);
  // ...readFile...
}
Once monitorCroppedFile = true I read the file using fs.readFile().
The problem is that ffmpeg creates the cropped file's path right away but sometimes takes a long time to finish writing the cropped video.
This means the read, which is triggered as soon as the cropped file's path is detected, often returns a blank file.
I've tried using -preset ultrafast in the ffmpeg arguments, but this only improves things marginally on Windows.
The problem doesn't occur on Macs.
Can anybody suggest a possible solution? Is there a way to detect when the crop is fully complete?
Many thanks.
Add -progress FILE to your command, where FILE is a filename; ffmpeg will log its processing status to that file. Watch the file for the line progress=end. Once it appears, the output is complete and you can safely read the cropped file.
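For illustration, a minimal polling sketch (written in Python here, though the same logic ports directly to Node's fs module); the crop filter, file names, and poll interval are placeholders:

import subprocess
import time

progress_path = "progress.log"
proc = subprocess.Popen([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-filter:v", "crop=640:480:0:0",  # placeholder crop
    "-progress", progress_path,
    "cropped.mp4",
])

finished = False
while not finished:
    time.sleep(0.5)
    try:
        with open(progress_path) as f:
            finished = any(line.strip() == "progress=end" for line in f)
    except FileNotFoundError:
        pass  # ffmpeg has not created the progress file yet

proc.wait()
# cropped.mp4 is now fully written and safe to read

If you hold the child-process handle yourself, simply waiting for the process to exit also works; the progress file is mainly useful when, as in the question, the reader only watches the output directory.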
Is there any possibility of unmixing a file? I have used overlay to mix two audio files, but I want to get back the original first audio. Is there something in pydub that I can use for this?
from pydub import AudioSegment

sound1 = AudioSegment.from_mp3("/path/to/file1.mp3")
sound2 = AudioSegment.from_mp3("/path/to/file2.mp3")

output = sound1.overlay(sound2, position=5000)
output.export("mixed_sounds.mp3", format="mp3")
The original audio in the sound1 variable has not been modified (overlay returns a new AudioSegment rather than mutating its operands), so you can use it right away if you like.
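For instance, you can export it unchanged right alongside the mix (the output file name here is illustrative):

sound1.export("first_audio_unchanged.mp3", format="mp3")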
If you mean recovering just the audio of sound1 from the exported "mixed_sounds.mp3" file, without access to the original data, that is not possible unless you know very specific things (for example, if sound2 is silent and you know when sound1 starts and ends).
I have successfully changed the muxing.c sample to use video frames that I generate at runtime.
I am now trying to replace the get_audio_frame function with one that decodes an existing audio file and writes its samples in place of the synthesized audio samples in the example code.
I've tried using the "audio decoding" example to decode the audio file, but I'm not sure how and when to write the decoded samples.
I suggest checking the source of my Karaoke Lyrics Editor, which does exactly what you need based on ffmpeg. See ffmpegvideoencoder.cpp, in particular the createFile and encodeImage functions.
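As a rough illustration of where the decoded samples get written (this is not the C API and not the answerer's code, but the same decode, encode, mux loop sketched with PyAV, ffmpeg's Python binding; the file names and codec choice are placeholders, and depending on your PyAV version you may need an explicit AudioResampler between the decode and encode steps):

import av

in_container = av.open("input.mp3")
out_container = av.open("output.mp4", mode="w")
out_stream = out_container.add_stream("aac", rate=44100)

# each decoded frame stands in for one synthesized get_audio_frame() result
for frame in in_container.decode(audio=0):
    frame.pts = None  # let the encoder assign timestamps
    for packet in out_stream.encode(frame):
        out_container.mux(packet)

# flush any samples still buffered in the encoder
for packet in out_stream.encode(None):
    out_container.mux(packet)

out_container.close()
in_container.close()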
I'm trying to extract each frame from an RTSP MP4 stream and convert it into a JPEG/GIF using ffmpeg. I'm reading the SDP header from 000001b0 to 000001b5 and adding that into a byte array, then capturing a frame starting from 000001b6 and appending it to the byte array.
When I flush it to a file (.mpg) and run ffmpeg on it, ffmpeg throws errors and does not convert.
My header looks like 000001B008000001B58913000001000000012000C488BA98514043C1463F, and after this I'm appending a frame (starting from 000001b6).
I did something similar with FFmpeg, and it seems that the frame data you get from FFmpeg already contains the frame header, which is all you need to transcode the data. Make sure that you decode the MP4 data to a raw format (RGB24, for instance), then convert it with libswscale to the pixel format the JPEG/GIF encoder expects (probably a YUV format) before passing the data to the encoder.
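If invoking the ffmpeg binary directly is an option, it handles the decoding and pixel-format conversion internally; a hedged sketch with illustrative paths and options:

import subprocess

# every frame as a numbered JPEG (ffmpeg performs the YUV/RGB conversion itself)
subprocess.run(["ffmpeg", "-i", "capture.mpg", "-qscale:v", "2", "frame_%05d.jpg"],
               check=True)

# or the whole clip as an animated GIF, downscaled to 320 px wide at 10 fps
subprocess.run(["ffmpeg", "-i", "capture.mpg", "-vf", "fps=10,scale=320:-1", "output.gif"],
               check=True)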
Depending on the codec, you may not have to add anything, or you may have to add a lot.
This is referred to as de-packetization; MPEG4-ES has no packetization model, while H.264 has many, depending on the profile.
Check out the RFCs; either RFC 3016 or RFC 3640 should help you.
https://www.rfc-editor.org/rfc/rfc3640
https://www.rfc-editor.org/rfc/rfc3016
I'm looking to combine a range of different audio files (MP3) in Python. One of the requirements is that I need to be able to specify a delay after each file. To illustrate, something like:
[file1.mp3--------3 seconds----------][delay---------2 seconds--------][file2.mp3-------------4 seconds][delay---------2 seconds][file3.mp3----------3 seconds---------]
Does anyone here know of any MP3 libraries that can accomplish this? Python isn't really a necessity here; if it would be easier in another language, that's fine.
I think FFmpeg can do this, given the right arguments; there's no real need for a library.
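A hedged sketch of that approach, assuming the inputs are 44.1 kHz stereo MP3s (adjust the aevalsrc silence generators to match your files); the file names are illustrative:

import subprocess

# two 2-second stretches of stereo silence, spliced between the three files
filtergraph = (
    "aevalsrc=0|0:d=2[s1];"
    "aevalsrc=0|0:d=2[s2];"
    "[0:a][s1][1:a][s2][2:a]concat=n=5:v=0:a=1[out]"
)
subprocess.run([
    "ffmpeg", "-y",
    "-i", "file1.mp3", "-i", "file2.mp3", "-i", "file3.mp3",
    "-filter_complex", filtergraph,
    "-map", "[out]", "combined.mp3",
], check=True)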
To combine wav or aiff files, you can do something like this: (inspiration from here)
import aifc

def concatenate(*items):
    # read the params and raw frames of every input file
    data = []
    for item in items:
        f = aifc.open(item, 'rb')
        data.append([f.getparams(), f.readframes(f.getnframes())])
        f.close()

    # write all frames out using the first file's params
    output = aifc.open('output.aif', 'wb')
    output.setparams(data[0][0])
    for item in data:
        output.writeframes(item[1])
    output.close()
See the link for the wav format (it's pretty much the same, but with the wave library)
To add silence, I would just make a one second silent file using your favorite audio editor and then concatenate in the proper amount of silence.
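Alternatively, pydub (shown earlier in this thread) can generate the silence itself, so no hand-made silent file is needed; a minimal sketch with illustrative file names:

from pydub import AudioSegment

silence = AudioSegment.silent(duration=2000)  # 2000 ms gap

combined = (AudioSegment.from_mp3("file1.mp3") + silence
            + AudioSegment.from_mp3("file2.mp3") + silence
            + AudioSegment.from_mp3("file3.mp3"))
combined.export("combined.mp3", format="mp3")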