Getting rid of pop-up window when making OpenCV videos - python-3.x

I've been making video flipbooks by putting together individual PNG frames. At first I was using FFMPEG; I've just gotten OpenCV working.
However, every time I make one, I have to manually choose the encoder to use.
The window is titled "Video Compression", and its about box says it's Intel Indeo Video iYUV 2.0.
Can I specify this somewhere in a Python call to OpenCV so I don't have to choose every time?
Here is the code that makes the video. Note that I'm resizing the frames because the source frames were different sizes.
import os
import cv2 as cv

# -1 as the fourcc argument makes OpenCV pop up the codec-selection dialog
video = cv.VideoWriter(outfile, -1, 30, (width, height), False)
for image in images:
    cvimage = cv.imread(os.path.join(png_working_folder, image))
    resized = cv.resize(cvimage, (800, 800))
    video.write(resized)
video.release()

I found one solution.
I wasn't passing a fourcc code previously; adding it to my call got me over this hump.
fourcc = cv.VideoWriter_fourcc(*'XVID')
video = cv.VideoWriter(outfile, fourcc, 30, (width, height))
With the fourcc specified explicitly, the codec-selection dialog no longer appears.
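Putting the pieces together, here is a minimal sketch of the whole flipbook writer with the codec specified up front (the folder and file names are placeholders, and 'XVID' is just one fourcc that avoids the dialog; others such as 'mp4v' also work):
import os
import cv2 as cv

outfile = "flipbook.avi"                      # placeholder output name
png_working_folder = "frames"                 # placeholder source folder
images = sorted(f for f in os.listdir(png_working_folder) if f.endswith(".png"))

width, height = 800, 800
fourcc = cv.VideoWriter_fourcc(*'XVID')       # explicit codec: no codec-selection pop-up
video = cv.VideoWriter(outfile, fourcc, 30, (width, height))

for image in images:
    cvimage = cv.imread(os.path.join(png_working_folder, image))
    resized = cv.resize(cvimage, (width, height))
    video.write(resized)

video.release()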

Related

Creating an mp4 file from images using video.write in opencv

I'm trying to create and save a video file in Python via OpenCV:
import os
import cv2

video_name = os.path.join(DIR, "test.mp4")
images = [img for img in os.listdir(DIR) if img.startswith('test_')]
images = sorted(images, key=graph_utils_py.numericalSort)
frame = cv2.imread(os.path.join(DIR, images[0]))
height, width, layers = frame.shape
video = cv2.VideoWriter(video_name, 0, 1, (width, height))
for image in images:
    fpath = os.path.join(DIR, image)
    video.write(cv2.imread(fpath))
    os.remove(fpath)  # remove the image file
cv2.destroyAllWindows()
video.release()
The above generates the video file without any issues, but I get a warning:
OpenCV: FFMPEG: tag 0x00000000/'????' is not supported with codec id 13 and format 'mp4 / MP4 (MPEG-4 Part 14)'
[mp4 # 000001d0378cdec0] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
This creates an issue when I try to embed this video file in a pdf generated using latex.
From what I understand, the video data inside the file must be encoded with the H.264 codec to successfully view the embedded video in the PDF.
Any suggestion on how to generate the mp4 file with H.264 codec in python will be greatly appreciated.
EDIT:
I tried what has been suggested below, and the following error occurred:
OpenCV: FFMPEG: tag 0x31435641/'AVC1' is not supported with codec id 27 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x31637661/'avc1'
Failed to load OpenH264 library: openh264-1.8.0-win64.dll
Please check environment and/or download library: https://github.com/cisco/openh264/releases
[libopenh264 # 000002833670e500] Incorrect library version loaded
Could not open codec 'libopenh264': Unspecified error
You need to pass the correct codec:
fourcc = cv2.VideoWriter_fourcc(*'AVC1')
fps = 1
video = cv2.VideoWriter(video_name, fourcc, fps, (width, height))
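As a hedged sketch of the whole writer with the codec made explicit: the fallback below is my addition, not part of the answer, and 'avc1' only produces H.264 if OpenCV's FFMPEG backend can load a matching openh264 build (the "Incorrect library version loaded" message in the EDIT above suggests fetching the exact DLL version the error names from the Cisco releases page):
import cv2

fps = 1
fourcc = cv2.VideoWriter_fourcc(*'avc1')            # request H.264 inside the .mp4 container
video = cv2.VideoWriter(video_name, fourcc, fps, (width, height))

if not video.isOpened():
    # H.264 encoder unavailable (e.g. the openh264 DLL is missing or mismatched):
    # fall back to MPEG-4 Part 2 so at least a playable file is produced
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    video = cv2.VideoWriter(video_name, fourcc, fps, (width, height))
Note that the 'mp4v' fallback is not H.264, so it will not satisfy the PDF-embedding requirement; another option is to write the file with OpenCV and then re-encode it once with ffmpeg (ffmpeg -i test.mp4 -c:v libx264 test_h264.mp4).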

Frames of transparent gif overlapping with each other

I'm trying to create a transparent GIF with Pillow in Python using this code:
frames[0].save(path+'/final.gif', format='GIF', append_images=frames[1:], save_all=True, duration=33, loop=0,transparency=0)
where frames is a list of PIL.Image objects. The end result is that you can see the image from the previous frame showing through in each frame.
This hasn't happened before; I used to be able to create this GIF without any problems.
I solved this problem by setting disposal=2; you can edit your code as:
frames[0].save(path+'/final.gif', format='GIF', append_images=frames[1:], save_all=True, duration=33, loop=0,transparency=0, disposal = 2)
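For context, a minimal self-contained sketch of the fix (the moving-dot frames are invented purely for illustration): palette index 0 is declared transparent and disposal=2 clears each frame before the next is drawn, so frames no longer stack:
from PIL import Image, ImageDraw

frames = []
for i in range(10):
    # palette-mode frame: index 0 = transparent background, index 1 = red dot
    im = Image.new("P", (100, 100), 0)
    im.putpalette([0, 0, 0, 255, 0, 0] + [0, 0, 0] * 254)
    ImageDraw.Draw(im).ellipse((i * 8, 40, i * 8 + 20, 60), fill=1)
    frames.append(im)

frames[0].save("final.gif", format="GIF", save_all=True,
               append_images=frames[1:], duration=33, loop=0,
               transparency=0,  # palette index 0 is the transparent colour
               disposal=2)      # restore to background between frames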

Adding watermark to video

I am able to use the moviepy library to add a watermark to a section of video. However, when I do this, it takes the watermarked segment and creates a new file from it. I am trying to figure out whether it is possible to simply splice the edited part back into the original video, as moviepy is EXTREMELY slow at writing to disk, so the smaller the segment the better.
I was thinking maybe using shutil?
import moviepy.editor as mp

video = mp.VideoFileClip("C:\\Users\\admin\\Desktop\\Test\\demovideo.mp4").subclip(10, 20)
logo = (mp.ImageClip("C:\\Users\\admin\\Desktop\\Watermark\\watermarkpic.png")
        .set_duration(20)
        .resize(height=20)                     # if you need to resize...
        .margin(right=8, bottom=8, opacity=0)  # (optional) logo-border padding
        .set_pos(("right", "bottom")))
final = mp.CompositeVideoClip([video, logo])
final.write_videofile("C:\\Users\\admin\\Desktop\\output\\demovideo(watermarked).mp4", audio=True, progress_bar=False)
Is there a way to copy the 10 second watermarked snippet back into the original video file? Or is there another library that allows me to do this?
What is slow in your use case is the fact that MoviePy needs to decode and re-encode each frame of the movie. If you want speed, I believe there are ways to ask FFMPEG to copy video segments without re-encoding.
So you could use ffmpeg to cut the video into 3 subclips (before.mp4/fragment.mp4/after.mp4), only process fragment.mp4, then reconcatenate all clips together with ffmpeg.
The cutting into 3 clips using ffmpeg can be done from moviepy:
https://github.com/Zulko/moviepy/blob/master/moviepy/video/io/ffmpeg_tools.py#L27
However, for concatenating everything back together you may need to call ffmpeg directly; a sketch of both steps follows.
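A hedged sketch of that workflow (file names and cut times are placeholders; ffmpeg_extract_subclip is the moviepy helper linked above, and the final concatenation calls the ffmpeg binary through its concat demuxer):
import subprocess
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip

# 1. cut the original into three parts without re-encoding
ffmpeg_extract_subclip("demovideo.mp4", 0, 10, targetname="before.mp4")
ffmpeg_extract_subclip("demovideo.mp4", 10, 20, targetname="fragment.mp4")
ffmpeg_extract_subclip("demovideo.mp4", 20, 60, targetname="after.mp4")  # 60 = clip end (placeholder)

# 2. watermark only fragment.mp4 with moviepy as above, writing fragment_wm.mp4

# 3. stitch the parts back together with stream copy (no re-encoding)
with open("parts.txt", "w") as f:
    f.write("file 'before.mp4'\nfile 'fragment_wm.mp4'\nfile 'after.mp4'\n")
subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "parts.txt", "-c", "copy", "combined.mp4"], check=True)
Stream-copy concatenation only works cleanly when all three parts use the same codec and parameters, so the watermarked fragment should be written with settings that match the original video.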

Raspberry Pi use Webcam instead of PiCam

Here is the code that initializes the Raspberry Pi camera, from the pyimagesearch blog. I want to add a webcam and capture frames from it as well:
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = tuple(conf["resolution"])
camera.framerate = conf["fps"]
rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))
for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image and initialize
    # the timestamp and occupied/unoccupied text
    frame = f.array
This is the part that continuously captures frames from the PiCamera. The problem is that I want to read frame by frame from a webcam too. I somehow got it working, but lost the code accidentally, and now I don't remember what I did. It took only about 2-3 extra lines. Please help me get it back if you can. Thank you.
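No answer is preserved here, but as a hedged sketch, the usual couple of extra lines for a USB webcam use OpenCV's VideoCapture alongside the PiCamera loop (device index 0 is an assumption):
import cv2

webcam = cv2.VideoCapture(0)  # 0 = first USB webcam; change the index if needed

for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = f.array                     # PiCamera frame, as before
    grabbed, web_frame = webcam.read()  # matching webcam frame
    if not grabbed:
        break
    # ...process frame and web_frame here...
    rawCapture.truncate(0)              # reset the buffer for the next PiCamera frame

webcam.release()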

How to create 10bit YUY2 packed YUV Renderer?

I have created an 8-bit YUV player for the packed YUY2 format using the SDL library; here is part of the code:
handle->texture = SDL_CreateTexture(handle->renderer, SDL_PIXELFORMAT_YUY2, SDL_TEXTUREACCESS_STREAMING, width, height);
SDL_UpdateTexture(handle->texture, NULL, pDisplay->Ydata, (handle->width * 2));
When creating the texture, the pixel format is given as SDL_PIXELFORMAT_YUY2 and the texture is updated with a pitch of twice the width, so it plays fine.
But when it comes to 10-bit YUV, the video plays distorted and greenish.
I have tried changing the pitch to (handle->width * 2 * 2), but with no success.
Someone also suggested converting the 10-bit values to 8-bit, but I don't want to do that.
Please help me play 10-bit packed YUY2 YUV.
Does SDL support rendering pixel depths of more than 8 bits?
