What can I do to improve my video quality in OpenCV? - python-3.x

I am creating a Python program to record my desktop screen, but the output of this code is very low quality and blurry.
Can anyone help me capture the screen in high quality, like screen recorders such as OBS Studio and Camtasia do?
What can I do to improve the quality? Should I change my extension, codec, etc.? Please mention it.
import cv2
import numpy as np
import datetime
from PIL import Image, ImageTk, ImageGrab

date = datetime.datetime.now()
filename = 'rec_%s-%s-%s-%s%s%s.mp4' % (date.year, date.month, date.day,
                                        date.hour, date.minute, date.second)
fourcc = cv2.VideoWriter_fourcc(*'X264')
frame_rate = 16
SCREEN_SIZE = (960, 540)
out = cv2.VideoWriter(filename, fourcc, frame_rate, SCREEN_SIZE)

while True:
    img = ImageGrab.grab()
    frame = np.array(img)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    # Resize to the size given to VideoWriter, otherwise mismatched frames are dropped.
    frame = cv2.resize(frame, SCREEN_SIZE)
    out.write(frame)
    cv2.imshow('screenshot', frame)
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()
out.release()

frame_rate = 16
SCREEN_SIZE = (960,540)
Both of these are too low; you probably want your frame_rate to be 30 and your screen size to be 1920x1080.
Also, as extra info:
.mp4 is a bad format for screen recording. I know OBS recommends using .flv because it doesn't corrupt the whole file if the recording ends abruptly, unlike .mp4.
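As a rough sketch of those two changes applied to the question's loop (whether the 'X264' fourcc and the .mp4 container actually work depends on your OpenCV/FFmpeg build, and the note above suggests .flv as a safer container):
import cv2
import numpy as np
from PIL import ImageGrab

SCREEN_SIZE = (1920, 1080)   # full HD instead of 960x540
frame_rate = 30.0            # smoother playback than 16 fps
fourcc = cv2.VideoWriter_fourcc(*'X264')
out = cv2.VideoWriter('rec.mp4', fourcc, frame_rate, SCREEN_SIZE)

while True:
    frame = cv2.cvtColor(np.array(ImageGrab.grab()), cv2.COLOR_RGB2BGR)
    # Keep the written frame exactly the size handed to VideoWriter.
    frame = cv2.resize(frame, SCREEN_SIZE)
    out.write(frame)
    cv2.imshow('screenshot', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cv2.destroyAllWindows()
out.release()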

Related

Pipe numpy array to virtual video device

I want to pipe images to a virtual video device (e.g. /dev/video0); the images are created inside a loop with the desired frame rate.
In this minimal example I only have two arrays which alternate in the cv2 window. Now I am looking for a good solution to pipe the arrays to the virtual device.
I saw that ffmpeg-python can run asynchronously with ffmpeg.run_async(), but so far I could not make anything work with this package.
Example code without the ffmpeg stuff:
#!/usr/bin/env python3
import cv2
import numpy as np
import time

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

img1 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (1080, 1440, 3)).astype('uint8')

for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    cv2.imshow(window_name, img)
    cv2.waitKey(1)

cv2.destroyAllWindows()
First of all, you would have to set up a virtual camera, for example with v4l2loopback. See here for how to install it (ignore the usage examples).
Then, you can just write to the virtual camera like to a normal file (that is, let OpenCV write the images to, say, /dev/video0; how to do that you have to find out yourself, because I'm not an expert with OpenCV).
In the end, you can use ffmpeg-python with /dev/video0 as the input file, do something with the video, and that's it!
As Programmer wrote in his answer, it is possible to create a dummy device with the package v4l2loopback. Publishing images, videos or the desktop to the dummy device was already easy with ffmpeg, but I wanted to pipe it directly from the Python script, where I capture the images, to the dummy device. I still think it's possible with ffmpeg-python, but I found this great answer from Alp which sheds light on the darkness: the package pyfakewebcam is a perfect solution for the problem.
For the sake of completeness, here is my extended minimal working example:
#!/usr/bin/env python3
import time
import cv2
import numpy as np
import pyfakewebcam

WIDTH = 1440
HEIGHT = 1080
DEVICE = '/dev/video0'

fake_cam = pyfakewebcam.FakeWebcam(DEVICE, WIDTH, HEIGHT)

window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)

img1 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')

for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    fake_cam.schedule_frame(img)
    cv2.imshow(window_name, img)
    cv2.waitKey(1)

cv2.destroyAllWindows()
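For reference, since the question mentions ffmpeg.run_async(): the direct ffmpeg-python route should also work by piping raw frames into an ffmpeg process that writes to the loopback device. This is only a rough sketch, assuming v4l2loopback is already loaded and /dev/video0 is the dummy device:
#!/usr/bin/env python3
import time
import numpy as np
import ffmpeg

WIDTH, HEIGHT, FPS = 1440, 1080, 25

# ffmpeg reads raw RGB frames from stdin and forwards them to the v4l2 loopback device.
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24',
           s='%dx%d' % (WIDTH, HEIGHT), framerate=FPS)
    .output('/dev/video0', format='v4l2', pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

img1 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')

for i in range(125):
    time.sleep(1 / FPS)
    frame = img1 if i % 2 else img2
    process.stdin.write(frame.tobytes())

process.stdin.close()
process.wait()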

Background removal from webcam OPENCV PYTHON

I'm creating a script that will read the state of a supermarket and tell me if there are products missing.
For example, in the image below there are some places where products are missing. I'm using the FAST method to find all the corners in the frame, but sometimes the script detects the floor corners. What I want to do is remove the floor from the frame before I find the corners.
import cv2
import numpy as np
image = cv2.imread('gondola_imagem.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
You can use a mask to remove the areas you are not interested in. For example, with the following image as a mask you can get the results below.
Mask
Result
The code is as follows:
import numpy as np
import cv2
image = cv2.imread('test.jpg')
mask = cv2.imread('mask.jpg', 0)
cv2.imshow('Original', image)
cv2.imshow('Mask', mask)
res = cv2.bitwise_and(image,image,mask = mask)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('result.jpg', image)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
Edit:
If you want to do this in real time (video from a webcam), you just need to do it for every frame you get from the video camera. As long as the camera is not moving you should be able to use the same mask for all the frames. You could make the code above a function and then call it with an image as a parameter, as in the following code:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # The following function has to be created from the previous code
    CallFunctionToPreviousCode(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The code above was taken from the OpenCV Python tutorials. It is a good place to learn OpenCV with the Python programming language.
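For completeness, a minimal sketch of what that per-frame function could look like (detect_shelf_corners is a hypothetical name standing in for the placeholder above, and the mask is assumed to have the same size as the webcam frames):
import cv2
import numpy as np

# Load the mask once; the same mask is reused for every frame (camera assumed static).
mask = cv2.imread('mask.jpg', 0)

def detect_shelf_corners(frame, mask):
    # Apply the mask, then run FAST on the masked frame.
    res = cv2.bitwise_and(frame, frame, mask=mask)
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    fast = cv2.FastFeatureDetector_create()
    keypoints = fast.detect(gray, None)
    return cv2.drawKeypoints(frame, keypoints, None,
                             flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Feature Method - FAST', detect_shelf_corners(frame, mask))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()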

How do I increase the playback speed of images using python3 and cv2

I have used cv2 to combine a folder of images into a video, but the speed at which the video plays is too slow. Is there a way to increase the speed?
import cv2
import os

image_folder = 'd:/deep/data'
video_name = 'video.avi'

images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape

video = cv2.VideoWriter(video_name, 0, 1, (width, height))

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()
To set the fps I tried this piece of code and it still didn't work:
import cv2
import os
import numpy as np

image_folder = 'd:/deep/data'
video_name = 'video.avi'
fps = 100

images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape

video = cv2.VideoWriter(video_name, 0, 1, (width, height), fps)

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()
cv2.VideoWriter([filename, fourcc, fps, frameSize[, isColor]]) → <VideoWriter object>
Check the documentation and set the fps; your current frame rate is 1 fps.
Edit:
It should be something like this:
fourcc = cv2.VideoWriter_fourcc(*'DIVX')  # codec for Windows (is it supported by your device?)
fps = 25.0  # adjust the frame rate here
video = cv2.VideoWriter(video_name, fourcc, fps, (width, height))
This is the corrected code:
import cv2
import os
import numpy as np

image_folder = 'd:/deep/data'
video_name = 'video.avi'

images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape

fourcc = cv2.VideoWriter_fourcc(*'XVID')
video = cv2.VideoWriter(video_name, fourcc, 15.0, (width, height))

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()

Loading keras model on webcam video stream

I have a Keras model for depth perception. I want to load it using TensorFlow.js and apply it frame by frame to my webcam stream. Currently I am unable to capture my webcam video stream using HTML. How can I do it?
You can use OpenCV and Python to easily capture your video and make changes to it. For example, you can use the following code:
import cv2
from time import sleep

video_capture = cv2.VideoCapture(0)

while True:
    if not video_capture.isOpened():
        print('Unable to load camera.')
        sleep(5)
        continue
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
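The question also asks about applying the depth model frame by frame. Staying with the OpenCV-in-Python route from this answer (rather than TensorFlow.js in the browser), a rough sketch might look like the following; the model path, input size and normalisation are assumptions you would adjust to your own model:
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model('depth_model.h5')   # hypothetical path to your saved Keras model
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Preprocess to whatever the model expects (assumed here: 256x256 RGB, scaled to [0, 1]).
    inp = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = cv2.resize(inp, (256, 256)).astype('float32') / 255.0
    depth = model.predict(inp[np.newaxis, ...])[0]
    # Rescale the prediction for display.
    depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
    cv2.imshow('Video', frame)
    cv2.imshow('Depth', depth_vis)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()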

Close webcam in OpenCV Python

I want to close the webcam. I used cap.release() but it does not close the webcam after it captures the image. Here is my code:
import cv2
import matplotlib.pyplot as plt

def main():
    cap = cv2.VideoCapture(0)
    if cap.isOpened():
        ret, frame = cap.read()
        print(ret)
        print(frame)
    else:
        ret = False
    img1 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    plt.imshow(img1)
    plt.title('Color Image RGB')
    plt.xticks([])
    plt.yticks([])
    plt.show()
    cap.release()

if __name__ == '__main__':
    main()
The cam will stay active until you close the figure, i.e. until the script finishes. This is because you only release the capture afterwards:
plt.show()
cap.release()
If you want to turn off the camera after taking the image, reverse this order:
cap.release()
plt.show()
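As a minimal sketch, the question's main() with the release moved before the blocking plt.show() call:
import cv2
import matplotlib.pyplot as plt

def main():
    cap = cv2.VideoCapture(0)
    ret, frame = (cap.read() if cap.isOpened() else (False, None))
    cap.release()  # the camera turns off here, before the figure opens
    if not ret:
        return
    img1 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    plt.imshow(img1)
    plt.title('Color Image RGB')
    plt.xticks([])
    plt.yticks([])
    plt.show()

if __name__ == '__main__':
    main()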
