I want to close the webcam after it captures the image. I used cap.release(), but it does not close the webcam. Here is my code:
import cv2
import matplotlib.pyplot as plt

def main():
    cap = cv2.VideoCapture(0)
    if cap.isOpened():
        ret, frame = cap.read()
        print(ret)
        print(frame)
    else:
        ret = False
    img1 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    plt.imshow(img1)
    plt.title('Color Image RGB')
    plt.xticks([])
    plt.yticks([])
    plt.show()
    cap.release()

if __name__ == '__main__':
    main()
The cam will stay active until you close the figure, i.e. until the script finishes, because you only release the capture afterwards:

plt.show()
cap.release()

If you want to turn off the camera after taking the image, reverse this order:

cap.release()
plt.show()
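For reference, a minimal corrected sketch of the function from the question, with the release moved before the figure is shown:

import cv2
import matplotlib.pyplot as plt

def main():
    cap = cv2.VideoCapture(0)
    frame = None
    if cap.isOpened():
        ret, frame = cap.read()
    cap.release()  # turn the camera off as soon as the frame has been captured
    if frame is not None:
        img1 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        plt.imshow(img1)
        plt.title('Color Image RGB')
        plt.xticks([])
        plt.yticks([])
        plt.show()  # the figure can stay open; the camera is already released

if __name__ == '__main__':
    main()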
I am creating a Python program to record my desktop screen, but the output of this code is very low quality and blurry. Can anyone help me capture the screen in high quality, the way screen recorders like OBS Studio and Camtasia do? What can I do to improve the quality: change my extension, codec, etc.? Please advise.
import cv2
import numpy as np
import datetime
from PIL import Image, ImageTk, ImageGrab

date = datetime.datetime.now()
filename = 'rec_%s-%s-%s-%s%s%s.mp4' % (date.year, date.month, date.day,
                                        date.hour, date.minute, date.second)
fourcc = cv2.VideoWriter_fourcc(*'X264')
frame_rate = 16
SCREEN_SIZE = (960, 540)
out = cv2.VideoWriter(filename, fourcc, frame_rate, SCREEN_SIZE)

while True:
    img = ImageGrab.grab()
    frame = np.array(img)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    out.write(frame)
    cv2.imshow('screenshot', frame)
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()
out.release()
frame_rate = 16
SCREEN_SIZE = (960,540)
Both of these are too low: you probably want your frame_rate to be 30 and your screen size to be 1920x1080.
Also, as an extra piece of info:
.mp4 is a bad format for screen recording; as far as I know, OBS recommends using .flv because, unlike .mp4, it doesn't corrupt the whole file if the recording ends abruptly.
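For example, a minimal sketch of those adjusted settings, reusing the filename and fourcc variables from the question; the cv2.resize call is my own addition to keep the grabbed frames consistent with the writer size:

frame_rate = 30.0                  # smoother playback than 16 fps
SCREEN_SIZE = (1920, 1080)         # full-HD output instead of 960x540
out = cv2.VideoWriter(filename, fourcc, frame_rate, SCREEN_SIZE)

while True:
    frame = np.array(ImageGrab.grab())
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    frame = cv2.resize(frame, SCREEN_SIZE)   # frames must match the writer size
    out.write(frame)
    cv2.imshow('screenshot', frame)
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()
out.release()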
I'm creating a script that will read the state of a supermarket shelf and tell me if there are products missing.
For example, in the image below there are some places where products are missing. I'm using the FAST method to find all the corners in the frame, but sometimes the script detects the floor corners. What I want to do is remove the floor from the frame before I find the corners.
import cv2
import numpy as np
image = cv2.imread('gondola_imagem.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
You can use a mask to remove the areas you are not interested in. For example, with the following image as a mask you can get the results below.
Mask
Result
The code is as follows:
import numpy as np
import cv2
image = cv2.imread('test.jpg')
mask = cv2.imread('mask.jpg', 0)
cv2.imshow('Original', image)
cv2.imshow('Mask', mask)
res = cv2.bitwise_and(image,image,mask = mask)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('result.jpg', image)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
Edit:
If you want to do this in real time (video from the webcam), you just need to do it for every frame you get from the camera. As long as the camera is not moving, you should be able to use the same mask for all the frames. You could turn the code above into a function and then call it with each frame as a parameter, as in the following code:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # The following function has to be created from the previous code
    CallFunctionToPreviousCode(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The code above was taken from the OpenCV Python-Tutorials. It is a good place for learning OpenCV with the Python programming language.
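As a rough sketch of what that function could look like, built from the mask-plus-FAST code above (the mask path is a placeholder for whatever mask image you created earlier):

import cv2

# Load the mask once; 'mask.jpg' is a placeholder path.
MASK = cv2.imread('mask.jpg', 0)
fast = cv2.FastFeatureDetector_create()

def CallFunctionToPreviousCode(frame):
    # Resize the mask so it matches the current frame, then keep only the shelf area
    mask = cv2.resize(MASK, (frame.shape[1], frame.shape[0]))
    res = cv2.bitwise_and(frame, frame, mask=mask)
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    # Run FAST only on the masked region and draw the keypoints on the frame
    keypoints = fast.detect(gray, None)
    image = cv2.drawKeypoints(frame, keypoints, None,
                              flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow('Feature Method - FAST', image)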
I have a Keras model for depth perception. I want to load it using TensorFlow.js and apply it frame by frame to my webcam stream. Currently I am unable to capture my webcam video stream using HTML. How do I do it?
You can use OpenCV and Python to easily capture your video and make changes to it. For example, you can use the following code:
import cv2
import sys
from time import sleep

video_capture = cv2.VideoCapture(0)
anterior = 0

while True:
    if not video_capture.isOpened():
        print('Unable to load camera.')
        sleep(5)
        continue
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
I got a piece of code on the internet that labels image regions and tried to run it over a video, but all I get is the first frame, and then, after closing the first frame window, an error: "max() arg is an empty sequence" from the line plt.tight_layout() of my code. I am trying to get labels for all the frames in my video instead of the single-image example shown above (link). Basically the code should display/plot all the frames with labels.
Any help will be really useful. Please find my code below:
import cv2
import numpy as np
from matplotlib import pyplot as plt
import time
import matplotlib.patches as mpatches
from skimage import data
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label, regionprops
from skimage.morphology import closing, square
from skimage.color import label2rgb

cap = cv2.VideoCapture('test3.mp4')
fig, ax = plt.subplots(figsize=(10, 6))

while(1):
    t = time.time()
    ret, frame2 = cap.read()
    image = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    thresh = threshold_otsu(image)
    bw = closing(image > thresh, square(3))
    # remove artifacts connected to image border
    cleared = clear_border(bw)
    # label image regions
    label_image = label(cleared)
    image_label_overlay = label2rgb(label_image, image=frame2)
    x = regionprops(label_image)
    area2 = [r.area for r in x]
    print(area2)
    ax.imshow(image_label_overlay)
    for region in regionprops(label_image):
        # take regions with large enough areas
        if region.area >= 100:
            # draw rectangle around segmented coins
            minr, minc, maxr, maxc = region.bbox
            rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                                      fill=False, edgecolor='red', linewidth=2)
            ax.add_patch(rect)
    ax.set_axis_off()
    plt.tight_layout()
    plt.show()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Voila!
And the solution is:
1.) Error rectification: the "max() arg is an empty sequence" error from the line plt.tight_layout() can be removed by using fig.tight_layout rather than plt.tight_layout. After I closed the first frame of the video (which was not updating; that is another problem I am still pondering), the figure was empty, and tight_layout raised an exception because it was running on an empty figure.
2.) Running the 'Label image regions' code over a video is made possible if you replace the lines

rect = mpatches.Rectangle((minc, minr), maxc - minc + 50, maxr - minr + 50,
                          fill=False, edgecolor='red', linewidth=2)
ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()

with

cv2.rectangle(frame2, (minc, minr), (minc + maxc - minc, minr + maxr - minr), (0, 255, 0), 2)
cv2.imshow('ObjectTrack', frame2)  # this line goes outside the if block

Basically, display the video the way it is done in the simple 'Capture Video from Camera' program in Python.
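Putting the two fixes together, a minimal sketch of the resulting loop (assuming the OpenCV window replaces the matplotlib figure entirely, and keeping the file name from the question) could look like this:

import cv2
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label, regionprops
from skimage.morphology import closing, square

cap = cv2.VideoCapture('test3.mp4')

while True:
    ret, frame2 = cap.read()
    if not ret:                       # stop when the video runs out of frames
        break
    gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    bw = closing(gray > threshold_otsu(gray), square(3))
    label_image = label(clear_border(bw))
    for region in regionprops(label_image):
        if region.area >= 100:
            minr, minc, maxr, maxc = region.bbox
            # draw the bounding box straight onto the frame with OpenCV
            cv2.rectangle(frame2, (int(minc), int(minr)), (int(maxc), int(maxr)),
                          (0, 255, 0), 2)
    cv2.imshow('ObjectTrack', frame2)  # once per frame, outside the if block
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()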
I was using OpenCV to capture a pre-existing video. The video pops up in a frame but ends with the following error:

cv2.error: D:\Build\OpenCV\opencv-3.2.0\modules\highgui\src\window.cpp:312: error: (-215) size.width>0 && size.height>0 in function cv::imshow

My code is:
import numpy as np
import cv2

cap = cv2.VideoCapture('lol.avi')

while cap.isOpened():
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
PS: I do have FFmpeg on Windows and the environment path set to it.
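One likely cause, for what it's worth: the (-215) assertion in cv::imshow means it received an empty image. Once the video reaches its last frame, cap.read() returns ret=False with frame=None, and the subsequent cvtColor/imshow calls fail. A minimal guard sketch, keeping the rest of the loop unchanged:

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:          # no more frames, so stop cleanly instead of crashing
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break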