Playing a movie in OpenCV - python-3.x

I get the following error while trying to show a movie:
cv2.imshow("Video Output", frames)
TypeError: Expected Ptr<cv::UMat> for argument 'mat'
The commented-out lines are my attempts to fix the problem, but I still get the error.
What am I doing wrong?
import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while True:
    frames = vid.read()
    # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
    # frames_arr = np.array(frames)
    cv2.imshow("Video Output", frames)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

A better implementation would be:
Check whether the video opened successfully using vid.isOpened():

vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():

If the frame is returned successfully, display it:

    ret, frames = vid.read()
    if ret:
        # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
        # frames_arr = np.array(frames)
        cv2.imshow("Video Output", frames)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

Finally, always close all the windows and release the VideoCapture object:

cv2.destroyAllWindows()
vid.release()
Code:

import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():
    ret, frames = vid.read()
    if ret:
        # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
        # frames_arr = np.array(frames)
        cv2.imshow("Video Output", frames)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        # continue with the next frame
        continue
cv2.destroyAllWindows()
vid.release()

In your code, vid.read() returns two values. The first is a boolean which, according to the documentation:
returns a bool (True/False). If the frame is read correctly, it will be True. So you can check for the end of the video by checking this return value.
So your frames variable is actually a tuple containing the boolean and the frame itself. You would need to index into the second element (frames[1]) to play the video with imshow, or better, unpack the two values as shown above.
Always read the docs well!

Related

Unable to capture image every 5 seconds using OpenCV

I am trying to capture an image every 5 seconds using OpenCV through my laptop's built-in webcam. I am using time.sleep(5) for the required pause. In every run, the first image seems to be saved correctly, but after that all the remaining images are saved in an unsupported format (when I open them). Also, I am unable to break the loop by pressing 'q'.
Below is my code.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)
framerate = cap.get(5)
x = 1
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    cap.release()
    # Our operations on the frame come here
    filename = 'C:/Users/shjohari/Desktop/Retail_AI/MySection/to_predict/image' + str(int(x)) + ".png"
    x = x + 1
    cv2.imwrite(filename, frame)
    time.sleep(5)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Any help is appreciated.
Just a small change solved my issue. I included the code below inside the loop:
cap = cv2.VideoCapture(0)
framerate = cap.get(5)

Output 2 Functions on 1 Video

I'm a beginner in Python and want to ask whether I can draw images of different functions onto one video. Below is my practice code.
import numpy as np
import cv2
from multiprocessing import Process

cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')

def line_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.line(img, (50,180), (380,180), (0,255,0), 5)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

def rectangle_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.rectangle(img, (180,0), (380,128), (0,255,0), 3)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    p1 = Process(target=rectangle_drawing)
    p1.start()
    p2 = Process(target=line_drawing)
    p2.start()
When I run the code, it gives me two tabs running the same video, one with the line drawn, the other with the rectangle drawn. How do I make both the rectangle and the line appear on the same video, while keeping the functions separate instead of putting both in the same function?
I won't be able to give you an answer with Python code, but...
What you have is two different processes, each capturing data from the video feed independently and drawing its element on a separate copy of the data.
What you need to do is have one process that is solely in charge of capturing data from your video feed, which then provides that data to the other two workers. You would likely need to look into mutexes (locks) so that the two don't clash with each other.
Resources
There are quite a few questions on SO and the internet that will help you achieve this:
opencv python Multi Threading Video Capture
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
http://algomuse.com/c-c/developing-a-multithreaded-real-time-video-processing-application-in-opencv
http://forum.piborg.org/node/2363

Text written on the frame also turns gray when video frames turned into grayscale

I am building a motion detector application. For the motion detection algorithm to work, I converted the frames to grayscale, so now the application is able to detect motion. But when I try to put text on the frame to post a message like "MOVING", even the text turns gray and is hardly visible. How do I draw colored text on a video frame?
Below is my motion detection application code
import cv2
import numpy as np
from skimage.measure import compare_ssim
from twilio.rest import Client

# we can compare two images using Structural Similarity,
# so a small change in pixel value won't prompt this method to term both images as dissimilar;
# the closer the value is to 1, the more similar two images are
def ssim(A, B):
    return compare_ssim(A, B, data_range=A.max() - A.min())

# capture a video either from a file or a live video stream
cap = cv2.VideoCapture(0)
first_frame = True
prev_frame = None
current_frame = None
# we keep a count of the frames
frame_counter = 0

while True:
    if frame_counter == 0:
        # prev_frame will always trail behind the current_frame
        prev_frame = current_frame
    # get a frame from the video
    ret, current_frame = cap.read()
    # if we reach the end of the video in case of a video file, stop reading
    if current_frame is None:
        break
    # convert the image to grayscale
    current_frame = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)
    if first_frame:
        # for the first time prev_frame and current_frame will be the same
        prev_frame = current_frame
        first_frame = False
    if frame_counter == 9:
        # compare two images based on SSIM
        ssim_val = ssim(current_frame, prev_frame)
        print(ssim_val)
        # if there is a major drop in the SSIM value, i.e. it has detected an object
        if ssim_val < 0.8:
            # Here I want to put a colored text on the screen
            cv2.putText(current_frame, "MOVING", (100, 300),
                        cv2.FONT_HERSHEY_TRIPLEX, 4, (255, 0, 0))
        frame_counter = -1
    # show the video as a series of frames
    cv2.imshow("Motion Detection", current_frame)  # (name of the window, image)
    frame_counter += 1
    key = cv2.waitKey(1) & 0xFF  # mask with 0xFF to get the char value
    if key == ord('q'):  # ASCII value of 'q'
        break

# release the resources allocated to the video file or video stream
cap.release()
# destroy all the windows
cv2.destroyAllWindows()
I searched online and found this piece of code, which basically suggests converting the grayscale image back to BGR:

backtorgb = cv2.cvtColor(current_frame, cv2.COLOR_GRAY2RGB)

But this didn't work. I even took a copy of the current frame before converting it to grayscale and tried to write on the copied color frame, but the text still comes out gray and not colored. What should I do?

Access IP camera with OpenCV

I can't access the video stream. Can anyone please help me get the video stream? I have searched Google for a solution and posted another question on Stack Overflow, but unfortunately nothing solved the problem.
import cv2

cap = cv2.VideoCapture()
cap.open('http://192.168.4.133:80/videostream.cgi?user=admin&pwd=admin')
while(cap.isOpened()):
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use the code below to access the IP cam directly through OpenCV. Replace the URL in VideoCapture with your camera's RTSP URL. The pattern given generally works for most cameras I've used.
import cv2

cap = cv2.VideoCapture("rtsp://[username]:[pass]@[ip address]/media/video1")
while True:
    ret, image = cap.read()
    cv2.imshow("Test", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
You can use urllib to read frames from the video stream.

import cv2
import urllib
import numpy as np

# Python 2: urllib.urlopen; on Python 3 use urllib.request.urlopen
stream = urllib.urlopen('http://192.168.100.128:5000/video_feed')
bytes = ''
while True:
    bytes += stream.read(1024)
    a = bytes.find(b'\xff\xd8')  # JPEG start-of-image marker
    b = bytes.find(b'\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        img = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('Video', img)
        if cv2.waitKey(1) == 27:
            exit(0)
Check this out if you want to stream video from the webcam of your PC: https://github.com/shehzi-khan/video-streaming
You can use this code to get live video feeds in the browser.
For accessing a camera other than your laptop's webcam, you can use an RTSP link like this:

rtsp://admin:12345@192.168.1.1:554/h264/ch1/main/av_stream

where admin is the username, 12345 is the password, 192.168.1.1:554 is the camera's IP address and port, and ch1 is the first camera on that DVR.
Replace cv2.VideoCapture(0) with this link and it will work.
camera.py

import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the folder
        # as the main.py.
        # self.video = cv2.VideoCapture('video.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capture raw images,
        # so we must encode it into JPEG in order to correctly display the
        # video stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
main.py

from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
Then you can follow this blog to increase the FPS of your video stream.
Thank you. Maybe urlopen is no longer directly under urllib now; it is urllib.request.urlopen. I use this code:
import cv2
from urllib.request import urlopen
import numpy as np

stream = urlopen('http://192.168.4.133:80/video_feed')
bytes = b''  # on Python 3 this must be a bytes object, not a str
while True:
    bytes += stream.read(1024)
    a = bytes.find(b'\xff\xd8')
    b = bytes.find(b'\xff\xd9')
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('Video', img)
        if cv2.waitKey(1) == 27:
            exit(0)
You can use RTSP instead of a direct video feed.
Every IP camera has RTSP for streaming live video,
so you can use the RTSP link instead of the video feed.
If using Python 3, you will probably need to use a bytearray instead of a string (modifying the current top answer):

import cv2
import urllib.request
import numpy as np

with urllib.request.urlopen('http://192.168.100.128:5000/video_feed') as stream:
    bytes = bytearray()
    while True:
        bytes += stream.read(1024)
        a = bytes.find(b'\xff\xd8')
        b = bytes.find(b'\xff\xd9')
        if a != -1 and b != -1:
            jpg = bytes[a:b+2]
            bytes = bytes[b+2:]
            img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow('Video', img)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    cv2.destroyAllWindows()

Cannot read video output

I am using the example for background subtraction. It works well, but the video output is unreadable. My video is in gray, so that might be the cause of the problem. I couldn't find much information on how to work with the different VideoWriter_fourcc and VideoWriter parameters. I know that the video is 256x320, uint8.
import numpy as np
import cv2

# MOG2 background subtractor
cap = cv2.VideoCapture('videotest.avi')
fgbg = cv2.createBackgroundSubtractorMOG2()

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (256, 320))

while(cap.isOpened()):
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    if ret == True:
        cv2.imshow('frame', fgmask)
        out.write(fgmask)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break
    else:
        break
cap.release()
out.release()
cv2.destroyAllWindows()
Using:

fourcc = cv2.VideoWriter_fourcc(*'XVID')

works if you write the video as is. In this case, I am trying to write the video with background subtraction applied. The fix is:

fourcc = cv2.VideoWriter_fourcc(*'DIB ')

Note: do not forget the space after DIB. I am using Python 3.5 and OpenCV 3.1.