use OpenCV video in apps - python-3.x

I wish to create a program that gets input from two different webcams. I then wish to use one of these videos as output, so that e.g. Google Meet or Zoom can show it, and pressing 's' should switch between the two videos. That part I can do myself: what I need is a 'switch' command which switches between the videos. I am finding no way for one of these applications to receive the video.
I am using Python 3.7 (Anaconda).
Here is my code (I got it from https://docs.opencv.org/master/dd/d43/tutorial_py_video_display.html):
    import cv2 as cv

    cap = cv.VideoCapture(0)
    cap2 = cv.VideoCapture(1)
    while True:
        ret, frame = cap.read()
        ret2, frame2 = cap2.read()
        # if frame is read correctly ret is True
        if not ret:
            print("Can't receive frame (stream end?). Exiting ...")
            break
        if not ret2:
            print("Can't receive frame2 (stream end?). Exiting ...")
            break
        # Display the resulting frame
        cv.imshow('frame', frame)
        cv.imshow('frame2', frame2)
        if cv.waitKey(1) == ord('q'):
            print("Exiting...")
            break
    cap.release()
    cap2.release()  # the second capture must be released too
    cv.destroyAllWindows()

So I accidentally found a solution:
Download OBS: it creates a virtual video camera.
Download (pip install) pyvirtualcam, which lets you drive the OBS virtual camera from Python.
You can now send pictures to the camera using the module:

    cam = pyvirtualcam.Camera(width=int(capture.get(3)), height=int(capture.get(4)), fps=30)
    cam.send(<Photo>)

Also, if you use a while loop, put this at the end of each iteration:

    cam.sleep_until_next_frame()
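Putting the two parts together, a minimal sketch of the switching loop might look like this. It assumes two webcams at indices 0 and 1 and the OBS virtual camera installed; `toggle` and `main` are helper names I made up, not part of either library:

```python
def toggle(index):
    """Flip between camera index 0 and 1 on each 's' press."""
    return 1 - index

def main():
    import cv2 as cv
    import pyvirtualcam  # pip install pyvirtualcam; needs the OBS virtual cam

    caps = [cv.VideoCapture(0), cv.VideoCapture(1)]
    active = 0
    width = int(caps[0].get(cv.CAP_PROP_FRAME_WIDTH))
    height = int(caps[0].get(cv.CAP_PROP_FRAME_HEIGHT))
    with pyvirtualcam.Camera(width=width, height=height, fps=30) as cam:
        while True:
            reads = [c.read() for c in caps]
            if not all(ok for ok, _ in reads):
                break
            frame = reads[active][1]
            cv.imshow('preview', frame)
            # pyvirtualcam expects RGB frames, OpenCV delivers BGR
            cam.send(cv.cvtColor(frame, cv.COLOR_BGR2RGB))
            cam.sleep_until_next_frame()
            key = cv.waitKey(1)
            if key == ord('s'):
                active = toggle(active)
            elif key == ord('q'):
                break
    for c in caps:
        c.release()
    cv.destroyAllWindows()
```

Call `main()` to run it; Meet or Zoom should then see the "OBS Virtual Camera" device showing whichever webcam is active.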

Related

Getting None as output while reading the frames of a video using cv2.VideoCapture.read()

I am trying to read all the frames of a video in Opencv, but after a certain point it is giving me "None" as a result, due to which I am not able to perform the processing tasks which I plan to do.
    import time
    import numpy as np
    import cv2

    # Create our body classifier
    body_classifier = cv2.CascadeClassifier('dataset/haarcascade_fullbody.xml')
    # Initiate video capture for video file
    cap = cv2.VideoCapture('dataset/ped.mp4')
    # Loop once video is successfully loaded
    while cap.isOpened():
        time.sleep(0.05)
        # Read first frame
        ret, frame = cap.read()
        print(frame)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
The error I am getting comes from the cv2.cvtColor call once frame is None (the screenshot of the traceback is not reproduced here).
Some points I would like to clarify first: the path to the video is correct, and I have also tried changing the time.sleep delay and the speed of the video, but nothing worked and I got the same error again.
Can anyone please tell me the reason behind the None value and how it can be resolved?
I am adding the link of the video below:
https://drive.google.com/file/d/1HtNrm5rI9rtMJRqoSrqc5d1EFmTX5IXS/view?usp=sharing
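The None frames are expected: once the video file ends, cap.read() returns (False, None), and passing that None frame to cv2.cvtColor raises the error. A minimal sketch of the corrected loop, with the read loop pulled into a helper so it can be tested without the video file (`process_video` is a name I chose, not part of OpenCV):

```python
import time

def process_video(cap, handle_frame, delay=0.05):
    """Same loop as in the question, but bail out as soon as read() fails
    instead of handing a None frame to cv2.cvtColor."""
    while cap.isOpened():
        time.sleep(delay)
        ret, frame = cap.read()
        if not ret:  # end of the file reached: frame is None from here on
            break
        handle_frame(frame)
    cap.release()
```

Used with the question's setup, this would be `process_video(cv2.VideoCapture('dataset/ped.mp4'), lambda f: detect(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)))`, where `detect` is whatever the body classifier does per frame.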

Creating cv2.VideoCapture() object directly from numpy array image data

The purpose is to take data from a virtual camera (a camera in a Gazebo simulation, updating every second) and use Detectron2 (which expects its data to come from cv2.VideoCapture) to recognize other objects in the simulation. The virtual camera of course does not appear in lspci, so I can't simply use cv2.VideoCapture(0).
So my code is
    bridge = CvBridge()
    cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8')  # cv_image is numpy.ndarray, size (100, 100, 3)
    cap = cv2.VideoCapture()
    ret, frame = cap.read(image=cv_image)
    print(ret, frame)
but it just prints False None, I assume because nothing is being captured in cap. If I replace the cv2.VideoCapture() line with cap = cv2.VideoCapture(cv_image), I get the error

    TypeError: only size-1 arrays can be converted to Python scalars

since I believe it requires either an integer (representing the webcam index) or a string (representing a video file path).
And for reference,
    cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8')  # cv_image is numpy.ndarray
    cv2.imshow('image', cv_image)
    cv2.waitKey(1)
displays the image perfectly fine. Could there be a way to use imshow() or something similar as input for VideoCapture()?
However, cap = cv2.VideoCapture(cv2.imshow('image', cv_image)) opens a blank window and gives me
    [ERROR:0] global /io/opencv/modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
    OpenCV(4.2.0) /io/opencv/modules/videoio/src/cap_images.cpp:293: error: (-215:Assertion failed) !_filename.empty() in function 'open'
How can I create a cv2.VideoCapture() object that can use the image data that I have? Or what's something that might point me in the right direction?
Ubuntu 18.04 and Python 3.6 with opencv-python 4.2.0.34
From what I found on the Gazebo tutorials page:

In Rviz, add a "Camera" display and under "Image Topic" set it to /rrbot/camera1/image_raw.

In your case the name probably won't be /rrbot/camera1, but whatever you are setting in your .gazebo file:

    <cameraName>rrbot/camera1</cameraName>
    <imageTopicName>image_raw</imageTopicName>
    <cameraInfoTopicName>camera_info</cameraInfoTopicName>

So you can create a subscriber and process every single image from that topic.
My solution was to rewrite the handling of Detectron2's --input flag in the demo so that it constantly runs a ROS2 callback with demo.run_on_image(cv_data). Instead of processing a video, it just quickly processes each new image, one at a time. This is a workaround so that cv2.VideoCapture() is not needed.
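That workaround could be sketched roughly like this. `make_callback` is a hypothetical helper name, and `run_on_image` stands in for Detectron2's demo.run_on_image; only `imgmsg_to_cv2` is the real cv_bridge call from the question:

```python
def make_callback(bridge, run_on_image, results):
    """Build a ROS subscriber callback that feeds every incoming image
    message straight to the model, with no VideoCapture in between."""
    def callback(msg):
        cv_image = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        results.append(run_on_image(cv_image))
    return callback
```

In an rclpy node this would be registered with something like `node.create_subscription(Image, '/rrbot/camera1/image_raw', make_callback(CvBridge(), demo.run_on_image, results), 10)`, with the topic name taken from your .gazebo file.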

Camera stays on after object is destroyed, how to turn off the camera LED in OpenCV 4.1.2.30?

The LED of my camera does not turn off even when the process is finished. I simply created a function to capture an image, after which the camera should be turned off, but that is not happening.
I have even tried the .release() function and the .VideoCaptureRelease() function, but all went in vain.
The Python version I'm using is 3.6.9, on Linux (Ubuntu 18.04), with the PyCharm 19.3.2 IDE. On top of that, the OpenCV version is 4.1.2.30.
The problem did not occur in OpenCV 4.1.0.25!
Anyhow, in the latest version of OpenCV, out of the blue, the LED stays permanently on after using the camera. Here is the code for my small task:
    from cv2 import *
    import os

    class Camera:
        def capture_pic():
            cam = VideoCapture(0)
            s, img = cam.read()
            if s:
                namedWindow("cam-test", flags=WINDOW_AUTOSIZE)
                imshow("cam-test", img)
                waitKey(0)
                destroyWindow("cam-test")
                imwrite("test_pic.jpg", img)  # save image
                imshow('test_pic.jpg', img)
                waitKey(0)
                destroyAllWindows()
            cam.release()  # Used but no results

    Camera.capture_pic()
Any suggestions or help would be appreciated.
Thanks in advance
This issue was first reported here and it seems to be caused by a problem in the MSMF capture backend.
Some people report that a temporary fix is to set the following environment variable to 0 before running the script:
export OPENCV_VIDEOIO_PRIORITY_MSMF=0
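The same fix can be applied from inside the script, provided it runs before cv2 is first imported (once cv2 is imported, the MSMF backend has already been selected):

```python
import os

# Equivalent of the shell export above; must execute before `import cv2`.
os.environ['OPENCV_VIDEOIO_PRIORITY_MSMF'] = '0'
```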
You could release the cam after your if statement, and just after that enter an infinite while loop to keep the OpenCV window open. Additionally, you could add a conditional with a waitKey to break the loop and then close the window.
    from cv2 import *
    import os

    class Camera:
        def capture_pic():
            cam = VideoCapture(0)
            s, img = cam.read()
            if s:
                namedWindow("cam-test", flags=WINDOW_AUTOSIZE)
                imshow("cam-test", img)
                destroyWindow("cam-test")
                imwrite("test_pic.jpg", img)  # save image
                imshow('test_pic.jpg', img)
            cam.release()  # release the cam just after showing your image
            while True:
                if waitKey(1) & 0xFF == ord('q'):
                    destroyAllWindows()
                    break

    Camera.capture_pic()

Raspberry Pi use Webcam instead of PiCam

Here is the code that initializes the Raspberry Pi camera, from the pyimagesearch blog. I want to add a webcam that captures frame by frame in the same way:
    camera = PiCamera()
    camera.resolution = tuple(conf["resolution"])
    camera.framerate = conf["fps"]
    rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))

    for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
        # grab the raw NumPy array representing the image and initialize
        # the timestamp and occupied/unoccupied text
        frame = f.array
This is the part that continuously captures frames from the PiCamera. The problem is that I want to read frame by frame from a webcam too. I somehow got it working once, but accidentally lost the code, and now I don't remember what I did. I had it working with just about 2-3 extra lines. Please help me recreate it if you can. Thank you.
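I can't know which 2-3 lines the lost code used, but one plausible reconstruction is a small generator (`webcam_frames` is a name I made up) that mimics camera.capture_continuous() for a cv2.VideoCapture webcam, so the loop body above can be reused unchanged:

```python
def webcam_frames(cap):
    """Stand-in for camera.capture_continuous(): yields BGR NumPy arrays,
    which is exactly what f.array already was in the PiCamera loop."""
    while True:
        grabbed, frame = cap.read()
        if not grabbed:
            break
        yield frame
```

Usage would be `for frame in webcam_frames(cv2.VideoCapture(0)):` followed by the same motion-detection body, since cv2 already returns frames in BGR order.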

Multiple Images in EasyGui

Basically, I want to create a type of quiz in Python 3.4 with EasyGui, using multiple images on the button boxes.
Here is how I'd imagine it working:
    import easygui as eg

    # A welcome message
    eg.msgbox("Welcome to the quiz", "Quiz!")

    # A short splash screen; this could be looped
    Finish = "Start"
    while Finish == "Start":
        Finish = eg.buttonbox("Do you want to start the quiz or quit?", "Welcome", ["Start", "Quit"])
        if Finish == "Quit":
            break

        # Question 1
        image = "mickey.gif"
        choices = ["Mickey", "Minnie", "Daffy Duck", "Dave"]
        reply = eg.buttonbox("Who is this?", image=image, choices=choices)
        if reply == "Mickey":
            eg.msgbox("Well done!", "Correct")
        else:
            eg.msgbox("Wrong", "Failure")
This works, but if I change the buttonbox line to

    reply = eg.buttonbox("Who is this?", image=[image, image2, image3, image4], choices=choices)

it doesn't seem to work. Does anyone know if you can have more than one image per buttonbox?
As of the current version of easygui, you can't have multiple images, only one image.
You could either:
use an external tool to create one big merged image out of several smaller images;
try to make the necessary changes directly inside easygui.py (it's all in one single file), if you have knowledge of tkinter;
or help/contact Robert Lugg, as he works on an improved version of easygui: https://github.com/robertlugg/easygui
    allpic = ("image", "image2", "image3")
    reply = eg.buttonbox("Who is this?", image=allpic, choices=choices)
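For the first option, the merging can be done in Python with Pillow (pip install pillow) rather than an external tool; `merge_horizontally` is a hypothetical helper name, not an easygui function:

```python
from PIL import Image

def merge_horizontally(paths, out_path):
    """Paste several image files side by side into one image that a single
    easygui buttonbox can then display."""
    images = [Image.open(p) for p in paths]
    total_width = sum(im.width for im in images)
    max_height = max(im.height for im in images)
    merged = Image.new('RGB', (total_width, max_height), 'white')
    x = 0
    for im in images:
        merged.paste(im, (x, 0))
        x += im.width
    merged.save(out_path)
    return out_path
```

You would then pass the merged file as the single `image` argument, e.g. `eg.buttonbox("Who is this?", image=merge_horizontally(["mickey.gif", "minnie.gif"], "merged.png"), choices=choices)`.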
