Here is the code that initializes the Raspberry Pi camera, taken from the pyimagesearch blog. I want to add a webcam and capture frames from it as well:
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = tuple(conf["resolution"])
camera.framerate = conf["fps"]
rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))
for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image and initialize
    # the timestamp and occupied/unoccupied text
    frame = f.array
This is the part that continuously captures frames from the PiCamera. The problem is that I want to read frame by frame from a webcam too. I had it working at one point, but I accidentally lost the code and no longer remember what I did. It only took about 2-3 extra lines. Please help me recover it if you can. Thank you.
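For what it's worth, here is a minimal sketch of what those extra lines likely looked like, assuming the webcam is exposed as video device 0 and OpenCV is available as cv2 (the device index and variable names are assumptions):

import cv2

webcam = cv2.VideoCapture(0)  # assumption: the webcam is video device 0

for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = f.array                    # frame from the Pi camera
    ret, webcam_frame = webcam.read()  # frame from the webcam
    if not ret:
        break  # the webcam stream ended or could not be read
    # ... process both frames here ...
    rawCapture.truncate(0)  # clear the stream before the next Pi camera frame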
I am trying to read all the frames of a video in OpenCV, but after a certain point it gives me None as a result, so I cannot perform the processing tasks that I plan to do.
import time
import numpy as np
import cv2
# Create our body classifier
body_classifier = cv2.CascadeClassifier('dataset/haarcascade_fullbody.xml')
# Initiate video capture for video file
cap = cv2.VideoCapture('dataset/ped.mp4')
# Loop once video is successfully loaded
while cap.isOpened():
    time.sleep(0.05)
    # Read the next frame
    ret, frame = cap.read()
    print(frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
The error I am getting was attached as a screenshot (not reproduced here); it is raised by cv2.cvtColor once frame becomes None.
Some points I would like to clarify first:
The path to the video is correct, and I have also tried changing the time.sleep value and the speed of the video, but nothing worked and I got the same error again.
Can anyone please tell me the reason behind the None value and how it can be resolved?
I am adding a link to the video below:
https://drive.google.com/file/d/1HtNrm5rI9rtMJRqoSrqc5d1EFmTX5IXS/view?usp=sharing
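For what it's worth, cv2.VideoCapture.read() returns (False, None) once the last frame of a file has been decoded, so the None values are expected at the end of the video rather than a sign of a bad path. A minimal guard (a sketch built from the code above) looks like this:

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # no more frames: this is where the None values come from
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... detection with body_classifier goes here ...

cap.release()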
I wish to create a program that takes input from two different webcams. I then wish to use one of these video streams as output so that e.g. Google Meet or Zoom can display it. When I press "s" it should switch between the two streams. This part I can do myself.
What I need is a 'switch' command that switches between the videos.
I can find no way for one of these applications to receive the video.
I am using Python 3.7 (anaconda)
Here is my code (adapted from https://docs.opencv.org/master/dd/d43/tutorial_py_video_display.html):
import cv2 as cv
cap = cv.VideoCapture(0)
cap2 = cv.VideoCapture(1)
while True:
    ret, frame = cap.read()
    ret2, frame2 = cap2.read()
    # if frame is read correctly ret is True
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    if not ret2:
        print("Can't receive frame2 (stream end?). Exiting ...")
        break
    # Display the resulting frame
    cv.imshow('frame', frame)
    cv.imshow('frame2', frame2)
    if cv.waitKey(1) == ord('q'):
        print("Exiting...")
        break
cap.release()
cap2.release()
cv.destroyAllWindows()
So I accidentally found a solution:
Download OBS; it creates a virtual video camera.
Install pyvirtualcam via pip (it lets you drive the OBS virtual camera).
You can now send frames to the virtual camera using the module:
cam = pyvirtualcam.Camera(width=int(capture.get(3)), height=int(capture.get(4)), fps=30)
cam.send(frame)  # frame: an RGB image (numpy array) matching the declared size
Also, if you use a while loop, make sure to put this call at the end of each iteration:
cam.sleep_until_next_frame()
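Putting the pieces together, here is a minimal sketch of the switching loop (assumptions: the cameras are devices 0 and 1, pyvirtualcam's default RGB frame format, and a small preview window so that cv.waitKey can receive key presses):

import cv2 as cv
import pyvirtualcam

cap = cv.VideoCapture(0)
cap2 = cv.VideoCapture(1)
width, height = int(cap.get(3)), int(cap.get(4))

with pyvirtualcam.Camera(width=width, height=height, fps=30) as cam:
    active = cap  # the capture currently forwarded to the virtual camera
    while True:
        ret, frame = active.read()
        if not ret:
            break
        frame = cv.resize(frame, (width, height))  # both sources must match cam's size
        cam.send(cv.cvtColor(frame, cv.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()
        cv.imshow('preview', frame)  # a window is needed so waitKey gets key events
        key = cv.waitKey(1)
        if key == ord('s'):  # switch between the two sources
            active = cap2 if active is cap else cap
        elif key == ord('q'):
            break

cap.release()
cap2.release()
cv.destroyAllWindows()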
The purpose is to take data from a virtual camera (a camera in a Gazebo simulation, updating every second) and use Detectron2 (which expects data to come from cv2.VideoCapture) to recognize objects in the simulation. The virtual camera of course does not appear in lspci, so I can't simply use cv2.VideoCapture(0).
So my code is
import cv2
from cv_bridge import CvBridge

bridge = CvBridge()
cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8') #cv_image is numpy.ndarray, size (100,100,3)
cap = cv2.VideoCapture()
ret, frame = cap.read(image=cv_image)
print(ret, frame)
but it just prints False None, I assume because nothing is being captured in cap. If I replace the cap = cv2.VideoCapture() line with cap = cv2.VideoCapture(cv_image), I get the error
TypeError: only size-1 arrays can be converted to Python scalars
since I believe it requires either an integer (a webcam index) or a string (a video file path).
And for reference,
cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8') # cv_image is numpy.ndarray
cv2.imshow('image', cv_image)
cv2.waitKey(1)
displays the image perfectly fine. Could there be a way to use imshow() or something similar as input for VideoCapture()?
However, cap = cv2.VideoCapture(cv2.imshow('image', cv_image)) opens a blank window and gives me:
[ERROR:0] global /io/opencv/modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.2.0) /io/opencv/modules/videoio/src/cap_images.cpp:293: error: (-215:Assertion failed) !_filename.empty() in function 'open'
How can I create a cv2.VideoCapture() object that can use the image data that I have? Or what's something that might point me in the right direction?
Ubuntu 18.04 and Python 3.6 with opencv-python 4.2.0.34
From what I found on the Gazebo tutorials page:
In Rviz, add a "Camera" display and under "Image Topic" set it to /rrbot/camera1/image_raw.
In your case it probably won't be the /rrbot/camera1/ name, but the one you set in the .gazebo file:
<cameraName>rrbot/camera1</cameraName>
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>camera_info</cameraInfoTopicName>
So you can create a subscriber and process every single image from that topic directly, without going through cv2.VideoCapture().
My solution was to rewrite the handling of Detectron2's --input flag in the demo so that it constantly runs a ROS2 callback with demo.run_on_image(cv_data). Instead of processing a video, it just processes each new image one at a time. This is a workaround so that cv2.VideoCapture() is not needed.
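A rough sketch of what such a callback could look like as an rclpy node (the topic name is hypothetical, and demo is assumed to be a VisualizationDemo instance from Detectron2's demo code, whose run_on_image returns predictions plus a visualization):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class DetectronSubscriber(Node):
    def __init__(self, demo):
        super().__init__('detectron_subscriber')
        self.bridge = CvBridge()
        self.demo = demo  # assumed: Detectron2's VisualizationDemo
        self.create_subscription(
            Image, '/rrbot/camera1/image_raw',  # hypothetical topic name
            self.on_image, 10)

    def on_image(self, msg):
        # convert the ROS image to a BGR numpy array, then run Detectron2 on it
        cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        predictions, vis_output = self.demo.run_on_image(cv_image)
        self.get_logger().info(str(predictions))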
I am working on a project that requires sound snippets to be played from MP3 files in a playlist. The files are full songs.
I have tried the pygame mixer, and I can pass the start time of the file, but I cannot pass the end time at which I want the music to stop, nor can I fade in and fade out the current snippet.
I have looked at the vlc and ffmpeg libraries, but I do not see the functionality I am looking for.
I'm hoping someone may be aware of a library out there that may be able to do what I am trying to accomplish.
I finally figured out how to do exactly what I wanted to do!
In the spirit of helping others I am posting an answer to my own question.
My development environment:
Mac OS Mojave 10.14.6
Python 3.7.4
PyAudio 0.2.11
PyDub 0.23.1
Here it is in its most rudimentary form:
import pyaudio
from pydub import AudioSegment
# Assign an mp3 source file to the PyDub AudioSegment
mp3 = AudioSegment.from_mp3("path_to_your_mp3_file")
# Specify starting and ending offsets from the beginning of the stream,
# then apply a fade-in and fade-out. All values are in milliseconds (seconds * 1000).
mp3 = mp3[43000:58000].fade_in(2000).fade_out(2000)
# In the above example the music will start 43 seconds into the track with a 2 second
# fade-in, and only play for 15 seconds with a 2 second fade-out. If you don't need
# these features, just comment out the line and the full mp3 will play.
# Assign the PyAudio player
player = pyaudio.PyAudio()
# Create the stream from the chosen mp3 file
stream = player.open(format=player.get_format_from_width(mp3.sample_width),
                     channels=mp3.channels,
                     rate=mp3.frame_rate,
                     output=True)
data = mp3.raw_data
while data:
    stream.write(data)
    data = 0
stream.close()
player.terminate()
It isn't in the example above, but there is a way to process the stream and increase/decrease/mute the volume of the music.
One other thing that could be done is to set up a thread to pause the processing (writing) of the stream, which would emulate a pause button in a player.
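Building on the example above (with mp3 and stream as already defined), here is a rough sketch of both ideas. The volume changes use pydub's dB gain operators, which are part of its API; the paused flag and the thread that would toggle it are hypothetical:

import time

louder = mp3 + 6    # increase volume by 6 dB
quieter = mp3 - 6   # decrease volume by 6 dB
muted = mp3 - 120   # effectively mute

CHUNK = 1024
paused = False      # hypothetical flag a separate thread could toggle
data = mp3.raw_data
for i in range(0, len(data), CHUNK):
    while paused:
        time.sleep(0.05)  # emulates a pause button between chunks
    stream.write(data[i:i + CHUNK])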
I've been making video flipbooks by stitching together individual PNG frames. At first I was using FFmpeg; I just got OpenCV working.
However, every time I make one, I have to manually choose the encoder to use.
The window has the title "Video Compression", and its about box says the codec is Intel Indeo Video iYUV 2.0.
Can I specify this somewhere in the Python call to OpenCV so I don't have to choose every time?
Here is the code that makes the video. Note that I'm resizing the frames, as the source frames were different sizes.
import os
import cv2 as cv

video = cv.VideoWriter(outfile, -1, 30, (width, height), False)
for image in images:
    cvimage = cv.imread(os.path.join(png_working_folder, image))
    resized = cv.resize(cvimage, (800,800))
    video.write(resized)
I found one solution: I wasn't using a fourcc code previously, and adding it to my call got me over this hump.
fourcc = cv.VideoWriter_fourcc(*'XVID')
video = cv.VideoWriter(outfile, fourcc, 30, (width, height))
Passing an explicit fourcc code instead of -1 is what stops the codec-selection dialog from appearing.
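One more detail worth noting, not stated above: the size passed to VideoWriter must match the frames actually written, or frames can be silently dropped. A sketch folding that into the flipbook loop (reusing outfile, images, and png_working_folder from the question, with the 800x800 size chosen there):

import os
import cv2 as cv

fourcc = cv.VideoWriter_fourcc(*'XVID')  # explicit codec: no dialog pops up
size = (800, 800)                        # must equal the size of every written frame
video = cv.VideoWriter(outfile, fourcc, 30, size)
for image in images:
    cvimage = cv.imread(os.path.join(png_working_folder, image))
    video.write(cv.resize(cvimage, size))  # resize to the declared output size
video.release()                            # finalize the file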