Can OpenCV's VideoWriter write in a separate process? - python-3.x

I'm trying to save a video to disk in a separate process. The program creates a buffer of images to save on the original process. When it's done recording, it passes the file name and image buffer to a second process that creates its own VideoWriter and saves the file. When the second process calls write, however, nothing happens: it hangs and doesn't output any errors.
I checked whether the VideoWriter is already open, and it is. I tried moving the code to the original process to see if it worked there, and it does. I don't know if there is some setting I need to initialize in the new process or if it has to do with the way VideoWriter works.
Here's my code:
def stop_recording(self):
    """Stops recording in a separate process"""
    if self._file_dump_process is None:
        self._parent_conn, child_conn = multiprocessing.Pipe()
        self._file_dump_process = multiprocessing.Process(
            target=self.file_dump_loop, args=(child_conn, self.__log))
        self._file_dump_process.daemon = True
        self._file_dump_process.start()
    if self._recording:
        self.__log.info("Stopping recording. Please wait...")
        # Dump the record file name and image buffer to the worker process
        # Comment out when running on the main process
        self._parent_conn.send([self._record_filename, self._img_buffer])
        """ Comment in when running on the main process
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        effective_fps = 16.0
        frame_shape = (640, 480)
        record_file = cv2.VideoWriter(self._record_filename, fourcc,
                                      effective_fps, frame_shape,
                                      isColor=1)
        for img in self._img_buffer:
            self.__log.info("...still here...")
            record_file.write(img)
        # Close the file and set it to None
        record_file.release()
        self.__log.info("done.")
        """
        # Delete the entire image buffer no matter what
        del self._img_buffer[:]
        self._recording = False

@staticmethod
def file_dump_loop(child_conn, parent_log):
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    effective_fps = 16.0
    frame_shape = (640, 480)
    while True:
        msg = child_conn.recv()
        record_filename = msg[0]
        img_buffer = msg[1]
        record_file = cv2.VideoWriter(record_filename, fourcc,
                                      effective_fps, frame_shape,
                                      isColor=1)
        for img in img_buffer:
            parent_log.info("...still here...")
            record_file.write(img)
        # Close the file and set it to None
        record_file.release()
        del img_buffer[:]
        parent_log.info("done.")
Here's the log output when I run it on one process:
2019-03-29 16:19:02,469 - image_processor.stop_recording - INFO: Stopping recording. Please wait...
2019-03-29 16:19:02,473 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,515 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,541 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,567 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,592 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,617 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,642 - image_processor.stop_recording - INFO: ...still here...
2019-03-29 16:19:02,670 - image_processor.stop_recording - INFO: done.
Here's the log output when I run it on a second process:
2019-03-29 16:17:27,299 - image_processor.stop_recording - INFO: Stopping recording. Please wait...
2019-03-29 16:17:27,534 - image_processor.file_dump_loop - INFO: ...still here...

I tried this, and was successful with the following code:
import cv2

cap, imgs = cv2.VideoCapture('exampleVideo.MP4'), []

# This function writes video
def write_video(list_of_images):
    vid_writer = cv2.VideoWriter('/home/stephen/Desktop/re_encode.avi',
                                 cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                                 120, (640, 480))
    for image in list_of_images:
        vid_writer.write(image)

# Loop to read video and save images to a list
for frame in range(123):
    _, img = cap.read()
    imgs.append(img)
write_video(imgs)
cap.release()
Everything worked as expected. When I checked how long it took to run, I found that the above code took 0.13 seconds to read the video and 0.43 seconds to write it. If I read and write the video in the same loop (below), the total processing time is 0.56 seconds (which is 0.13 + 0.43).
# Loop to save image to video
for frame in range(123):
    _, img = cap.read()
    vid_writer.write(img)
There is a big disadvantage to writing the images to a buffer in memory first and only then writing them to a video file on disk: the buffer lives in RAM, which can fill up very quickly, and you will likely get a memory error.
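That said, the pattern from the original question (a worker process that owns its own VideoWriter) does work in a minimal setup. Below is a sketch rather than the asker's actual code: it assumes 640x480 BGR uint8 frames, an MJPG/AVI output file, and it uses a multiprocessing.Queue instead of a Pipe so frames can be handed over one at a time.

import multiprocessing

import cv2
import numpy as np

def writer_proc(queue, filename, fps, frame_size):
    """Worker process: owns its own VideoWriter and drains the queue."""
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    out = cv2.VideoWriter(filename, fourcc, fps, frame_size, isColor=True)
    while True:
        frame = queue.get()
        if frame is None:  # sentinel: no more frames
            break
        out.write(frame)
    out.release()

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(
        target=writer_proc,
        args=(queue, "re_encode.avi", 16.0, (640, 480)))
    proc.start()

    # Stand-in frames; in the real program these would come from the capture loop.
    for _ in range(100):
        queue.put(np.zeros((480, 640, 3), dtype=np.uint8))
    queue.put(None)  # tell the worker to finish
    proc.join()

Handing frames over one at a time also keeps memory bounded, which addresses the RAM concern above. If the worker still appears to hang, it is worth confirming that every buffered frame really matches the size and dtype declared to the VideoWriter, since OpenCV tends to drop mismatched frames without raising an error.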

Related

Converting video to images using OpenCV library problem

I have this code, which converts .mov videos to images at the specified frequency (e.g. every 30 seconds here).
import cv2

# the path to take the video from
vidcap = cv2.VideoCapture(r"C:\Users\me\Camera_videos\Images_30sec\PIR-206_7.MOV")

def getFrame(sec):
    vidcap.set(cv2.CAP_PROP_POS_MSEC, sec*1000)
    hasFrames, image = vidcap.read()
    if hasFrames:
        cv2.imwrite("image" + str(count) + ".jpg", image)  # save frame as JPG file
    return hasFrames

sec = 0
frameRate = 30  # it will capture an image every 30 seconds
count = 1
success = getFrame(sec)
while success:
    count = count + 1
    sec = sec + frameRate
    sec = round(sec, 2)
    success = getFrame(sec)
I have no problem with smaller files. A 5-minute .mov file, for example, produces 11 images as expected (5 x 60 seconds / 30 seconds = about 10 images, with the first image taken at 0 seconds).
However, when I tried a bigger file, which is 483 MB and about 32 minutes long, I ran into a problem.
It should generate roughly 32 x 60 / 30 = 64 images.
Instead, it runs and runs, generating some 40,000 images until I stop the program manually. It seems to get stuck on one of the last images.
I have uploaded both .mov files to my google drive, if anyone wants to have a look.
small file
https://drive.google.com/file/d/1guKtLgM-vwt-5fG3_suJrhVbtwMSjMQe/view?usp=sharing
large file
https://drive.google.com/file/d/1V_HVRM29qwlsU0vCyWiOuBP-tkjdokul/view?usp=sharing
Can somebody advise on what's going on here?
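One likely explanation (an assumption on my part; I have not tested these exact files) is that once vidcap.set(cv2.CAP_PROP_POS_MSEC, ...) seeks past the end of a long file, read() does not reliably return False with every backend, so the while success loop never terminates and keeps re-saving a late frame. Bounding the loop by the reported duration avoids relying on that behaviour; a minimal sketch:

import cv2

vidcap = cv2.VideoCapture(r"C:\Users\me\Camera_videos\Images_30sec\PIR-206_7.MOV")

# Work out the duration from the container metadata (may be approximate).
fps = vidcap.get(cv2.CAP_PROP_FPS)
frame_count = vidcap.get(cv2.CAP_PROP_FRAME_COUNT)
duration_sec = frame_count / fps if fps else 0

sec, count, frame_rate = 0, 1, 30
while sec <= duration_sec:
    vidcap.set(cv2.CAP_PROP_POS_MSEC, sec * 1000)
    has_frames, image = vidcap.read()
    if not has_frames:
        break
    cv2.imwrite("image" + str(count) + ".jpg", image)
    count += 1
    sec += frame_rate
vidcap.release()

CAP_PROP_FRAME_COUNT and CAP_PROP_FPS come from the container metadata and can be slightly off, but they are good enough to put an upper bound on the loop.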

Is there a way to check the volume level of all processes with pipewire/pulseaudio?

I'm trying to find a way to check whether I have any desktop audio AND which processes are producing sound.
After some searching I found a way to list all the sink inputs in pipewire/pulseaudio using pactl list sink-inputs, however I have no idea whether a given input is muted or not.
Example output:
Sink Input #512
        Driver: protocol-native.c
        Owner Module: 9
        Client: 795
        Sink: 1
        Sample Specification: float32le 2ch 48000Hz
        Channel Map: front-left,front-right
        Format: pcm, format.sample_format = "\"float32le\"" format.rate = "48000" format.channels = "2" format.channel_map = "\"front-left,front-right\""
        Corked: yes
        Mute: no
        Volume: front-left: 43565 / 66% / -10.64 dB, front-right: 43565 / 66% / -10.64 dB
                balance 0.00
        Buffer Latency: 165979 usec
        Sink Latency: 75770 usec
        Resample method: speex-float-1
        Properties:
                media.name = "Polish cow (English Lyrics Full Version) - YouTube"
                application.name = "Firefox"
                native-protocol.peer = "UNIX socket client"
                native-protocol.version = "35"
                application.process.id = "612271"
                application.process.user = "user"
                application.process.host = "host"
                application.process.binary = "firefox"
                application.language = "en_US.UTF-8"
                window.x11.display = ":0"
                application.process.machine_id = "93e71eeba04e43789f0972b7ea0e4b39"
                application.process.session_id = "2"
                application.icon_name = "firefox"
                module-stream-restore.id = "sink-input-by-application-name:Firefox"
The obvious thing would be to look at the Mute and Volume lines, but that is not reliable at all: right now the YouTube video is paused, yet Mute is shown as no and Volume is no different from when the video is actually playing.
I need the solution to be scriptable, since I'll be muting certain things from a bash script whenever another process is making sound, and unmuting them again when there is no sound. If this is not possible with pipewire/pulseaudio but is possible with another sound server, then please do tell me.
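Not a complete answer, but one thing stands out in the output above: the Corked: field tracks whether the stream is actually running (the paused YouTube tab shows Corked: yes), so combining Corked and Mute is usually a better signal than Volume. A minimal parsing sketch in Python (the same grepping can be done in bash; it assumes the pactl output format shown above):

import subprocess

def active_sink_inputs():
    """Return (index, application.name) pairs for sink inputs that are
    neither corked (paused) nor muted, i.e. likely producing sound."""
    out = subprocess.run(["pactl", "list", "sink-inputs"],
                         capture_output=True, text=True, check=True).stdout
    active = []
    for block in out.split("Sink Input #")[1:]:
        index = block.split()[0]
        corked = "Corked: yes" in block
        muted = "Mute: yes" in block
        app = "?"
        for line in block.splitlines():
            line = line.strip()
            if line.startswith("application.name"):
                app = line.split("=", 1)[1].strip().strip('"')
        if not corked and not muted:
            active.append((index, app))
    return active

if __name__ == "__main__":
    print(active_sink_inputs())

Note that Corked only says whether the application has paused or stopped the stream; a stream playing digital silence still shows Corked: no, so true silence detection would need level monitoring, which as far as I know pactl alone does not expose.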

Sound activated recording in Julia

I'm recording audio with Julia and want to be able to trigger a 5 second recording after the audio signal exceeds a certain volume. This is my record script so far:
using PortAudio, SampledSignals, LibSndFile, FileIO, Dates
stream = PortAudioStream("HDA Intel PCH: ALC285 Analog (hw:0,0)")
buf = read(stream, 5s)
close(stream)
save(string("recording_", Dates.format(now(), "yyyymmdd_HHMMSS"), ".wav"), buf, Fs = 48000)
I'm new to Julia and signal processing in general. How can I tell it to start recording only once the audio exceeds a specified volume threshold?
You need to test the sound you capture for average amplitude and act on that. Save if loud enough, otherwise rinse and repeat.
using PortAudio, SampledSignals, LibSndFile, FileIO, LinearAlgebra

const hassound = 10  # choose this to fit

suprathreshold(buf, thresh = hassound) = norm(buf) / sqrt(length(buf)) > thresh  # power over threshold

stream = PortAudioStream("HDA Intel PCH: ALC285 Analog (hw:0,0)")
while true
    buf = read(stream, 5s)
    if suprathreshold(buf)
        save("recording.wav", buf, Fs = 48000)  # should really append here maybe???
    end
end
close(stream)  # close once you break out of the loop; closing inside it would break later reads

How to Mimic nRF Connect (for Android) Actions to Pygatt Script?

I'm using nRF Connect for Android to test a BLE peripheral. The peripheral is a BSX Insight residual muscle oxygen monitor whose software application is no longer functional or supported by the manufacturer. Thus, my only option to use my device (BSX) is to write my own control software. I've written a Python 3.7 script that I run within a tkinter routine on my 64-bit Win 10 laptop. Also, I'm using the Pygatt library and a BLED112 BT dongle.
I can connect to the peripheral and read and write characteristic values just fine, but I'm sure that the "conversion" from the process used in nRF Connect to my script is incomplete and inefficient. So the first thing I'd like to confirm is that I'm using the correct functions from Pygatt. Once that's settled, I can compare the respective outputs for the two data (characteristic value) streams that I want to capture and store.
The basic process in nRF Connect:
1. scan
2. select/connect the BSX Insight
3. expose the service and characteristics of interest
4. enable CCCDs
5. write the "start data" values (04-02)
These are the process command results from the nRF Connect log file. Starting with number four:
4.
D 09:04:54.491 gatt.setCharacteristicNotification(00002a37-0000-1000-8000-00805f9b34fb, true) 11
D 09:04:54.496 gatt.setCharacteristicNotification(2e4ee00b-d9f0-5490-ff4b-d17374c433ef, true) 20x
D 09:04:54.499 gatt.setCharacteristicNotification(2e4ee00d-d9f0-5490-ff4b-d17374c433ef, true) 25x
D 09:04:54.516 gatt.setCharacteristicNotification(2e4ee00e-d9f0-5490-ff4b-d17374c433ef, true) 32x
D 09:04:54.519 gatt.setCharacteristicNotification(00002a63-0000-1000-8000-00805f9b34fb, true) 36
D 09:04:54.523 gatt.setCharacteristicNotification(00002a53-0000-1000-8000-00805f9b34fb, true) 40
The above resulted from using the nRF command "Enable CCCDs." Basically every characteristic that could be enabled was enabled, which is fine; the three marked with an 'x' are the ones I need, the others are extra. Note that I've annotated the respective handles for these UUIDs at the end of each line.
5.
V 09:05:39.211 Writing command to characteristic 2e4ee00a-d9f0-5490-ff4b-d17374c433ef
D 09:05:39.211 gatt.writeCharacteristic(2e4ee00a-d9f0-5490-ff4b-d17374c433ef, value=0x0402)
I 09:05:39.214 Data written to 2e4ee00a-d9f0-5490-ff4b-d17374c433ef, value: (0x) 04-02
A 09:05:39.214 "(0x) 04-02" sent
Number five is where I write 0402 to the UUID above. This action starts the data/value streams from:
2e4ee00d-d9f0-5490-ff4b-d17374c433ef, with a descriptor handle 26
2e4ee00e-d9f0-5490-ff4b-d17374c433ef, with a descriptor handle 33
Once I've done the basic steps above in nRF Connect, the two characteristic value streams become active, and I can immediately see the converted values in my Garmin Edge 810 head unit.
So attempting to duplicate the same process within my tkinter snippet:
# this function fires from the 'On' button click event
def powerON():
    powerON_buttonevent = 1
    print(f"\tpowerON_buttonevent OK {powerON_buttonevent}")

    # Connect to the BSX Insight
    try:
        adapter = pygatt.BGAPIBackend()  # serial_port='COM3'
        adapter.start()
        device = adapter.connect('0C:EF:AF:81:0B:76', address_type=pygatt.BLEAddressType.public)
        print(f"\tConnected: {device}")
    except:
        print(f"BSX Insight connection failure")
    finally:
        # adapter.stop()
        pass

    # Enable only these CCCDs
    try:
        device.char_write_handle(21, bytearray([0x01, 0x00]), wait_for_response=True)
        device.char_write_handle(26, bytearray([0x01, 0x00]), wait_for_response=True)
        device.char_write_handle(33, bytearray([0x01, 0x00]), wait_for_response=True)
        print(f"\te00b DESC: {device.char_read_long_handle(21)}")  # notify e00b
        print(f"\te00d DESC: {device.char_read_long_handle(26)}")  # notify e00d SmO2
        print(f"\te00e DESC: {device.char_read_long_handle(33)}")  # notify e00e tHb
        # Here's where I tested functions from Pygatt...
        # print(f"\t{device.get_handle('UUID_here')}")  # function works
        # print(f"\tvalue_handle/characteristic_config_handle: {device._notification_handles('UUID_here')}")  # function works
        # print(f"{device.char_read('UUID_here')}")
        # print(f"{device.char_read_long_handle(handle_here)}")  # function works
    except:
        print(f"CCCD write value failure")
    finally:
        # adapter.stop()
        pass

    # Enable the data streams
    try:
        device.char_write('2e4ee00a-d9f0-5490-ff4b-d17374c433ef', bytearray([0x04, 0x02]), wait_for_response=True)  # function works
        print(f"\te00a Power ON: {device.char_read('2e4ee00e-d9f0-5490-ff4b-d17374c433ef')}")
    except:
        print(f"e00a Power ON write failure")
    finally:
        # adapter.stop()
        pass

    # Subscribe to SmO2 and tHb UUIDs
    try:
        def data_handler(handle, value):
            """
            Indications and notifications come in asynchronously; this callback
            handles them one at a time as they arrive.
            :param handle:
            :param value:
            :return:
            """
            if handle == 25:
                print(f"\tSmO2: {value} Handle: {handle}")
            elif handle == 32:
                print(f"\ttHb: {value} Handle: {handle}")
            else:
                print(f"\tvalue: {value}, handle: {handle}")

        device.subscribe("2e4ee00d-d9f0-5490-ff4b-d17374c433ef", callback=data_handler, indication=False, wait_for_response=True)
        device.subscribe("2e4ee00e-d9f0-5490-ff4b-d17374c433ef", callback=data_handler, indication=False, wait_for_response=True)
        print(f"\tSuccess 2e4ee00d: {device.char_read('2e4ee00d-d9f0-5490-ff4b-d17374c433ef')}")
        print(f"\tSuccess 2e4ee00e: {device.char_read('2e4ee00e-d9f0-5490-ff4b-d17374c433ef')}")
        # this statement causes a run-on continuity when enabled
        # while True:
        #     sleep(1)
    except:
        print("e00d/e00e subscribe failure")
    finally:
        adapter.stop()
        # pass
Problem: in the output window of my Atom editor, the two data streams start as expected. For example:
I 09:05:39.983 Notification received from 2e4ee00d-d9f0-5490-ff4b-d17374c433ef, value: (0x) 00- 00-00-00-C0-FF-00-00-C0-FF-84-65-B4-3B-9E-AB-83-3C-FF-03
and...
I 09:05:39.984 Notification received from 2e4ee00e-d9f0-5490-ff4b-d17374c433ef, value: (0x) 1C-00-00-FF-03-FF-0F-63-00-00-00-00-00-00-16-32-00-00-00-00
I'll see about seven to ten lines of data before the "stream" stops. There'll be a gap of about 20 seconds, and then a big dump of values. This is different from the output from nRF Connect, which is immediate and continuous.
I have the logs from nRF Connect and Python... but I'm not sure which log entry points to the cause of the stop. Might this issue be related to the Peripheral Preferred Connection Parameters? The nRF Connect property read shows:
ConnectionInterval = 50ms~100ms
SlaveLatency = 1
SuperTimeoutMonitor = 200
The Python log entry shows this:
INFO:pygatt.backends.bgapi.bgapi:Connection status: handle=0x0, flags=5, address=0xb'760b81afef0c', connection interval=75.000000ms, timeout=1000, latency=0 intervals, bonding=0xff
Thoughts anyone? (And truly, thanks in advance.)
I've answered my questions. I now have to solve the new problem of why my tkinter dialog is "not responding" as a separate issue.
Thanks all.
Edit 3/31/2020: I re-wrote the script using PyQt and now have a functional app.

No value for argument in function call

I am very new to Python and am working through the Dagster hello tutorial.
I have set up the following from the tutorial:
import csv

from dagster import execute_pipeline, execute_solid, pipeline, solid

@solid
def hello_cereal(context):
    # Assuming the dataset is in the same directory as this file
    dataset_path = 'cereal.csv'
    with open(dataset_path, 'r') as fd:
        # Read the rows in using the standard csv library
        cereals = [row for row in csv.DictReader(fd)]
    context.log.info(
        'Found {n_cereals} cereals'.format(n_cereals=len(cereals))
    )
    return cereals

@pipeline
def hello_cereal_pipeline():
    hello_cereal()
However, pylint shows a "no value for parameter" message.
What have I missed?
When I try to execute the pipeline I get the following:
D:\python\dag>dagster pipeline execute -f hello_cereal.py -n hello_cereal_pipeline
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_START - Started execution of pipeline "hello_cereal_pipeline".
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Executing steps in process (pid: 11684)
    event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:09 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_START - Started execution of step "hello_cereal.compute".
    solid = "hello_cereal"
    solid_definition = "hello_cereal"
    step_key = "hello_cereal.compute"
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - STEP_FAILURE - Execution of step "hello_cereal.compute" failed.
    cls_name = "FileNotFoundError"
    solid = "hello_cereal"
    solid_definition = "hello_cereal"
    step_key = "hello_cereal.compute"
    File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\errors.py", line 114, in user_code_error_boundary
        yield
    File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\engine\engine_inprocess.py", line 621, in _user_event_sequence_for_step_compute_fn
        for event in gen:
    File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 75, in _execute_core_compute
        for step_output in _yield_compute_results(compute_context, inputs, compute_fn):
    File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\execution\plan\compute.py", line 52, in _yield_compute_results
        for event in user_event_sequence:
    File "c:\users\kirst\appdata\local\programs\python\python38-32\lib\site-packages\dagster\core\definitions\decorators.py", line 418, in compute
        result = fn(context, **kwargs)
    File "hello_cereal.py", line 10, in hello_cereal
        with open(dataset_path, 'r') as fd:
2019-11-25 14:47:10 - dagster - DEBUG - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - ENGINE_EVENT - Finished steps in process (pid: 11684) in 183ms
    event_specific_data = {"metadata_entries": [["pid", null, ["11684"]], ["step_keys", null, ["{'hello_cereal.compute'}"]]]}
2019-11-25 14:47:10 - dagster - ERROR - hello_cereal_pipeline - 96c575ae-0b7d-49cb-abf4-ce998865ebb3 - PIPELINE_FAILURE - Execution of pipeline "hello_cereal_pipeline" failed.
[Update]
From Rahul's comment I realised I had not copied the whole example.
When I corrected that, I got a FileNotFoundError.
To answer the original question about why you are receiving a "no value for parameter" pylint message -
This is because the pipeline function calls don't pass any parameters, while the @solid functions have parameters defined. This is intentional on Dagster's part and can be ignored by adding the following line either at the beginning of the module or to the right of the line with the pylint message. Note that putting the comment below at the beginning of the module tells pylint to ignore any instance of the warning in the module, whereas putting it in-line tells pylint to ignore only that instance of the warning.
# pylint: disable=no-value-for-parameter
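For the in-line form, a short sketch based on the tutorial snippet above (placement is illustrative; adapt it to your own code):

@pipeline
def hello_cereal_pipeline():
    hello_cereal()  # pylint: disable=no-value-for-parameter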
Lastly, you could also put a similar ignore statement in a .pylintrc file, but I'd advise against that, as it would apply project-wide and you could miss true issues.
hope this helps a bit!
Please check whether the dataset (CSV file) you are using is in the same directory as your code file. That may be why you are getting the FileNotFoundError.
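If you want the solid to find the file regardless of the directory you launch dagster from, one option (a sketch, not part of the tutorial) is to build the path relative to the script itself:

import os

# Resolve cereal.csv next to this script instead of the current working directory.
dataset_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'cereal.csv')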
