Is OpenCV running two instances of SIFT detectAndCompute concurrently? - python-3.x

I can get SIFT keypoints and descriptors from two separate, large images (~2 GB each) when I run sift.detectAndCompute from the command line. I run it on one image, wait a very long time, but eventually get the keypoints and descriptors. Then I repeat for the second image, and again it takes a long time, but I DO eventually get my keypoints and descriptors. Here are the two lines I run from the IPython console in Spyder, on my machine with 32 GB of RAM (MAX_MATCHES = 50000 in the code below):
sift = cv2.xfeatures2d.SIFT_create(MAX_MATCHES)
keypoints, descriptors = sift.detectAndCompute(imgGray, None)
This takes 10 minutes to finish, but it does finish. Next, I run this:
keypoints2, descriptors2 = sift.detectAndCompute(refimgGray, None)
When done, keypoints and keypoints2 DO contain 50000 keypoint objects.
However, if I run my script, which calls a function that uses sift.detectAndCompute and returns keypoints and descriptors, the process takes a long time, uses 100% of my memory and ~95% of my disk bandwidth, and then fails with this traceback:
runfile('C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py', wdir='C:/AV GIS/python scripts')
Reading reference image : C:\Users\kellett\Downloads\3074_transparent_mosaic_group1.tif
xfrm for image = (584505.1165100001, 0.027370000000000002, 0.0, 4559649.608440001, 0.0, -0.027370000000000002)
Reading image to align : C:\Users\kellett\Downloads\3071_transparent_mosaic_group1.tif
xfrm for image = (584499.92168, 0.02791, 0.0, 4559648.80372, 0.0, -0.02791)
Traceback (most recent call last):
File "<ipython-input-75-571660ddab7f>", line 1, in <module>
runfile('C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py', wdir='C:/AV GIS/python scripts')
File "C:\Users\kellett\AppData\Local\Continuum\anaconda3\envs\testgdal\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
execfile(filename, namespace)
File "C:\Users\kellett\AppData\Local\Continuum\anaconda3\envs\testgdal\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py", line 445, in <module>
matches = find_matches(refKP, refDesc, imgKP, imgDesc)
File "C:/AV GIS/python scripts/img_align_w_geo_w_mask_refactor_ret_1.py", line 301, in find_matches
matches = matcher.match(dsc1, dsc2)
error: C:\ci\opencv_1512688052760\work\modules\core\src\stat.cpp:4024: error: (-215) (type == 0 && dtype == 4) || dtype == 5 in function cv::batchDistance
The function is simply called once for each image, like this:
print("Reading image to align : ", imFilename);
img, imgGray, imgEdgmask, imgXfrm, imgGeoInfo = read_ortho4align(imFilename)
refKP, refDesc = extractKeypoints(refimgGray, refEdgmask)
imgKP, imgDesc = extractKeypoints(imgGray, imgEdgmask)
HERE IS MY QUESTION (sorry for shouting): Do you think Python tries to run the two lines above concurrently in some way? If so, how can I force it to run serially? If not, do you have any idea why the two keypoint detections would work individually, but not when they come one after another in a script?
One more clue - I put in a statement to see if the script proceeds to the second detectAndCompute statement before it fails, and it does. (I just put a print statement in between the two.)

My error was coming later in my script where I was finding matches.
I have no reason to believe the two SIFT keypoint finding processes are occurring at the same time.
I downsampled the images before searching for SIFT keypoints, which let me iterate on my troubleshooting more quickly and find my error.
I will look at my error more closely next time before asking a question.
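For anyone hitting the same cv::batchDistance assertion, here is a minimal, hypothetical guard before matching (not the asker's actual fix, and it assumes a brute-force L2 matcher, since the asker's matcher setup is not shown). This assertion often fires when the two descriptor arrays have different dtypes, or when the norm type does not match the descriptor type; forcing both arrays to float32 avoids that particular mismatch:
import numpy as np
import cv2

def find_matches_safe(dsc1, dsc2):
    # Hypothetical helper: fail early on empty descriptors, then force float32
    # so both arrays have the same dtype before calling the matcher.
    if dsc1 is None or dsc2 is None or len(dsc1) == 0 or len(dsc2) == 0:
        raise ValueError("empty descriptor array passed to matcher")
    dsc1 = np.asarray(dsc1, dtype=np.float32)
    dsc2 = np.asarray(dsc2, dtype=np.float32)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    return matcher.match(dsc1, dsc2)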

Related

Stable diffusion with openVino: Failed to set input blob with precision: I64, if CNNNetwork input blob precision is: FP64

I'm trying to make this version work on my CPU (Linux):
https://github.com/bes-dev/stable_diffusion.openvino
It works fine without an initial image, but when I try to pass an initial image, I get this error:
Traceback (most recent call last):
File "/home/ideruga/workspace/stable_diffusion.openvino/demo.py", line 79, in <module>
main(args)
File "/home/ideruga/workspace/stable_diffusion.openvino/demo.py", line 39, in main
image = engine(
File "/home/ideruga/workspace/stable_diffusion.openvino/stable_diffusion_engine.py", line 188, in __call__
noise_pred = result(self.unet.infer_new_request({
File "/home/ideruga/anaconda3/lib/python3.9/site-packages/openvino/runtime/ie_api.py", line 266, in infer_new_request
return self.create_infer_request().infer(inputs)
......
File "/home/ideruga/anaconda3/lib/python3.9/site-packages/openvino/runtime/ie_api.py", line 31, in set_scalar_tensor
request.set_tensor(key, tensor)
RuntimeError: [ PARAMETER_MISMATCH ] Failed to set input blob with precision: I64, if CNNNetwork input blob precision is: FP64
It's bizarre, because I am not messing with any parameters. It's as if the model it downloads is not compatible with the parsed input image.
I've actually found a bug in the linked repository; I'll submit a fix later today. The model expects an f64 value but is fed an i64 value. I'll post a comment with the PR when it's submitted.
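For context, a minimal sketch of the kind of cast involved (my guess at the shape of the fix, not the actual PR): if an int64 NumPy value such as the timestep is fed to an input that the network declares as FP64, cast it before inference.
import numpy as np

timestep = np.array([42], dtype=np.int64)   # hypothetical I64 value that triggers the mismatch
timestep = timestep.astype(np.float64)      # cast to match the FP64 input blob
# noise_pred = unet.infer_new_request({"t": timestep, ...})  # 'unet' and the input name are placeholders for the repo's objects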

Encountered an internal AutoML error- ClientException: Message: No objects to concatenate

I am trying to implement hierarchical time series forecasting with Azure AutoML pipelines.
I followed this notebook for the implementation:
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
The training pipeline worked when I ran it on a compute instance, but when I run the same pipeline on a compute cluster it breaks at the hts-proportion-calculation step.
This is the error I am getting:
system error:
Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException:
      Message: No objects to concatenate
      InnerException: None
      ErrorResponse
{
  "error": {
    "message": "No objects to concatenate"
  }
}
logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
File "proportions_calculation_wrapper.py", line 47, in <module>
runtime_wrapper.run()
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
self._run()
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
return func(*args, **kwargs)
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
sort=sort,
File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout, but the system error and logs are somewhat misleading:
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
The logs were pointing to pandas' "No objects to concatenate".
The error can be overcome by setting the iteration_timeout_minutes value to less than the experiment_timeout_hours value.
I had set iteration_timeout_minutes=60, which caused the error.
automl_settings = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=1,
    label_column_name=label_column_name,
    track_child_runs=False,
    forecasting_parameters=forecasting_parameters,
    pipeline_fetch_max_batch_size=15,
    model_explainability=model_explainability,
    n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
    cv_step_size="auto",
    # The following settings are specific to this sample and should be adjusted according to your own needs.
    iteration_timeout_minutes=10,
    iterations=15,
)
We were able to run the sample successfully using the compute cluster configuration given below.
from azureml.core.compute import ComputeTarget, AmlCompute

# Name your cluster
compute_name = "hts-compute"

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("Found compute target: " + compute_name)
else:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D16S_V3", max_nodes=20
    )
    # Create the compute target
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

    # Can poll for a minimum number of nodes and for a specific timeout.
    # If no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )

    # For a more detailed view of current cluster status, use the 'status' property
    print(compute_target.status.serialize())

PyAlgoTrade: How to use resampleBarFeed with multiple instruments?

I am resampling a few instruments with pyalgotrade.
I have a base barfeed for 1-minute data, which is working fine.
I have added a resampler to resample for 2 minutes, as follows:
class Strategy(strategy.BaseStrategy):
    def __init__(self, instruments, feed, brk):
        strategy.BaseStrategy.__init__(self, feed, brk)
        self.__position = None
        self.__instrument = instruments
        self._resampledBF = self.resampleBarFeed(2 * bar.Frequency.MINUTE, self.resampledOnBar_2minute)
        self.info("initialised strategy")
I got this error:
2022-09-08 12:36:00,396 strategy [INFO] 1-MIN: INSTRUMENT1: Date: 2022-09-08 12:35:00+05:30 Open: 17765.55 High: 17774.5 Low: 17765.35 Close: 1777
myStrategy.run()
File "pyalgotrade\pyalgotrade\strategy\__init__.py", line 514, in run
self.__dispatcher.run()
File "pyalgotrade\pyalgotrade\dispatcher.py", line 109, in run
eof, eventsDispatched = self.__dispatch()
File "pyalgotrade\pyalgotrade\dispatcher.py", line 97, in __dispatch
if self.__dispatchSubject(subject, smallestDateTime):
File "pyalgotrade\pyalgotrade\dispatcher.py", line 75, in __dispatchSubject ret = subject.dispatch() is True
File "pyalgotrade\pyalgotrade\feed\__init__.py", line 106, in dispatch
dateTime, values = self.getNextValuesAndUpdateDS()
File "pyalgotrade\pyalgotrade\feed\__init__.py", line 81, in getNextValuesAndUpdateDS
dateTime, values = self.getNextValues()
File "pyalgotrade\pyalgotrade\barfeed\__init__.py", line 101, in getNextValues
raise Exception(
Exception: Bar date times are not in order. Previous datetime was 2022-09-08 12:34:00+05:30 and current datetime is 2022-09-08 12:34:00+05:30
However, the error does not occur if the self._resampledBF = self.resampleBarFeed(...) line is commented out.
Also, on searching online, I found a similar report/ possible fix reported earlier on Google groups: https://groups.google.com/g/pyalgotrade/c/v9ht1Bfz5Ds/m/ojF8uH8sFwAJ
The solution recommended was:
Sorry never mind, I fixed it. Using current timestamp instead of the one from IB and that fixed it.
Not sure if this has been resolved.
I would like to know how to resolve the error while resampling.

Spurious out-of-memory error when allocating shared memory with multiprocessing

I'm trying to allocate a set of image buffers in shared memory using multiprocessing.RawArray. It works fine for smaller numbers of images. However, when I get to a certain number of buffers, I get an OSError indicating that I've run out of memory.
Obvious question: am I actually out of memory? By my count, the buffers I'm trying to allocate should be about 1 GB of memory, and according to the Windows Task Manager, I have about 20 GB free. I don't see how I could actually be out of memory!
Am I hitting some kind of artificial memory consumption limit that I can increase? If not, why is this happening, and how can I get around this?
I'm using Windows 10, Python 3.7, 64 bit architecture, 32 GB RAM total.
Here's a minimal reproducible example:
import multiprocessing as mp
import ctypes

imageDataType = ctypes.c_uint8
imageDataSize = 1024*1280*3  # 3,932,160 bytes
maxBufferSize = 300

buffers = []
for k in range(maxBufferSize):
    print("Creating buffer #", k)
    buffers.append(mp.RawArray(imageDataType, imageDataSize))
Output:
Creating buffer # 0
Creating buffer # 1
Creating buffer # 2
Creating buffer # 3
Creating buffer # 4
Creating buffer # 5
...etc...
Creating buffer # 278
Creating buffer # 279
Creating buffer # 280
Traceback (most recent call last):
File ".\Cruft\memoryErrorTest.py", line 10, in <module>
buffers.append(mp.RawArray(imageDataType, imageDataSize))
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\context.py", line 129, in RawArray
return RawArray(typecode_or_type, size_or_initializer)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\sharedctypes.py", line 61, in RawArray
obj = _new_value(type_)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\sharedctypes.py", line 41, in _new_value
wrapper = heap.BufferWrapper(size)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", line 263, in __init__
block = BufferWrapper._heap.malloc(size)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", line 242, in malloc
(arena, start, stop) = self._malloc(size)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", line 134, in _malloc
arena = Arena(length)
File "C:\Users\Brian Kardon\AppData\Local\Programs\Python\Python37-32\lib\multiprocessing\heap.py", line 38, in __init__
buf = mmap.mmap(-1, size, tagname=name)
OSError: [WinError 8] Not enough memory resources are available to process this command
OK, the folks over at the Python bug tracker figured this out for me. For posterity:
I was using 32-bit Python, which is limited to a memory address space of 4 GB, much less than my total available system memory. Apparently enough of that space was taken up by other stuff that the interpreter couldn't find a large enough contiguous block for all my RawArrays.
The error does not occur when using 64-bit Python, so that seems to be the easiest solution.
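As a quick sanity check (my addition, not part of the original answer), you can confirm whether the running interpreter is 32-bit or 64-bit before chasing memory limits:
import struct
import sys

# A 32-bit interpreter reports a 4-byte pointer size and sys.maxsize == 2**31 - 1;
# a 64-bit interpreter reports 8 bytes and a much larger sys.maxsize.
print("pointer size:", struct.calcsize("P"), "bytes")
print("64-bit interpreter?", sys.maxsize > 2**32)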

pywinauto error argument 4: int too long to convert

I use Python 3, pywinauto, and the tested app - all are 64-bit.
I got an error when trying to expand a tree:
tree_item = systreeview.GetItem([current_menu_item, u'xxxxxx'])
Everything worked with the 32-bit app.
Log:
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in get_item
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1523, in <listcomp>
texts = [r.text() for r in roots]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 960, in text
return self._readitem()[1]
File "C:\Python36\lib\site-packages\pywinauto\controls\common_controls.py", line 1383, in _readitem
remote_mem)
ctypes.ArgumentError: argument 4: <class 'OverflowError'>: int too long to convert
It was a bug. Fixed now. Thank you everyone.
Fixed another way in pull request #373. pywinauto 0.6.3 is out with the fix.
I just replaced the 2 remaining win32functions.SendMessage calls with self.send_message everywhere.
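If you hit this with an older install, upgrading should pick up the fix (a usage note, not from the original thread):
pip install --upgrade pywinauto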
