OpenCV Image Denoising gives: Error: -215:Assertion failed - python-3.x

I'm trying to denoise a really simple image using the code below. When printing out the array of data I get the following structure, which is expected since the image is greyscale:
[[ 62 62 63 ... 29 16 6]
[ 75 90 103 ... 21 16 12]
[ 77 100 118 ... 29 29 30]
...
[ 84 68 56 ... 47 50 53]
[101 94 89 ... 40 44 48]
Here is the code and the associated error. At this point I'm a little stuck. Any suggestions?
import cv2
from matplotlib import pyplot as plt
img = cv2.imread(path,0)
dst = cv2.fastNlMeansDenoising(img,None,10,10,7,21)
plt.subplot(211),plt.imshow(dst)
plt.subplot(212),plt.imshow(img)
plt.show()
____________________________________________________________________
runfile(___, wdir='G:/James Alexander/Python Programs')
Traceback (most recent call last):
File "<ipython-input-127-ce832752c183>", line 1, in <module>
runfile('G:/James Alexander/Python Programs/Noiseremoval.py', wdir=___)
File "___", line 704, in runfile
execfile(filename, namespace)
File "___", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "___", line 13, in <module>
dst = cv2.fastNlMeansDenoising(img,None,10,10,7,21)
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\photo\src\denoising.cpp:120: error: (-215:Assertion failed) hn == 1 || hn == cn in function 'cv::fastNlMeansDenoising'

Read the documentation on the Denoising function that you're using. There are two ways to call the function and you seem to be doing a combination of the two.
dst = cv.fastNlMeansDenoising(src[, dst[, h[, templateWindowSize[, searchWindowSize]]]])
or
dst = cv.fastNlMeansDenoising(src, h[, dst[, templateWindowSize[, searchWindowSize[, normType]]]])
You are calling it with (src, dst, h, templateWindowSize, searchWindowSize, normType) which either has too many parameters or is in the wrong order, depending on which method you want to use.

Change your parameters to:
dst = cv2.fastNlMeansDenoising(img, None, 30, 7, 21)
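For reference, here is a minimal corrected version of the original script (a sketch; it assumes path still points at the greyscale image, and adds cmap='gray' so matplotlib displays the single-channel data without its default colormap):
import cv2
from matplotlib import pyplot as plt

img = cv2.imread(path, 0)  # read as a single-channel (greyscale) image
# first call form: (src, dst, h, templateWindowSize, searchWindowSize)
dst = cv2.fastNlMeansDenoising(img, None, 30, 7, 21)

plt.subplot(211), plt.imshow(dst, cmap='gray')
plt.subplot(212), plt.imshow(img, cmap='gray')
plt.show()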

Related

How to fix TypeError: Caught TypeError in DataLoader worker process 1 in Detectron2

I'm trying to train a Detectron2 model with a COCO dataset. My dataset seems to load correctly. But when I try to train the model using the DefaultTrainer I get
TypeError: Caught TypeError in DataLoader worker process 1.
This is my setup:
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
# TOTAL_NUM_IMAGES = 10531
cfg = get_cfg()
cfg.OUTPUT_DIR = os.path.join('./output')
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
# single_iteration = cfg.SOLVER.IMS_PER_BATCH
# iterations_for_one_epoch = TOTAL_NUM_IMAGES / single_iteration
# cfg.SOLVER.MAX_ITER = int(iterations_for_one_epoch) * 20
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (person). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
And I get this error after a couple of iterations:
[01/06 15:14:00 d2.utils.events]: eta: 11:25:20 iter: 125 total_loss: 0.9023 loss_cls: 0.1827 loss_box_reg: 0.1385 loss_mask: 0.5601 loss_rpn_cls: 0.009945 loss_rpn_loc: 0.0023 time: 0.5232 data_time: 0.3085 lr: 3.1219e-05 max_mem: 3271M
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-8c48e6e17647> in <module>()
26 trainer = DefaultTrainer(cfg)
27 trainer.resume_or_load(resume=False)
---> 28 trainer.train()
8 frames
/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
432 # instantiate since we don't know how to
433 raise RuntimeError(msg) from None
--> 434 raise exception
435
436
TypeError: Caught TypeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 201, in __iter__
yield self.dataset[idx]
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 90, in __getitem__
data = self._map_func(self._dataset[cur_idx])
File "/usr/local/lib/python3.7/dist-packages/detectron2/utils/serialize.py", line 26, in __call__
return self._obj(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 189, in __call__
self._transform_annotations(dataset_dict, transforms, image_shape)
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 128, in _transform_annotations
for obj in dataset_dict.pop("annotations")
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 129, in <listcomp>
if obj.get("iscrowd", 0) == 0
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/detection_utils.py", line 297, in transform_instance_annotations
p.reshape(-1) for p in transforms.apply_polygons(polygons)
File "/usr/local/lib/python3.7/dist-packages/fvcore/transforms/transform.py", line 297, in <lambda>
return lambda x: self._apply(x, name)
File "/usr/local/lib/python3.7/dist-packages/fvcore/transforms/transform.py", line 291, in _apply
x = getattr(t, meth)(x)
File "/usr/local/lib/python3.7/dist-packages/fvcore/transforms/transform.py", line 150, in apply_polygons
return [self.apply_coords(p) for p in polygons]
File "/usr/local/lib/python3.7/dist-packages/fvcore/transforms/transform.py", line 150, in <listcomp>
return [self.apply_coords(p) for p in polygons]
File "/usr/local/lib/python3.7/dist-packages/detectron2/data/transforms/transform.py", line 150, in apply_coords
coords[:, 0] = coords[:, 0] * (self.new_w * 1.0 / self.w)
TypeError: can't multiply sequence by non-int of type 'float'
It turns out some of the ids in "annotations" were written in scientific notation, resulting in some ids of type float. Converting these to integers solved the problem.
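As a minimal sketch of that fix (the file names here are hypothetical; adapt them to wherever your COCO annotations live), the float ids can be cast back to integers before registering the dataset:
import json

with open("annotations.json") as f:
    coco = json.load(f)

for ann in coco.get("annotations", []):
    # ids written in scientific notation (e.g. 1.2e+04) are parsed as floats
    ann["id"] = int(ann["id"])
    ann["image_id"] = int(ann["image_id"])
for img in coco.get("images", []):
    img["id"] = int(img["id"])

with open("annotations_fixed.json", "w") as f:
    json.dump(coco, f)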

Keras's ImageDataGenerator randomly crashes

I have the following structure, where I want to read jpg files from test.
./cats_dogs_small
├── test
│ ├── cats <- 1000 images
│ └── dogs <- 1000 images
To read the files, I use the following MWE:
import os
from tensorflow.keras.preprocessing.image import ImageDataGenerator  # import assumed; not shown in the original MWE

train_dir = os.path.join(os.environ['HOME'], 'Documents/cats_dogs_small')
train_dir = os.path.join(train_dir, 'train')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20

def extract_features(directory):
    generator = datagen.flow_from_directory(directory,
                                            target_size=(150, 150),
                                            batch_size=batch_size,
                                            class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        print(i, end=' ')
        i += 1
    return features, labels  # 'features' and 'labels' are built in code omitted from this MWE
train_features, train_labels = extract_features(train_dir)
Every time I run it, I get the same error message:
2020-11-19 16:08:56.973416: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-11-19 16:08:56.973436: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Found 2000 images belonging to 2 classes.
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 Traceback (most recent call last):
File "/~/Documents/keras/untitled0.py", line 30, in <module>
train_features, train_labels = extract_features(train_dir)
File "/~/Documents/keras/untitled0.py", line 25, in extract_features
for inputs_batch, labels_batch in generator:
File "/.local/lib/python3.8/site-packages/keras_preprocessing/image/iterator.py", line 104, in __next__
return self.next(*args, **kwargs)
File "/.local/lib/python3.8/site-packages/keras_preprocessing/image/iterator.py", line 116, in next
return self._get_batches_of_transformed_samples(index_array)
File "/.local/lib/python3.8/site-packages/keras_preprocessing/image/iterator.py", line 227, in _get_batches_of_transformed_samples
img = load_img(filepaths[j],
File "/.local/lib/python3.8/site-packages/keras_preprocessing/image/utils.py", line 114, in load_img
img = pil_image.open(io.BytesIO(f.read()))
File "/anaconda3/envs/keras28/lib/python3.8/site-packages/PIL/Image.py", line 2943, in open
raise UnidentifiedImageError(
UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f41286f3090>
The error is raised randomly. In the output above the code crashed at 60, but sometimes it crashes at 43, 69 or any other number. It seems the problem is not related to a specific image, but to the way I'm using flow_from_directory / ImageDataGenerator.
Keras version: 2.4.3
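Since the failure point moves around between runs, one quick check (a diagnostic sketch, not part of the original post) is to open every file with PIL directly and see whether any of them is actually unreadable:
import os
from PIL import Image, UnidentifiedImageError

root = os.path.join(os.environ['HOME'], 'Documents/cats_dogs_small', 'train')
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with Image.open(path) as im:
                im.verify()  # raises if the file is truncated or not an image
        except (UnidentifiedImageError, OSError) as exc:
            print('Bad image:', path, exc)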

How to apply label encoding to text data (list of lists)

I am a novice Python data analyst trying to preprocess text data (jsonl format) before it goes into a neural network for topic modelling (VAE). I was able to clean the data and turn it into a numpy array; I then wanted to apply label encoding to the cleaned text data but failed to do so. How can one apply label encoding to a list-of-lists data format? The input to the label encoding is a list of lists and the output has to be in the same format.
numpy array format (type: <class 'numpy.ndarray'>)
[array([1131, 713, 857, 1130, ...]),
 array([ 142, 1346, 1918, 1893, 61, 62, 1922, 967, ...]),
 array([135, 148, 14, 104, 154, 159, 136, 94, 149, 135, 117, 62, 130, ...]),
 array([135, 148, 14, 104, 154, 159, 136, ...]),
 ...]
The code is as follows (after cleaning; the input is a list of lists of strings):
dictionary = gensim.corpora.Dictionary(process_texts) # creating a dictionary
label_covid_data = [list(filter(lambda x: x != -1, dictionary.doc2idx(doc))) for doc in process_texts] # converting it into numeric ids according to the dictionary
covid_train_data,covid_test_data = train_test_split(label_covid_data, test_size=0.2, random_state = 3456) # dividing into train and test data
covid_train_narray = np.array([np.array(i) for i in covid_train_data]) # converting into numpy array format
label = preprocessing.LabelEncoder() # applying label encoding
covid_data_labels = label.fit_transform([label.fit_transform(i) for i in covid_train_narray])
Error I am getting:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\_label.py in _encode(values, uniques, encode, check_unknown)
111 try:
--> 112 res = _encode_python(values, uniques, encode)
113 except TypeError:
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\_label.py in _encode_python(values, uniques, encode)
59 if uniques is None:
---> 60 uniques = sorted(set(values))
61 uniques = np.array(uniques, dtype=values.dtype)
TypeError: unhashable type: 'numpy.ndarray'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-217-ebce4e37aad8> in <module>
4 label = preprocessing.LabelEncoder()
5 #movie_line_labels = label.fit_transform(covid_train_narray[0])
----> 6 covid_data_labels = label.fit_transform([label.fit_transform(i) for i in covid_train_narray])
7 covid_data_labels
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\_label.py in fit_transform(self, y)
250 """
251 y = column_or_1d(y, warn=True)
--> 252 self.classes_, y = _encode(y, encode=True)
253 return y
254
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\_label.py in _encode(values, uniques, encode, check_unknown)
112 res = _encode_python(values, uniques, encode)
113 except TypeError:
--> 114 raise TypeError("argument must be a string or number")
115 return res
116 else:
TypeError: argument must be a string or number
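One way around this error (a sketch, not from the original post) is to remember that LabelEncoder only accepts a 1-D sequence of strings or numbers, so it can be fitted once on the flattened token stream and then applied to each document separately, which keeps the list-of-lists shape:
from sklearn import preprocessing

# covid_train_data is the list of lists of numeric ids built above
label = preprocessing.LabelEncoder()
flat_tokens = [tok for doc in covid_train_data for tok in doc]          # 1-D view of every token
label.fit(flat_tokens)                                                  # fit once on the whole vocabulary
covid_data_labels = [label.transform(doc) for doc in covid_train_data]  # one encoded array per document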

Pandas File Update/Replace values from another reference file

Please help me with updating a file based on values from another file.
The file I received is "todays_file1.csv" and has the table below:
name day a_col b_col c_col
alex 22-05 rep 68 67
stacy 22-05 sme 79 81
penny 22-05 rep 74 77
gabbi 22-05 rep 59 61
I need to update only the values of ['day', 'b_col', 'c_col'] in the second file, "my_file.csv", which has many other columns:
name day a_col a_foo b_col b_foo c_col
penny 21-May rep 2 69 31 69
alex 21-May rep 2 71 34 62
gabbi 21-May rep 1 62 32 66
stacy 21-May sme 3 73 38 78
The code I have so far is below:
df1 = pd.read_csv("todays_file1.csv")
df2 = pd.read_csv("my_file.csv")
df2.replace(to_replace=df2['day', 'b_col', 'c_col'], value= df1['day', 'b_col', 'c_col'], inplace=True)
Please help with how to replace the 3 columns based on the 'name' column, which is common to both files but may be in a different order.
I get the error below:
Traceback (most recent call last):
File "D:\TESTING\Trial.py", line 93, in <module>
df2.replace(to_replace=df2['day', 'b_col', 'c_col'], value= df1['day', 'b_col', 'c_col'], inplace=True)
File "C:\Winpy\WPy64-3770\python-3.7.7.amd64\lib\site-packages\pandas\core\frame.py", line 2800, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Winpy\WPy64-3770\python-3.7.7.amd64\lib\site-packages\pandas\core\indexes\base.py", line 2648, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1619, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1627, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: ('day', 'b_col', 'c_col')
"anky" has provided the solution through the comments, and I am ever grateful.
The code below helps solve the problem.
import pandas as pd

df1 = pd.read_csv("todays_file1.csv")
df2 = pd.read_csv("my_file.csv")
df1 = df1.set_index('name')   # set_index returns a new frame, so assign it back
df2 = df2.set_index('name')
df2.update(df1)               # overwrite matching columns, aligned on 'name'
df2.reset_index().to_csv("my_file.csv", index=False)
Thank you again Anky :)

tensorflow pip and conda clash

I am attempting to install tensorflow 1.8 to 1.10.1. Unfortunately, I installed it with both pip and conda; it worked at first, but when I tried to upgrade to tensorflow 1.10.1 I got the error message below. I have attempted to remove it with both pip and conda, as well as to create a new conda environment and install it fresh with conda. With no other versions of tensorflow installed (for any other conda env as well) I ran:
conda create -n testing python=3.6.5 scipy numpy jupyter scikit-learn matplotlib seaborn nltk tensorflow
Then when I import tensorflow I get the same error message:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py in <module>()
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
/anaconda3/envs/testing/lib/python3.6/imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
/anaconda3/envs/testing/lib/python3.6/imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: dlopen(/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 6): Symbol not found: __ZN10tensorflow10DeviceBase16eigen_cpu_deviceEv
Referenced from: /anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Expected in: /Users/avanders/tensorflow_libs/lib/libtensorflow_framework.so
in /anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-2-64156d691fe5> in <module>()
----> 1 import tensorflow as tf
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/__init__.py in <module>()
20
21 # pylint: disable=g-bad-import-order
---> 22 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
23
24 try:
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 # Protocol buffers
/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/anaconda3/envs/testing/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/anaconda3/envs/testing/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so, 6): Symbol not found: __ZN10tensorflow10DeviceBase16eigen_cpu_deviceEv
Referenced from: /anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Expected in: /Users/avanders/tensorflow_libs/lib/libtensorflow_framework.so
in /anaconda3/envs/testing/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
It turns out the problem was a version conflict with the Go wrappers I had installed for the tensorflow library. Once I updated those wrappers to the same version, a conda install worked.
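After cleaning things up, a quick sanity check (a sketch, not from the original post) is to confirm which build Python is actually importing:
import tensorflow as tf

print(tf.__version__)  # should report the version just installed with conda
print(tf.__file__)     # should point inside the active conda environment, not another install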
