Error at visualization of mnist NN with keras - python-3.x

I installed Anaconda and all the packages I needed. I ran this MNIST example from Keras and it works fine. Now I'm trying to visualize the graph following the Keras manual, but it does not work.
When I run the MNIST example I receive this error message:
Traceback (most recent call last):
File "dummy.py", line 77, in <module>
plot_model(model, to_file='model.png')
File "C:\Users\***\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\vis_utils.py", line 132, in plot_model
dot = model_to_dot(model, show_shapes, show_layer_names, rankdir)
File "C:\Users\***\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\vis_utils.py", line 55, in model_to_dot
_check_pydot()
File "C:\Users\***\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\utils\vis_utils.py", line 26, in _check_pydot
pydot.Dot.create(pydot.Dot())
File "C:\Users\***\AppData\Local\Continuum\anaconda3\lib\site-packages\pydot.py", line 1885, in create
assert p.returncode == 0, p.returncode
AssertionError: 1
I also tried convnet_drawer; it failed too.
Any ideas?
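For anyone hitting the same thing: the assertion in pydot's _check_pydot fires when pydot cannot execute the Graphviz dot binary, which plot_model needs. A likely fix (my reading of the traceback, since the question is otherwise unanswered here) is to install Graphviz next to pydot so that dot ends up on the PATH:
conda install graphviz pydot
After that, the call from the question should work unchanged:
from keras.utils import plot_model
plot_model(model, to_file='model.png')  # needs the Graphviz 'dot' executable on PATH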

Related

TypeError: can't pickle cv2.CLAHE objects

I am trying to run the code from GitHub (https://github.com/AayushKrChaudhary/RITnet).
I did not get the OpenEDS semantic segmentation dataset, so I tried downloading a PNG image from the Internet and putting it in Semantic_Segmentation_Dataset\test\ to run the test program.
That code gives the following error:
Traceback (most recent call last):
File "test.py", line 59, in <module>
for i, batchdata in tqdm(enumerate(testloader),total=len(testloader)):
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
w.start()
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle cv2.CLAHE objects
(Machine_Learning) C:\Users\b0743\Downloads\RITnet-master\RITnet-master>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
and my environment is:
# Name Version
cudatoolkit 10.1.243
cudnn 7.6.5
keras-applications 1.0.8
keras-base 2.3.1
keras-gpu 2.3.1
keras-preprocessing 1.1.0
matplotlib 3.3.1
matplotlib-base 3.3.1
numpy 1.19.1
numpy-base 1.19.1
opencv 3.3.1
pillow 7.2.0
python 3.6.10
pytorch 1.6.0
scikit-learn 0.23.2
scipy 1.5.2
torchsummary 1.5.1
torchvision 0.7.0
tqdm 4.48.2
I don’t know if this is a stupid question, but I hope someone can try to answer it for me.
I literally just went into the dataset Python file and commented out all the parts that require OpenCV (the traceback shows why this helps: on Windows the DataLoader's worker processes pickle the dataset, and the cv2.CLAHE object inside it cannot be pickled).
It turns out it works; you won't get that sweet CLAHE preprocessing and the other stuff, but it works.
If you don't need the dataset pipeline at all, just make a tensor out of the 640 by 400 image, put it in an empty array, and put that array in an array until you have a 4D tensor; pass it to the network, then put the output through the get-predictions function, and voilà, you have an array of eye features.
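A minimal sketch of that workaround, assuming model is the loaded RITnet network and get_predictions is the helper function from the repository (both are only named, not shown, in the answer above):
import numpy as np
import torch
from PIL import Image

# Load the 640x400 eye image as grayscale and scale it to [0, 1].
img = Image.open('eye.png').convert('L').resize((640, 400))
x = np.asarray(img, dtype=np.float32) / 255.0

# Nest it until it is a 4D tensor: (batch=1, channels=1, height, width).
x = torch.from_numpy(x)[None, None, ...]

with torch.no_grad():
    output = model(x)
predictions = get_predictions(output)  # array of eye-feature labels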

cuDNN error: CUDNN_STATUS_EXECUTION_FAILED while using flair

I have been following the example usage at https://github.com/zalandoresearch/flair#example-usage and tried experimenting with Flair, but I don't know why I am not able to use the GPU. I tried the following:
>>> from flair.data import Sentence
>>> from flair.models import SequenceTagger
>>> sentence = Sentence('I love Berlin .')
>>> tagger = SequenceTagger.load('ner')
2019-07-20 17:52:15,062 loading file /home/vz/.flair/models/en-ner-conll03-v0.4.pt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/nn.py", line 103, in load
model = cls._init_model_with_state_dict(state)
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 205, in _init_model_with_state_dict
locked_dropout=use_locked_dropout,
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/flair/models/sequence_tagger_model.py", line 166, in __init__
self.to(flair.device)
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 386, in to
return self._apply(convert)
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
module._apply(fn)
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
module._apply(fn)
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 127, in _apply
self.flatten_parameters()
File "/home/vz/miniconda3/envs/gp/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 123, in flatten_parameters
self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
Can anyone please help me figure out how to fix this error?
Thanks in advance.
The error was with my machine's CUDA/cuDNN setup. I would suggest everyone install PyTorch with conda, so the installation should be something like this:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
This should eradicate any kind of issues with the installation.
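If it helps, here is a quick sanity check (my addition, not part of the original answer) to confirm that the reinstalled PyTorch actually sees a working GPU and cuDNN:
import torch

print(torch.cuda.is_available())       # True only if the driver and cudatoolkit match
print(torch.backends.cudnn.version())  # the cuDNN version PyTorch was built with
print(torch.cuda.get_device_name(0))   # name of the detected GPU
If is_available() returns False, the problem is the driver/toolkit combination, not Flair itself.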

How to convert a model from torch to pytorch or caffe or any other model?

I'm trying to convert this t7 model to pytorch or to caffe or to caffe2 or to any other format.
this is what I get when converting to pytorch with the code from:
https://github.com/clcarwin/convert_torch_to_pytorch
Has anyone had this error, or does anyone know what to do with it?
roy@roy-Lenovo:~/convert_torch_to_pytorch$ python convert_torch.py -m model_a.t7
Traceback (most recent call last):
File "convert_torch.py", line 314, in <module>
torch_to_pytorch(args.model,args.output)
File "convert_torch.py", line 262, in torch_to_pytorch
slist = lua_recursive_source(lnn.Sequential().add(model))
File "/home/roy/.local/lib/python2.7/site-packages/torch/legacy/nn/Sequential.py", line 15, in add
self.output = module.output
File "/home/roy/.local/lib/python2.7/site-packages/torch/utils/serialization/read_lua_file.py", line 99, in __getattr__
return self._obj.get(k)
AttributeError: 'list' object has no attribute 'get'

Keras to CoreML ValueError: list.remove(x): x not in list

I'm trying to convert my Keras model (MobileNet + dense layers). The problem is that when I use coremltools for the conversion, I face the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-
packages/IPython/core/interactiveshell.py", line 3265, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-7905693382e5>", line 1, in <module>
coreml_model = coremltools.converters.keras.convert(loaded_model)
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_keras_converter.py", line 752, in convert
custom_conversion_functions=custom_conversion_functions)
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_keras_converter.py", line 550, in convertToSpec
custom_objects=custom_objects)
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_keras2_converter.py", line 206, in _convert
graph.build()
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_topology2.py", line 687, in build
self._remove_old_edges(layer)
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_topology2.py", line 429, in _remove_old_edges
self._remove_edge(layer, succ)
File "/usr/local/lib/python3.6/dist-packages/coremltools/converters/keras/_topology2.py", line 365, in _remove_edge
self.edge_map[src].remove(snk)
ValueError: list.remove(x): x not in list
I'm trying to do the conversion with the following code:
import coremltools
from keras.applications import mobilenet
from keras.utils.generic_utils import CustomObjectScope
from keras.models import model_from_json

# Load the architecture from JSON.
js_file = open(args.ddir + args.mdl + '.json', 'r')
loaded_json_model = js_file.read()
js_file.close()

# relu6 is a custom object in MobileNet, so it must be in scope
# while the model is deserialized and the weights are loaded.
with CustomObjectScope({'relu6': mobilenet.mobilenet.relu6}):
    loaded_model = model_from_json(loaded_json_model)
    loaded_model.load_weights(args.ddir + args.mdl + '.h5')

coreml_model = coremltools.converters.keras.convert(loaded_model,
                                                    input_names="image",
                                                    image_input_names="image")
I solved the problem by using a proper version of Keras, one that includes both MobileNet (the feature extractor) and "relu6" at the same time. The only version that has worked for me so far is 2.1.6; with it I successfully did the conversion.
coremltools does not support some layers at the moment (including relu6). This issue can be handled with "CustomObjectScope", as shown in the provided code.
Note that the network should be trained again with this version (2.1.6).
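For reference, pinning to that version (assuming a pip-based environment) is just:
pip install keras==2.1.6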

Keras Convolution3D subsample error

I was trying to build a 3D convolutional layer using Keras. It worked fine until I added the subsample parameter, which made it crash. The code:
l_1 = Convolution3D(2, 10, 10, 10,
                    border_mode='same',
                    name='l_1',
                    activation='relu',
                    subsample=(5, 5, 5)
                    )(inputs)
the error is:
Traceback (most recent call last):
File "image_proc_09.py", line 244, in <module>
)(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 635, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 166, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 1234, in call
filter_shape=self.W_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1627, in conv3d
dim_ordering, volume_shape, filter_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1686, in _old_theano_conv3d
assert(strides == (1, 1, 1))
AssertionError
I am using theano 0.8.2.
Thanks
You cannot use the subsample parameter with border_mode='same'; use 'valid' or 'full' instead.
Check out the line of code where the assertion error happens.
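Applied to the snippet from the question, only border_mode changes:
l_1 = Convolution3D(2, 10, 10, 10,
                    border_mode='valid',  # 'same' hits assert(strides == (1, 1, 1)) in the Theano backend
                    name='l_1',
                    activation='relu',
                    subsample=(5, 5, 5)
                    )(inputs)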
