Keras Convolution3D subsample error - Theano

I was trying to build a 3D convolutional layer using Keras. It works fine, but when I add the subsample parameter it crashes. The code:
l_1 = Convolution3D(2, 10, 10, 10,
                    border_mode='same',
                    name='l_1',
                    activation='relu',
                    subsample=(5, 5, 5))(inputs)
The error is:
Traceback (most recent call last):
File "image_proc_09.py", line 244, in <module>
)(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 635, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 166, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 1234, in call
filter_shape=self.W_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1627, in conv3d
dim_ordering, volume_shape, filter_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 1686, in _old_theano_conv3d
assert(strides == (1, 1, 1))
AssertionError
I am using Theano 0.8.2.
Thanks

You cannot use the subsample parameter together with border_mode='same'; use 'valid' or 'full' instead.
Check out the line of code where the assertion error happens (keras/backend/theano_backend.py, line 1686): that code path asserts strides == (1, 1, 1).
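For example, keeping the same layer but switching the border mode should get past the assertion (a minimal sketch against the old Keras 1.x Convolution3D API used in the question; inputs is assumed to be the question's existing 5D input tensor):

# Sketch: same layer as in the question, but with border_mode='valid'
# so the strided (subsampled) Theano code path is allowed.
l_1 = Convolution3D(2, 10, 10, 10,
                    border_mode='valid',
                    name='l_1',
                    activation='relu',
                    subsample=(5, 5, 5))(inputs)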

Related

Python TensorFlow 2.4.0 'input must be 4-dimensional[1,1,371,300,3]' error

I'm running Nicholas Rennote's TFODCourse.
When I execute the 'Evaluate the model' code:
python Tensorflow\models\research\object_detection\model_main_tf2.py --model_dir=Tensorflow\workspace\models\my_ssd_mobnet --pipeline_config_path=Tensorflow\workspace\models\my_ssd_mobnet\pipeline.config --checkpoint_dir=Tensorflow\workspace\models\my_ssd_mobnet
this error occurs:
Traceback (most recent call last):
File "Tensorflow\models\research\object_detection\model_main_tf2.py", line 115, in <module>
tf.compat.v1.app.run()
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "Tensorflow\models\research\object_detection\model_main_tf2.py", line 82, in main
model_lib_v2.eval_continuously(
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\model_lib_v2.py", line 1151, in eval_continuously
eager_eval_loop(
File "C:\Users\All_Nighter\miniconda3\envs\TF\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\model_lib_v2.py", line 928, in eager_eval_loop
for i, (features, labels) in enumerate(eval_dataset):
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 761, in __next__
return self._next_internal()
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 744, in _next_internal
ret = gen_dataset_ops.iterator_get_next(
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 2727, in iterator_get_next
_ops.raise_from_not_ok_status(e, name)
File "C:\Users\All_Nighter\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\framework\ops.py", line 6897, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: input must be 4-dimensional[1,1,371,300,3]
[[{{node ResizeImage/resize/ResizeBilinear}}]] [Op:IteratorGetNext]
I can't understand what 'input must be 4-dimensional[1,1,371,300,3]' means.
I tried labeling again and downgrading TF to 2.4.0, but it still happens.
The ssd_mobilenet model expects as input:
A three-channel image of variable size - the model does NOT support batching. The input tensor is a tf.uint8 tensor with shape [1, height, width, 3] with values in [0, 255].
In this case you are feeding it a 5-dimensional input [1,1,371,300,3]; reshape your input data to [1,371,300,3].
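If you can get hold of the offending tensor, dropping the redundant second dimension with tf.squeeze gives exactly that shape (a minimal, self-contained sketch with a dummy tensor standing in for the real input; where the extra dimension is introduced in your pipeline still has to be tracked down):

import tensorflow as tf

# Dummy tensor with the shape reported in the error: [1, 1, 371, 300, 3]
image = tf.zeros([1, 1, 371, 300, 3], dtype=tf.uint8)

# Drop the redundant second dimension to get the expected [1, height, width, 3]
image = tf.squeeze(image, axis=1)
print(image.shape)  # (1, 371, 300, 3)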

How to replace LSTMBlockCell with LSTMBlockFusedCell in Python TensorFlow

Replacing LSTMBlockCell with LSTMBlockFusedCell throws an error.
The full error message:
Traceback (most recent call last):
File "Classification-DL_ULSTM4.py", line 81, in <module>
logits=ULSTM(x_,n_input,n_hidden,n_steps,n_classes)
File "Classification-DL_ULSTM4.py", line 25, in ULSTM
outputs,_=lstm_cell(x,dtype=tf.float32)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/layers/base.py", line 548, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/keras/engine/base_layer.py", line 819, in __call__
self.name)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/keras/engine/input_spec.py", line 155, in assert_input_compatibility
' input tensors. Inputs received: ' + str(inputs))
ValueError: Layer lstm_fused_cell expects 1 inputs, but it received 250 input tensors. Inputs received: [<tf.Tensor 'split:0' shape=(?, 1) dtype=float32>, <tf.Tensor 'split:1' shape=(?, 1) dtype=float32>
Previous code:
def ULSTM(x, n_input, n_hidden, n_steps, n_classes):
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(x, n_steps)
    lstm_cell = tf.contrib.rnn.LSTMBlockCell(n_hidden, forget_bias=1.0)
    outputs, _ = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
What I replaced it with:
    lstm_cell = tf.contrib.rnn.LSTMBlockFusedCell(n_hidden, forget_bias=1.0)
    outputs, _ = lstm_cell(x, dtype=tf.float32)
Arguments:
n_input = 1
n_hidden = 1
n_steps = 250
n_classes = 4

tic = time.time()
x = tf.placeholder(tf.float32, [None, 250])
x_ = tf.reshape(x, [-1, 250, 1])
y_ = tf.placeholder(tf.float32, [None, 4])
logits = ULSTM(x_, n_input, n_hidden, n_steps, n_classes)
learning_rate = 0.001
batch_size = 16
maxiters = 10000
I know LSTMBlockFusedCell inherits from FusedRNNCell instead of RNNCell, so I cannot use the standard tf.nn.static_rnn or tf.nn.dynamic_rnn, which require an RNNCell instance. But I don't know how to change the code without getting an error.
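For what it's worth, the FusedRNNCell interface is called on a single time-major 3-D tensor of shape [n_steps, batch, n_input] rather than on the list of per-step tensors produced by tf.split, so a sketch of the fused variant (against TF 1.x / tf.contrib, reusing the question's variable names, not a confirmed fix) would be:

def ULSTM_fused(x, n_input, n_hidden, n_steps, n_classes):
    # LSTMBlockFusedCell consumes one time-major tensor [n_steps, batch, n_input],
    # so skip the reshape/split that static_rnn needed.
    x = tf.transpose(x, [1, 0, 2])  # [batch, n_steps, n_input] -> [n_steps, batch, n_input]
    lstm_cell = tf.contrib.rnn.LSTMBlockFusedCell(n_hidden, forget_bias=1.0)
    outputs, _ = lstm_cell(x, dtype=tf.float32)  # outputs: [n_steps, batch, n_hidden]
    return outputs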

Input to reshape is a tensor with 788175 values, but the requested shape has 1050900

I am importing some arrays of data to train on, but TensorFlow outputs the error below.
inp = open('train.csv', "rb")
X = pickle.load(inp)
X = X / 255.0
X = np.array(X)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(113, 75, 3)),
    keras.layers.Dense(75, activation=tf.nn.relu),
    keras.layers.Dense(50, activation=tf.nn.relu),
    keras.layers.Dense(75, activation=tf.nn.relu),
    keras.layers.Dense(25425, activation=tf.nn.softmax),
    keras.layers.Reshape((113, 75, 4))
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X, X, epochs=5)
I should be able to create an autoencoder, but the program outputs this:
Traceback (most recent call last):
File "C:\Users\dalto\Documents\geo4\train.py", line 24, in <module>
model.fit(X, X, epochs=5)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training.py", line 643, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 664, in fit
steps_name='steps_per_epoch')
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 383, in model_iteration
batch_outs = f(ins_batch)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\backend.py", line 3510, in __call__
outputs = self._graph_fn(*converted_inputs)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 572, in __call__
return self._call_flat(args)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 671, in _call_flat
outputs = self._inference_function.call(ctx, args)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 445, in call
ctx=ctx)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 788175 values, but the requested shape has 1050900
[[node reshape/Reshape (defined at C:\Users\dalto\Documents\geo4\train.py:24) ]] [Op:__inference_keras_scratch_graph_922]
Function call stack:
keras_scratch_graph
If I change the Reshape to (113, 75, 3), it doesn't fix the error, it just changes it:
Traceback (most recent call last):
File "C:\Users\dalto\Documents\geo4\train.py", line 24, in <module>
model.fit(X, X, epochs=5)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training.py", line 643, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 664, in fit
steps_name='steps_per_epoch')
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 383, in model_iteration
batch_outs = f(ins_batch)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\backend.py", line 3510, in __call__
outputs = self._graph_fn(*converted_inputs)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 572, in __call__
return self._call_flat(args)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 671, in _call_flat
outputs = self._inference_function.call(ctx, args)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\function.py", line 445, in call
ctx=ctx)
File "C:\Users\dalto\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible
shapes: [31,113,75] vs. [31,113,75,3]
[[node metrics/accuracy/Equal (defined at
C:\Users\dalto\Documents\geo4\train.py:24) ]] [Op:__inference_keras_scratch_graph_922]
The number of values going into a Reshape must match the number coming out, so you'll have to use (113, 75, 3) instead of (113, 75, 4): the preceding Dense layer outputs 25425 = 113 * 75 * 3 values per sample, not 113 * 75 * 4 = 33900.
Now, with (113, 75, 3), you're getting the incompatible-shapes error because you're using sparse_categorical_crossentropy as your loss function; use categorical_crossentropy instead.
The basic difference between the two is that sparse_categorical_crossentropy expects plain integer labels, while categorical_crossentropy expects one-hot encoded labels.
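As a quick, standalone illustration of that label difference (made-up labels, not part of the question's data):

import numpy as np
from tensorflow import keras

y_int = np.array([0, 2, 1])                      # integer labels  -> sparse_categorical_crossentropy
y_onehot = keras.utils.to_categorical(y_int, 3)  # one-hot labels  -> categorical_crossentropy
# y_onehot == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]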
Corrected:
inp = open('train.csv', "rb")
X = pickle.load(inp)
X = X / 255.0
X = np.array(X)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(113, 75, 3)),
    keras.layers.Dense(75, activation=tf.nn.relu),
    keras.layers.Dense(50, activation=tf.nn.relu),
    keras.layers.Dense(75, activation=tf.nn.relu),
    keras.layers.Dense(25425, activation=tf.nn.softmax),
    keras.layers.Reshape((113, 75, 3))  # 25425 = 113 * 75 * 3
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # was sparse_categorical_crossentropy
              metrics=['accuracy'])
model.fit(X, X, epochs=5)

Keras model.fit_generator error. How do I solve this issue?

I have checked the documentation for the keras fit_generator function but am still not able to find the problem.
The libraries work fine on my laptop.
My code:
# train the network
print("training network...")
sys.stdout.flush()
# class_mode='categorical',  # 2D one-hot encoded labels
H = model.fit_generator(aug.flow(Xtrain, trainY, batch_size=BS),
                        validation_data=(Xval, valY),
                        steps_per_epoch=len(trainX) // BS,
                        epochs=EPOCHS, verbose=1)

# save the model to disk
print("Saving model to disk")
sys.stdout.flush()
model.save("/tmp/mymodel")
I am getting the following error for my code:
Traceback (most recent call last):
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\IPython\core\interactiveshell.py", line 3267, in run_code
File "<ipython-input-80-935b20410c11>", line 8, in <module>
epochs=EPOCHS, verbose=1)
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\keras\engine\training_generator.py", line 162, in fit_generator
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\keras\utils\data_utils.py", line 647, in __init__
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\site-packages\keras\utils\data_utils.py", line 433, in __init__
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\multiprocessing\context.py", line 133, in Value
File "C:\Users\user\AppData\Local\conda\conda\envs\my_root\lib\multiprocessing\sharedctypes.py", line 182
exec template % ((name,)*7) in d
^
SyntaxError: invalid syntax

ValueError: The passed save_path is not a valid checkpoint: C:\Users\User\model.tflearn

I have been trying to create a chatbot but I keep getting the following error. I am a beginner in TensorFlow.
Traceback (most recent call last):
File "main.py", line 78, in <module>
model.load("model.tflearn")
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\models\dnn.py", line 308, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\helpers\trainer.py", line 490, in restore
self.restorer.restore(self.session, model_file)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tensorflow\python\training\saver.py", line 1278, in restore
compat.as_text(save_path))
ValueError: The passed save_path is not a valid checkpoint: C:\Users\User\model.tflearn
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 80, in <module>
model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\models\dnn.py", line 216, in fit
callbacks=callbacks)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\helpers\trainer.py", line 339, in fit
show_metric)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\helpers\trainer.py", line 816, in _train
tflearn.is_training(True, session=self.session)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tflearn\config.py", line 95, in is_training
tf.get_collection('is_training_ops')[0].eval(session=session)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tensorflow\python\framework\ops.py", line 731, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tensorflow\python\framework\ops.py", line 5579, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
run_metadata_ptr)
File "C:\Users\User\Anaconda3\envs\newbot\lib\site-packages\tensorflow\python\client\session.py", line 1096, in _run
raise RuntimeError('Attempted to use a closed Session.')
RuntimeError: Attempted to use a closed Session.
This is my TensorFlow code:
tensorflow.reset_default_graph()

net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(output[0]), activation="softmax")
net = tflearn.regression(net)
model = tflearn.DNN(net)

try:
    model.load("model.tflearn")
except:
    model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
    model.save("model.tflearn")
I am using:
Python 3.6.9
TensorFlow 1.14.0
TFLearn 0.3.2
Thank you in advance!
Change your TensorFlow code to:
try:
    model.load('model.tflearn')
except:
    tensorflow.reset_default_graph()
    net = tflearn.input_data(shape=[None, len(training[0])])
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, len(output[0]), activation='softmax')
    net = tflearn.regression(net)
    model = tflearn.DNN(net)
    model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
    model.save("model.tflearn")
I think the problem happens because you are creating and resetting a model and then requesting to load it, and the framework gets lost.
Firstly, from this error message:
ValueError: The passed save_path is not a valid checkpoint: C:\Users\User\model.tflearn
it looks like C:\Users\User\model.tflearn doesn't exist.
Secondly, you have the model.fit call in the exception-handling block. Is that intentional? I would imagine you want to proceed with fit and save only if you are able to load the model successfully.
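Either way, one way to make the branching explicit rather than relying on a bare except is to check for the checkpoint files first (a sketch only, reusing model, training and output from the question's code; it assumes the TF 1.x saver wrote a model.tflearn.index file alongside the other checkpoint files, so adjust the path if your layout differs):

import os

# Hypothetical explicit check instead of try/except:
# TF 1.x savers write "<prefix>.index" (plus .meta and .data files), so its
# presence is used here as the "checkpoint exists" signal -- an assumption.
if os.path.exists("model.tflearn.index"):
    model.load("model.tflearn")
else:
    model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
    model.save("model.tflearn")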
