ValueError: Negative dimension size caused by subtracting 35 from 15 for 'MaxPool_2' - keras

I am trying to implement the CNN Keras model with pretrained word embeddings from the official example, but with my own custom dataset. Here is the URL:
https://github.com/fchollet/keras/blob/master/examples/pretrained_word_embeddings.py
I am using Keras 1.2.0 with Tensorflow 1.2.1.
I get an error on lines 132-134 of the example. After searching online, all the posts pointed to the image dimension ordering. I tried the suggestions for both tf and th, but it still didn't work:
from keras import backend as K
K.set_image_dim_ordering('tf')
Any ideas?
File "/home/usr/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2508, in create_op
set_shapes_for_outputs(ret)
File "/home/usr/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1873, in set_shapes_for_outputs
shapes = shape_func(op)
File "/home/usr/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1823, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/home/usr/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/home/usr/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 35 from 15 for 'MaxPool_2' (op: 'MaxPool') with input shapes: [?,15,1,128].
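The error itself points at the cause: the last pooling window (35) is larger than the remaining feature-map length (15, per the input shape [?,15,1,128]), so this looks like a pooling-size problem rather than a dim-ordering one. A minimal sketch of a possible fix, assuming the example's default Conv/Pool stack (x and the layer sizes come from the linked example; the 15 is taken from the error above):
from keras.layers import Convolution1D, MaxPooling1D, Flatten
# The example's last block assumes MAX_SEQUENCE_LENGTH=1000, which leaves a length-35
# feature map before MaxPooling1D(35). With a shorter custom dataset only 15 steps
# remain at this point, so the final pool length must not exceed 15.
x = Convolution1D(128, 5, activation='relu')(x)
x = MaxPooling1D(15)(x)  # was MaxPooling1D(35) in the example
x = Flatten()(x)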

Related

RuntimeError: Error(s) in loading state_dict for DynamicUnet: Missing key(s) in state_dict: "layers.0.4.0.conv3.weight" | size mismatch for layers

Goal:
The pickled model and exported weights come from a separate training environment. Here, I aim to load the model and weights to run inference on new datasets.
Versions:
torch==1.7.1
fastai==2.7.7
fastcore==1.5.6
torchvision==0.8.2
Code:
from fastai.vision.all import *
learn = load_learner('export.pkl', cpu=True)
learn.load('model_3C_34_CELW_V_1.1')
Traceback:
(venv) me#ubuntu-pcs:~/PycharmProjects/project$ python3 model/Run_model.py
Traceback (most recent call last):
File "/home/me/PycharmProjects/project/model/Run_model.py", line 4, in <module>
learn.load('model_3C_34_CELW_V_1.1')
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/learner.py", line 387, in load
load_model(file, self.model, self.opt, device=device, **kwargs)
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/fastai/learner.py", line 54, in load_model
get_model(model).load_state_dict(model_state, strict=strict)
File "/home/me/miniconda3/envs/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DynamicUnet:
Missing key(s) in state_dict: "layers.0.4.0.conv3.weight", "layers.0.4.0.bn3.weight", "layers.0.4.0.bn3.bias", "layers.0.4.0.bn3.running_mean",
size mismatch for layers.12.0.weight: copying a param with shape torch.Size([3, 99, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 291, 1, 1]).
You get this error when you change the number of classes, for example when retraining your U-Net model with a different number of classes.
I suggest you create the new model instance with num_of_classes=99.
Per the size mismatch above, the checkpoint whose weights you are trying to load was built with 99, whilst your current model instance expects 291.
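If it is unclear which side has which shape, a quick diagnostic sketch (the .pth path is hypothetical, based on the learn.load call above; fastai's learn.save usually wraps the state dict under a 'model' key):
import torch
ckpt = torch.load('models/model_3C_34_CELW_V_1.1.pth', map_location='cpu')
ckpt_sd = ckpt.get('model', ckpt)      # unwrap if saved together with the optimizer state
model_sd = learn.model.state_dict()    # the freshly loaded learner from export.pkl
for k in model_sd:
    if k in ckpt_sd and ckpt_sd[k].shape != model_sd[k].shape:
        print(k, tuple(ckpt_sd[k].shape), '!=', tuple(model_sd[k].shape))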
Yes; I needed an updated .pkl file that worked with the weights .pth file. Thanks.

Copy paste a Naive Bayes example code on vscode but got errors

I copied the code from DataCamp to try Naive Bayes classification on my own on Python 3.8, but when I run the code it gives this error:
Traceback (most recent call last):
File "c:\Users\USER\Desktop\DATA MINING\NaiveTest.py", line 34, in <module>
model.fit(features,label)
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\naive_bayes.py", line 207, in fit
X, y = self._validate_data(X, y)
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\base.py", line 433, in _validate_data
X, y = check_X_y(X, y, **check_params)
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\utils\validation.py", line 814, in check_X_y
X = check_array(X, accept_sparse=accept_sparse,
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "C:\Users\USER\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\utils\validation.py", line 630, in check_array
raise ValueError(
ValueError: Expected 2D array, got scalar array instead:
array=<zip object at 0x0F2C4C28>.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
I am posting the whole code because I'm not sure which part causes this, so I'm requesting help to solve it.
# Assigning features and label variables
weather=['Sunny','Sunny','Overcast','Rainy','Rainy','Rainy','Overcast','Sunny','Sunny','Rainy','Sunny','Overcast','Overcast','Rainy']
temp=['Hot','Hot','Hot','Mild','Cool','Cool','Cool','Mild','Cool','Mild','Mild','Mild','Hot','Mild']
play=['No','No','Yes','Yes','Yes','No','Yes','No','Yes','Yes','Yes','Yes','Yes','No']
# Import LabelEncoder
from sklearn import preprocessing
#creating labelEncoder
le = preprocessing.LabelEncoder()
# Converting string labels into numbers.
weather_encoded=le.fit_transform(weather)
print (weather_encoded)
temp_encoded=le.fit_transform(temp)
label=le.fit_transform(play)
print ("Temp:",temp_encoded)
print ("Play:",label)
# Combining weather and temp into a single list of tuples
features=zip(weather_encoded,temp_encoded)
print(list(zip(weather_encoded,temp_encoded)))
print([i for i in zip(weather_encoded,temp_encoded)])
from sklearn.naive_bayes import GaussianNB
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(features,label)
#Predict Output
predicted= model.predict([[0,2]]) # 0:Overcast, 2:Mild
print ("Predicted Value:", predicted)
Supposedly the result should be something like Predicted Value: [1],
but it gave this error instead.
What happens is that features must be a list (or array-like) when passed to model.fit; currently it is a zip object:
# Combining weather and temp into a single list of tuples
features=zip(weather_encoded,temp_encoded)
You need to convert features to a list, e.g.:
# Combining weather and temp into a single list of tuples
features=list(zip(weather_encoded,temp_encoded))
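An equivalent sketch using an explicit NumPy array instead of a list of tuples (variable names taken from the question's code):
import numpy as np
from sklearn.naive_bayes import GaussianNB
# Stack the two encoded columns into a (14, 2) feature matrix
features = np.column_stack((weather_encoded, temp_encoded))
model = GaussianNB()
model.fit(features, label)
print("Predicted Value:", model.predict([[0, 2]]))  # 0: Overcast, 2: Mild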

TensorFlow 2.1 using TPUEstimator: RuntimeError: All tensors outfed from TPU should preserve batch size dimension, but got scalar Tensor

I just converted an existing project from TF 1.14 to TF 2.1 which uses the TPUEstimator API. After making the conversion, testing locally (i.e. use_tpu=False) runs successfully. However, I am getting errors when running on Google Cloud TPU (i.e. use_tpu=True).
Note: This is in the context of the AdaNet AutoML framework (v0.8.0), although I suspect this may be a general TPUEstimator-related error, as the errors appear to originate in the tpu_estimator.py and error_handling.py scripts seen in the Traceback below:
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3032, in train
rendezvous.record_error('training_loop', sys.exc_info())
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/error_handling.py", line 81, in record_error
if value and value.op and value.op.type == _CHECK_NUMERIC_OP_NAME:
AttributeError: 'RuntimeError' object has no attribute 'op'
During handling of the above exception, another exception occurred:
File "workspace/trainer/train.py", line 331, in <module>
main(args=parsed_args)
File "workspace/trainer/train.py", line 177, in main
run_config=run_config)
File "workspace/trainer/train.py", line 68, in run_experiment
estimator.train(input_fn=train_input_fn, max_steps=total_train_steps)
File "/usr/local/lib/python3.6/site-packages/adanet/core/estimator.py", line 853, in train
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3035, in train
rendezvous.raise_errors()
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/error_handling.py", line 143, in raise_errors
six.reraise(typ, value, traceback)
File "/usr/local/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3030, in train
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 374, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1164, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1194, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2857, in _call_model_fn
config)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1152, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3186, in _model_fn
host_ops = host_call.create_tpu_hostcall()
File "/usr/local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2226, in create_tpu_hostcall
'dimension, but got scalar {}'.format(dequeue_ops[i][0]))
RuntimeError: All tensors outfed from TPU should preserve batch size dimension, but got scalar Tensor("OutfeedDequeueTuple:1", shape=(), dtype=int64, device=/job:tpu_worker/task:0/device:CPU:0)'
The previous version of the project using TF 1.14 runs both locally and on TPU using TPUEstimator without issues. Is there something obvious I am missing in the conversion to TF 2.1 when using the TPUEstimator API?
Have you applied the following to your input pipeline?
dataset = ...
dataset = dataset.batch(batch_size, drop_remainder=True)
(This replaces the TF 1.x tf.contrib.data.batch_and_drop_remainder(batch_size), since tf.contrib was removed in TF 2.x.) It drops the last few samples from a file to ensure that every batch has a static shape of batch_size, which is required when training on TPUs.
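For reference, a minimal input_fn sketch along these lines (the file name and parse_fn are placeholders, not from the question; TPUEstimator passes the per-core batch size via params):
import tensorflow as tf
def train_input_fn(params):
    batch_size = params['batch_size']
    dataset = tf.data.TFRecordDataset('train.tfrecord')  # placeholder path
    dataset = dataset.map(parse_fn)                       # placeholder parser
    dataset = dataset.shuffle(1024).repeat()
    # A static batch dimension is required on TPUs, so drop the ragged last batch
    dataset = dataset.batch(batch_size, drop_remainder=True)
    return dataset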

Model.fit Value Error (Text Classification Model)

I need your help please...
I am trying to get the following text classification model training step working:
# Train and validate model.
history = model.fit(x_train,
                    train_labels,
                    epochs=epochs,
                    callbacks=callbacks,
                    validation_data=(x_val, val_labels),
                    verbose=2,
                    batch_size=batch_size)  # Logs once per epoch.
The source file can be found here: Google - GitHub Text Classification Code
However I am getting the following error on execution:
Traceback (most recent call last):
File "train_ngram_model.py", line 113, in <module>
train_ngram_model(data)
File "train_ngram_model.py", line 93, in train_ngram_model
batch_size=batch_size) # Logs once per epoch.
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
use_multiprocessing=use_multiprocessing)
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 646, in _process_inputs
x, y, sample_weight=sample_weights)
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 2383, in _standardize_user_data
batch_size=batch_size)
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 2428, in _standardize_tensors
converted_x.append(_convert_scipy_sparse_tensor(a, b))
File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 3198, in _convert_scipy_sparse_tensor
raise ValueError('A SciPy sparse matrix was passed to a model '
ValueError: A SciPy sparse matrix was passed to a model that expects dense inputs. Please densify your inputs first, such as by calling `x.toarray()`.
I have spent several hours now trying to find a solution, and I haven't gotten anywhere.
Thank you in advance for your reply.
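A possible workaround that follows the error message's own suggestion, assuming x_train and x_val are the SciPy sparse matrices produced by the guide's n-gram vectorizer (note this materializes the full dense matrices in memory):
x_train = x_train.toarray()
x_val = x_val.toarray()
history = model.fit(x_train,
                    train_labels,
                    epochs=epochs,
                    callbacks=callbacks,
                    validation_data=(x_val, val_labels),
                    verbose=2,
                    batch_size=batch_size)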

ValueError: negative dimensions are not allowed when loading .pkl file

Although there are many question threads for the error ValueError: negative dimensions are not allowed,
I couldn't find an answer to my problem.
After training a machine learning model using SGDClassifier:
clf=linear_model.SGDClassifier(loss='log',random_state=20000,verbose=1,class_weight='balanced')
model=clf.fit(X,Y)
The dimension of X is (1651880, 246177).
The code below works, i.e. when saving the model object and when using the model for prediction:
joblib.dump(model, 'trainedmodel.pkl',compress=3)
prediction_result=model.predict(x_test)
but I get an error when loading the saved model:
model = joblib.load('trainedmodel.pkl')
Below is the error message. Please help me resolve it.
File "C:\Users\Taxonomy\AppData\Roaming\Python\Python36\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 598, in load
obj = _unpickle(fobj, filename, mmap_mode)
File "C:\Users\Taxonomy\AppData\Roaming\Python\Python36\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 526, in _unpickle
obj = unpickler.load()
File "C:\Users\Taxonomy\Anaconda3\lib\pickle.py", line 1050, in load
dispatch[key[0]](self)
File "C:\Users\Taxonomy\AppData\Roaming\Python\Python36\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 352, in load_build
self.stack.append(array_wrapper.read(self))
File "C:\Users\Taxonomy\AppData\Roaming\Python\Python36\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 195, in read
array = self.read_array(unpickler)
File "C:\Users\Taxonomy\AppData\Roaming\Python\Python36\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 141, in read_array
array = unpickler.np.empty(count, dtype=self.dtype)
ValueError: negative dimensions are not allowed
Try to dump the model with pickle protocol 4.
From Python's pickle docs:
Protocol version 4 was added in Python 3.4. It adds support for very
large objects, pickling more kinds of objects, and some data format
optimizations. Refer to PEP 3154 for information about improvements
brought by protocol 4.
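A sketch of that suggestion applied to the code above (joblib.dump forwards protocol to pickle; per the docs quoted above, protocol 4 adds support for very large objects):
import joblib  # or: from sklearn.externals import joblib, matching the traceback above
joblib.dump(model, 'trainedmodel.pkl', compress=3, protocol=4)
model = joblib.load('trainedmodel.pkl')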
