Visualizing training images from training data batch - python-3.x

Visualizing training images from the training data batch
train_images, train_labels = next(train_data.as_numpy_iterator())
show_25_images(train_images, train_labels)
TypeError Traceback (most recent call last)
<ipython-input-…> in <module>
1 #Visualize training images from the training data batch
----> 2 train_images, train_labels = next(train_data.as_numpy_iterator())
3 show_25_images(train_images, train_labels)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/data/ops/dataset_ops.py in as_numpy_iterator(self)
614 component_spec,
615 (tensor_spec.TensorSpec, ragged_tensor.RaggedTensorSpec)):
--> 616 raise TypeError(
617 f"tf.data.Dataset.as_numpy_iterator() is not supported for "
618 f"datasets that produce values of type {component_spec.value_type}")
TypeError: tf.data.Dataset.as_numpy_iterator() is not supported for datasets that produce values of type <class 'tensorflow.python.data.util.structure.NoneTensor'>
[image: set of images]
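The traceback points at the dataset's element type rather than at the plotting code. A minimal diagnostic sketch, assuming train_data was built from (image, label) pairs and a None slipped in as one component (the preprocess_image_label helper and the filenames/labels variables below are hypothetical):
import tensorflow as tf
# Inspect the element spec; a NoneTensorSpec component confirms that one side
# of the (image, label) pair was built from None values.
print(train_data.element_spec)
# Rebuilding the dataset with real labels restores numpy iteration:
train_data = tf.data.Dataset.from_tensor_slices((filenames, labels)) \
                            .map(preprocess_image_label) \
                            .batch(32)
train_images, train_labels = next(train_data.as_numpy_iterator())
show_25_images(train_images, train_labels)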

Related

How can I solve this SVM predict model problem?

I'm having a problem with my SVM predict model.
from sklearn.svm import SVC
svm_model = SVC(kernel='rbf', C=8, gamma=0.1)
svm_model.fit(X_train_std, y_train)
y_pred = svm_model.predict(X_test_std)
/usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py:993: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-53-398f1caaa8e8> in <module>
3 svm_model = SVC(kernel='rbf', C=8, gamma=0.1)
4
----> 5 svm_model.fit(X_train_std, y_train)
6
7 y_pred = svm_model.predict(X_test_std)
2 frames
/usr/local/lib/python3.8/dist-packages/sklearn/utils/multiclass.py in check_classification_targets(y)
195 "multilabel-sequences",
196 ]:
--> 197 raise ValueError("Unknown label type: %r" % y_type)
198
199
ValueError: Unknown label type: 'continuous'
I thought the type of y was the problem, so I tried:
train = pd.get_dummies(train, columns=['LSTAT'], drop_first=True)
but the problem did not disappear. Can somebody help me?
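For what it's worth, "Unknown label type: 'continuous'" means y_train holds continuous values, which SVC (a classifier) cannot fit. A minimal sketch of the regression route, reusing the names from the question:
from sklearn.svm import SVR

# SVR is the regression counterpart of SVC and accepts continuous targets.
svm_model = SVR(kernel='rbf', C=8, gamma=0.1)
# ravel() also silences the column-vector DataConversionWarning.
svm_model.fit(X_train_std, y_train.ravel())
y_pred = svm_model.predict(X_test_std)
If the task really is classification, the alternative is to bin or encode y into discrete classes before calling SVC.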

Tensorflow HammingLoss gives ValueError with keras.utils.Sequence

I am working on a multi-label image classification problem with 13 labels. I want to use Hamming Loss to evaluate the performance of the model. So I specified tfa.metrics.HammingLoss(mode = 'multilabel') in the metrics parameter during model compilation. This worked when I provided both X_train and y_train to model.fit(), but it threw a ValueError when I used a Sequence object (described below) for training.
Data Generator description
I used a keras.utils.Sequence input object similar to what is present here. The generator returns 2 numpy arrays for each batch - the first array consists of the input images of shape (128, 128, 3) and the second array consists of labels each of shape (13,).
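For reference, a generator of that shape might look roughly like this (a hypothetical sketch, not the exact code from the question):
import numpy as np
from tensorflow import keras

class MultiLabelSequence(keras.utils.Sequence):
    """Yields batches of (128, 128, 3) images and 13-dim multi-hot labels."""
    def __init__(self, images, labels, batch_size):
        self.images, self.labels, self.batch_size = images, labels, batch_size

    def __len__(self):
        return len(self.images) // self.batch_size

    def __getitem__(self, idx):
        batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return np.array(self.images[batch]), np.array(self.labels[batch])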
This is what my code looks like:
model.compile(
    loss='binary_crossentropy',
    optimizer='rmsprop',
    metrics=[tfa.metrics.HammingLoss(mode='multilabel')]
)
model.fit(
    train_datagen,
    epochs=5,
    batch_size=BATCH_SIZE,
    steps_per_epoch=TOTAL // BATCH_SIZE
)
And this is the error that I obtained:
Epoch 1/5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-140-978987a2bbaa> in <module>
3 epochs=5,
4 batch_size=BATCH_SIZE,
----> 5 steps_per_epoch = 2000 // BATCH_SIZE
6 # validation_data=validation_generator,
7 )
4 frames
/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/hamming.py in else_body_2()
64 try:
65 do_return = True
---> 66 retval_ = (ag__.ld(nonzero) / ag__.converted_call(ag__.ld(y_true).get_shape, (), None, fscope)[(- 1)])
67 except:
68 do_return = False
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1051, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/utils.py", line 66, in update_state *
matches = self._fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/hamming.py", line 133, in hamming_loss_fn *
return nonzero / y_true.get_shape()[-1]
ValueError: None values not supported.
How do I correct this? Is there any issue with the format of the labels?
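One plausible fix, given that hamming_loss_fn divides by y_true.get_shape()[-1] and a Sequence gives Keras no static label shape: wrap the generator in a tf.data.Dataset with an explicit output_signature. A sketch under that assumption, reusing names from the question:
import tensorflow as tf

train_ds = tf.data.Dataset.from_generator(
    lambda: (batch for batch in train_datagen),
    output_signature=(
        tf.TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32),
        # the static 13 here is what get_shape()[-1] needs instead of None
        tf.TensorSpec(shape=(None, 13), dtype=tf.float32),
    ),
)
model.fit(train_ds, epochs=5, steps_per_epoch=TOTAL // BATCH_SIZE)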

ValueError: continuous is not supported for RandomForestRegressor

After preprocessing the data with a Pipeline, I was able to get clean data in the output. Next, I need to pass the cleaned data to the model for training. The preprocessing and model-training steps can be encapsulated together in a Pipeline as follows:
from sklearn.ensemble import RandomForestRegressor

completed_pl = Pipeline(
    steps=[
        ("preprocessor", preprocessor),
        ("classifier", RandomForestRegressor())
    ]
)

# training
completed_pl.fit(X_train, y_train)

# accuracy
y_train_pred = completed_pl.predict(X_train)
print(f"Accuracy on train: {accuracy_score(list(y_train), list(y_train_pred)):.2f}")

y_pred = completed_pl.predict(X_test)
print(f"Accuracy on test: {accuracy_score(list(y_test), list(y_pred)):.2f}")
I used the load_boston dataset from sklearn.
And the error:
ValueError Traceback (most recent call last)
<ipython-input-86-d0b1928cf1a7> in <module>
12 # accuracy
13 y_train_pred = completed_pl.predict(X_train)
---> 14 print(f"Accuracy on train: {accuracy_score(list(y_train), list(y_train_pred)):.2f}")
15
16 y_pred = completed_pl.predict(X_test)
1 frames
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py in _check_targets(y_true, y_pred)
102 # No metrics support "multiclass-multioutput" format
103 if y_type not in ["binary", "multiclass", "multilabel-indicator"]:
--> 104 raise ValueError("{0} is not supported".format(y_type))
105
106 if y_type in ["binary", "multiclass"]:
ValueError: continuous is not supported
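The likely culprit is the metric rather than the model: accuracy_score only accepts classification targets, while load_boston targets are continuous. A minimal sketch using regression metrics instead, reusing the names from the question:
from sklearn.metrics import r2_score, mean_squared_error

# score the regressor with regression metrics, not classification accuracy
y_train_pred = completed_pl.predict(X_train)
print(f"R^2 on train: {r2_score(y_train, y_train_pred):.2f}")

y_pred = completed_pl.predict(X_test)
print(f"R^2 on test: {r2_score(y_test, y_pred):.2f}")
print(f"MSE on test: {mean_squared_error(y_test, y_pred):.2f}")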

save and load fine-tuned bert classification model using tensorflow 2.0

I am trying to save a fine-tuned binary classification model based on the pretrained BERT module 'uncased_L-12_H-768_A-12'. I'm using TF2.
The code sets up the model structure:
bert_classifier, bert_encoder = bert.bert_models.classifier_model(bert_config, num_labels=2)
Then:
# import the pre-trained model structure from the checkpoint file
checkpoint = tf.train.Checkpoint(model=bert_encoder)
checkpoint.restore(
    os.path.join(gs_folder_bert, 'bert_model.ckpt')).assert_consumed()
Then I compiled and fit the model:
bert_classifier.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=metrics)
bert_classifier.fit(
    Text_train, Label_train,
    validation_data=(Text_val, Label_val),
    batch_size=32,
    epochs=1)
Finally, I saved the model to the model folder, which then automatically generates a file named saved_model.pb inside:
bert_classifier.save('/content/drive/My Drive/model')
I also tried this:
tf.saved_model.save(bert_classifier, export_dir='/content/drive/My Drive/omg')
Now I try to load the model and apply it to the test data:
from tensorflow import keras
ttt = keras.models.load_model('/content/drive/My Drive/model')
I got:
KeyError Traceback (most recent call last)
<ipython-input-77-93f80aa585da> in <module>()
----> 1 tf.keras.models.load_model(filepath='/content/drive/My Drive/omg', custom_objects={'Transformera':bert_classifier})
9 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/load.py in _revive_graph_network(self, metadata, node_id)
392 else:
393 model = models_lib.Functional(
--> 394 inputs=[], outputs=[], name=config['name'])
395
396 # Record this model and its layers. This will later be used to reconstruct
KeyError: 'name'
This error message doesn't help me figure out what to do... please kindly advise.
I also tried to save the model in h5 format, but when I load it
ttt = keras.models.load_model('/content/drive/My Drive/model.h5')
I got this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-36-12f76139ec24> in <module>()
----> 1 ttt = keras.models.load_model('/content/drive/My Drive/model.h5')
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
294 cls = get_registered_object(class_name, custom_objects, module_objects)
295 if cls is None:
--> 296 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
297
298 cls_config = config['config']
ValueError: Unknown layer: BertClassifier
Seems as if you have the answer right in the question: '/content/drive/My Drive/model' will fail due to the whitespace character.
You could try escaping the whitespace: '/content/drive/My\ Drive/model'.
Another option, since I had exactly the same problem with saving and loading: what helped was to save just the model's weights rather than the whole model.
Take a look here: https://keras.io/api/models/model_saving_apis/, especially at the save_weights() and load_weights() methods.
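A minimal sketch of that weights-only route, assuming the architecture is rebuilt the same way before loading (the paths are illustrative):
# save only the weights after fine-tuning
bert_classifier.save_weights('/content/drive/My Drive/bert_weights/ckpt')

# later, in a fresh session: rebuild the identical architecture, then load
bert_classifier, bert_encoder = bert.bert_models.classifier_model(
    bert_config, num_labels=2)
bert_classifier.load_weights('/content/drive/My Drive/bert_weights/ckpt')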

I am trying to classify text using NLTK Naive Bayes Classifier. I am getting ValueError: too many values to unpack (expected 2)

I cleaned the text and generated bigrams. The bigrams display correctly, but when I try to train and test a classifier on the text with NLTK Naive Bayes, I get the error shown in the title.
import nltk
from nltk.util import ngrams
#generating bigrams for all narratives
bigrams_all=ngrams(df,2)
#printing bigrams of one narrative
ninety_seven=df.loc[97].loc['FSR Narrative']
nine_bi=ngrams(ninety_seven,2)
print(nine_bi)
print([" ".join(t) for t in nine_bi])
# set that we'll train our classifier with
training_set = df[:1280]
# set that we'll test our classifier with
training_set = df[1280:]
classifier = nltk.NaiveBayesClassifier.train(training_set)
The error trace:
ValueError Traceback (most recent call last)
<ipython-input-13-745201c14989> in <module>()
113 training_set = df[1280:]
114
--> 115 classifier = nltk.NaiveBayesClassifier.train(training_set)
C:\Anaconda\envs\py35\lib\site-packages\nltk\classify\naivebayes.py in train(cls, labeled_featuresets, estimator)
195 # Count up how many times each feature value occurred, given
196 # the label and featurename.
--> 197 for featureset, label in labeled_featuresets:
198 label_freqdist[label] += 1
199 for fname, fval in featureset.items():
ValueError: too many values to unpack (expected 2)
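Two things look off here. First, both slices are assigned to training_set, so the test split is lost. Second, and the cause of the ValueError: NaiveBayesClassifier.train() expects labeled featuresets, i.e. a list of (featureset_dict, label) pairs, not a raw DataFrame. A sketch of the expected format, assuming df has an 'FSR Narrative' text column and a label column (the 'Label' name here is hypothetical):
def bigram_features(text):
    # mark each bigram occurring in the narrative as a present feature
    return {' '.join(bg): True for bg in ngrams(text.split(), 2)}

labeled_featuresets = [
    (bigram_features(row['FSR Narrative']), row['Label'])
    for _, row in df.iterrows()
]

training_set = labeled_featuresets[:1280]
testing_set = labeled_featuresets[1280:]

classifier = nltk.NaiveBayesClassifier.train(training_set)
print(nltk.classify.accuracy(classifier, testing_set))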
