I am using Google Colab and PyTorch Lightning to train a classification model. I imported 'LightningModule', but the error still persists. This is the code:
trainer.fit(model, train_loader, val_loader)
It shows this error.
I trained a BERT model for NER. It worked fine (though it obviously took time to train). I saved the model with pickle as:
with open('model_pkl', 'wb') as file:
    pickle.dump(model, file)
When I try to load this saved model, I get the following error: AttributeError: Can't get attribute 'BertModel' on <module '__main__' from '<input>'>. This method works for lists, dictionaries, etc., but produces an error on the PyTorch model. I am using Python 3.8.10.
Try using torch.save and torch.load.
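A minimal sketch of that approach (the file name is a placeholder):

import torch

# save the trained model object to disk
torch.save(model, 'bert_ner_model.pt')

# load it back; like pickle, this still requires the BertModel class to be importable
model = torch.load('bert_ner_model.pt')
model.eval()

Since torch.save on a whole model pickles the class as well, saving just the weights with torch.save(model.state_dict(), path) and loading them into a freshly constructed BertModel is generally more robust across environments.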
I am working through the Barlow Twins tutorial with PyTorch Lightning and I am having trouble loading the encoder portion of the model using the checkpoint after training.
During model training, checkpoints are saved with ModelCheckpoint. In the tutorial, the author offers two options for getting the encoder portion of the model with trained weights: 1) calling model.encoder (the model has to have been trained in the active kernel for this to work), or 2) loading the trained model with:
ckpt_model = torch.load('[checkpoint name].ckpt')
And then calling
encoder = ckpt_model.encoder
I would like to be able to load the model/encoder from a saved checkpoint, but when I try to do this, I get the error: AttributeError: 'dict' object has no attribute 'encoder'
This seems to make sense to me, because the model is being loaded with the plain Torch loader rather than the Lightning loader.
When I print the contents of ckpt_model using ckpt_model.keys() I get:
dict_keys(['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', 'loops', 'callbacks', 'optimizer_states', 'lr_schedulers']). The state_dict contains weights and biases for the encoder layer.
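I suspect I could pull just the encoder weights out of that state_dict manually, something like this (a sketch; Encoder() is a stand-in for however the tutorial constructs the encoder, and the 'encoder.' key prefix is an assumption about how the submodule was named):

# keep only the encoder's entries, stripping the submodule prefix
encoder_state = {k.replace('encoder.', '', 1): v
                 for k, v in ckpt_model['state_dict'].items()
                 if k.startswith('encoder.')}
encoder = Encoder()  # stand-in for the tutorial's encoder constructor
encoder.load_state_dict(encoder_state)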
Then when I try loading the model with the PyTorch Lightning loader:
BarlowTwins.load_from_checkpoint('[checkpoint name].ckpt')
I get the error: TypeError: __init__() missing 4 required positional arguments: 'encoder', 'encoder_out_dim', 'num_training_samples', and 'batch_size'. I might be able to save those in the checkpoint with save_hyperparameters.
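If I understand correctly, that would look something like this inside the LightningModule (a sketch using the argument names from the error message):

import pytorch_lightning as pl

class BarlowTwins(pl.LightningModule):
    def __init__(self, encoder, encoder_out_dim, num_training_samples, batch_size):
        super().__init__()
        # store the constructor arguments in the checkpoint so that
        # load_from_checkpoint can rebuild the model; the encoder module
        # itself is excluded, since nn.Modules don't belong in hparams
        self.save_hyperparameters(ignore=['encoder'])
        self.encoder = encoder

with the excluded argument supplied at load time, e.g. BarlowTwins.load_from_checkpoint('[checkpoint name].ckpt', encoder=encoder).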
My question: how can I load the encoder portion of the model in the simplest way possible? I only need the encoder portion of the model loaded.
Thanks so much for your help in advance!
I have the pre-trained model centernet_hg104_512x512_kpts_coco17_tpu-32 and have created .record files from a dataset annotated with keypoints. When I run the command:
python model_main_tf2.py --alsologtostderr --pipeline_config_path=pipelines/keypoints/centernet_hg104_512x512_kpts_coco17_tpu-32.config --model_dir=workspace/training_dir/centernet_hg104_512x512_kpts_coco17_tpu-32/
the following error appears:
TypeError: in user code:
File "venv/lib/python3.8/site-packages/object_detection/inputs.py", line 887, in transform_and_pad_input_data_fn *
tensor_dict = pad_input_data_to_static_shapes(
File "venv/lib/python3.8/site-packages/object_detection/inputs.py", line 319, in transform_input_data *
out_tensor_dict[flds_gt_kpt_weights] = (
File "venv/lib/python3.8/site-packages/object_detection/core/keypoint_ops.py", line 349, in keypoint_weights_from_visibilities *
per_keypoint_weight_mult = tf.ones((1, num_keypoints,), dtype=tf.float32)
TypeError: Expected int32, but got None of type 'NoneType'
In pipeline.config I have set the paths to the label map files and .record files.
I've trained box-detection models without any problems, but with keypoint annotations I haven't found the right solution.
I have created a detailed GitHub repo, Custom Keypoint Detection, for dataset preparation, model training, and inference with the CenterNet-hourglass104 keypoint detection model based on the TensorFlow Object Detection API, with examples.
Dataset preparation for a keypoint detection model is completely different from object detection. Unfortunately, there is no official documentation from TensorFlow, so I have curated the steps for dataset preparation, pipeline configuration, and model training. This repo could help you train your keypoint detection model on a custom dataset.
Any issues related to the project can be raised on GitHub itself, and doubts can be cleared here.
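As an aside, the 'Expected int32, but got None' error above usually means the keypoint dimension never got a static value; the input readers in pipeline.config accept a num_keypoints field for this (a sketch with placeholder paths; 17 is just an example count):

train_input_reader: {
  label_map_path: "path/to/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "path/to/train.record"
  }
  num_keypoints: 17  # must match the number of keypoints in your annotations
}

The same field should be set in eval_input_reader as well.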
I was following the tutorial for classifying structured data from here. I trained the model as described on my data, which comprised only numeric data.
I trained and optimised the model on Google Colab and downloaded it locally to test it on some new data.
However, when I load the model using this snippet:
from keras.models import load_model
model = load_model("my_model.h5")
I get the following error:
Unknown layer: DenseFeatures
...along with the trace.
I have tried setting custom_objects while loading the model, like this:
model = load_model("my_model.h5", custom_objects={'DenseFeatures': tf.keras.layers.DenseFeatures})
But I still get the following error:
__init__() takes at least 2 arguments (3 given)
What could I be doing wrong? I tried going through the documentation but couldn't find anything helpful on GitHub.
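One thing worth checking: the tutorial builds the model with tf.keras, so loading it with the standalone keras package may hit exactly this kind of deserialization mismatch. A sketch of loading with the same implementation instead (an assumption, not a confirmed fix):

from tensorflow import keras

# load with tf.keras, matching the implementation used to build and save the model
model = keras.models.load_model(
    "my_model.h5",
    custom_objects={"DenseFeatures": keras.layers.DenseFeatures})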
I am trying to train a segmentation model. I have tried fit_generator and also fitting on a single batch as:
autoencoder.fit(x,y)
x and y have the same shapes as the model input and output, but it gives the following error:
InvalidArgumentError: Node 'IsVariableInitialized_2196': Unknown input node 'batch_normalization_1/moving_mean/biased'
This error occurs in Keras on the TensorFlow backend while training a segmentation model. I have tried a naive model, and the issue still exists.
The model input is a (128, 128, 3) image and the output is its flattened segmentation, i.e. (16384, 66), where there are 66 classes.
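In graph-mode Keras, leftover state from earlier model builds in the same session can leave orphaned nodes like the moving_mean/biased one above, so resetting the backend session before rebuilding the model is worth a try (a sketch; build_model() is a stand-in for the actual model construction and compilation):

from keras import backend as K

# reset the default graph so variables from earlier builds do not linger
K.clear_session()

# rebuild the model from scratch in the fresh graph, then train again
autoencoder = build_model()  # stand-in: should build and compile the model
autoencoder.fit(x, y)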