I trained a BERT model for NER. It worked fine (though it obviously took time to train). I saved the model with pickle as follows:
with open('model_pkl', 'wb') as file:
    pickle.dump(model, file)
When I try to load this saved model, I get the following error: AttributeError: Can't get attribute 'BertModel' on <module '__main__' from '<input>'>. This method works for lists, dictionaries, etc., but produces this error on the PyTorch model. I am using Python 3.8.10.
Try using torch.save and torch.load.
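For example, a minimal sketch (BertModel(...) below is a stand-in for however the model was originally constructed; saving the state_dict rather than the whole object avoids pickling the class itself, which is what fails here):

import torch

# save only the learned parameters
torch.save(model.state_dict(), 'model_state.pt')

# later, in another script: rebuild the architecture the same way,
# then load the weights into it
model = BertModel(...)  # hypothetical: construct exactly as during training
model.load_state_dict(torch.load('model_state.pt'))
model.eval()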
I trained a model on Google Colab and dumped it, then finished the project in VS Code on my own system. I got an error saying the versions don't match, since I had the latest version of the libraries while Colab had an older version. I had to retrain the model on my system, which took a lot of time, as my machine has a basic configuration. My only reason for using Colab was to put the training load on Colab rather than on my system.
I didn't know there would be a version conflict, as I assumed Colab would have the latest versions of the libraries.
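In hindsight, the safer pattern (a sketch, with TensorFlow standing in for whichever library was mismatched) is to record the exact versions used in Colab and pin the same ones locally before loading the dump:

import tensorflow as tf

# in the Colab notebook, before dumping the model:
print(tf.__version__)   # note this down, e.g. 2.9.2

# on the local machine, install the matching release first:
#   pip install tensorflow==2.9.2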
I have TensorFlow version 2.9.2 in Colab (Google) and TensorFlow version 2.4.1 on my Raspberry Pi 4, so different versions. In Colab I built a pre-trained VGG19 model with input_shape=(220, 220, 3) and classified two types of images.
So once I have trained the model in the Colab environment, I export the model and its weights from Colab.
# serialize model to JSON
model_json = loaded_model2.to_json()
with open('/content/drive/MyDrive/dataset/extract/model_5.json', "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
loaded_model2.save_weights('/content/drive/MyDrive/model_5.h5')
print("Saved model to disk")
As I indicated, I then intend to use that trained model on my Raspberry Pi 4. So I create a model on the Raspberry Pi just as I did in Colab, but I don't call 'fit'. What I do next is load the .h5 file with the weights that was generated in Google Colab.
And in principle this has worked for me:
model_new = tf.keras.Sequential()
model_new.add(tf.keras.applications.VGG19(include_top=False, weights='imagenet',
                                          pooling='avg', input_shape=(220, 220, 3)))
model_new.add(tf.keras.layers.Dense(2, activation="softmax"))
opt = tf.keras.optimizers.SGD(0.004)  # SGD with learning rate 0.004
model_new.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model_new.load_weights('/home/pi/projects/models/model_5.h5')
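Incidentally, since the architecture was also serialized to model_5.json, another option (a sketch, assuming the JSON file was copied to the Pi alongside the weights) would be to rebuild the model from that file instead of re-declaring it by hand:

import tensorflow as tf

# rebuild the architecture from the JSON exported in Colab,
# then attach the trained weights
with open('/home/pi/projects/models/model_5.json') as json_file:
    model_new = tf.keras.models.model_from_json(json_file.read())
model_new.load_weights('/home/pi/projects/models/model_5.h5')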
I am working through the Barlow Twins tutorial with PyTorch Lightning and I am having trouble loading the encoder portion of the model using the checkpoint after training.
During model training, checkpoints are saved with ModelCheckpoint. In the tutorial, the author offers two options for getting the encoder portion of the model with trained weights: 1) calling model.encoder (the model has to have been trained in the active kernel for this to work) or 2) loading the trained model with:
ckpt_model = torch.load('[checkpoint name].ckpt')
And then calling
encoder = ckpt_model.encoder
I would like to be able to load the model/encoder from a saved checkpoint but when I try to do this, I get the error: AttributeError: 'dict' object has no attribute 'encoder'
This seems to make sense to me because the model is being loaded with a simple Torch loader rather than the lightning loader.
When I print the contents of ckpt_model using ckpt_model.keys() I get:
dict_keys(['epoch', 'global_step', 'pytorch-lightning_version', 'state_dict', 'loops', 'callbacks', 'optimizer_states', 'lr_schedulers']). The state_dict contains weights and biases for the encoder layer.
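In principle I could pull just the encoder weights out of that dict myself, something like this sketch (assuming the keys are prefixed with 'encoder.' and that I first re-instantiate the same encoder architecture):

import torch

ckpt = torch.load('[checkpoint name].ckpt')  # a plain dict, not a model
encoder_state = {k.replace('encoder.', '', 1): v
                 for k, v in ckpt['state_dict'].items()
                 if k.startswith('encoder.')}
encoder.load_state_dict(encoder_state)  # encoder: fresh copy of the architecture

But that feels clunky.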
Then when I try loading the model with the PyTorch Lightning loader:
BarlowTwins.load_from_checkpoint('[checkpoint name].ckpt')
I get the error: TypeError: __init__() missing 4 required positional arguments: 'encoder', 'encoder_out_dim', 'num_training_samples', and 'batch_size'. I might be able to save those in the checkpoint with save_hyperparameters.
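For example, something like this in the module's __init__ (a sketch, using the constructor signature from the error; I'd exclude the encoder module itself, since it isn't a plain hyperparameter):

import pytorch_lightning as pl

class BarlowTwins(pl.LightningModule):
    def __init__(self, encoder, encoder_out_dim, num_training_samples, batch_size):
        super().__init__()
        # record the scalar constructor args in the checkpoint so that
        # load_from_checkpoint can rebuild the module; the encoder module
        # is excluded and passed back in at load time
        self.save_hyperparameters(ignore=['encoder'])
        self.encoder = encoder

Loading would then presumably look like BarlowTwins.load_from_checkpoint('[checkpoint name].ckpt', encoder=encoder).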
My question: how can I load the encoder portion of the model in the simplest way possible? I only need the encoder portion of the model loaded.
Thanks so much for your help in advance!
After saving a Keras model, where is it saved? Is it saved locally? The code below saves a model.
from keras.models import load_model
model.save('my_model.h5')  # creates an HDF5 file 'my_model.h5'
model = load_model('my_model.h5')
After saving the model, can I use load_model again in another Jupyter .ipynb file?
When we call model.save, it saves the model architecture and the weights at the same time. It is saved locally as an .h5 file, and we can load it in another environment (local notebook, Kaggle notebook, Colab, GCP, etc.). Using load_model loads the full model (architecture + weights).
Please read the official guide; you will find plenty of examples: https://www.tensorflow.org/guide/keras/save_and_serialize
(also posted in https://github.com/dmis-lab/biobert/issues/98)
Hi, does anyone know how to load BioBERT as a Keras layer using the huggingface transformers library (version 2.4.1)? I tried several possibilities, but none of them worked. All I found out is how to use the PyTorch version, but I am interested in the Keras layer version. Below are two of my attempts (I saved the BioBERT files into the folder "biobert_v1.1_pubmed").
Attempt 1:
biobert_model = TFBertModel.from_pretrained('bert-base-uncased')
biobert_model.load_weights('biobert_v1.1_pubmed/model.ckpt-1000000')
Error message:
AssertionError: Some objects had attributes which were not restored:
: ['tf_bert_model_4/bert/embeddings/word_embeddings/weight']
: ['tf_bert_model_4/bert/embeddings/position_embeddings/embeddings']
(and many more lines like above...)
Attempt 2:
biobert_model = TFBertModel.from_pretrained("biobert_v1.1_pubmed/model.ckpt-1000000", config='biobert_v1.1_pubmed/bert_config.json')
Error message:
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights).
Any help appreciated! My experience with huggingface's transformers library is almost zero. I also tried to load the following two models, but it seems they only support the PyTorch version.
https://huggingface.co/monologg/biobert_v1.1_pubmed
https://huggingface.co/adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers
Might be a bit late, but I have found a not-so-elegant fix for this problem. The TF BERT models in the transformers library can be loaded from a PyTorch save file.
Step 1: Convert the TF checkpoint to a PyTorch save file with the following command (more here: https://github.com/huggingface/transformers/blob/master/docs/source/converting_tensorflow_models.rst)
transformers-cli convert --model_type bert \
    --tf_checkpoint=./path/to/checkpoint_file \
    --config=./bert_config.json \
    --pytorch_dump_output=./pytorch_model.bin
Step 2: Make sure to combine the following files in a directory
config.json - bert config file (must be renamed from bert_config.json!)
pytorch_model.bin - the one we just converted
vocab.txt - bert vocab file
Step 3: Load model from the directory we just created
model = TFBertModel.from_pretrained('./pretrained_model_dir', from_pt=True)
There is actually also an argument "from_tf" which, according to the documentation, should work with TF-style checkpoints, but I can't get it to work. See: https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained
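Once loaded this way, the model can be used like any other Keras layer, e.g. (a sketch; the sequence length and the classification head are made up for illustration):

import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained('./pretrained_model_dir', from_pt=True)

max_len = 128  # hypothetical sequence length
input_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32)
sequence_output = model(input_ids)[0]   # (batch, max_len, hidden_size)
cls_token = sequence_output[:, 0, :]    # representation of the [CLS] token
output = tf.keras.layers.Dense(2, activation='softmax')(cls_token)
keras_model = tf.keras.Model(inputs=input_ids, outputs=output)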
I was following the tutorial for classifying structured data from here. I trained the model as described with my own data, which comprised only numeric data.
I have trained and optimised my model on Google Colab and downloaded it locally to test it on some new data.
However, when I load the model using this snippet:
from keras.models import load_model
model = load_model("my_model.h5")
I get the following error:
Unknown layer: DenseFeatures
...along with the trace.
I have tried setting up custom_objects while loading the model, like this:
model = load_model("my_model.h5", custom_objects={'DenseFeatures': tf.keras.layers.DenseFeatures})
But I still get the following error:
__init__() takes at least 2 arguments (3 given)
What could I be doing wrong? I tried going through the documentation but couldn't find anything helpful on GitHub.