I am using PyTorch for image classification. I am looking for CNN models pretrained on a dataset other than ImageNet, and I have found a link to a ".ckpt" file. I also found tutorials on loading this file with TensorFlow, but not with PyTorch.
How can I load a pretrained model from a ".ckpt" file using PyTorch?
I agree with @jodag that, in general, PyTorch and TensorFlow are not interoperable. There are some special cases in which you may be able to do this. For example, HuggingFace provides support for converting transformer models from TensorFlow to PyTorch.
There is a related (though closed) question on the Data Science StackExchange where the idea is to rewrite the TensorFlow model in PyTorch and then load the weights from the checkpoint file. Note that this can get tricky at times.
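For illustration, if the checkpoint happened to be a BERT checkpoint, the HuggingFace route could look roughly like this (a sketch only; the file names are placeholders, and this works only for architectures that Transformers knows how to convert):

# Hypothetical sketch: loading a TensorFlow BERT checkpoint into PyTorch
# via HuggingFace Transformers (paths are placeholders).
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file("bert_config.json")  # config shipped with the checkpoint
model = BertForPreTraining.from_pretrained(
    "model.ckpt.index",   # path to the TF checkpoint
    from_tf=True,         # ask Transformers to convert the TF weights
    config=config,
)
model.save_pretrained("pytorch_model")  # saved as a regular PyTorch model from here on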
Related
I am new to PyTorch. My question is: How do I apply transfer learning to a custom dataset? I am doing image segmentation on brain tumors. I can find examples that use the U-Net structure, but I could not find examples that use the weights of pre-trained models for U-Net image segmentation.
You could obtain pre-trained models in two ways:
Model weights or complete models shared in formats such as .pt or .pth:
In this case, Saving and Loading Models is a good starting point. Copying from the tutorial there, you can load a model as follows:
import torch

model = TheModelClass(*args, **kwargs)   # same architecture the weights were saved from
model.load_state_dict(torch.load(PATH))
The other way is to load the model from torchvision. A list of available models can be found at Torchvision Models. U-Net is not available yet. However, it is possible to load a pre-trained model as the encoder and write a separate decoder to form a U-Net with a pre-trained encoder; see the sketch below.
In this case, the model objects returned by the function calls shown in the API are already loaded with pretrained weights when pretrained=True.
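As a rough sketch of the encoder idea (ResNet-18 is just an example, and the decoder here is only a placeholder rather than a full U-Net with skip connections):

import torch
import torch.nn as nn
import torchvision.models as models

# Load a ResNet-18 pretrained on ImageNet and keep everything except
# the final pooling and fully connected layers as the encoder.
resnet = models.resnet18(pretrained=True)
encoder = nn.Sequential(*list(resnet.children())[:-2])

# Placeholder decoder: a real U-Net decoder would also take skip
# connections from intermediate encoder layers.
decoder = nn.Sequential(
    nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 1, kernel_size=1),   # 1 output channel, e.g. a tumor mask
)

x = torch.randn(1, 3, 224, 224)         # dummy input image
features = encoder(x)                   # shape: (1, 512, 7, 7)
mask = decoder(features)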
For writing a custom dataloader, PyTorch data loaders may be a useful guide.
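A minimal custom dataset sketch (the images/ and masks/ folder layout and the joint transform are assumptions, not a fixed convention):

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class BrainTumorDataset(Dataset):
    """Assumes images/ and masks/ folders with matching file names."""
    def __init__(self, image_dir, mask_dir, transform=None):
        self.image_dir = image_dir
        self.mask_dir = mask_dir
        self.files = sorted(os.listdir(image_dir))
        self.transform = transform   # a joint transform applied to image and mask

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        if self.transform:
            image, mask = self.transform(image, mask)
        return image, mask

loader = DataLoader(BrainTumorDataset("images", "masks"), batch_size=4, shuffle=True)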
I have h5 weights from a Keras model.
I want to rewrite the Keras model as a tf.keras model (using TF 2.x).
I know that only the high-level API changed, but do you know if I can still use the h5 weights?
Most likely they can be loaded, but is the structure different between Keras and tf.keras weights?
Thanks
It seems that they are the same. Kudos to Mohsin hasan's answer:
In the past, when I had to convert a tf.keras model to a Keras model, I did the following:
Train the model in tf.keras
Save only the weights: tf_model.save_weights("tf_model.hdf5")
Build the Keras model architecture using all layers in Keras (same as the tf.keras one)
Load the weights by layer names in Keras: keras_model.load_weights("tf_model.hdf5", by_name=True)
This seemed to work for me. Since I was using an out-of-the-box architecture (DenseNet169), I had very little work to do to replicate the tf.keras network in Keras.
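A compact sketch of those four steps, using DenseNet169 as in the quote (the file name and class count are placeholders, and it assumes both packages are installed and that layer names line up between the two builds):

# Steps 1-2: train in tf.keras and save only the weights.
import tensorflow as tf
tf_model = tf.keras.applications.DenseNet169(weights=None, classes=10)
# ... train tf_model ...
tf_model.save_weights("tf_model.hdf5")

# Steps 3-4: rebuild the same architecture in standalone Keras
# and load the weights by layer name.
import keras
keras_model = keras.applications.DenseNet169(weights=None, classes=10)
keras_model.load_weights("tf_model.hdf5", by_name=True)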
And the answer from Alex Cohn:
tf.keras HDF5 model and Keras HDF5 models are not different things,
except for inevitable software version update synchronicity. This is
what the official docs say:
tf.keras is TensorFlow's implementation of the Keras API specification. This is a high-level API to build and train models that
includes first-class support for TensorFlow-specific functionality
If the converter can convert a Keras model to tf.lite, it will deliver
the same results. But tf.lite functionality is more limited than tf.keras.
If this feature set is not enough for you, you can still work with
tensorflow, and enjoy its other advantages.
I have a gradient boosting model saved in the .pkl format. I have to load this model in TensorFlow.js. I can see that there is a way to load a Keras model, but I can't find a way to load a sklearn model. Is it possible to do this?
It is not possible to load a sklearn model in TensorFlow.js directly. TensorFlow.js only allows loading models written in TensorFlow/Keras.
Though I haven't tried it myself, I think you could possibly use the scikit-learn wrapper to rewrite the classifier in TensorFlow. The model can then be saved and converted to a format that can be loaded in TensorFlow.js; see the sketch below.
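A rough sketch of that idea (everything here is an assumption: the architecture, the feature and class counts, and the fact that a re-trained neural network only approximates the original gradient boosting model rather than replicating it):

import tensorflow as tf
import tensorflowjs as tfjs   # pip install tensorflowjs

# Re-train a small tf.keras classifier on the same data the gradient
# boosting model was trained on (feature count and classes are placeholders).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X_train, y_train, epochs=10)

# Export in a format that TensorFlow.js can load.
tfjs.converters.save_keras_model(model, "tfjs_model")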
I want to convert a Keras model to a TensorFlow Lite model. When I examined the documentation, it stated that we can use tf.keras HDF5 models as input. Does this mean I can use my saved HDF5 Keras model as input, or are tf.keras HDF5 models and Keras HDF5 models different things?
Documentation: https://www.tensorflow.org/lite/convert
Edit: I was able to convert my Keras model to a TensorFlow Lite model using this API, but I haven't tested it yet. My code:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file(path + 'plant-recognition-model.h5')
tflite_model = converter.convert()
with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)
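As a side note, if you are on TF 2.x and from_keras_model_file is not available, the converter can take an in-memory Keras model instead; a sketch:

import tensorflow as tf

# TF 2.x style: load the HDF5 model first, then convert it.
model = tf.keras.models.load_model(path + 'plant-recognition-model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('plant-recognition-model.tflite', 'wb') as f:
    f.write(tflite_model)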
tf.keras HDF5 model and Keras HDF5 models are not different things, except for inevitable software version update synchronicity. This is what the official docs say:
tf.keras is TensorFlow's implementation of the Keras API specification. This is a high-level API to build and train models that includes first-class support for TensorFlow-specific functionality
If the converter can convert a Keras model to tf.lite, it will deliver the same results. But tf.lite functionality is more limited than tf.keras. If this feature set is not enough for you, you can still work with TensorFlow, and enjoy its other advantages.
Maybe it won't take too long before your models can run on a smartphone.
I only want to use a pre-trained model in PyTorch without installing the whole package.
Can I just copy the model module from PyTorch?
I'm afraid you cannot do that: in order to run the model, you need not only the trained weights (the '.pth.tar' file) but also the "structure" of the net: that is, the layers, how they are connected to each other, etc. This network structure is coded in Python and requires PyTorch to be installed.
One way of using PyTorch models without installing PyTorch is to export the model in ONNX format. Once the model is in ONNX format, it can be imported into ONNX Runtime and used for inference. This tutorial should help you out: PyTorch ONNX.
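A minimal sketch of that route (ResNet-18 and the input shape are just examples; the export step still needs PyTorch once, but inference afterwards only needs onnxruntime and numpy):

# One-time export (requires PyTorch):
import torch
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx")

# Inference elsewhere (requires only onnxruntime and numpy):
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("resnet18.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})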