Fairly new to machine learning - I have a training script that trains a model from scratch (no pre-trained weights) as an image classifier, using images as input, and after training it outputs an .hdf5 file.
I am aware that the .hdf5 file contains the model's architecture and weights (https://www.tinymind.com/learn/terms/hdf5). But I'm trying to find a way to use the output .hdf5 file in a script that uses PyTorch to classify objects.
After searching, I have not found much information on how to use this output file format as a model in PyTorch.
What would I need to do in order to use this output file as a trained model in torch? I would appreciate it if someone pointed me in the right direction to get started.
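For concreteness, here is the kind of thing I am imagining - a minimal sketch where MyTorchModel, model.hdf5, and the "dense/kernel" / "dense/bias" dataset paths are all placeholders (the actual names depend on the architecture and on how the file was saved, so they'd have to be read off the file first):

import h5py
import torch
import torch.nn as nn

# Hypothetical stand-in for the real architecture (a single Linear layer here).
class MyTorchModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

with h5py.File("model.hdf5", "r") as f:
    f.visit(print)  # print every group/dataset so the saved layer names are visible
    # "dense/kernel" and "dense/bias" are placeholder paths - use what visit() printed.
    # Keras stores Dense kernels as (in, out); PyTorch Linear expects (out, in).
    weight = torch.from_numpy(f["dense/kernel"][()]).t().contiguous()
    bias = torch.from_numpy(f["dense/bias"][()])

model = MyTorchModel()
model.load_state_dict({"fc.weight": weight, "fc.bias": bias})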
I am struggling with restoring a keras model from a .pb file. I have seen a couple of posts explaining how to do inference using a model saved in .pb format, but what I need is to load the weights into a keras.Model class.
I have a function that returns an instance of said original model, but untrained. I want to restore the weights of that model from the .pb file. My goal is to then truncate the model and use the truncated model for other purposes.
So far the furthest I've gotten is using the tf.saved_model.load(session, ['serving'], export_dir) function to get a tensorflow.core.protobuf.meta_graph_pb2.MetaGraphDef object. From here I can access the graph_def attribute, and from there the nodes.
How can I go from that to getting the weights and then loading those into the instance of the untrained keras Model?
If that's not doable, is there maybe a way to "truncate" the graph_def somehow and then run inference using that?
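For reference, a sketch of that loading step plus the kind of name-matching step that would presumably follow (the "serve" tag, the export_dir path, and the exact weight-name matching are assumptions that depend on how the model was exported):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

export_dir = "/path/to/saved_model"  # placeholder

with tf.Session(graph=tf.Graph()) as sess:
    # Restores the graph and its variable values into the session.
    tf.saved_model.load(sess, ["serve"], export_dir)
    # Pull the trained values out as numpy arrays, keyed by variable name.
    variables = tf.global_variables()
    name_to_value = dict(zip([v.name for v in variables], sess.run(variables)))

# Then, if the names line up with the untrained Keras model, something like:
# for layer in keras_model.layers:
#     layer.set_weights([name_to_value[w.name] for w in layer.weights])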
I am new to PyTorch. My question is: how do I apply transfer learning to a custom dataset? I am doing image segmentation on brain tumors. I can find examples that use the U-Net structure, but I could not find examples that use pre-trained weights for U-Net image segmentation.
You could obtain pre-trained models in two ways:
Model weights or complete models shared in formats such as .pt or .pth:
In this case, Saving and Loading Models is a good starting point. Copying from the tutorial there, you could load a model as
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()  # switch dropout/batch-norm layers to inference mode
The other way is to load the model from torchvision. A list of available models is at Torchvision Models. U-Net is not available yet. However, it is possible to load a pre-trained model as the encoder and write a separate decoder to form a U-Net with a pre-trained encoder.
In this case, the model objects returned from the function calls shown in the API already come loaded with pretrained weights when pretrained=True.
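For instance, a minimal sketch of using a pretrained torchvision ResNet as the encoder (resnet34 is just one choice, and the decoder here is only a placeholder - a real U-Net decoder would also take skip connections from intermediate encoder stages):

import torch.nn as nn
import torchvision

class UNetWithResNetEncoder(nn.Module):  # hypothetical name
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet34(pretrained=True)
        # Keep everything up to (but not including) the classification head.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Placeholder decoder: project 512 channels to 1 mask channel and
        # upsample back to the input resolution (ResNet downsamples by 32x).
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 1, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))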
For writing a custom dataloader, PyTorch data loaders may be a useful guide.
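As a starting point, a minimal sketch of a custom Dataset for image/mask pairs (the class name and path arguments are placeholders to adapt to your data):

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class BrainTumorDataset(Dataset):  # hypothetical name
    """Yields (image, segmentation mask) pairs from parallel path lists."""

    def __init__(self, image_paths, mask_paths, transform=None):
        self.image_paths = image_paths
        self.mask_paths = mask_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert("RGB")
        mask = np.array(Image.open(self.mask_paths[idx]))
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.as_tensor(mask, dtype=torch.long)

# loader = DataLoader(BrainTumorDataset(images, masks), batch_size=4, shuffle=True)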
Is there a good way or a tool to convert .caffemodel weight files to HDF5 files that can be used with Keras?
I don't care so much about converting the Caffe model definitions; I can easily write those in Keras manually. I'm just interested in getting the trained weights out of that Caffe binary protocol buffer format and into the Keras format. I'm not a Caffe user, i.e. not very familiar with it.
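One manual route I am aware of, assuming pycaffe and h5py are installed (the file names below are placeholders, and note Keras may still expect different weight layouts, e.g. transposed kernels, so a per-layer conversion step would be needed on top of this):

import caffe
import h5py

# Load the trained net from its deploy definition and binary weights.
net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

# Dump each layer's parameter blobs (typically [weights, biases]) to HDF5.
with h5py.File("weights.h5", "w") as f:
    for layer_name, params in net.params.items():
        for i, blob in enumerate(params):
            f.create_dataset("{}/param_{}".format(layer_name, i), data=blob.data)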
I've implemented a neural network using Keras. I trained and tested it (for final test accuracy) on a matrix whose rows contain feature vectors, plus corresponding labels, so I now have a model which I should be able to use for prediction.
How can I feed a single unseen example, meaning a feature vector to the model, to obtain a class prediction?
I've looked at their documentation here but could not find a method for it.
What you want is the predict method: it takes a batch of input samples and produces predictions, which are the outputs computed by your network. To feed a single example, you can just wrap it in a NumPy ndarray with a leading batch dimension.
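A minimal sketch (model is your trained Keras model, x a single feature vector; the shapes in the comments assume a plain classifier):

import numpy as np

# x has shape (num_features,); predict expects a batch, so add a batch axis.
x_batch = np.expand_dims(x, axis=0)   # shape (1, num_features)
probs = model.predict(x_batch)        # shape (1, num_classes)
predicted_class = np.argmax(probs[0])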
I want to extract features using Caffe and train an SVM on those features. I have gone through this link: http://caffe.berkeleyvision.org/gathered/examples/feature_extraction.html. It shows how to extract features using CaffeNet, but I want to use the LeNet architecture here. I am unable to adapt this command line for LeNet:
./build/tools/extract_features.bin models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10 leveldb
And also, after extracting the features, how do I train an SVM on them? I want to use Python for this. For example, if I get features from this code:
features = net.blobs['pool2'].data.copy()
Then, how can I train an SVM on these features, defining my own classes?
You have two questions here:
Extracting features using LeNet
Training an SVM
Extracting features using LeNet
To extract the features from LeNet using the extract_features.bin script you need to have the model file (.caffemodel) and the model definition for testing (.prototxt).
The signature of extract_features.bin is:
Usage: extract_features pretrained_net_param feature_extraction_proto_file extract_feature_blob_name1[,name2,...] save_feature_dataset_name1[,name2,...] num_mini_batches db_type [CPU/GPU] [DEVICE_ID=0]
So if you take this train/val prototxt file as an example (https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt), you can change it to the LeNet architecture and point it to your LMDB / LevelDB. That should get you most of the way there. Once you've done that, if you get stuck, you can update your question or post a comment here so we can help.
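For illustration, if you trained LeNet via Caffe's bundled MNIST example, the call could look something like this (the snapshot name and prototxt path are the defaults from that example and are only an assumption about your setup; pool2 is the last pooling blob in LeNet):

./build/tools/extract_features.bin examples/mnist/lenet_iter_10000.caffemodel examples/mnist/lenet_train_test.prototxt pool2 examples/_temp/features 10 leveldb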
Training SVM on top of features
I highly recommend using Python's scikit-learn for training an SVM from the features. It is super easy to get started, including reading in features saved from Caffe's format.
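A minimal sketch, assuming features is the array copied from net.blobs['pool2'].data as in your snippet and labels holds your own class labels (one per sample):

import numpy as np
from sklearn.svm import LinearSVC

# Flatten each sample's (channels, height, width) feature maps into one row.
X = features.reshape(features.shape[0], -1)
y = np.array(labels)

clf = LinearSVC()
clf.fit(X, y)
print(clf.predict(X[:5]))  # sanity check on a few training samples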
Very late reply, but it should help.
Not 100% what you want, but I have used the VGG-16 net to extract face features using Caffe and performed an accuracy test on a small subset of the LFW dataset. Exactly what you need is in the code: it creates classes for training and testing and pushes them into an SVM for classification.
https://github.com/wajihullahbaig/VGGFaceMatching