I want to run pre-trained weights using the Darknet API in Python.
Can anyone please tell me how to do that?
I assume that you have already compiled darknet. Then you just put the pre-trained weights into the darknet folder and run:
./darknet detect cfg/yolov3.cfg pretrained.weights data/dog.jpg
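If you specifically want to call it from Python rather than the shell, the darknet repository also ships a small ctypes wrapper (python/darknet.py). A minimal sketch, assuming you have built libdarknet.so, run from the darknet root, and your weights file is called pretrained.weights:

import darknet as dn  # python/darknet.py from the darknet repo (ctypes wrapper)

# The wrapper expects byte strings under Python 3
net = dn.load_net(b"cfg/yolov3.cfg", b"pretrained.weights", 0)
meta = dn.load_meta(b"cfg/coco.data")
detections = dn.detect(net, meta, b"data/dog.jpg")
print(detections)  # list of (label, confidence, (x, y, w, h)) tuples

Note that the AlexeyAB fork names the wrapper functions slightly differently, so check the darknet.py that comes with your checkout.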
I am trying to port two pre-trained Keras models to the IPU machine. I managed to load and run them using IPUStrategy.scope, but I don't know if I am doing it the right way. I have my pre-trained models in .h5 file format.
I load them this way:
def first_model():
    model = tf.keras.models.load_model("./model1.h5")
    return model
After searching your ipu.keras.models.py file I couldn't find any load methods for pre-trained models, which is why I used tf.keras.models.load_model().
Then I use this code to run it:
cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 1)
ipu.utils.configure_ipu_system(cfg)
ipu.utils.move_variable_initialization_to_cpu()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = first_model()
    print('compile attempt\n')
    model.compile("sgd", "categorical_crossentropy", metrics=["accuracy"])
    print('compilation completed\n')
    print('running attempt\n')
    res = model.predict(input_img)[0]
    print('run completed\n')
You can see the output here: link
So I have some difficulty understanding how, and whether, the system is working properly.
Basically, model.compile won't compile my model, but when I use model.predict the system first compiles and then runs. Why is that happening? Is there another way to run pre-trained Keras models on an IPU chip?
Another question I have is whether it is possible to load a pre-trained Keras model inside an ipu.keras.Model, then use model.fit/evaluate to further train and evaluate it, and then save it for future use.
One last question I have is about the compilation of the graph. Is there a way to avoid recompiling the graph every time I use model.predict() in a different strategy.scope()?
I am using the TensorFlow 2.1.2 wheel.
Thank you for your time.
To add some context, the Graphcore TensorFlow wheel includes a port of Keras for the IPU, available as tensorflow.python.ipu.keras. You can access the API documentation for IPU Keras at this link. This module contains IPU-specific optimised replacements for the TensorFlow Keras classes Model and Sequential, plus more high-performance, multi-IPU classes, e.g. PipelineModel and PipelineSequential.
As for your specific issue, you are right that there are currently no IPU-specific ways to load pre-trained Keras models. Since you appear to have access to IPUs, I would encourage you to reach out to Graphcore Support. When doing so, please attach your pre-trained Keras model model1.h5 and a self-contained reproducer of your code.
Switching to the recompilation question: using an executable cache prevents recompilation. You can set one up with the environment variable TF_POPLAR_FLAGS='--executable_cache_path=./cache' (see the sketch after the list below). I'd also recommend taking a look at the following resources:
this tutorial gathers several considerations around recompilation and how to avoid it when using TensorFlow 2 on the IPU.
Graphcore TensorFlow documentation here explains how to use the pre-compile mode on the IPU.
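As a rough illustration of the executable cache mentioned above (the cache directory is just an example), the flag can be exported in the shell or set from the script itself, before the IPU system is configured:

import os

# Reuse compiled Poplar executables across runs instead of recompiling.
# Set this before the IPU system is configured; the path is an example.
os.environ["TF_POPLAR_FLAGS"] = "--executable_cache_path=./cache"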
Can anyone please tell me how to download the complete ImageNet dataset, the one the PyTorch torchvision models are trained on and on which their Top-1 error is reported?
I have downloaded Tiny-ImageNet from the ImageNet website and used the pretrained ResNet-101 model, which gives only 18% Top-1 accuracy.
Download the ImageNet dataset from http://www.image-net.org/ (you have to sign in).
Then move the validation images into labelled subfolders, which can be done automatically with the following shell script:
https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
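Once the validation images are in labelled subfolders, you can sanity-check the reported Top-1 numbers by evaluating a pretrained model with torchvision's ImageFolder loader. A minimal sketch, assuming the validation set sits in ./imagenet/val (path and batch size are just examples):

import torch
import torchvision.models as models
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Standard ImageNet evaluation preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder sorts the WNID folder names, which matches the class order
# the torchvision models were trained with.
val_set = ImageFolder("./imagenet/val", transform=preprocess)
val_loader = DataLoader(val_set, batch_size=64, num_workers=4)

model = models.resnet101(pretrained=True).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print("Top-1 accuracy: %.2f%%" % (100.0 * correct / total))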
I am using PyTorch for image classification. I am looking for CNN models pretrained on a dataset other than ImageNet, and I have found a link to a ".ckpt" file. I also found tutorials on loading this file with TensorFlow, but not with PyTorch.
How can I load a pretrained model from a ".ckpt" file using PyTorch?
I agree with @jodag that, in general, PyTorch and TensorFlow are not interoperable. There are some special cases in which you may be able to do this; for example, HuggingFace provides support for converting transformer models from TensorFlow to PyTorch.
There is a related (though closed) question on the Data Science Stack Exchange where the idea is to rewrite the TensorFlow model in PyTorch and then load the weights from the checkpoint file. Note that this can get tricky at times.
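To make the "rewrite and copy weights" idea concrete, here is a rough sketch. The single-layer model and the TensorFlow variable names are made up; for a real checkpoint you would list the variables it contains and map each one to the matching PyTorch parameter (often transposing kernels along the way):

import tensorflow as tf
import torch
import torch.nn as nn

# 1. Re-implement the network architecture in PyTorch (hypothetical example).
class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

# 2. Read the raw variables out of the TensorFlow checkpoint.
reader = tf.train.load_checkpoint("./model.ckpt")  # path is an example
# print(reader.get_variable_to_shape_map())  # inspect the available variable names

# 3. Copy each TF variable into the matching PyTorch parameter.
#    The variable names below are placeholders; use the names printed above.
model = MyNet()
state_dict = {
    "fc.weight": torch.from_numpy(reader.get_tensor("dense/kernel")).t(),  # TF stores (in, out)
    "fc.bias": torch.from_numpy(reader.get_tensor("dense/bias")),
}
model.load_state_dict(state_dict)
torch.save(model.state_dict(), "model.pth")  # now a regular PyTorch checkpoint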
I have installed darknet on Ubuntu and am now trying to implement object detection with YOLOv2 on my custom dataset. In the YOLO paper, the authors say they pretrained the network on the ImageNet dataset. So my question is: should we also pretrain the network?
Sorry if I'm being blunt.
Can someone reply to me?
In most cases, if your dataset contains lots of features similar to those in the pre-trained weights (e.g. person, car), you should use a pre-trained network such as darknet53.conv.74 or darknet19_448.conv.23.
But you can also train the network without those pre-trained weights (training from scratch), for example by omitting the weights file from the command:
./darknet detector train data/obj.data yolo-obj.cfg
I only want to use a pre-trained model from PyTorch without installing the whole package.
Can I just copy the model module from PyTorch?
I'm afraid you cannot do that: in order to run the model, you need not only the trained weights (the '.pth.tar' file) but also the "structure" of the net, that is, the layers, how they are connected to each other, etc. This network structure is coded in Python and requires PyTorch to be installed.
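To make the "structure plus weights" point concrete, this is roughly what loading such a checkpoint looks like, and it is exactly the part that needs PyTorch installed (the architecture and file name below are placeholders):

import torch
from torchvision import models

# The "structure" is Python code (here a torchvision ResNet); the '.pth.tar'
# file only holds the tensors that fill it in.
model = models.resnet50()
checkpoint = torch.load("model_best.pth.tar", map_location="cpu")  # file name is an example
model.load_state_dict(checkpoint["state_dict"])  # the key depends on how it was saved
model.eval()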
A way of using PyTorch models without installing PyTorch is to export the model to ONNX format. Once the model is in ONNX format, it can be imported into the ONNX Runtime and used for inference. This tutorial should help you out: PyTorch ONNX.
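A rough sketch of that workflow (the model, input shape, and file name are just examples): export once on a machine that does have PyTorch, then run inference anywhere with only onnxruntime installed.

# --- On a machine with PyTorch: export the model to ONNX once ---
import torch
from torchvision import models

model = models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)                # example input shape
torch.onnx.export(model, dummy, "resnet18.onnx")   # file name is an example

# --- Elsewhere, without PyTorch: run it with onnxruntime only ---
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("resnet18.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})
print(outputs[0].shape)  # (1, 1000) class scores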