Stable Baselines: training a model with a pretrained model - openai-gym

I want to train a walking robot that can navigate a maze using a camera sensor in a MuJoCo environment.
The RL structure is as follows, and I'm using Stable Baselines3:
camera sensor CNN output -> (model1) -> command -> (model2) -> joint action
Given this structure, can I train model1 in this setting (MuJoCo, Gym + Stable Baselines3)?
I currently have model2.
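One possible approach (a sketch, not an answer from this thread): freeze the pretrained model2 and wrap it inside the environment, so that from Stable Baselines3's point of view the action space is model1's high-level command while the observation stays the camera image. All names here (CommandEnv, base_env, command_dim) are illustrative.

import gymnasium as gym
import numpy as np

class CommandEnv(gym.Wrapper):
    # Wraps the MuJoCo env so the agent (model1) emits commands and the
    # frozen model2 translates each command into a joint action.
    def __init__(self, base_env, model2, command_dim):
        super().__init__(base_env)
        self.model2 = model2  # pretrained controller, weights kept frozen
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(command_dim,), dtype=np.float32)

    def step(self, command):
        joint_action = self.model2(command)  # however model2 maps commands to joints
        return self.env.step(joint_action)

model1 then trains as an ordinary SB3 agent on the wrapped env, e.g. with an image observation and CnnPolicy:

from stable_baselines3 import PPO

env = CommandEnv(base_env, model2, command_dim=4)
model1 = PPO("CnnPolicy", env, verbose=1)
model1.learn(total_timesteps=100_000)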

Related

How to use a Keras model to predict on an image?

I have finished the training process and saved the model in .hdf5 format.
The neural network I use is a siamese convolutional neural network.
When validating, the predicted image is a random image from my test folder.
I use this for testing:
from glob import glob

test_alphabets = glob('{}/TEST/*'.format(dataset_dirname))
testset = {}
for alph in test_alphabets:
    dirs = glob('{}/*'.format(alph))
    alphabet = {}
    for dirname in dirs:
        alphabet[dirname] = glob('{}/*'.format(dirname))
    testset[alph] = alphabet
Then I display the result with:
display_validation_test(siamese_model1, testset)
How do I run the test process on an image I choose, and then display the matching image, using the .h5 model from earlier?
First, create your model (a keras.Model or keras.Sequential instance) with the same architecture as the one you trained.
Load the weights from the .h5 file: model.load_weights('your_weight_file.h5')
Read your image(s). For a single image, make sure to add a batch dimension of 1.
Call predict: prediction = model.predict(images)
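A minimal sketch of these steps, assuming a Keras model; the file names, the 105x105 input size, and build_siamese_model() are placeholders for your own values. Since a siamese network takes a pair of inputs, you pass both images to predict:

import numpy as np
from tensorflow import keras

model = build_siamese_model()              # rebuild the same architecture you trained
model.load_weights('your_weight_file.h5')  # then load the trained weights

def load_image(path):
    img = keras.preprocessing.image.load_img(path, target_size=(105, 105),
                                             color_mode='grayscale')
    x = keras.preprocessing.image.img_to_array(img) / 255.0  # scale to [0, 1]
    return np.expand_dims(x, axis=0)       # add the batch dimension: (1, 105, 105, 1)

# a siamese model scores a pair: the image you chose against a candidate
prediction = model.predict([load_image('my_image.png'), load_image('candidate.png')])
print(prediction)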

Weight initialization for Mask R-CNN without using pretrained weights from ImageNet / COCO

This is how I currently start from pretrained COCO weights:
# define the model
model = MaskRCNN(mode='training', model_dir='./', config=config)
# load weights (mscoco) and exclude the output layers
model.load_weights('mask_rcnn_coco.h5', by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"])
# train weights (output layers or 'heads')
model.train(train_set, test_set, learning_rate=config.LEARNING_RATE, epochs=2, layers='heads')
I have certain medical images containing fibroids.
I wish to apply instance segmentation or object detection.
I may have to use Mask R-CNN for instance segmentation and object detection. Is it possible to train the network from scratch instead of using transfer learning?
What I mean is random initialization of the weights for my data, instead of using weights derived from ImageNet or COCO data.
From the command line, instead of training a model starting from pre-trained COCO weights like this:
python my_model.py train --dataset=/path/dataset --weights=coco
execute the following line:
python my_model.py train --dataset=/path/dataset
And to start training from the first layer, execute the following code:
model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=10, layers='all')
Can't you just run the training without the model.load_weights() line? It seems to run fine for me when I do that; I assume it starts from randomly initialized weights. It didn't give quite as good results as starting from COCO, but I'm sure that's expected for some datasets.
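Putting both answers together, a minimal sketch assuming the Matterport Mask R-CNN API: skip load_weights() entirely so Keras's default random initialization is kept, and train all layers rather than just the heads.

from mrcnn.model import MaskRCNN

# define the model exactly as before
model = MaskRCNN(mode='training', model_dir='./', config=config)

# no model.load_weights(...) call: the weights stay randomly initialized

# train every layer from scratch, not just the output heads
model.train(train_set, test_set,
            learning_rate=config.LEARNING_RATE,
            epochs=10, layers='all')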

How to use a trained network as a branch in another network in Keras?

Assume I have a network similar to this one:
Keras-ICNet: https://github.com/aitorzip/Keras-ICNet/blob/master/model.py
(the training procedure in that repo is wrong!)
It has three branches:
the resolution-1/4 branch is a pretrained network with saved weights;
the resolution-1/2 branch is part of the 1/4 network and is weight-shared with it (I don't know how);
the resolution-1 branch is my customization.
The training procedure is something like this:
1/4 is trained on Cityscapes (for relaxation), saved, and reloaded;
the first few layers of the 1/4 branch are used for feeding resolution-1/2 images;
the last branch is for the full-resolution image.
These branches are connected by CFF (Cascade Feature Fusion) modules.
How can I load the 1/4 pretrained weights and train the whole network?
And how does weight sharing work between some layers of the 1/4 and 1/2 branches?
For simplicity you can assume:
1/4 has 5 layers, trained separately and loaded for fine-tuning;
1/2 shares the first 2 layers of 1/4;
1 has 2 independent layers;
and the CFFs are just upsample + concat.
Have your input tensor:
inputs = Input(size)
If you trained the model yourself, make sure you trained it with a variable image size (it's convolutional, right?): input shape = (None, None, channels).
If not, you will need to rebuild the model with a variable image size. Make sure you don't use Flatten: it does not support variable image sizes, and weight transfer will not work for anything that comes after the Flatten.
1/4
Load your saved model (no need to compile, you are not training it directly):
lowRes = load_model(filename, compile=False, custom_objects=if_needed)
Pass the inputs through it (maybe do some rescaling first)
lowOut = lowRes(inputs)
1/2
Get the segment from lowRes:
midRes = Model(lowRes.input, lowRes.layers[1].output)
Pass the inputs through it (maybe do some rescaling first)
midOut = midRes(inputs)
1/1
Build whatever it is:
....
....
hiRes = Model(....)
Pass the inputs through it:
hiOut = hiRes(inputs)
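A minimal sketch of the steps above, under the simplifying assumptions from the question (a 5-layer 1/4 branch, a 1/2 branch sharing its first 2 layers, 2 independent full-resolution layers, CFF = upsample + concat). The layer sizes are illustrative, and the 1/4 branch is built inline so the example is self-contained; in practice you would load_model(...) it instead.

from tensorflow.keras.layers import (Input, Conv2D, AveragePooling2D,
                                     UpSampling2D, Concatenate)
from tensorflow.keras.models import Model

# the pretrained 1/4 branch: 5 stride-1 'same' convolutions, so the
# spatial size is preserved
inp = Input((None, None, 3))
x = inp
for _ in range(5):
    x = Conv2D(16, 3, padding='same', activation='relu')(x)
lowRes = Model(inp, x)
# lowRes.load_weights('quarter_branch.h5')  # illustrative file name

inputs = Input((None, None, 3))              # variable image size

# 1/4 branch: the whole pretrained model, applied to a downscaled input
lowOut = lowRes(AveragePooling2D(4)(inputs))

# 1/2 branch: a sub-model ending after the first 2 convolutions
# (layers[0] is the InputLayer here); reusing lowRes's layers makes the
# weights shared automatically
midRes = Model(lowRes.input, lowRes.layers[2].output)
midOut = midRes(AveragePooling2D(2)(inputs))

# 1/1 branch: two fresh, independent layers
hiOut = Conv2D(16, 3, padding='same', activation='relu')(inputs)
hiOut = Conv2D(16, 3, padding='same', activation='relu')(hiOut)

# CFFs as plain upsample + concat
fused = Concatenate()([UpSampling2D(2)(lowOut), midOut])  # 1/4 -> 1/2
fused = Concatenate()([UpSampling2D(2)(fused), hiOut])    # 1/2 -> 1/1
final_model = Model(inputs, fused)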
Old answer
Layers and models can be used more than once, as many times as you need.
Shared layer:
Create the layer:
layer = Conv2D(....)
Use the layer:
out1 = layer(input1)
out2 = layer(input2)
out3 = layer(input3)
It's the same layer, so, the same weights.
Shared model:
A Model is a Layer, so it works exactly the same:
model = load_some_model()
branch1_out = model(input_branch1)
branch2_out = model(input_branch2)
Final model:
At the end, just create a model defining the input tensors and output tensors:
final_model = Model(inputs=input_or_list_of_inputs,
                    outputs=output_or_list_of_outputs)
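Tying the old answer together, a tiny self-contained example (names and sizes are illustrative) where one Conv2D layer, and hence one set of weights, is applied to two inputs:

from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

input1 = Input((None, None, 3))
input2 = Input((None, None, 3))

layer = Conv2D(8, 3, padding='same')   # one layer, one set of weights
out1 = layer(input1)                   # first use
out2 = layer(input2)                   # second use, same weights

final_model = Model(inputs=[input1, input2], outputs=[out1, out2])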

TensorFlow - building a CNN model as described in the tutorial

I just completed the implementation of A Guide to TF Layers: Building a Convolutional Neural Network for the MNIST data set. The training model ran successfully and gave an accuracy of 97.3%.
However, the tutorial does not mention how to use this newly trained model to supply my own images and see the predictions. Does anyone know how to use the output of the trained model to make predictions? In the tmp/mnist_convnet_model$ folder I see some output files such as .pbtxt, meta files, and index files, but I can't find instructions for using them to make predictions on my own images.
y_pred = tf.nn.softmax(your_final_layer)
y_pred_cls = tf.argmax(y_pred, axis=1)
and for prediction (inside a session in which your trained variables have been restored):
feed_dict = {x: [your_image]}
classification = sess.run(y_pred_cls, feed_dict=feed_dict)
print(classification)
This applies to just about any model you create.
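Since that tutorial builds a tf.estimator.Estimator, an alternative sketch (assuming the tutorial's cnn_model_fn and its /tmp/mnist_convnet_model checkpoint directory; my_image is your own 28x28 grayscale image, flattened and scaled to [0, 1]) is to let the Estimator restore the checkpoint itself and call predict():

import numpy as np
import tensorflow as tf

classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn,                   # the tutorial's model function
    model_dir="/tmp/mnist_convnet_model")    # where the checkpoint files live

pred_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array([my_image], dtype=np.float32)},
    shuffle=False)

# the tutorial's model_fn returns "classes" and "probabilities" predictions
for pred in classifier.predict(input_fn=pred_input_fn):
    print(pred["classes"], pred["probabilities"])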

Is there a direct implementation of multiclass SVM in R (e1071)?

I have five classes and I want to use SVM (the e1071 package) for classification. I can find good examples of binary classification with SVM; however, for multiclass support, some members have suggested building One-vs-Rest or One-vs-One binary classifiers and then combining them to get the final prediction. Is there a direct implementation of multiclass SVM (either approach is fine for me) available?
Yes, I found the solution. I used the basic help file from R and implemented One-vs-One multiclass classification with e1071 (whose svm() is based on LIBSVM, which handles multiclass natively via one-against-one); the code is short and to the point, with clear comments.
library(e1071)
library(caTools)
data(iris)
##---------- Split the overall dataset into two parts: 70% for training and 30% for testing ----------
index_iris <- sample.split(iris$Species, SplitRatio = 0.7)
trainset_iris <- iris[index_iris == TRUE, ]
testset_iris <- iris[index_iris == FALSE, ]
y <- testset_iris$Species
##---------- Now create an SVM model with the training dataset ----------
model <- svm(Species ~ ., data = trainset_iris)
# print(model)
# summary(model)
##---------- Use the model to predict the test dataset so that we can find the accuracy of the model ----------
pred <- predict(model, testset_iris)
table(pred, y)
##---------- Compute decision values and probabilities ----------
pred <- predict(model, testset_iris, decision.values = TRUE)
attr(pred, "decision.values")
