I ran a reinforcement learning training script that used PyTorch, logged data to tensorboardX, and saved checkpoints. Now I want to continue training. How do I tell tensorboardX to continue from where I left off? Thank you!
I figured out how to continue the training plot. When creating the SummaryWriter, we need to provide the same log_dir that we used while training the first time.
from tensorboardX import SummaryWriter
writer = SummaryWriter('log_dir')  # the same log_dir as the first run
Then, inside the training loop, the step needs to start from where it left off (not from 0):
writer.add_scalar('average reward', rewards.mean(), step)
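For example, if the global step was saved in the checkpoint (a hypothetical 'step' key here; adapt it to your own checkpoint layout), a minimal sketch of the resume could look like this:
import torch
from tensorboardX import SummaryWriter

# Reuse the same log_dir so the new events continue the old plot
writer = SummaryWriter('log_dir')

# Hypothetical checkpoint layout: assumes the global step was saved with the weights
checkpoint = torch.load('checkpoint.pt')
start_step = checkpoint.get('step', 0)

for step in range(start_step, start_step + 1000):
    rewards = torch.randn(8)  # stand-in for the real per-episode rewards
    writer.add_scalar('average reward', rewards.mean(), step)

writer.close()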
I have saved and loaded the checkpoints as per the PyTorch manual, and it all seems OK. Now, usually, when I want to start training, I have something like this in PyTorch:
for itr in range(1, args.niters + 1):
    optimizer.zero_grad()  # should I or should I not when checkpoints are loaded?
I am unsure if I should call zero_grad() here (which I use when I start training from scratch), since I am reloading all my weights and biases.
Apologies if this is a daft question.
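For what it's worth, the usual resume pattern keeps optimizer.zero_grad() inside the loop: it only clears the gradient buffers before each backward pass and does not touch the restored weights. A minimal sketch, assuming a hypothetical checkpoint layout and that model, optimizer, and args are defined as in the original script:
checkpoint = torch.load('checkpoint.pt')  # hypothetical file name
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])

for itr in range(checkpoint['itr'] + 1, args.niters + 1):
    optimizer.zero_grad()  # clears stale gradients; the restored weights are untouched
    loss = compute_loss(model)  # hypothetical loss computation
    loss.backward()
    optimizer.step()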
I have adapted this retrain.py script to use with several pretrained models; after training is done, it generates a 'retrained_graph.pb', which I then read and try to use to run predictions on an image using this code:
def get_top_labels(image_data):
    '''
    Returns a list of labels and their probabilities
    image_data: content of image as string
    '''
    with tf.compat.v1.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
        predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
        return predictions
This works fine for the inception_v3 model because it has a tensor called 'DecodeJpeg'; other models I'm using, such as inception_v4, mobilenet, and inception_resnet_v2, don't.
My question is: can I add an op to the graph, like the one used in add_jpeg_decoding in the retrain.py script, so that I can afterwards use it for prediction?
Would it be possible to do something like this:
predictions = sess.run(softmax_tensor, {image_data_tensor: image_data}), where image_data_tensor is a variable that depends on which model I'm using?
I looked through Stack Overflow and couldn't find a question that solves my problem. I'd really appreciate any help with this, thanks.
I need to at least know if it's possible.
Sorry for the repost; I got no views on my first one.
So, after some research, I figured out a way; I'm leaving an answer here in case someone needs it. What you need to do is do the decoding yourself: get a tensor from the image using t = read_tensor_from_image_file (found here), then you can run your predictions using this piece of code:
start = time.time()
results = sess.run(output_layer_name,
                   {input_layer_name: t})
end = time.time()
return results
Usually, input_layer_name = 'input:0' and output_layer_name = 'final_result:0'.
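For reference, that helper does the decoding roughly like this (a sketch based on TensorFlow's label_image example; the size, mean, and std defaults vary per model, and it assumes TF1-style graph execution as in the question):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # assumption: TF1-style graph mode, as in the question

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=0, input_std=255):
    # read file -> decode JPEG -> add batch dim -> resize -> normalize
    file_reader = tf.io.read_file(file_name)
    image = tf.image.decode_jpeg(file_reader, channels=3)
    image = tf.cast(image, tf.float32)
    image = tf.expand_dims(image, 0)
    image = tf.compat.v1.image.resize_bilinear(image, [input_height, input_width])
    image = tf.divide(tf.subtract(image, [input_mean]), [input_std])
    with tf.compat.v1.Session() as sess:
        return sess.run(image)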
I am new to XGBoost, and I am currently working on a project where we have built an XGBoost classifier. Now we want to run some feature selection techniques. Is the backward elimination method a good idea for this? I have used it in regression, but I am not sure if/how to use it in a classification problem. Any leads will be greatly appreciated.
Note: I have already tried permutation importance and it has yielded good results! Looking for another method to evaluate the features in the model.
Consider asking your question on Cross Validated since feature selection is more about theory/practice than code.
What is your concern? Removing "noisy" features that drive down your results, or obtaining a sparse model? Backward selection is one way to do it, of course. That being said, in case you are not aware of this, XGBoost computes its own "variable importance" values.
# plot feature importance using built-in function
from xgboost import XGBClassifier
from xgboost import plot_importance
from matplotlib import pyplot
model = XGBClassifier()
model.fit(X, y)  # X, y: your training features and labels
# plot feature importance
plot_importance(model)
pyplot.show()
Something like this. This importance is based on how many times a feature is used to make a split. You can then define, for instance, a threshold below which you do not keep the variables (see the sketch after this list). However, do not forget that:
This variable importance has been obtained on the training data only.
The removal of a variable with high importance may not affect your prediction error, e.g., if it is correlated with another highly important variable. Other quirks like this may exist.
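As a sketch of the threshold idea, sklearn's SelectFromModel can apply a cutoff to a fitted XGBClassifier (the 0.01 threshold is an arbitrary placeholder; X and y are assumed to be your training data):
from sklearn.feature_selection import SelectFromModel
from xgboost import XGBClassifier

model = XGBClassifier()
model.fit(X, y)  # X, y assumed defined as above

# Keep only the features whose importance exceeds the chosen threshold
selection = SelectFromModel(model, threshold=0.01, prefit=True)
X_selected = selection.transform(X)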
I am currently working on the MLPClassifier of the neural_network package in sklearn.
I have fit the model; now I want to access the weights the classifier assigned to the input features. How do I access them?
Thanks in advance!
Check out the documentation.
See the field coefs_.
Try:
print(model.coefs_)
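For example, a minimal sketch with toy data (iris is just a stand-in for your own dataset):
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000).fit(X, y)

# coefs_ is a list of weight matrices, one per layer;
# coefs_[0] maps the input features to the first hidden layer
for i, w in enumerate(model.coefs_):
    print(i, w.shape)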
Generally, I recommend checking the documentation; if that fails, then
print(dir(model))
or
help(model)
will tell you what's available in most cases.
I am using sklearn to train a model. The training dataset is about 3000k rows, so I am using SGDClassifier. The features are not very good, so I know it may not converge. But I want SGDClassifier to stop early according to my setting, just like max_iter = 1000. As far as I can tell, SGDClassifier has no parameter like max_iter. How can I do it?
This is the code.
This is the print information.
Any help will be appreciated...
This is weird; by default in scikit-learn 0.18.2, n_iter is set to 5 epochs. Can you please update your question with a script that makes it possible to reproduce the behavior using a toy dataset (for instance, generated with numpy.random.randn or similar)?
Note that in scikit-learn master, and in 0.19 once released, n_iter will be deprecated and replaced by max_iter and a tol (for instance set to 1e-3) to automatically stop when the objective function is no longer making progress.
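In code, that difference looks like this (a sketch; uncomment the line matching your installed version):
from sklearn.linear_model import SGDClassifier

# scikit-learn <= 0.18: the number of epochs is controlled by n_iter
# clf = SGDClassifier(n_iter=10)

# scikit-learn >= 0.19: use max_iter plus a stopping tolerance instead
clf = SGDClassifier(max_iter=1000, tol=1e-3)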
Running for 20 hours may not be so strange, since you have a dataset of 3000k rows and SGDClassifier can be slow. What processor do you have?
Try stopping it by using CTRL+C if you are on Windows. Then use n_iter to control the number of iterations that you want. The default is 5, however.
Finally, if you want to save a model, see here:
Save and Load Machine Learning Models in Python with scikit-learn
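For example, with joblib (a minimal sketch, assuming clf is the fitted model):
import joblib

joblib.dump(clf, 'sgd_model.joblib')   # persist the fitted model to disk
clf = joblib.load('sgd_model.joblib')  # load it back later to keep predicting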