I have some saved models that contain dropout layers. Unfortunately, the dropout_keep_dim value was not given as a placeholder. Now, when I restore a model for testing, it gives random output on each run. So my question is: is it possible to change the dropout_keep_dim of a saved model? The dropout layer was added the following way:
tf.nn.dropout(layer_no, dropout_keep_dim)
I have already spent hours on Google and haven't found a working solution. Is there even a solution, or are my saved models of no use now? tf.assign doesn't work because, in my case, dropout_keep_dim is not a variable. Any kind of help is appreciated.
NB: I can restore the dropout_keep_dim value and print it. I want to change it, if that's possible, and then test with the saved weights.
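For what it's worth, a rough sketch of one common workaround: a checkpoint stores only variables, not constants baked into the graph, so you can rebuild the same architecture with the keep probability as a placeholder and restore the saved weights into it. The layer names, shapes, and checkpoint path below are hypothetical:

import tensorflow as tf

# Rebuild the same architecture, but with the keep probability as a placeholder.
inputs = tf.placeholder(tf.float32, [None, 784], name="inputs")  # hypothetical shape
keep_prob = tf.placeholder(tf.float32, name="keep_prob")

hidden = tf.layers.dense(inputs, 256, activation=tf.nn.relu, name="hidden")
hidden = tf.nn.dropout(hidden, keep_prob)  # placeholder instead of a hardcoded constant
logits = tf.layers.dense(hidden, 10, name="logits")

saver = tf.train.Saver()  # matches variables by name, so keep the layer names identical
with tf.Session() as session:
    saver.restore(session, "path/to/model.ckpt")  # hypothetical checkpoint path
    predictions = session.run(logits, feed_dict={inputs: test_batch,  # your test data
                                                 keep_prob: 1.0})     # dropout off at test time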
I have created a few ML models and saved them for future use in predicting outcomes. This time I face a scenario that is probably common, but new to me.
I need to provide this model to someone else to test it out on their dataset.
I removed a few redundant columns from my training data, trained a regression model on it, and saved it after validating it. However, when I give this model to someone to use on their dataset, how do I tell them which columns to drop? I could manually hardcode the column list in the Python file that loads the saved model, but that doesn't sound very neat.
What is the best way to do this in general? Kindly share some inputs.
One can simply use the pickle library to save the column list and other metadata along with the model. In a new session, one can use pickle to load those things back in.
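For example, a minimal sketch, assuming a scikit-learn-style regression model and pandas DataFrames named train_df and new_data (the column names and file name are made up):

import pickle
from sklearn.linear_model import LinearRegression

# Train on the reduced feature set (hypothetical column names).
feature_columns = ["col_a", "col_b", "col_c"]
model = LinearRegression().fit(train_df[feature_columns], train_df["target"])

# Bundle the fitted model together with the columns it expects.
bundle = {"model": model, "feature_columns": feature_columns}
with open("model_bundle.pkl", "wb") as f:
    pickle.dump(bundle, f)

# Later, in a new session (or on someone else's machine):
with open("model_bundle.pkl", "rb") as f:
    bundle = pickle.load(f)
X = new_data[bundle["feature_columns"]]  # selecting these columns drops the rest
predictions = bundle["model"].predict(X)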
I am using Keras with fit_generator to train my model. The current model is an auto-encoder, which does not produce the desired results. I would therefore like to create a callback that shows the training image and the ground-truth image every 500 batches or so. I want to use on_batch_begin, but I am unsure how to access the current batch in order to create a tf.Summary.Image.
Can anybody point me to some information about this, or explain how to get the current batch? Or would it be done in the generator? I just do not see how to attach a callback to that.
I have not been able to find an elegant solution. I added an array of the files I want to analyse to the callback. Then I randomly choose one image, just for illustration, run it through the current model, and show the result on TensorBoard. It works, but I had hoped it would be more elegant :)
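A rough sketch of what that callback can look like (TF1-style, since the question mentions tf.Summary.Image; the class name, tags, and frequency are made up, and images are assumed to be floats in [0, 1]):

import io
import numpy as np
import tensorflow as tf
from PIL import Image
from keras.callbacks import Callback

class ReconstructionLogger(Callback):
    """Every `freq` batches, run one held-out image through the model
    and write the input/reconstruction pair to TensorBoard."""

    def __init__(self, images, log_dir, freq=500):
        super(ReconstructionLogger, self).__init__()
        self.images = images  # array of images to sample from
        self.freq = freq
        self.writer = tf.summary.FileWriter(log_dir)
        self.step = 0

    def _to_summary_image(self, array):
        # Encode a float array in [0, 1] as a PNG image summary.
        buf = io.BytesIO()
        Image.fromarray((np.squeeze(array) * 255).astype(np.uint8)).save(buf, format="PNG")
        return tf.Summary.Image(encoded_image_string=buf.getvalue())

    def on_batch_begin(self, batch, logs=None):
        self.step += 1
        if self.step % self.freq != 0:
            return
        img = self.images[np.random.randint(len(self.images))]
        recon = self.model.predict(img[None, ...])[0]
        summary = tf.Summary(value=[
            tf.Summary.Value(tag="input", image=self._to_summary_image(img)),
            tf.Summary.Value(tag="reconstruction", image=self._to_summary_image(recon)),
        ])
        self.writer.add_summary(summary, self.step)
        self.writer.flush()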
I am using the class API to subclass a model based on keras.models.Model.
Is there some trick to get save_weights/load_weights working?
I am seeing errors like
ValueError: Layer #0 (named "dense_S") expects 0 weight(s), but the saved weights have 2 element(s).
I have tried both by_name=True and by_name=False.
EDIT: it seems that calling predict once, with ANY data, is needed to build the layers for some reason. It would be interesting to hear a proper explanation from anyone who knows more.
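The likely explanation: a subclassed model creates its weights lazily, on the first call, so until the model has been called its layers own zero weights and load_weights has nothing to match against. A minimal sketch (the model, layer sizes, and file name are made up):

import numpy as np
import tensorflow as tf

class MyModel(tf.keras.models.Model):  # hypothetical subclassed model
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense_S = tf.keras.layers.Dense(10, name="dense_S")

    def call(self, inputs):
        return self.dense_S(inputs)

model = MyModel()
# Build the layers before loading; until now dense_S owns 0 weights.
model.build(input_shape=(None, 4))
# ...or equivalently: model.predict(np.zeros((1, 4)))
model.load_weights("my_weights.h5")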
I need to get outputs from several layers of a Keras model instead of only the output from the last layer.
I know how to adjust the code to do what I need, but I don't know how to use it within keras-applications. I mean, how can I import it afterwards? Do I need to run setup.py for keras-applications again? I did so, but nothing happened; my changes weren't applied. Is there a different way to get the outputs from different layers within the model?
Actually the solution is quite simple: you just need to request the output of the specified layer.
new_model = tf.keras.Model(base_model.input, base_model.get_layer(layer_name).output)
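And since several outputs are wanted, the same pattern accepts a list. A small sketch, using VGG16 as a stand-in for the base model (these layer names exist in VGG16; substitute your own):

import numpy as np
import tensorflow as tf

base_model = tf.keras.applications.VGG16(weights="imagenet")
layer_names = ["block3_conv3", "block5_conv3"]  # any layers you care about
multi_output_model = tf.keras.Model(
    inputs=base_model.input,
    outputs=[base_model.get_layer(name).output for name in layer_names],
)
# predict() now returns one array per requested layer:
images = np.zeros((1, 224, 224, 3))  # stand-in for a preprocessed batch
features = multi_output_model.predict(images)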
I'm just beginning to learn TensorFlow and I have some problems with it. In the training loop I want to ignore the small weights and stop training them, so I've set these small weights to zero. I searched the tf API and found that tf.Variable(weight, trainable=False) stops a weight from being trained; I would use this whenever a weight's value equals zero. I tried to use .eval(), but an exception occurred: ValueError("Cannot evaluate tensor using eval(): No default session is registered"). I have no idea how to get the value of a variable inside the training loop. Another way would be to modify tf.train.GradientDescentOptimizer(), but I don't know how to do that. Has anyone implemented this, or can anyone suggest another method? Thanks in advance!
Are you looking to apply regularization to the weights?
There is an apply_regularization method in the API that you can use to accomplish that.
See: How to exactly add L1 regularisation to tensorflow error function
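A rough sketch of that in TF1 (the network, loss, and scale value are all made up for illustration):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])  # hypothetical inputs
y = tf.placeholder(tf.float32, [None, 1])  # hypothetical targets
pred = tf.layers.dense(x, 1)
data_loss = tf.losses.mean_squared_error(y, pred)

# Add an L1 penalty over all trainable weights to the existing loss.
l1 = tf.contrib.layers.l1_regularizer(scale=0.005)  # arbitrary strength
penalty = tf.contrib.layers.apply_regularization(l1, tf.trainable_variables())
total_loss = data_loss + penalty
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(total_loss)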
I don't know of any use-case for stopping the training of some variables; it's probably not what you should do.
Anyway, calling tf.Variable() (if I understood you correctly) is not going to help, because it's called just once, when the graph is defined. The first argument is initial_value: as the name suggests, it's used only during initialization.
Instead, you can use tf.assign like this:
with tf.Session() as session:
    assign_op = var.assign(0)  # var is the tf.Variable you want to overwrite
    session.run(assign_op)
It will update the variable during the session, which is what you're asking for.
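To zero out only the small entries, a hedged sketch along those lines (the threshold is arbitrary; note the optimizer will keep updating these weights, so the mask is re-applied every step):

import tensorflow as tf

var = tf.Variable(tf.random_normal([100]))  # stand-in for your weight variable
threshold = 1e-3                            # arbitrary cutoff

# Build an op that zeroes every weight whose magnitude is below the threshold.
prune_op = var.assign(tf.where(tf.abs(var) < threshold,
                               tf.zeros_like(var),
                               var))

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for step in range(1000):
        # session.run(train_op)  # your existing training step would go here
        session.run(prune_op)    # re-zero the small weights after each step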