How to stop training some specific weights in TensorFlow - python-3.x

I'm just beginning to learn TensorFlow and I have some problems with it. In the training loop I want to ignore the small weights and stop training them. I've assigned these small weights to zero. I searched the tf API and found that tf.Variable(weight, trainable=False) can stop a weight from being trained, and I want to use it whenever the value of a weight is equal to zero. I tried to use .eval(), but it raised a ValueError ("Cannot evaluate tensor using eval(): No default ..."). I have no idea how to get the value of a variable inside the training loop. Another way would be to modify tf.train.GradientDescentOptimizer(), but I don't know how to do it. Has anyone implemented this, or can anyone suggest other methods? Thanks in advance!

Are you looking to apply regularization to the weights?
There is an apply_regularization method in the API that you can use to accomplish that.
See: How to exactly add L1 regularisation to tensorflow error function
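If regularization is what you're after, here is a minimal sketch (TF 1.x contrib API; the variable and data loss below are stand-ins, not from the question) of adding an L1 penalty, which drives small weights toward exactly zero:
import tensorflow as tf

w = tf.Variable(tf.random_normal([10, 10]), name="w")
data_loss = tf.reduce_sum(tf.square(w))                 # stand-in for your real loss

l1 = tf.contrib.layers.l1_regularizer(scale=0.01)
reg_loss = tf.contrib.layers.apply_regularization(l1, [w])

total_loss = data_loss + reg_loss
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(total_loss)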

I don't know of any use case for stopping the training of some variables; it's probably not what you should do.
Anyway, calling tf.Variable() (if I understood you correctly) is not going to help, because it's called just once, when the graph is defined. The first argument is initial_value: as the name suggests, it's only used during initialization.
Instead, you can use tf.assign like this:
with tf.Session() as session:
    assign_op = var.assign(0)
    session.run(assign_op)
It will update the variable during the session, which is what you're asking for.
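If you also want to keep the zeroed weights from being updated again, one option is to mask their gradients. A minimal sketch (TF 1.x graph mode; the variable shape, threshold, and loss below are illustrative, not from the question):
import tensorflow as tf

w = tf.Variable(tf.random_normal([10, 10]), name="w")
loss = tf.reduce_sum(tf.square(w))                      # stand-in loss for the sketch

# mask is 0 where |w| is below the threshold, 1 elsewhere; computed once and frozen
mask = tf.Variable(tf.ones([10, 10]), trainable=False)
update_mask = mask.assign(tf.cast(tf.abs(w) >= 1e-3, tf.float32))
prune = w.assign(w * mask)                              # set the small weights to zero

opt = tf.train.GradientDescentOptimizer(0.01)
grads_and_vars = opt.compute_gradients(loss, [w])
masked = [(g * mask, v) for g, v in grads_and_vars]     # block updates to pruned weights
train_op = opt.apply_gradients(masked)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_mask)
    sess.run(prune)
    for _ in range(100):
        sess.run(train_op)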

Related

How can I extract all arguments I am passing to a TensorFlow function?

It is difficult to retrain my models on new data because I never remember my initial optimizer, loss function, and hyperparameters. How can I extract all arguments I am passing to a TensorFlow function? Say, from the code below, how can I extract a list with the arguments learning_rate, beta_1, beta_2, and so on?
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001,
                                     beta_1=0.9, beta_2=0.999,
                                     epsilon=1e-07, amsgrad=False,
                                     name="Adam")
I just want to extract the names so I can access them later on, for example:
optimizer.learning_rate
I have tried .keys() and .classes(), but nothing works. Of course I can inspect it using dir(optimizer), but the output is not filtered.
I just found a way. The drawback is that it requires compiling the model first. I will post it because maybe someone else has the same issue.
model.optimizer.get_config()
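For reference, a minimal sketch of what this returns; note that tf.keras optimizers also expose get_config() directly, without a compiled model:
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                     beta_2=0.999, epsilon=1e-07,
                                     amsgrad=False, name="Adam")

config = optimizer.get_config()        # dict of hyperparameter names and values
print(config)                          # {'name': 'Adam', 'learning_rate': 0.001, ...}
print(config["learning_rate"])         # 0.001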

Is there a way to evaluate losses on the test sample using spacy model

I am trying to create a binary classifier with spacy 2.1.3, and in order to perform an overfitting test I would like to evaluate losses on the test sample. In their tutorial, losses is passed as a parameter and updated in place:
https://github.com/explosion/spaCy/blob/master/examples/training/train_textcat.py#L90
I cannot find any example of how to evaluate it on my test sample. Ideally I would like to produce plots as shown here:
https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/
I tried digging into their code but I didn't find anything useful. Has anyone tried to produce similar plots?
Thank you for your help and comments :)
The variable losses is being set during the training loop, cf. https://github.com/explosion/spaCy/blob/master/spacy/pipeline/pipes.pyx#L931.
What you want to do is, after each iteration (epoch), print out this training loss, but also perform your own evaluation on a held-out dev set. When you apply your model-in-training to the dev set, you can use averaged model parameters as explained here: https://spacy.io/usage/training#tips-param-avg.
For this dev evaluation, you can implement whatever metric you like, such as accuracy, precision, recall, F-score, or a loss function similar to the one you've been training on, cf. https://github.com/explosion/spaCy/blob/master/spacy/pipeline/pipes.pyx#L950.
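A minimal sketch of such a loop (spaCy 2.x API; the toy train/dev data, the POSITIVE label, and the choice of plain accuracy as the dev metric are all just placeholders):
import random
import spacy
from spacy.util import minibatch

# toy data just to make the sketch self-contained
train_data = [("very good", {"cats": {"POSITIVE": True}}),
              ("very bad", {"cats": {"POSITIVE": False}})]
dev_texts = ["good stuff", "bad stuff"]
dev_cats = [{"POSITIVE": True}, {"POSITIVE": False}]

nlp = spacy.blank("en")
textcat = nlp.create_pipe("textcat")
textcat.add_label("POSITIVE")
nlp.add_pipe(textcat, last=True)

optimizer = nlp.begin_training()
for epoch in range(10):
    random.shuffle(train_data)
    losses = {}
    for batch in minibatch(train_data, size=8):
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer, drop=0.2, losses=losses)

    # evaluate on the held-out dev set with averaged parameters
    with textcat.model.use_params(optimizer.averages):
        docs = list(nlp.pipe(dev_texts))
        correct = sum((doc.cats["POSITIVE"] > 0.5) == dev_cats[i]["POSITIVE"]
                      for i, doc in enumerate(docs))
    print(epoch, "train loss:", losses["textcat"], "dev acc:", correct / len(docs))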

Changing tensor value from a saved tensorflow model

I have some saved models that contain dropout layers. Unfortunately, the dropout_keep_dim value was not given as a placeholder. Now when I restore a model for testing, it gives random output on each run. So my question is: is it possible to change the dropout_keep_dim of a saved model? The dropout layer was added in the following way:
tf.nn.dropout(layer_no, dropout_keep_dim)
I have already wasted hours on Google and didn't find any working solution. Is there even a solution, or are my saved models of no use now? tf.assign doesn't work because, in my case, dropout_keep_dim is not a variable. Any kind of help is appreciated.
NB: I can restore the dropout_keep_dim value and print it. I want to change it, if that's possible, and then test with the saved weights.

What is the difference between parameters and children?

It looks like parameters and children show the same info, so what is the difference between them?
import torch
print('torch.__version__', torch.__version__)
m = torch.load('imagenet_resnet18.pth')
print(m.parameters)
print(m.children)
model.parameters() is a generator that yields the tensors containing your model's parameters.
model.children() is a generator that yields the layers of the model, from which you can extract the parameter tensors via <layername>.weight or <layername>.bias.
Visit this link for a simple tutorial on accessing and freezing model layers.
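A minimal sketch of the difference on a small throwaway model (the layer sizes are arbitrary):
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))

for child in model.children():     # immediate submodules, i.e. the layers themselves
    print(child)                   # Linear(in_features=4, ...), ReLU(), Linear(...)

for p in model.parameters():       # flat iterator over every parameter tensor
    print(p.shape)                 # weights and biases of both Linear layers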
The other answer (the only one at the time of writing) is not quite to the point and, in my opinion, somewhat misleading. Per the current docs (08/23/2022):
children():
Returns an iterator over immediate children modules.
This means it stops at immediate submodules and does not descend into containers such as torch.nn.Sequential or torch.nn.ModuleList.
parameters(recurse=True):
Returns an iterator over module parameters. This is typically passed to an optimizer.
"Passed to an optimizer" implies that recursion is already handled for you; just pass the returned iterator to the optimizer.
For example output of children(), see this answer on the PyTorch forum: https://discuss.pytorch.org/t/module-children-vs-module-modules/4551/4?u=raining_day513

Does Theano support variable split?

In my Theano program, I want to split a tensor matrix into two parts, with each part making a different contribution to the error function. Can anyone tell me whether automatic differentiation supports this?
For example, for a tensor matrix variable M, I want to split it into M1=M[:300,] and M2=M[300:,], and then define the cost function as 0.5*M1*w + 0.8*M2*w. Is it still possible to get the gradient with T.grad(cost,w)?
More specifically, I want to construct an autoencoder in which different features have different weights in their contribution to the total cost.
Thanks to anyone who answers my question.
Theano supports this out of the box; there's nothing special you need to do. If Theano didn't support something in your graph, it would raise an error, but you won't get one for this as long as there's no problem in the way you call it. The pseudo-code you posted should work.
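A minimal sketch (the split index follows the question; the matrix width, weight dimension, and random data are arbitrary) showing that T.grad differentiates straight through the slicing:
import numpy as np
import theano
import theano.tensor as T

# w is a shared variable so the cost can be differentiated with respect to it
w = theano.shared(np.random.randn(5).astype(theano.config.floatX), name="w")
M = T.matrix("M")

M1, M2 = M[:300], M[300:]                        # the split from the question
cost = 0.5 * T.sum(T.dot(M1, w)) + 0.8 * T.sum(T.dot(M2, w))
grad = T.grad(cost, w)                           # autodiff through the slices

f = theano.function([M], [cost, grad])
data = np.random.randn(400, 5).astype(theano.config.floatX)
print(f(data))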
