Effective method for accumulating gradients in TensorFlow - python-3.x

It appears there are already a couple of questions on how to accumulate gradients in TensorFlow. Here's the original and a duplicate.
The accepted recommendation, taken from this issue, is to do the following:
opt = tf.train.AdamOptimizer()
tvs = tf.trainable_variables()
# one non-trainable accumulator variable per trainable variable
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
# ops that reset every accumulator to zero
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
# gradients of the loss (rmse) with respect to the trainable variables
gvs = opt.compute_gradients(rmse, tvs)
# ops that add the current minibatch gradients into the accumulators
accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]
# apply the accumulated gradients to the corresponding variables
train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])
In the training loop we have:
while True:
    sess.run(zero_ops)
    for i in range(n_minibatches):
        sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]})
    sess.run(train_step)
I managed to implement a minimal example of this in a Jupyter notebook, but I'm bothered by the ad-hoc nature of the solution. Moreover, as shown in the notebook, the accumulator poses a problem when training is run a second time. It's not clear to me right now how I should address this.

I found the solution to my problem and posted it in a public gist. The key is to reset the default graph before compiling a new graph and running training a second time in the same notebook.
So we have:
tf.reset_default_graph()
model = mnist_network(seed=42)
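Put together, a second run in the same notebook then looks roughly like this. This is only a sketch: how mnist_network exposes X, y, zero_ops, accum_ops, and train_step is not shown above, so those names are assumptions carried over from the earlier snippets.

import tensorflow as tf  # TF 1.x graph-mode API, as in the question

tf.reset_default_graph()          # start from a clean graph so the old accumulators are gone
model = mnist_network(seed=42)    # rebuild the graph: placeholders, zero_ops, accum_ops, train_step

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    while True:
        sess.run(zero_ops)                                    # reset the accumulators
        for i in range(n_minibatches):                        # accumulate gradients per minibatch
            sess.run(accum_ops, feed_dict={X: Xs[i], y: ys[i]})
        sess.run(train_step)                                  # apply the summed gradients once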

Related

Why do genetic algorithms converge to end up with a population that is identical?

I was implementing a genetic algorithm with tf.keras, where I manually modify the weights, perform the gene crossover, and so on. I've found that after a few dozen generations the predictions of all the networks are essentially identical, and after a few more generations the predictions are exactly the same. Trying to google the problem, I found this page,
which mentions the problem at a conceptual level, but I can't understand how this could happen if I'm manually creating genetic diversity every generation.
def model_mutate(weights, var):
    for i in range(len(weights)):
        for j in range(len(weights[i])):
            if random.uniform(0, 1) < 0.2:  # mutate this slice with probability 0.2
                change = np.random.uniform(-var, var, weights[i][j].shape)
                weights[i][j] += change
    return weights
def crossover_brains(parent1, parent2):
    global brains
    weight1 = parent1.get_weights()
    weight2 = parent2.get_weights()
    new_weight1 = weight1
    new_weight2 = weight2
    gene = random.randint(0, len(new_weight1) - 1)  # we change a random weight
                                                    # or set of weights
    new_weight1[gene] = weight2[gene]
    new_weight2[gene] = weight1[gene]
    q = np.asarray([new_weight1, new_weight2], dtype=object)
    return q
def evolve(best_fit1, best_fit2):
    global generation
    global best_brain
    global best_brain2
    mutations = []
    for i in range(total_brains // 2):
        cross_weights = model_crossover(best_fit1, best_fit2)
        mutation1 = model_mutate(cross_weights[0], 0.5)
        mutation2 = model_mutate(cross_weights[1], 0.5)
        mutations.append(mutation1)
        mutations.append(mutation2)
    for i in range(total_brains):
        brains[i].set_weights(mutations[i])
    generation += 1
def find_best_fit():
    fitness = np.loadtxt("fitness.txt")
    print(f"fitness average {np.mean(fitness)} in generation {generation}")
    print(f"fitness max is {np.max(fitness)} in generation {generation}")
    fitness_t.append(np.mean(fitness))
    maxfit1 = np.max(fitness)
    best_fit1 = np.where(fitness == maxfit1)[0]
    fitness[best_fit1] = 0
    maxfit2 = np.max(fitness)
    best_fit2 = np.where(fitness == maxfit2)[0]
    if len(best_fit1) > 1:  # band-aid for when several individuals have the same fitness,
                            # which would make best_fit1/best_fit2 arrays of indices
        best_fit1 = best_fit1[0]
    if len(best_fit2) > 1:
        best_fit2 = best_fit2[0]
    return int(best_fit1), int(best_fit2)

bf1, bf2 = find_best_fit()
evolve(bf1, bf2)
This is the code I'm using to set the modified weights on the existing Keras models (mostly not mine; I don't understand it well enough to have written it myself).
If Keras is working the way I think it is, I don't see how this would converge to anything that does not maximize fitness; furthermore, the fitness seems to be decreasing over time.
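One detail worth noting about crossover_brains above: new_weight1 = weight1 does not copy the list returned by get_weights(), it only binds a second name to the same list, so the first swap also changes weight1[gene], and the second swap then writes the already-swapped value back. A deep copy keeps the two children independent. A purely illustrative sketch of such a variant (not the poster's code):

import copy
import random

def crossover_brains_copy(parent1, parent2):
    # Deep-copy the weight lists first so the two swaps operate on
    # independent children rather than on the parents' shared lists.
    weight1 = parent1.get_weights()
    weight2 = parent2.get_weights()
    new_weight1 = copy.deepcopy(weight1)
    new_weight2 = copy.deepcopy(weight2)
    gene = random.randint(0, len(new_weight1) - 1)  # swap one random weight array
    new_weight1[gene] = weight2[gene]
    new_weight2[gene] = weight1[gene]
    return new_weight1, new_weight2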

Unable to Reproduce Results while using Scikit-learn RFECV

I am trying to use Recursive Feature Elimination with cross-validation (RFECV) and produce reproducible results. I have tried fixing the randomness by passing random_state=SEED to the components involved, and I have also set the random seed globally with np.random.seed(SEED). Even so, I am unable to control the randomness and cannot reproduce my results. Attached is the code segment.
estimator = GradientBoostingClassifier(random_state=SEED, n_estimators=2*df.shape[1])
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=SEED)
selector = RFECV(estimator, n_jobs=-1,step=STEP, cv=cv)
selector = selector.fit(df, y)
df = df.loc[:, selector.support_]
print("Shape of final data AFTER FEATURE SELECTION")
print(df.shape, y.shape)
Specifically, if I run this segment of code, it returns a different number of selected features on each run. Any help would be appreciated.
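One way to narrow down where the randomness enters is to check whether the estimator itself is deterministic under the fixed seed before involving RFECV. A small diagnostic sketch, reusing df, y, and SEED from the snippet above:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Fit the bare estimator twice on identical data and compare the results.
# If these differ, the nondeterminism is in the estimator, not in RFECV.
clf_a = GradientBoostingClassifier(random_state=SEED, n_estimators=2 * df.shape[1]).fit(df, y)
clf_b = GradientBoostingClassifier(random_state=SEED, n_estimators=2 * df.shape[1]).fit(df, y)
print(np.allclose(clf_a.feature_importances_, clf_b.feature_importances_))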

How to apply a model developed in fast.ai / PyTorch?

I've trained a model which I'm trying to apply to new data. I'm totally new to fast.ai.
I'm creating my databunch as below (df being the data I want to score):
bs = 64
data_lm = (TextList.from_df(df, path, cols='comment_text')
           .split_by_rand_pct(0.1)
           .label_for_lm()
           .databunch(bs=bs))
The problem is that I cannot omit the .split_by_rand_pct(0.1), so I cannot score the whole dataset.
I then go and load/apply the model as below
data_clas = load_data(path, 'data_clas.pkl', bs=bs)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('fine_tuned_enc')
learn.load('third');
preds, target = learn.get_preds(DatasetType.Test, ordered=True)
labels = preds.numpy()
But the problem is I'm only scoring the 0.1 split of my data, because the first piece of code where I create the databunch is not correct... I want to apply the saved/loaded model to the whole DataFrame.
Many thanks in advance.
A colleague of mine actually provided me with the solution; I'm posting it here in case it's useful to anyone.
learn.data.add_test(df['Contact_Text'])
preds,y = learn.get_preds(ds_type=DatasetType.Test)
preds
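If class labels rather than raw probabilities are needed, a small follow-up sketch (assuming a classification head and that the databunch exposes its label names via learn.data.classes, as in fastai v1):

# preds comes from learn.get_preds(ds_type=DatasetType.Test) above
pred_idx = preds.argmax(dim=1)                                # most probable class index per row
pred_labels = [learn.data.classes[int(i)] for i in pred_idx]  # map indices back to label names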

How to resolve KeyError: 'val_mean_absolute_error' Keras 2.3.1 and TensorFlow 2.0 From Chollet Deep Learning with Python

I am on section 3.7 of Chollet's book Deep Learning with Python.
The project is to find the median price of homes in a given Boston suburb in the 1970s.
https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/3.7-predicting-house-prices.ipynb
At section "Validating our approach using K-fold validation" I try to run this block of code:
num_epochs = 500
all_mae_histories = []
for i in range(k):
    print('processing fold #', i)
    # Prepare the validation data: data from partition # k
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
    # Prepare the training data: data from all other partitions
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]],
        axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]],
        axis=0)
    # Build the Keras model (already compiled)
    model = build_model()
    # Train the model (in silent mode, verbose=0)
    history = model.fit(partial_train_data, partial_train_targets,
                        validation_data=(val_data, val_targets),
                        epochs=num_epochs, batch_size=1, verbose=0)
    mae_history = history.history['val_mean_absolute_error']
    all_mae_histories.append(mae_history)
I get the error KeyError: 'val_mean_absolute_error' on this line:
mae_history = history.history['val_mean_absolute_error']
I am guessing the solution is to figure out the correct key to replace val_mean_absolute_error. I've looked through the Keras documentation for what the correct key would be. Does anyone know the correct key?
The problem in your code is that, when you compile your model, you do not add the 'mae' metric.
If you want to add the 'mae' metric, you can do it in either of these two ways:
model.compile('sgd', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.compile('sgd', metrics=['mean_absolute_error'])
After this step, you can check whether the correct name is val_mean_absolute_error or val_mae. Most likely, if you compile your model as in the second option, your code will work with 'val_mean_absolute_error'.
Also, you should include the code snippet where you compile your model (i.e. the build_model() function); it is missing from the question text above.
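For reference, a minimal sketch of a build_model() that compiles with the metric included. The layer sizes follow the book's notebook, but since the original function is not quoted above, treat them as an assumption; the important part is the metrics argument, which determines the history.history keys.

from tensorflow.keras import models, layers

def build_model():
    # Small regression network in the spirit of the notebook.
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mean_absolute_error'])
    return model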
I replaced 'val_mean_absolute_error' with 'val_mae' and it worked for me
FYI, I had the same problem, and it persisted even after changing the line to history.history['val_mae'] as described in the answer.
In my case, for the val_mae key to be present in the history.history object, I needed to make sure that the model.fit() call included the validation_data=(val_data, val_targets) argument. I had neglected to do this initially.
I fixed it by updating the line as below:
mae_history = history.history["mae"]
The History object contains keys matching the metric names you used when compiling; you can also print the keys directly, as in the snippet after the examples below.
For example:
mean_absolute_error gives val_mean_absolute_error
mae gives val_mae
accuracy gives val_accuracy
acc gives val_acc
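A quick way to see which keys were actually recorded for your setup is to print them from the History object returned by model.fit(). A small sketch, reusing the loop variables from the question:

history = model.fit(partial_train_data, partial_train_targets,
                    validation_data=(val_data, val_targets),
                    epochs=1, batch_size=1, verbose=0)
print(history.history.keys())
# e.g. dict_keys(['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error'])
# or   dict_keys(['loss', 'mae', 'val_loss', 'val_mae']), depending on the metric name used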

New to theano. Trying to add a term to a loss function to penalize negative weights

To be clear, by weights I mean the entries in the matrices (Ws) of the affine transformation in a node of a neural net.
I start with categorical_crossentropy as my loss function. And I want to add an additional term to penalize negative weights.
To this end I want to introduce a term of the form
theano.tensor.sum(theano.tensor.exp(-10 * ws))
Where "ws" are the weights.
If I follow the source code of categorical_crossentropy:
if true_dist.ndim == coding_dist.ndim:
    return -tensor.sum(true_dist * tensor.log(coding_dist), axis=coding_dist.ndim - 1)
elif true_dist.ndim == coding_dist.ndim - 1:
    return crossentropy_categorical_1hot(coding_dist, true_dist)
else:
    raise TypeError('rank mismatch between coding and true distributions')
Seems like I should update the third line (from the bottom) to read
crossentropy_categorical_1hot(coding_dist, true_dist) + theano.tensor.sum(theano.tensor.exp(- 10 * ws))
And change the declaration of the function to be
my_categorical_crossentropy(coding_dist, true_dist, ws). When calling my_categorical_crossentropy I write
loss = my_categorical_crossentropy(net_output, true_output, l_layers[1].W)
with, for a start, l_layers[1].W being the weights coming from the first layer of my neural net.
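In other words, the modified function would look roughly like the sketch below. This uses Theano's built-in categorical cross-entropy rather than the copied source, and the function and argument names are just the ones from the description above:

import theano.tensor as T

def my_categorical_crossentropy(coding_dist, true_dist, ws):
    # standard categorical cross-entropy ...
    xent = T.nnet.categorical_crossentropy(coding_dist, true_dist)
    # ... plus a term that grows rapidly as the given weights go negative
    penalty = T.sum(T.exp(-10 * ws))
    return xent + penalty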
With those updates, I go on writing:
loss = aggregate(loss, mode = 'mean')
updates = sgd(loss, all_params, learning_rate = 0.005)
train = theano.function([l_input.input_var, true_output], loss, updates = updates)
[...]
This passes the compiler and everything runs smoothly; the training of the network completes. However, for some reason the additional term theano.tensor.sum(theano.tensor.exp(-10 * ws)) is ignored: it seems not to affect the loss value.
I was trying to look into the Theano documentation, but so far I could not figure out what might be wrong. The weights l_layers[1].W are shared variables, so I could not pass them as
train = theano.function([l_input.input_var, true_output, l_layers[1].W], loss, updates = updates)
Any comments are welcome. Thanks!
Solution
Though I didn't find out why my original approach didn't work, adding the penalty term outside categorical_crossentropy, as suggested in the comments, did solve the problem:
loss = aggregate(categorical_crossentropy(net_output, true_output) + theano.tensor.sum(theano.tensor.exp(-10 * l_layers[1].W)))