I have a PyTorch model (class Net), together with its saved weights / state dict (net.pth), and I want to perform inference in a multiprocessing environment.
I noticed that I cannot simply create a model instance, load the weights, and then share the model with a child process (though I'd have assumed this is possible thanks to copy-on-write). What happens is that the child hangs on y = model(x), and eventually the whole program hangs (because the parent blocks in waitpid).
The following is a minimal reproducible example:
import os
import torch

def handler():
    with torch.no_grad():
        x = torch.rand(1, 3, 32, 32)
        y = model(x)
    return y

model = Net()
model.load_state_dict(torch.load("./net.pth"))

pid = os.fork()
if pid == 0:
    # this doesn't get printed as handler() hangs in the child process
    print('child:', handler())
else:
    # everything is fine here
    print('parent:', handler())
    os.waitpid(pid, 0)
If the model loading is done independently for parent & child, i.e. no sharing, then everything works as expected. I have also tried calling share_memory_ on model's tensors, but to no avail.
Am I doing something obviously wrong here?
It seems that sharing the state dict and performing the loading operation in each process solves the problem:
import os
import torch

LOADED = False

def handler():
    global LOADED
    if not LOADED:
        # each process loads the state dict independently
        model.load_state_dict(state)
        LOADED = True
    with torch.no_grad():
        x = torch.rand(1, 3, 32, 32)
        y = model(x)
    return y

model = Net()
# share the state rather than loading the state dict in the parent
# model.load_state_dict(torch.load("./net.pth"))
state = torch.load("./net.pth")

pid = os.fork()
if pid == 0:
    print('child:', handler())
else:
    print('parent:', handler())
    os.waitpid(pid, 0)
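Another option (a sketch not taken from the original answer) is to skip os.fork entirely and use torch.multiprocessing with the "spawn" start method, letting each worker build and load the model itself; the worker function below is illustrative:

import torch
import torch.multiprocessing as mp

def worker(state_dict):
    model = Net()  # Net is assumed to be importable in the child process
    model.load_state_dict(state_dict)
    model.eval()
    with torch.no_grad():
        x = torch.rand(1, 3, 32, 32)
        print('child:', model(x))

if __name__ == '__main__':
    state = torch.load("./net.pth", map_location="cpu")
    mp.set_start_method("spawn", force=True)
    p = mp.Process(target=worker, args=(state,))
    p.start()
    p.join()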
Related
I'm trying to get the hang of reinforcement learning, so I'm following a guide at:
pytorch.org/tutorials/
They've implemented a DQN that solves CartPole with computer vision. Basically, I've copied their code and modified it to solve the LunarLander environment without computer vision. But I'm getting weird results. The model seems to be learning, as it improves its score (with a lot of hiccups), until it fails spectacularly and gets stuck, doing weird movements and not learning.
Learning progress graph
Another learning progress graph for a different model
You can see both models failing in the same way at the end of the learning.
I cannot figure out why this solution is not working. Could you have a look at my code and perhaps find and point out errors?
Global variables:
BATCH_SIZE = 1000
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 1000
TARGET_UPDATE = 10
LEARNING_RATE = 0.01
MOMENTUM = 0.9
MEMORY_SIZE = 10000
env = gym.make('LunarLander-v2')
n_actions = env.action_space.n
n_observation_space = env.observation_space.shape[0]
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
policy_net = DQN(n_observation_space, n_actions).to(device)
target_net = DQN(n_observation_space, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.Adam(policy_net.parameters(), lr=LEARNING_RATE)
memory = ReplayMemory(MEMORY_SIZE)
Learning loop:
def learn(num_episodes=50, render=False):
    for i_episode in range(num_episodes):
        # Initialize the environment and state
        state = torch.tensor([env.reset()], device=device, dtype=torch.float32)
        episode_reward = 0
        for t in count():
            # Select and perform an action
            action = select_action(state)
            next_state, reward, done, _ = env.step(action.item())
            episode_reward += reward
            reward = torch.tensor([reward], device=device, dtype=torch.float32)
            next_state = torch.tensor([next_state], device=device, dtype=torch.float32)
            # Store the transition in memory
            memory.push(state, action, next_state, reward)
            # Move to the next state
            state = next_state
            # Perform one step of the optimization (on the target network)
            optimize_model()
            if render:
                env.render()
            if done:
                break
        all_rewards.append(episode_reward)
        # Update the target network, copying all weights and biases in DQN
        if i_episode % TARGET_UPDATE == 0:
            target_net.load_state_dict(policy_net.state_dict())
Optimization methods:
def optimize_model():
    if len(memory) < BATCH_SIZE:
        return
    transitions = memory.sample(BATCH_SIZE)
    batch = Transition(*zip(*transitions))
    non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
                                            batch.next_state)), device=device, dtype=torch.bool)
    non_final_next_states = torch.cat([s for s in batch.next_state
                                       if s is not None])
    state_batch = torch.cat(batch.state)
    action_batch = torch.cat(batch.action)
    reward_batch = torch.cat(batch.reward)
    state_action_values = policy_net(state_batch).gather(1, action_batch)
    next_state_values = torch.zeros(BATCH_SIZE, device=device)
    next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch
    loss = nn.MSELoss(state_action_values, expected_state_action_values.unsqueeze(1))
    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in policy_net.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()
Model:
class DQN(nn.Module):
    def __init__(self, input_size, output_size):
        super(DQN, self).__init__()
        self.l1 = nn.Linear(input_size, 512)
        self.l2 = nn.Linear(512, 512)
        self.l3 = nn.Linear(512, 256)
        self.l4 = nn.Linear(256, output_size)

    def forward(self, x):
        x = F.leaky_relu(self.l1(x))
        x = F.leaky_relu(self.l2(x))
        x = F.leaky_relu(self.l3(x))
        return self.l4(x)
If anyone's willing to run my code locally, please let me know. I'll clean up the code and share it via Github.
Looking through your code, I can't find any obvious bugs (though you didn't post everything). A few things do stand out, though:
A BATCH_SIZE of 1000 is massive. Of course you should use whatever works best for you, but next time try values around 32/64/128 first.
Since you didn't post the action-selection and epsilon-decay functions, I assume you're decaying epsilon at every time step with a 1/1000 decay rate. Given that you're using a very big network, try making your epsilon decay slower; a sketch of a typical schedule follows this list.
As pointed out above, the environment is easy enough to be solved by a much smaller network. Bigger networks have more weights, and more weights need more time to train. I would remove at least one hidden layer, and you can also try reducing the number of units per layer.
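For reference, here is a minimal sketch of an epsilon-greedy select_action with an exponential decay, in the spirit of the PyTorch DQN tutorial; since your version wasn't posted, the steps_done counter and the exact schedule are assumptions on my part. Raising EPS_DECAY (e.g. to 10000) makes the decay slower:

import math
import random

steps_done = 0

def select_action(state):
    global steps_done
    # Exponentially anneal epsilon from EPS_START down to EPS_END
    eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-steps_done / EPS_DECAY)
    steps_done += 1
    if random.random() > eps_threshold:
        with torch.no_grad():
            # Greedy action: index of the largest predicted Q-value
            return policy_net(state).max(1)[1].view(1, 1)
    # Exploratory action: uniform random choice
    return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)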
I am aware that when calling loss.backward() we need to specify retain_graph=True if there are multiple networks and multiple loss functions optimizing each network separately. But even with (or without) this parameter I am getting errors. Following is an MWE to reproduce the issue (on PyTorch 1.6).
import torch
from torch import nn
from torch import optim

torch.autograd.set_detect_anomaly(True)

class GRU1(nn.Module):
    def __init__(self):
        super(GRU1, self).__init__()
        self.brnn = nn.GRU(input_size=2, bidirectional=True, num_layers=1, hidden_size=100)

    def forward(self, x):
        return self.brnn(x)

class GRU2(nn.Module):
    def __init__(self):
        super(GRU2, self).__init__()
        self.brnn = nn.GRU(input_size=200, bidirectional=True, num_layers=1, hidden_size=1)

    def forward(self, x):
        return self.brnn(x)

gru1 = GRU1()
gru2 = GRU2()
gru1_opt = optim.Adam(gru1.parameters())
gru2_opt = optim.Adam(gru2.parameters())
criterion = nn.MSELoss()

for i in range(100):
    gru1_opt.zero_grad()
    gru2_opt.zero_grad()
    vector = torch.randn((15, 100, 2))
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
    loss_gru1.backward(retain_graph=True)
    gru1_opt.step()
    gru1_output, _ = gru1(vector)  # (15, 100, 200)
    gru2_output, _ = gru2(gru1_output)  # (15, 100, 2)
    loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
    loss_gru2.backward(retain_graph=True)
    gru2_opt.step()
    print(f"GRU1 loss: {loss_gru1.item()}, GRU2 loss: {loss_gru2.item()}")
With retain_graph set to True I get the error
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 300]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The error without the parameter is
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.
which is expected.
Please point out what needs to be changed in the above code for it to begin training. Any help is appreciated.
In such a case, one can detach the computation graph to exclude the parameters that don't need to be optimized. In this case, the computation graph should be detached after the second forward pass with gru1 i.e.
....
gru1_opt.step()
gru1_output, _ = gru1(vector)
gru1_output = gru1_output.detach()
....
This way, you won't "try to backward through the graph a second time" as the error mentioned.
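Putting it together, a sketch of the corrected training loop for the MWE (everything outside the loop is unchanged); with the detach in place, neither backward call needs retain_graph=True:

for i in range(100):
    gru1_opt.zero_grad()
    gru2_opt.zero_grad()
    vector = torch.randn((15, 100, 2))

    # Train gru1 on its own loss
    gru1_output, _ = gru1(vector)
    loss_gru1 = criterion(gru1_output, torch.randn((15, 100, 200)))
    loss_gru1.backward()
    gru1_opt.step()

    # Re-run gru1 with the updated weights, then cut the graph so that
    # loss_gru2.backward() never reaches gru1's already-stepped parameters
    gru1_output, _ = gru1(vector)
    gru1_output = gru1_output.detach()
    gru2_output, _ = gru2(gru1_output)
    loss_gru2 = criterion(gru2_output, torch.randn((15, 100, 2)))
    loss_gru2.backward()
    gru2_opt.step()

    print(f"GRU1 loss: {loss_gru1.item()}, GRU2 loss: {loss_gru2.item()}")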
I am trying to implement truncated backpropagation through time (TBPTT) in PyTorch, for the simple case where K1=K2. I have an implementation below that produces reasonable output, but I just want to make sure it is correct. When I look online for PyTorch examples of TBPTT, they do inconsistent things around detaching the hidden state, zeroing out the gradient, and the ordering of these operations. Please let me know if I have made a mistake.
In the code below, H maintains the current hidden state, and model(weights, H, x) outputs the prediction and the new hidden state.
while i < NUM_STEPS:
    # Grab x, y for the ith datapoint
    x = data[i]
    target = true_output[i]

    # Run model
    output, new_hidden = model(weights, H, x)
    H = new_hidden

    # Update running error
    error += (output - target)**2

    if (i+1) % K == 0:
        # Backpropagate
        error.backward()
        opt.step()
        opt.zero_grad()
        error = 0
        H = H.detach()

    i += 1
So the idea of your code is to cut the graph off after every Kth step. Yes, your implementation is correct, and this answer confirms that.
# truncated to the last K timesteps
while i < NUM_STEPS:
    out = model(out)
    if (i+1) % K == 0:
        out.backward()
        out.detach()
out.backward()
You can also follow this example for your reference.
import torch
from ignite.engine import Engine, EventEnum, _prepare_batch
from ignite.utils import apply_to_tensor

class Tbptt_Events(EventEnum):
    """Additional tbptt events.

    Additional events for truncated backpropagation through time dedicated
    trainer.
    """

    TIME_ITERATION_STARTED = "time_iteration_started"
    TIME_ITERATION_COMPLETED = "time_iteration_completed"

def _detach_hidden(hidden):
    """Cut backpropagation graph.

    Auxiliary function to cut the backpropagation graph by detaching the hidden
    vector.
    """
    return apply_to_tensor(hidden, torch.Tensor.detach)

def create_supervised_tbptt_trainer(
    model, optimizer, loss_fn, tbtt_step, dim=0, device=None, non_blocking=False, prepare_batch=_prepare_batch
):
    """Create a trainer for truncated backprop through time supervised models.

    Training a recurrent model on long sequences is computationally intensive as
    it requires processing the whole sequence before getting a gradient.
    However, when the training loss is computed over many outputs
    (`X to many <https://karpathy.github.io/2015/05/21/rnn-effectiveness/>`_),
    there is an opportunity to compute a gradient over a subsequence. This is
    known as
    `truncated backpropagation through time <https://machinelearningmastery.com/
    gentle-introduction-backpropagation-time/>`_.

    This supervised trainer applies a gradient optimization step every `tbtt_step`
    time steps of the sequence, while backpropagating through the same
    `tbtt_step` time steps.

    Args:
        model (`torch.nn.Module`): the model to train.
        optimizer (`torch.optim.Optimizer`): the optimizer to use.
        loss_fn (torch.nn loss function): the loss function to use.
        tbtt_step (int): the length of time chunks (last one may be smaller).
        dim (int): axis representing the time dimension.
        device (str, optional): device type specification (default: None).
            Applies to batches.
        non_blocking (bool, optional): if True and this copy is between CPU and GPU,
            the copy may occur asynchronously with respect to the host. For other cases,
            this argument has no effect.
        prepare_batch (callable, optional): function that receives `batch`, `device`,
            `non_blocking` and outputs tuple of tensors `(batch_x, batch_y)`.

    .. warning::
        The internal use of `device` has changed.
        `device` will now *only* be used to move the input data to the correct device.
        The `model` should be moved by the user before creating an optimizer.

        For more information see:

        * `PyTorch Documentation <https://pytorch.org/docs/stable/optim.html#constructing-it>`_
        * `PyTorch's Explanation <https://github.com/pytorch/pytorch/issues/7844#issuecomment-503713840>`_

    Returns:
        Engine: a trainer engine with supervised update function.
    """

    def _update(engine, batch):
        loss_list = []
        hidden = None

        x, y = batch
        for batch_t in zip(x.split(tbtt_step, dim=dim), y.split(tbtt_step, dim=dim)):
            x_t, y_t = prepare_batch(batch_t, device=device, non_blocking=non_blocking)
            # Fire event for start of iteration
            engine.fire_event(Tbptt_Events.TIME_ITERATION_STARTED)
            # Forward, backward and optimize
            model.train()
            optimizer.zero_grad()
            if hidden is None:
                y_pred_t, hidden = model(x_t)
            else:
                hidden = _detach_hidden(hidden)
                y_pred_t, hidden = model(x_t, hidden)
            loss_t = loss_fn(y_pred_t, y_t)
            loss_t.backward()
            optimizer.step()

            # Setting state of engine for consistent behaviour
            engine.state.output = loss_t.item()
            loss_list.append(loss_t.item())

            # Fire event for end of iteration
            engine.fire_event(Tbptt_Events.TIME_ITERATION_COMPLETED)

        # return average loss over the time splits
        return sum(loss_list) / len(loss_list)

    engine = Engine(_update)
    engine.register_events(*Tbptt_Events)
    return engine
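For completeness, a small usage sketch of the factory above; the toy GRU, the tensor shapes and the single random batch are illustrative assumptions, not part of the original snippet:

from torch import nn, optim

# Any model that returns (prediction, hidden) and accepts an optional hidden
# state works here; a bare nn.GRU already has that interface.
seq_len, batch_size, n_feats, n_hidden = 40, 8, 2, 16
model = nn.GRU(input_size=n_feats, hidden_size=n_hidden)
optimizer = optim.Adam(model.parameters())
loss_fn = nn.MSELoss()

# One toy batch shaped (time, batch, features); dim=0 is the time axis that
# gets split into chunks of tbtt_step steps.
x = torch.randn(seq_len, batch_size, n_feats)
y = torch.randn(seq_len, batch_size, n_hidden)

trainer = create_supervised_tbptt_trainer(model, optimizer, loss_fn, tbtt_step=10)
state = trainer.run([(x, y)], max_epochs=3)
print(state.output)  # average loss over the time chunks of the last batch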
Let us say that we create a small network:
import os
import shutil
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
layers = [5, 3, 1]
activations = [tf.tanh, tf.tanh, None]

inp = tf.placeholder(dtype=tf.float32, shape=(None, 2), name='inp')
out = tf.placeholder(dtype=tf.float32, shape=(None, 1), name='out')
isTraining = tf.placeholder(dtype=tf.bool, shape=(), name='isTraining')

N = inp * 1  # I am lazy
for i, (l, a) in enumerate(zip(layers, activations)):
    N = tf.layers.dense(N, l, None)
    #N = tf.layers.batch_normalization( N, training = isTraining) # comment this line
    if a is not None:
        N = a(N)

err = tf.reduce_mean((N - out)**2)

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    opt = tf.train.AdamOptimizer(0.05).minimize(err)

# insert variables from the batch normalization
tVars = tf.trainable_variables()
graph = tf.get_default_graph()
for v in graph.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
    if all([
            ('batch_normalization' in v.name),
            ('optimizer' not in v.name),
            v not in tVars]):
        tVars.append(v)

init = tf.global_variables_initializer()
saver = tf.train.Saver(var_list=tVars)
This is a simple NN set up for optimization. The only thing that I am currently interested in is batch normalization (the line that has been commented out). Now, if we train this network, save it, restore it and calculate the error again, we do OK:
# Generate random data
N = 1000
X = np.random.rand(N, 2)
y = 2*X[:, 0] + 3*X[:, 1] + 3
y = y.reshape(-1, 1)

# Run the session and save it
with tf.Session() as sess:
    sess.run(init)
    print('During Training')
    for i in range(3000):
        _, errVal = sess.run([opt, err], feed_dict={inp: X, out: y, isTraining: True})
        if i % 500 == 0:
            print(errVal)
    shutil.rmtree('models1', ignore_errors=True)
    os.makedirs('models1')
    path = saver.save(sess, 'models1/model.ckpt')

# Restore the session
print('During testing')
with tf.Session() as sess:
    saver.restore(sess, path)
    errVal = sess.run(err, feed_dict={inp: X, out: y, isTraining: False})
    print(errVal)
Here is the output:
During Training
24.4422
0.00330666
0.000314223
0.000106421
6.00441e-05
4.95262e-05
During testing
INFO:tensorflow:Restoring parameters from models1/model.ckpt
5.5899e-05
On the other hand, when we uncomment the batch normalization line, and redo the above calculation:
During Training
31.7372
1.92066e-05
3.87879e-06
2.55274e-06
1.25418e-06
1.43078e-06
During testing
INFO:tensorflow:Restoring parameters from models1/model.ckpt
0.041519
As you can see, the restored value is far from what the model is predicting. Is there anything that I am doing wrong?
Note: I know that for batch-normalization I need to generate mini batches. I have skipped all of that to keep the code simple and yet complete.
The batch normalization layer, as defined in TensorFlow, needs to have access to the placeholder isTraining (https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization). Make sure you include it when you define the layer: tf.layers.batch_normalization(..., training=isTraining, ...).
The reason is that batch normalization layers have 2 trainable parameters (beta and gamma) that are trained normally with the rest of the network, but they also keep 2 extra statistics (the moving mean and variance) that are only updated when you tell the layer it is training. You do this simply by applying the recipe above.
Right now your code does not seem to be updating the mean and variance. Instead, they stay fixed and the network is optimized around those. Later on, when you save and restore, they end up with different values, hence the network doesn't perform as it used to.
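To make the recipe concrete, here is a minimal sketch of the two pieces that have to appear together; the layer sizes and names are illustrative, not taken from your code:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 2), name='x')
target = tf.placeholder(tf.float32, shape=(None, 1), name='target')
is_training = tf.placeholder(tf.bool, shape=(), name='is_training')

h = tf.layers.dense(x, 5)
# 1) Pass the training flag, so the layer uses batch statistics during training
#    and the accumulated moving statistics at test time.
h = tf.layers.batch_normalization(h, training=is_training)
h = tf.tanh(h)
pred = tf.layers.dense(h, 1)
loss = tf.reduce_mean((pred - target) ** 2)

# 2) The moving mean/variance are updated through UPDATE_OPS, so the train op
#    must depend on them or they never move from their initial values.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(0.05).minimize(loss)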
I want to make a neural network that has recurrent layers (for example, LSTM) in some places and normal fully connected (FC) layers in others.
I cannot find a way to do it in Tensorflow.
It works if I have only FC layers, but I don't see how to add just one recurrent layer properly.
I create the network in the following way:
with tf.variable_scope("autoencoder_variables", reuse=None) as scope:
for i in xrange(self.__num_hidden_layers + 1):
# Train weights
name_w = self._weights_str.format(i + 1)
w_shape = (self.__shape[i], self.__shape[i + 1])
a = tf.multiply(4.0, tf.sqrt(6.0 / (w_shape[0] + w_shape[1])))
w_init = tf.random_uniform(w_shape, -1 * a, a)
self[name_w] = tf.Variable(w_init,
name=name_w,
trainable=True)
# Train biases
name_b = self._biases_str.format(i + 1)
b_shape = (self.__shape[i + 1],)
b_init = tf.zeros(b_shape)
self[name_b] = tf.Variable(b_init, trainable=True, name=name_b)
if i+1 == self.__recurrent_layer:
# Create an LSTM cell
lstm_size = self.__shape[self.__recurrent_layer]
self['lstm'] = tf.contrib.rnn.BasicLSTMCell(lstm_size)
It should process the batches in sequential order. I have a function for processing just one time step, which will later be called by a function that processes the whole sequence:
def single_run(self, input_pl, state, just_middle=False):
    """Get the output of the autoencoder for a single batch

    Args:
        input_pl: tf placeholder for ae input data of size [batch_size, DoF]
        state: current state of LSTM memory units
        just_middle: will indicate if we want to extract only the middle layer of the network
    Returns:
        Tensor of output
    """
    last_output = input_pl

    # Pass through the network
    for i in xrange(self.num_hidden_layers + 1):
        if i != self.__recurrent_layer:
            w = self._w(i + 1)
            b = self._b(i + 1)
            last_output = self._activate(last_output, w, b)
        else:
            last_output, state = self['lstm'](last_output, state)

    return last_output
The following function should take a sequence of batches as input and produce a sequence of batches as output:
def process_sequences(self, input_seq_pl, dropout, just_middle=False):
    """Get the output of the autoencoder

    Args:
        input_seq_pl: input data of size [batch_size, sequence_length, DoF]
        dropout: dropout rate
        just_middle: indicate if we want to extract only the middle layer of the network
    Returns:
        Tensor of output
    """
    if ~just_middle:  # if not middle layer
        numb_layers = self.__num_hidden_layers + 1
    else:
        numb_layers = FLAGS.middle_layer

    with tf.variable_scope("process_sequence", reuse=None) as scope:
        # Initial state of the LSTM memory.
        state = initial_state = self['lstm'].zero_state(FLAGS.batch_size, tf.float32)

        tf.get_variable_scope().reuse_variables()  # THIS IS IMPORTANT LINE

        # First - Apply Dropout
        the_whole_sequences = tf.nn.dropout(input_seq_pl, dropout)

        # Take batches for every time step and run them through the network
        # Stack all their outputs
        with tf.control_dependencies([tf.convert_to_tensor(state, name='state')]):  # do not let the loop parallelize
            stacked_outputs = tf.stack([self.single_run(the_whole_sequences[:, time_st, :], state, just_middle)
                                        for time_st in range(self.sequence_length)])

        # Transpose output from the shape [sequence_length, batch_size, DoF] into [batch_size, sequence_length, DoF]
        output = tf.transpose(stacked_outputs, perm=[1, 0, 2])

    return output
The issue is with variable scopes and their "reuse" property.
If I run this code as it is, I get the following error:
' Variable Train/process_sequence/basic_lstm_cell/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope? '
If I comment out the line that tells it to reuse variables (tf.get_variable_scope().reuse_variables()), I get the following error:
'Variable Train/process_sequence/basic_lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?'
It seems that we need "reuse=None" for the weights of the LSTM cell to be initialized, and we need "reuse=True" in order to call the LSTM cell.
Please help me figure out how to do this properly.
I think the problem is that you're creating variables with tf.Variable. Please use tf.get_variable instead -- does that solve your issue?
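For illustration, a minimal sketch of what that change looks like; the dense_layer helper and the initializers are assumptions, not from the original code. With tf.AUTO_REUSE (TF >= 1.4) the same scope creates the variables on the first call and reuses them afterwards:

import tensorflow as tf

def dense_layer(x, out_dim, scope):
    # get_variable either creates the variable or, with AUTO_REUSE, returns
    # the one already created under this scope on a previous call.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        in_dim = x.get_shape().as_list()[-1]
        w = tf.get_variable("w", shape=(in_dim, out_dim),
                            initializer=tf.glorot_uniform_initializer())
        b = tf.get_variable("b", shape=(out_dim,),
                            initializer=tf.zeros_initializer())
        return tf.matmul(x, w) + b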
It seems that I have solved this issue using the hack from the official Tensorflow RNN example (https://www.tensorflow.org/tutorials/recurrent) with the following code
with tf.variable_scope("RNN"):
for time_step in range(num_steps):
if time_step > 0: tf.get_variable_scope().reuse_variables()
(cell_output, state) = cell(inputs[:, time_step, :], state)
outputs.append(cell_output)
The hack is that the first time we run the LSTM, tf.get_variable_scope().reuse is set to False, so the LSTM cell's variables are created. On every subsequent time step we set tf.get_variable_scope().reuse to True, so the variables that were already created are reused.