How can generator get input noise z? - pytorch

Hi, I'm looking at this GAN implementation code. code here
My question: the Generator class takes no input parameter when it is defined (#38 from the link), but during training the generator is called with the input z (#141 from the link).
I looked into the nn.Module class, which is the parent of Generator, but I can't find an input parameter for the noise z. Can anyone help?
class Generator(nn.Module):  #38
    def __init__(self):
        super(Generator, self).__init__()

        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(opt.latent_dim, 128, normalize=False),
            *block(128, 256),
            *block(256, 512),
            *block(512, 1024),
            nn.Linear(1024, int(np.prod(img_shape))),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), *img_shape)
        return img

generator = Generator()  #88
gen_imgs = generator(z)  #141
generator = Generator() #88
gen_imgs = generator(z) #141
I tried looking up nn.Module and Variable() in the PyTorch docs and still can't find what I wanted.

Consider each quoted line (38, 88, and 141):
Line 38 is a class definition; by putting nn.Module in the parentheses it declares that Generator inherits from nn.Module (which is the common way to define your own neural network).
Line 88 creates an instance of Generator. For arguments it needs everything inside the parentheses of __init__ (line 39) besides self, which is why the parentheses on line 88 are empty.
Line 141 calls the generator. The behavior here is defined by the forward method (line 58), and there is one parameter to be passed: z. Calling an nn.Module instance goes through its __call__ method, which runs any registered hooks and then dispatches to forward.
Again: line 88 creates an instance; line 141 calls the forward method of that instance.
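A minimal sketch of that call mechanism (the Toy class and the sizes here are just for illustration, not from the linked repo):

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 2)

    def forward(self, z):
        # this is what actually runs when you write toy(z)
        return self.linear(z)

toy = Toy()             # __init__ takes no arguments besides self
z = torch.randn(4, 10)  # a batch of 4 noise vectors
out = toy(z)            # nn.Module.__call__ dispatches to forward(z)
print(out.shape)        # torch.Size([4, 2])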


Tracing Tensor Sizes in TorchScript

I'm exporting a PyTorch model via TorchScript tracing, but I'm facing an issue. Specifically, I have to perform some operations on tensor sizes, but the JIT compiler hardcodes the variable shapes as constants, breaking compatibility with tensors of different sizes.
For example, create the class:
class Foo(nn.Module):
    """Toy class that plays with tensor shape to showcase the tracing issue.

    It creates a new tensor with the same shape as the input one, except
    for the last dimension, which is doubled. This new tensor is filled
    based on the values of the input.
    """
    def __init__(self):
        nn.Module.__init__(self)

    def forward(self, x):
        new_shape = (x.shape[0], 2 * x.shape[1])  # incriminated instruction
        x2 = torch.empty(size=new_shape)
        x2[:, ::2] = x
        x2[:, 1::2] = x + 1
        return x2
and run the test code:
x = torch.randn((3, 5))               # create example input
foo = Foo()
traced_foo = torch.jit.trace(foo, x)  # trace
print(traced_foo(x).shape)            # obviously this works
print(traced_foo(x[:, :4]).shape)     # but fails with a different shape!
I could solve the issue by scripting, but in this case I really need to use tracing. Moreover, I think that tracing should be able to handle tensor size manipulations correctly.
Regarding "but in this case I really need to use tracing": you can freely mix torch.jit.script and torch.jit.trace wherever needed. For example, one could do this:
import torch

class MySuperModel(torch.nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
        # Foo and Bar are placeholders: Foo is scripted because it does
        # shape-dependent work, Bar stands for any module that is safe to trace.
        self.scripted = torch.jit.script(Foo(*args, **kwargs))
        self.traced = Bar(*args, **kwargs)

    def forward(self, data):
        return self.scripted(self.traced(data))

model = MySuperModel()
torch.jit.trace(model, (input1,))
You could also move the part of the functionality that depends on shape into a separate function and decorate it with @torch.jit.script:
@torch.jit.script
def _forward_impl(x):
    new_shape = (x.shape[0], 2 * x.shape[1])  # incriminated instruction
    x2 = torch.empty(size=new_shape)
    x2[:, ::2] = x
    x2[:, 1::2] = x + 1
    return x2

class Foo(nn.Module):
    def forward(self, x):
        return _forward_impl(x)
There is no way other than script for that, since the compiler has to understand your code. Tracing merely records the operations you perform on the tensor and has no knowledge of control flow that depends on the data (or on the shape of the data).
Anyway, this should cover most of the cases, and if it doesn't you should be more specific.
This bug has been fixed in 1.10.2.
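As a quick sanity check of the scripted-helper approach (a sketch re-running the failing call from the question; the shapes assume the Foo/_forward_impl definitions above):

x = torch.randn((3, 5))
foo = Foo()                           # the version whose forward calls _forward_impl
traced_foo = torch.jit.trace(foo, x)
print(traced_foo(x).shape)            # torch.Size([3, 10])
print(traced_foo(x[:, :4]).shape)     # torch.Size([3, 8]) -- shape no longer hardcoded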

PyTorch: Different Forward Methods for Train and Test/Validation

I'm currently trying to extend a model that is based on FairSeq/PyTorch. During training I need to train two encoders: one with the target sample, and the original one with the source sample.
So the current forward function looks like this:
def forward(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs):
    encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
    decoder_out = self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs)
    return decoder_out
And based on this idea I want something like this:
def forward_test(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs):
    encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
    decoder_out = self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs)
    return decoder_out

def forward_train(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs):
    encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
    autoencoder_out = self.encoder(tgt_tokens, src_lengths=src_lengths, **kwargs)
    concat = some_concatination_func(encoder_out, autoencoder_out)
    decoder_out = self.decoder(prev_output_tokens, encoder_out=concat, **kwargs)
    return decoder_out
Is there any way to do this?
Edit:
These are the constraints that I have, since I need to extend FairseqEncoderDecoderModel:
@register_model('transformer_mass')
class TransformerMASSModel(FairseqEncoderDecoderModel):
    def __init__(self, encoder, decoder):
        super().__init__(encoder, decoder)
Edit 2:
The parameters passed to the forward function in Fairseq can be altered by implementing your own Criterion: see for example CrossEntropyCriterion, where sample['net_input'] is passed to the __call__ method of the model, which invokes the forward method.
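For reference, a simplified sketch of that criterion pattern (modeled loosely on fairseq's CrossEntropyCriterion; the name 'my_criterion' is made up and the exact API varies between fairseq versions, so treat this as an illustration rather than a drop-in):

import torch.nn.functional as F
from fairseq.criterions import FairseqCriterion, register_criterion

@register_criterion('my_criterion')
class MyCriterion(FairseqCriterion):
    def forward(self, model, sample, reduce=True):
        # Every key in sample['net_input'] is forwarded to the model,
        # so adding e.g. tgt_tokens here changes what forward() receives.
        net_output = model(**sample['net_input'])
        lprobs = model.get_normalized_probs(net_output, log_probs=True)
        target = model.get_targets(sample, net_output)
        loss = F.nll_loss(
            lprobs.view(-1, lprobs.size(-1)), target.view(-1),
            ignore_index=self.padding_idx,
            reduction='sum' if reduce else 'none',
        )
        sample_size = sample['ntokens']
        logging_output = {'loss': loss.item(), 'sample_size': sample_size}
        return loss, sample_size, logging_output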
First of all, you should always use and define forward, not some other method that you call on the torch.nn.Module instance.
Definitely do not overload eval() as shown by trsvchn, because it is an evaluation method defined by PyTorch (see here). It puts the layers inside your model into evaluation mode (e.g. the inference-time behavior of layers like Dropout or BatchNorm).
Furthermore, you should call the model with the __call__ magic method, i.e. model(input) rather than model.forward(input). Why? Because hooks and other PyTorch-specific machinery are registered properly that way.
Secondly, do not use some external mode string variable as suggested by @Anant Mittal. That is what the training attribute in PyTorch is for; it is the standard way to differentiate whether the model is in eval mode or train mode.
That being said, you are best off doing it like this:
import torch

class Network(torch.nn.Module):
    def __init__(self):
        super().__init__()
        ...

    # You could split it into two functions, but both should be called by forward
    def forward(
        self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs
    ):
        encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs)
        if not self.training:  # self.training is toggled by model.train()/model.eval()
            return self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs)
        # training path: run the second encoder and concatenate, per the question
        autoencoder_out = self.encoder(tgt_tokens, src_lengths=src_lengths, **kwargs)
        concat = some_concatination_func(encoder_out, autoencoder_out)
        return self.decoder(prev_output_tokens, encoder_out=concat, **kwargs)
You could (and arguably should) split the above into two separate methods, but that's not too bad, as the function is rather short and readable this way. Just stick to PyTorch's way of handling things where easily possible, rather than ad-hoc solutions. And no, there will be no problem with backpropagation; why would there be one?
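For completeness, here is how the training flag is toggled in practice (standard nn.Module behavior; the argument values src, lengths, and prev below are placeholders):

model = Network()

model.train()          # sets model.training = True (the default state)
pred = model(src_tokens=src, src_lengths=lengths, prev_output_tokens=prev)

model.eval()           # sets model.training = False
with torch.no_grad():  # usually combined with eval() at test time
    pred = model(src_tokens=src, src_lengths=lengths, prev_output_tokens=prev)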
By default, calling model() invokes the forward method, which is your train forward in this case, so you just need to define a new method for your test/eval path inside your model class, something like this:
Code:
class FooBar(nn.Module):
    """Dummy net for testing/debugging."""
    def __init__(self):
        super().__init__()
        ...

    def forward(self, x):
        # the train-time forward goes here
        ...

    def evaltest(self, x):
        # the eval/test-time forward goes here
        ...
Examples:
model = FooBar()  # initialize model

# train time
pred = model(x)  # calls the forward() method under the hood

# test/eval time
test_pred = model.evaltest(x)
Comment:
I would recommend splitting these two forward paths into two separate methods, because it is easier to debug and avoids possible problems when backpropagating.

Pytorch Question from 'Deep Reinforcement Learning: Hands-On'

I'm reading Maxim Lapan's Deep Reinforcement Learning Hands-On. I came across this code in chapter 2 and I don't understand a few things. Could anybody explain why print(out) gives three values instead of the single float tensor we put in? Also, why is the super function necessary here? Finally, what is the x parameter that forward accepts? Thank you.
class OurModule(nn.Module):
    def __init__(self, num_inputs, num_classes, dropout_prob=0.3):  # init
        super(OurModule, self).__init__()  # call nn.Module's init (why is this necessary?)
        self.pipe = nn.Sequential(  # net.pipe is the nn object now
            nn.Linear(num_inputs, 5),
            nn.ReLU(),
            nn.Linear(5, 20),
            nn.ReLU(),
            nn.Linear(20, num_classes),
            nn.Dropout(p=dropout_prob),
            nn.Softmax(dim=1)
        )

    def forward(self, x):  # overrides the default forward method. x is the tensor? This is called when 'net' receives a param?
        return self.pipe(x)

if __name__ == "__main__":
    net = OurModule(num_inputs=2, num_classes=3)
    print(net)
    v = torch.FloatTensor([[2, 3]])
    out = net(v)
    print(out)  # [2, 3] put through the forward method of the net? Why did we get three outputs?
    print("Cuda's availability is %s" % torch.cuda.is_available())  # find if gpu is available
    if torch.cuda.is_available():
        print("Data from cuda: %s" % out.to('cuda'))
    OurModule.__mro__
OurModule defines a PyTorch nn.Module that accepts 2 inputs (num_inputs) and produces 3 outputs (num_classes).
It consists of:
A Linear layer that accepts 2 inputs and produces 5 outputs
A ReLU
A Linear layer that accepts 5 inputs and produces 20 outputs
A ReLU
A Linear layer that accepts 20 inputs and produces 3 (num_classes) outputs
A Dropout layer
A Softmax layer
You create v, which consists of 2 inputs, and pass it through this network's forward() method when you call net(v). The result of running the network (3 outputs) is then stored in out.
In your example, x takes on the value of v, torch.FloatTensor([[2, 3]]).
Although @JoshVarty has provided a great answer, I would like to add a little bit.
"why is the super function necessary here"
The class OurModule inherits from nn.Module. The super function means you want to call the parent's (nn.Module's) function, namely __init__. You can refer to the source code to see exactly what the parent class does in its __init__ function.
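To make the point concrete, here is a small sketch (not from the book) showing what goes wrong without the super call: nn.Module.__init__ sets up the internal registries that make attribute assignment of submodules work.

import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        # super().__init__() is deliberately missing
        self.layer = nn.Linear(2, 3)

try:
    Broken()
except AttributeError as e:
    print(e)  # cannot assign module before Module.__init__() call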

How to develop a layer that works with arbitrary size input

I'm trying to develop a layer in Keras which works with 3D tensors. To make it flexible, I would like to postpone the code that relies on the input's exact shape as much as possible.
My layer is overriding 5 methods:
from tensorflow.python.keras.layers import Layer

class MyLayer(Layer):
    def __init__(self, **kwargs):
        pass

    def build(self, input_shape):
        pass

    def call(self, inputs, verbose=False):
        second_dim = K.int_shape(inputs)[-2]
        # Do something with the second_dim

    def compute_output_shape(self, input_shape):
        pass

    def get_config(self):
        pass
And I'm using this layer like this:
input = Input(batch_shape=(None, None, 128), name='input')
x = MyLayer(name='my_layer')(input)
model = Model(input, x)
But I'm facing an error because second_dim is None. How can I develop a layer that relies on the dimensions of the input, while accepting that those dimensions are only known from the actual data rather than from the input layer?
I ended up asking the same question differently, and I got a perfect answer:
What is the right way to manipulate the shape of a tensor when there are unknown elements in it?
The gist of it is: don't handle the dimension values directly. Use them by reference rather than by value. So, do not use K.int_shape; use K.shape instead, and compose the new shape with Keras operations:
shape = K.shape(x)
newShape = K.concatenate([
    shape[0:1],
    shape[1:2] * shape[2:3],
    shape[3:4]
])
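Since newShape is itself a tensor, it can be fed straight into shape-consuming ops, so the reshape is deferred to run time. For instance (a sketch assuming a 4D input x, as the snippet above implies):

from tensorflow.keras import backend as K

shape = K.shape(x)                 # symbolic shape, resolved at run time
newShape = K.concatenate([shape[0:1], shape[1:2] * shape[2:3], shape[3:4]])
x_merged = K.reshape(x, newShape)  # dims 1 and 2 are merged when data flows through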

cannot assign 'torch.nn.modules.container.Sequential' as parameter

I was following this method
(https://discuss.pytorch.org/t/dynamic-parameter-declaration-in-forward-function/427) to dynamically assign parameters in the forward function.
However, my parameter is not just a single weight tensor; it is an nn.Sequential.
When I implement the code below:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        # you need to register the parameter names earlier
        self.register_parameter('W_di', None)

    def forward(self, input):
        if self.W_di is None:
            self.W_di = nn.Sequential(
                nn.Linear(mL_n * 2, 1024),
                nn.ReLU(),
                nn.Linear(1024, self.hS)).to(device)
I get the following error.
TypeError: cannot assign 'torch.nn.modules.container.Sequential' as parameter 'W_di' (torch.nn.Parameter or None expected)
Is there any way that I can register nn.Sequential as a whole param? Thanks!
If you or other users still have this problem, one solution to consider is using nn.ModuleList instead of nn.Sequential.
While nn.Sequential is useful for defining a fixed sequence of layers in PyTorch, nn.ModuleList is a more flexible container that allows direct access and modification of individual layers within the list. This can be especially helpful when dealing with dynamic models or architectures that require more complex layer arrangements.
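A minimal sketch of that suggestion (the layer sizes here are made up; the point is that an nn.ModuleList can be filled lazily, and appending to it registers the submodules and their parameters):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.W_di = nn.ModuleList()  # empty container, populated on first use

    def forward(self, x):
        if len(self.W_di) == 0:
            # build the block lazily, sized from the incoming data
            self.W_di.append(nn.Linear(x.size(-1), 1024))
            self.W_di.append(nn.ReLU())
            self.W_di.append(nn.Linear(1024, 8))
        for layer in self.W_di:
            x = layer(x)
        return x

m = MyModule()
print(m(torch.randn(2, 16)).shape)  # torch.Size([2, 8])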
My gut feeling is that you cannot do it. Even in a static model declaration, nn.Module registers the parameters of every sub-module (e.g. nn.Conv2d or nn.Linear) in a nested way; that is, every kernel and bias is registered one by one, independently.
One workaround might be to introduce dynamic sub-modules. Here is my brief implementation; one can define the desired dynamic behavior inside DynamicLinear.
import torch
import torch.nn as nn

class DynamicLinear(nn.Module):
    def __init__(self):
        super(DynamicLinear, self).__init__()
        # you need to register the parameter names earlier
        self.register_parameter('W_di', None)

    def forward(self, x):
        if self.W_di is None:
            # dynamically define a linear function here
            self.W_di = nn.Parameter(torch.ones(1, 1)).to(x.device)
        return self.W_di @ x

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.net = nn.Sequential(
            DynamicLinear(),
            nn.ReLU(),
            DynamicLinear())

    def forward(self, x):
        return self.net(x)

m = MyModule()
x = torch.ones(1, 1)
y = m(x)
print(y)  # output: 1
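One detail to keep in mind with this pattern: the parameters only come into existence on the first forward pass, so anything that enumerates them (e.g. optimizer construction) has to happen afterwards. A quick check on CPU, building on the code above:

m = MyModule()
print(list(m.named_parameters()))  # [] -- nothing registered yet
m(torch.ones(1, 1))                # first forward creates W_di in each DynamicLinear
print([name for name, _ in m.named_parameters()])
# ['net.0.W_di', 'net.2.W_di']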
