Is the model regenerated at every invocation of nn.Sequential? - pytorch

I am working with PyTorch, and I would like to know whether, if nn.Sequential is called more than once, the model gets regenerated.
Waiting for your answer,
Thank you

nn.Sequential is not a function, it's a class.
When you "call" nn.Sequential you create an instance of this class. You can then use that model however you want. If you create another instance ("call" it again), it will simply create another, unrelated model.
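A minimal sketch illustrating this (layer sizes here are arbitrary):

```python
import torch.nn as nn

# Each "call" of nn.Sequential runs its constructor, producing a new,
# independent model; the two instances below share no parameters.
model_a = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model_b = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

print(model_a is model_b)                      # False: distinct objects
print(model_a[0].weight is model_b[0].weight)  # False: distinct weights
```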

Related

In Python, what are the differences between a method defined outside a class and a method defined inside it using @staticmethod?

I have been working on a very dense set of calculations. It is all in support of a specific problem I have.
But the nature of the problem is no different than this. Suppose I develop a class called 'Matrix' that has the machinery to implement matrices. Instantiation would presumably take a list of lists, which would be the matrix entries.
Now I want to provide a multiply method. I have two choices. First, I could define a method like so:
    class Matrix():
        def __init__(self, entries):
            # do the obvious here
            return

        def determinant(self):
            # again, do the obvious here
            return result_of_calcs

        def multiply(self, b):
            # again, do the obvious here
            return
If I do this, the call signature for two matrix objects, a and b, is
a.multiply(b)...
The other choice is a @staticmethod. Then, the definition looks like:
    @staticmethod
    def multiply(a, b):
        # do the obvious thing.
Now the call signature is:
z = Matrix.multiply(a, b)
I am unclear when one is better than the other. The free-standing function is not truly part of the class definition, but who cares? it gets the job done, and because Python allows "reaching into an object" references from outside, it seems able to do everything. In practice they'll (the class and the method) end up in the same module, so they're at least linked there.
On the other hand, my understanding of the @staticmethod approach is that the function is now part of the class definition (it defines one of the methods), but the method gets no "self" passed in. In a way this is nice because the call signature is the much better looking:
z = Matrix.multiply(a, b)
and the function can still access the methods and attributes of the instances passed in.
Is this the right way to view it? Are there strong reasons to do one or the other? In what ways are they not equivalent?
I have done quite a bit of Python programming since answering this question.
Suppose we have a file named matrix.py, and it has a bunch of code for manipulating matrices. We want to provide a matrix multiply method.
The two approaches are:
define a free-standing function with the signature multiply(x, y)
make it a method on every matrix: x.multiply(y)
Matrix multiply is what I will call a dyadic function. In other words, it always takes two arguments.
The temptation is to use option 2, so that a matrix object "carries with it everywhere" the ability to be multiplied. However, the only thing it makes sense to multiply it with is another matrix object. In that case there are two equally good ways to do it, viz:
z=x.multiply(y)
or
z=y.multiply(x)
However, a better way to do it is to define a function inside the file that is:
multiply(x,y)
multiply(), as such, is a function any code using the 'library' expects to have available. It need not be attached to each matrix. And, since the user will be doing an 'import', they will get the multiply function. This is better code.
What I was wrongly conflating was two things, which led me to attach the method to every object instance:
1. functions which need to be generally available inside the file and should be exposed outside it; and
2. functions which are needed only inside the file.
multiply() is an example of type 1: any matrix 'library' ought to define matrix multiplication.
What I was worried about was needing to expose all the 'internal' functions. For example, suppose we want to make add(), multiply() and invert() externally available. Suppose, however, we did not want to expose - but needed internally - determinant().
One way to 'protect' users is to make determinant a function (a def statement) inside the class declaration for matrices. Then it is protected from exposure. However, nothing stops a user of the code from reaching in, if they know the internals, by calling matrix.determinant().
In the end it comes down to convention, largely. It makes more sense to expose a matrix multiply function which takes two matrices and is called like multiply(x, y). As for the determinant function, instead of 'wrapping' it in the class, it makes more sense to define it as __determinant(x) at the same level as the class definition for matrices.
You can never truly protect internal functions by their declaration, it seems. The best you can do is warn users: the leading-underscore convention signals 'this is not expected to be called outside the code in this file'.

Keyword arguments in torch.nn.Sequential (pytorch)

A question regarding keyword arguments in torch.nn.Sequential: is it possible to somehow forward keyword arguments to specific models in a sequence?
model = torch.nn.Sequential(model_0, MaxPoolingChannel(1))
res = model(input_ids_2, keyword_test=mask)
here, keyword_test should be forwarded only to the first model.
Thanks a lot and best regards!
my duplicate from - https://discuss.pytorch.org/t/keyword-arguments-in-torch-nn-sequential/53282
No, you cannot. This is only possible if all models passed to nn.Sequential expect the argument you are trying to pass in their forward method (at least at the time of writing this).
Two workarounds could be (I'm not aware of the whole case, but inferring from the question):
1. If your value is static, why not initialize your first model with that value and access it during computation via self.keyword_test?
2. If the value is dynamic, you could make it an inherent property of the input; then you can likewise access it during computation, through input_ids_2.keyword_test.
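A sketch of the first workaround, assuming the first model's forward accepts the keyword (FirstModel and StaticKeyword are illustrative stand-ins, not part of torch):

```python
import torch
import torch.nn as nn

class FirstModel(nn.Module):
    # Stand-in for model_0: its forward accepts the extra keyword.
    def forward(self, x, keyword_test=None):
        return x * keyword_test if keyword_test is not None else x

class StaticKeyword(nn.Module):
    # Workaround 1: bind a static value at construction time so the
    # wrapped module fits in a plain nn.Sequential without extra kwargs.
    def __init__(self, inner, keyword_test):
        super().__init__()
        self.inner = inner
        self.keyword_test = keyword_test

    def forward(self, x):
        return self.inner(x, keyword_test=self.keyword_test)

mask = torch.tensor([1.0, 0.0])
model = nn.Sequential(StaticKeyword(FirstModel(), mask), nn.Identity())
out = model(torch.ones(2))  # the mask is applied only inside the first model
```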

how do I avoid having to redefine outputs whenever calling a module?

I have a module with outputs and that looks like this:
output "myvpc" { value = "${aws_vpc.myvpc.id}" }
I call that module in another tf file.
Now I want the tf file that calls the module to have the outputs I defined in the module.
But the only way I have been able to do this is to redefine the outputs again in the calling tf file and that looks like this:
output "myvpc" { value = "${module.myvpc.myvpc}" }
So now I have this redundant line of config and another layer of abstraction for doing something I'd expect would be a pretty necessary thing when using terraform.
Pretty sure I'm doing this wrong because it feels redundant/wrong. The whole purpose of a module is code reuse but having to redefine outputs redundantly and, even worse, mask them with another layer of abstraction like this takes away some of that value.
The behaviour you are experiencing is expected, precisely because modules abstract away the details of implementation.
In writing a module, you are minimizing the contact surface by specifying the variables (parameters) that the module needs while hiding all implementation details. The same argument applies to the outputs: instead of every value being output from the module, you expose only the semantically useful ones.
If you can accept the above behaviour as valid, then having the same logic apply to the next semantic level seems intuitive as well.
If you need high-level access to values down the chain of abstractions, then you need to write this 'duplicate code'. Note, however, that you can rename and change the values according to your semantic needs.

Is there a compelling reason to call type.mro() rather than iterate over type.__mro__ directly?

Is there a compelling reason to call type.mro() rather than iterate over type.__mro__ directly? It's literally ~13 times faster to access (36 ns vs 488 ns).
I stumbled upon it while looking to cache type.mro(). It seems legit, but it makes me wonder: can I rely on type.__mro__, or do I have to call type.mro()? and under what conditions can I get away with the former?
More importantly, what set of conditions would have to occur for type.__mro__ to be invalid?
For instance, when a new subclass is defined/created that alters an existing class's mro, is the existing class's __mro__ immediately updated? Does this happen on every new class creation? Does that make it part of class creation? Which part? ...or is that what type.mro() is about?
Of course, all that is assuming that type.__mro__ is, in fact, a tuple of cached names pointing to the objects in a given type's mro. If that assumption is incorrect; then, what is it? (probably a descriptor or something..) and why can/can't I use it?
EDIT: If it is a descriptor, then I'd love to learn its magic, as both: type(type.__mro__) is tuple and type(type(type).__mro__) is tuple (ie: probably not a descriptor)
EDIT: Not sure how relevant this is, but type('whatever').mro() returns a list whereas type('whatever').__mro__ returns a tuple. (Un?)fortunately, appending to that list doesn't change the __mro__ or subsequent calls to .mro() of/on the type in question (in this case, str).
Thanks for the help!
According to the docs:
class.__mro__
This attribute is a tuple of classes that are considered when looking for base classes during method resolution.
class.mro()
This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in __mro__.
So yes, your assumption about __mro__ being a cache is correct. If your metaclass' mro() always returns the same thing, or if you don't have any metaclasses, you can safely use __mro__.
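A small sketch of that caching behaviour (class names are illustrative):

```python
class Meta(type):
    calls = 0
    def mro(cls):
        # Invoked at class creation; the result is cached in __mro__.
        Meta.calls += 1
        return super().mro()

class Base:
    pass

class Child(Base, metaclass=Meta):
    pass

assert Meta.calls == 1                         # computed once, at creation
assert Child.__mro__ == (Child, Base, object)  # the cached tuple
assert isinstance(Child.mro(), list)           # mro() recomputes, as a list
assert Meta.calls == 2                         # ...without touching the cache
```

This also matches the observation in the question: mutating the list returned by .mro() has no effect on __mro__.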

Apply Visitor Pattern as an Extensibility Mechanism for a single class

Let's say in my program I have a class called Robot that inherits from some other class.
Until now I have some methods inside Robot like addPart or getCost. Now I'm asked to add a new module of functionality (a few methods that use its parts), but they explicitly ask that the new module be added with little to no impact on the current class.
I thought a Visitor could solve this, but the thing is I won't be applying the pattern to a hierarchy. Is this a correct thing to do? (As you can see, my Robot is part of a composite.)
Fundamentally, I agree with your approach. You have successfully identified an approach that allows you to extend Robot (a parts composite) without having to actually modify the Robot class. The only changes I would make are the following:
I would introduce a new interface named something like IPartsComposite that would define the Accept method. This interface would be implemented by Robot since it is composed of Part instances.
The base Visitor would be a generic base class or interface, i.e. Visitor<T>. This type would define a single method, Visit(T). Then, in your case, you would have three concrete implementations of Visitor<IPartsComposite>:
PartsVisitorService
PartsVisitorCosts
PartsVisitorProduction
In each of these concrete classes you would implement Visit(IPartsComposite).
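A Python sketch of this shape (the C#-style generics above don't translate directly, and all names here are illustrative):

```python
from abc import ABC, abstractmethod

class PartsVisitor(ABC):
    # Base visitor over a parts composite.
    @abstractmethod
    def visit(self, composite):
        ...

class Part:
    def __init__(self, name, cost):
        self.name, self.cost = name, cost

class Robot:
    # The existing class changes only by gaining a single accept() hook;
    # all new functionality lives in visitor classes.
    def __init__(self):
        self.parts = []

    def add_part(self, part):
        self.parts.append(part)

    def accept(self, visitor):
        return visitor.visit(self)

class PartsVisitorCosts(PartsVisitor):
    def visit(self, composite):
        return sum(p.cost for p in composite.parts)

robot = Robot()
robot.add_part(Part("arm", 100))
robot.add_part(Part("sensor", 40))
total = robot.accept(PartsVisitorCosts())  # 140
```

New modules of functionality (service, production, ...) then become further PartsVisitor subclasses, leaving Robot untouched.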
