Keyword arguments in torch.nn.Sequential (pytorch) - pytorch

A question regarding keyword arguments in torch.nn.Sequential: is it possible to forward keyword arguments to specific modules in a sequence?
model = torch.nn.Sequential(model_0, MaxPoolingChannel(1))
res = model(input_ids_2, keyword_test=mask)
Here, keyword_test should be forwarded only to the first model.
Thanks a lot and best regards!
Cross-posted from https://discuss.pytorch.org/t/keyword-arguments-in-torch-nn-sequential/53282

No, you cannot. This is only possible if all models passed to the nn.Sequential expect the argument you are trying to pass in their forward method (at least at the time of writing this).
Two workarounds could be (I'm not aware of the whole use case, but this is what I anticipate from the question):
If your value is static, why not initialize your first model with that value and access it during computation via self.keyword_test? (See the sketch after these two workarounds.)
If the value is dynamic, you could attach it as a property of the input; that way you can also access it during computation, via input_ids_2.keyword_test.
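For illustration, here is a minimal sketch of the first workaround, assuming model_0 accepts keyword_test in its forward method; the MaskedModel wrapper is hypothetical, not part of PyTorch:
import torch.nn as nn

class MaskedModel(nn.Module):
    """Wrap a module and supply a fixed keyword argument when it is called."""
    def __init__(self, inner, keyword_test):
        super().__init__()
        self.inner = inner
        self.keyword_test = keyword_test  # static value stored at init

    def forward(self, x):
        # only the wrapped module receives the extra keyword argument
        return self.inner(x, keyword_test=self.keyword_test)

# Hypothetical usage, mirroring the question:
# model = nn.Sequential(MaskedModel(model_0, mask), MaxPoolingChannel(1))
# res = model(input_ids_2)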

Related

Specify fixed parameters and parameters to be search in optuna (lightgbm)

I just found Optuna and it seems it is integrated with LightGBM, but I struggle to see where I can fix parameters, e.g. scoring="auc", and where I can define a grid space to search, e.g. num_leaves=[1,2,5,10].
Using https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_simple.py as an example, they just define a params dict with some fixed parameters (are all parameters not specified in that dict tuned?), and the documentation states that
It tunes important hyperparameters (e.g., min_child_samples and feature_fraction) in a stepwise manner
How can I control which parameters are tuned and in what space, and how can I fix some parameters?
I have no knowledge of LightGBM, but since this is the first result for fixing parameters in Optuna, I'll answer that part of the question:
In Optuna, the search space is defined within the code of the objective function. This function should take a trial object as input, and you create parameters by calling suggest_float(), suggest_int(), etc. on that trial object. For more information, see the documentation at 10_key_features/002_configurations.html
Generally, fixing a parameter is done by hardcoding it instead of calling a suggest function, but it is possible to fix specific parameters externally using the PartialFixedSampler.
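As a rough sketch of that idea (the train_and_evaluate helper below is a hypothetical stand-in for real LightGBM training), fixed parameters are simply hardcoded while tuned ones come from the trial's suggest calls:
import optuna

def train_and_evaluate(params):
    # stand-in for real model training; returns a dummy validation score
    return 1.0 / (1.0 + abs(params["num_leaves"] - 31) + params["learning_rate"])

def objective(trial):
    # search space: only parameters created via suggest_* are tuned
    num_leaves = trial.suggest_int("num_leaves", 2, 256)
    learning_rate = trial.suggest_float("learning_rate", 1e-3, 0.3, log=True)

    params = {
        "objective": "binary",
        "metric": "auc",                 # fixed: hardcoded, never suggested
        "num_leaves": num_leaves,        # tuned
        "learning_rate": learning_rate,  # tuned
    }
    return train_and_evaluate(params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)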

How can I pass a set of instances to a function or predicate in Alloy Analyzer's Evaluator?

BLUF: I have a predicate which takes as arguments an instance of a signature and a set of instances of the same signature. Upon generating instances of the model, I'd like to pass instances of the signature to the predicate, but am at a loss for how to pass a set of instances, if it is even possible.
Alloy's Evaluator does not appear to be well documented, unless I've missed it. I have Daniel Jackson's book, have done the tutorial, and have found various other resources online, but no one really seems to address how to use the Evaluator.
I've tried notation like:
myPred[instance$0,set(instance$1,instance$2)]
and
myPred[instance$0,set[instance$1,instance$2]]
and
myPred[instance$0,(instance$1,instance$2)]
and
myPred[instance$0,[instance$1,instance$2]]
The Evaluator doesn't like any of them. Is it possible to pass a set of instances? If so, how do I do it? Thanks for the help!
So, in my usual fashion, almost as soon as I asked the question I realized the answer (or at least a way to do what I wanted): I simply used the union operator "+" to pass the set.
myPred[instance$0, instance$1 + instance$2]
Sorry for the inconvenience, but maybe this will help someone else!

Is there a compelling reason to call type.mro() rather than iterate over type.__mro__ directly?

Is there a compelling reason to call type.mro() rather than iterate over type.__mro__ directly? Accessing __mro__ is literally ~13 times faster (36 ns vs 488 ns).
I stumbled upon this while looking to cache type.mro(). It seems legit, but it makes me wonder: can I rely on type.__mro__, or do I have to call type.mro()? And under what conditions can I get away with the former?
More importantly, what set of conditions would have to occur for type.__mro__ to be invalid?
For instance, when a new subclass is defined/created that alters an existing class's MRO, is the existing class's __mro__ immediately updated? Does this happen on every new class creation? Is that behavior part of type itself, and if so, which part? Or is that what type.mro() is about?
Of course, all of that assumes that type.__mro__ is, in fact, a tuple of cached references to the objects in a given type's MRO. If that assumption is incorrect, then what is it (probably a descriptor or something), and why can or can't I use it?
EDIT: If it is a descriptor, then I'd love to learn its magic, since both type(type.__mro__) is tuple and type(type(type).__mro__) is tuple (i.e., probably not a descriptor).
EDIT: Not sure how relevant this is, but type('whatever').mro() returns a list whereas type('whatever').__mro__ returns a tuple. (Un?)fortunately, appending to that list doesn't change __mro__ or subsequent calls to .mro() on the type in question (in this case, str).
Thanks for the help!
According to the docs:
class.__mro__
This attribute is a tuple of classes that are considered when looking for base classes during method resolution.
class.mro()
This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in __mro__.
So yes, your assumption about __mro__ being a cache is correct. If your metaclass's mro() always returns the same thing, or if you don't use a custom metaclass at all, you can safely use __mro__.
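A minimal sketch of that relationship (the Meta, Base, and Child classes are made up for the example):
class Meta(type):
    def mro(cls):
        # mro() can be overridden on a metaclass; type calls it at class
        # creation and stores the result in __mro__
        return super().mro()

class Base:
    pass

class Child(Base, metaclass=Meta):
    pass

# __mro__ is the cached tuple; mro() builds and returns a fresh list each call
print(Child.__mro__)   # (Child, Base, object), as a tuple of classes
print(Child.mro())     # the same classes, but as a list
assert Child.__mro__ == tuple(Child.mro())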

is not bound to a legal value during translation

I would like to know a bit more about the bounding this error relates to.
I have an Alloy model for which I create an instance manually (by writing it down in XML).
This instance is readable and the A4Solution can be correctly displayed.
But when I try to evaluate an expression in this instance using the eval() function, I receive this error message, even though the field name and type of the ExprVar retrieved from the model are exactly the same as those in the instance.
I would like to know what this bounding consists of. What are the properties taken into consideration to decide that one element of the instance is bound to one element of the model?
Is the hidden ID that appears somewhere in the XML taken into consideration?
I would like to know what this bounding consists of.
It means that every free variable (i.e., relation in Kodkod) has to be given a bound before asking the solver to solve the formula.
What are the properties taken into consideration to decide that one element of the instance is bound to one element of the model?
The instance XML file contains an exact value (a tuple set) for each and every sig and field from the model; those tuple sets are exactly the bounds that should be used when evaluating expressions. I'm not sure how exactly you are trying to use the Alloy API, but recreating the bounds from the XML file should (mostly) be handled by the API behind the scenes.
Judging by your last comment to my previous answer, your problem might be related to this post.
In a nutshell, if you are trying to evaluate AST expression objects obtained from one "alloy world" (CompModule) against a solution that was entirely recreated from an XML file, you will get exactly that error. For example, if you have two PrimSig objects, s1 and s2, and they both have the same name (e.g., both were obtained by parsing sig S {...}), they are not "Java equals" (s1.equals(s2) returns false); the same holds for the underlying Kodkod relations/variables. So if you then try to evaluate s1 in a context which has a bound only for s2, you'll get an error saying that s1 is not bound.
Providing the set of Signatures defined in the metamodel while reading the A4Solution solved my problem.
Thus changing:
solution = A4SolutionReader.read(null, new XMLNode(f));
to
solution = A4SolutionReader.read(mm.getAllReachableSigs(), new XMLNode(f));
solved this illegal bounding issue.

Options for representing string input as an object

I am receiving as input a "map" represented by strings, where certain nodes of the map have significance (s). For example:
---s--
--s---
s---s-
s---s-
-----s
My question is: what reasonable options are there for representing this input as an object?
The only option that really comes to mind is:
(1) Each position translated to a node with up/down/left/right pointers. The whole object contains a pointer to the top-right node.
This seems like just a graph representation specific to this problem.
Thanks for the help.
Additionally, if there are common terms for this type of input, please let me know.
Well, it depends a lot on what you need to delegate to those objects. OOP is basically about asking objects to perform things in order to solve a given problem, so it is hard to tell without knowing what you need to accomplish.
The solution you mention can be a valid one, as can having a matrix (in this case 6x5) where each matrix cell stores an object representing the node (just as an example, I once used both approaches to model Conway's Game of Life). If you could give some more information about what you need to do with the object representation of your map, then a better design can be discussed.
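As a rough illustration of the matrix approach (a Python sketch; the Node class and helper functions are made up for the example):
from dataclasses import dataclass

@dataclass
class Node:
    row: int
    col: int
    significant: bool  # True where the input string has 's'

def parse_map(lines):
    # build a 2D list (matrix) of Node objects from the string rows
    return [[Node(r, c, ch == "s") for c, ch in enumerate(line)]
            for r, line in enumerate(lines)]

def neighbors(grid, r, c):
    # up/down/left/right cells that stay inside the map
    deltas = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [grid[r + dr][c + dc] for dr, dc in deltas
            if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0])]

raw = ["---s--", "--s---", "s---s-", "s---s-", "-----s"]  # the example map from the question
grid = parse_map(raw)
print([(n.row, n.col) for n in neighbors(grid, 0, 3)])  # cells adjacent to the first 's'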
HTH

Resources