How to assign a name to a custom Module in PyTorch?

There was a similar question in How to assign a name for a pytorch layer?, and the answer gives two ways, using Sequential or an OrderedDict. But what I hope for is to add a name parameter to my custom module, namely
class MyModule(nn.Module):
    def __init__(self, name=None):
        ...
and later I can use
class AnotherModule(nn.Module):
    def __init__(self):
        self.mymodules = ModuleList(MyModule(name=f'my{i}') for i in range(2))
Is there a way to achieve this, or is it just impossible?
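Nothing prevents a custom module from accepting and storing a name parameter, since nn.Module itself takes no constructor arguments that would clash. A minimal sketch of the idea; a plain base class stands in for torch.nn.Module here so the example is self-contained:

```python
# Minimal sketch, assuming you just want the module to remember a name.
# `Module` is a stand-in for torch.nn.Module; with PyTorch installed,
# subclassing nn.Module works the same way (call super().__init__()
# before assigning any submodules).
class Module:  # stand-in for torch.nn.Module
    def __init__(self):
        pass

class MyModule(Module):
    def __init__(self, name=None):
        super().__init__()
        # fall back to the class name if no name is given
        self.name = name or self.__class__.__name__

mods = [MyModule(name=f'my{i}') for i in range(2)]
print([m.name for m in mods])  # ['my0', 'my1']
```

Note that this name is only an ordinary attribute: PyTorch's own registered names (the ones shown by named_modules()) still come from the attribute names you assign submodules to.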

Related

How to write a Python class with different methods in different cases?

Suppose we have two classes A and B
class A:
    def __init__(self):
        pass

    def print_abc(self):
        print('abc')

class B:
    def __init__(self):
        pass

    def print_def(self):
        print('def')
We use class A when we want to work with storageA, and class B when we want to work with storageB. Now, is there a way to write one class, named C, which uses the method print_abc if storageA is in question and print_def if storageB is in question? Hypothetically, something like this:
class C:
    def __init__(self, storage):
        self.storage = storage

    # storage == storageA
    def print_abc(self):
        print('abc')

    # storage == storageB
    def print_def(self):
        print('def')
Is something like this possible and how?
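One way to get this behaviour is composition with delegation: C picks the helper class based on the storage label and forwards attribute lookups to it. A minimal sketch, reusing the A and B classes from the question:

```python
class A:
    def print_abc(self):
        print('abc')

class B:
    def print_def(self):
        print('def')

class C:
    # map each storage label to the helper class that handles it
    _impl = {'storageA': A, 'storageB': B}

    def __init__(self, storage):
        self._worker = self._impl[storage]()

    def __getattr__(self, name):
        # delegate unknown attribute lookups to the wrapped helper
        return getattr(self._worker, name)

C('storageA').print_abc()  # abc
C('storageB').print_def()  # def
```

With this design C exposes exactly the methods of whichever helper it wraps, so calling print_def on a storageA-backed C raises AttributeError, which is usually what you want.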

Specify class variable in Python to be a numpy array of not yet known size

I have a class like
class MyClass:
    def __init__(self):
        self.will_be_a_numpy_array = None

    def compute(self):
        tmp = receive_data()
        self.will_be_a_numpy_array = np.zeros(len(tmp))
        # process each item in tmp, save the result in the
        # corresponding element of self.will_be_a_numpy_array
Here the __init__ method is vague about the type of the self.will_be_a_numpy_array attribute: it is unclear to a fellow developer (or to a type checker) what type to expect. I cannot initialise it with self.will_be_a_numpy_array = np.zeros(len(tmp)) because I haven't received the data yet. What is the right way to express the attribute's type in this case?
You can use the strategy that scikit-learn uses for its estimators: create the attribute only when you receive the data, and use a trailing underscore to signal that it is not created at initialisation:
class MyClass:
    def __init__(self):
        pass

    def process(self, data):
        self.data_ = np.array(data)

    def is_processed(self):
        return hasattr(self, 'data_')
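Another option, if you want the type visible to readers and type checkers, is a class-level annotation (PEP 526): it documents the eventual type without creating the attribute. A short sketch:

```python
import numpy as np

class MyClass:
    # annotation only: documents the type, creates no attribute yet
    will_be_a_numpy_array: np.ndarray

    def compute(self, tmp):
        self.will_be_a_numpy_array = np.zeros(len(tmp))

m = MyClass()
m.compute([1, 2, 3])
print(m.will_be_a_numpy_array.shape)  # (3,)
```

Before compute runs, accessing the attribute raises AttributeError, which makes "not yet computed" states fail loudly rather than silently carrying a None around.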

Python subclass that takes superclass as argument on instantiation?

I am trying to create a wrapper class in Python with the following behaviour:
It should take as an argument an existing class from which it should inherit all methods and attributes
The wrapper class methods should be able to use Python super() to access methods of the superclass (the one passed as an argument)
Because of my second requirement I think the solution here will not suffice (and in any case I am having separate issues deepcopying some of the methods of the superclass I am trying to inherit from).
I tried this but it's not correct...
class A:
    def shout(self):
        print("I AM A!")

class B:
    def shout(self):
        print("My name is B!")

class wrapper:
    def __init__(self, super_class):
        ## Some inheritance thing here ##
        # I initially tried this but no success...
        super(super_class).__init__()  # or similar?

    def shout(self):
        print('This is a wrapper')
        super().shout()
And this is the behaviour I require...
my_wrapper = wrapper(A)
my_wrapper.shout()
# Expected output:
# > This is a wrapper
# > I AM A!
my_wrapper = wrapper(B)
my_wrapper.shout()
# Expected output:
# > This is a wrapper
# > My name is B!
Is inheritance the correct approach here, if so am I sniffing in the right direction? Any help is appreciated, thanks :)
Edit for context:
I intend to build multiple wrappers so that all of my ML models have the same API. Generally, models from the same package (sklearn for example) have the same API and should be able to be wrapped by the same wrapper. In doing this I wish to modify/add functionality to the existing methods in these models whilst keeping the same method name.
If wrapper has to be a class, then a composition solution would fit much better here.
Keep in mind that I turned the shout methods into static methods, because in your example you pass the class itself to wrapper, not an instance.
class A:
    @staticmethod
    def shout():
        print("I AM A!")

class B:
    @staticmethod
    def shout():
        print("My name is B!")

class wrapper:
    def __init__(self, super_class):
        self._super_class = super_class

    def __getattr__(self, item):
        try:
            return self.__dict__[item].__func__
        except KeyError:
            return self._super_class.__dict__[item].__func__

    def a_wrapper_method(self):
        print('a wrapper attribute can still be used')
my_wrapper = wrapper(A)
my_wrapper.shout()
my_wrapper = wrapper(B)
my_wrapper.shout()
my_wrapper.a_wrapper_method()
Outputs
I AM A!
My name is B!
a wrapper attribute can still be used
So I went for a function in the end. My final solution:
class A:
    def shout(self):
        print("I AM A!")

class B:
    def shout(self):
        print("My name is B!")

def wrap_letter_class(to_wrap):
    global letterWrapper

    class letterWrapper(to_wrap):
        def __init__(self):
            super().__init__()

        def shout(self):
            print('This is a wrapper')
            super().shout()

        def __getstate__(self):
            # Add the wrapper to the global scope before pickling
            global letterWrapper
            letterWrapper = self.__class__
            return self.__dict__

    return letterWrapper()
Which produces the desired behaviour...
In [2]: wrapped = wrap_letter_class(A)
In [3]: wrapped.shout()
This is a wrapper
I AM A!
In [4]: wrapped = wrap_letter_class(B)
In [5]: wrapped.shout()
This is a wrapper
My name is B!
Something not mentioned in my initial question was that I intended to pickle my custom class; this is not possible if the class is not defined in the global scope, hence the __getstate__ and global additions.
Thanks!
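An equivalent way to build the subclass on the fly, without the closure class, is the three-argument form of type() with a mixin holding the overrides. A sketch (the pickling caveat above still applies, since the generated class is likewise not defined at module top level):

```python
class A:
    def shout(self):
        print("I AM A!")

class _ShoutMixin:
    def shout(self):
        print('This is a wrapper')
        super().shout()  # resolves to the wrapped class via the MRO

def wrap(to_wrap):
    # MRO of the generated class: (Wrapped..., _ShoutMixin, to_wrap, object),
    # so super() inside the mixin reaches the wrapped class
    cls = type(f'Wrapped{to_wrap.__name__}', (_ShoutMixin, to_wrap), {})
    return cls()

wrap(A).shout()
```

Putting the mixin first in the bases tuple is what lets its shout run before, and chain into, the wrapped class's shout.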

Create class object with a tuple having tensorflow objects

I have a parametersTheta class which creates a neural network as follows:
class parametersTheta:
    def __init__(self, weight1, weight2, ...):
        self.weightName1 = weight1
        self.weightName2 = weight2
        ...
        self.sess = tf.Session()

    def makeWorkerTheta(self, param):
        return parametersTheta(self.sess.run(functionCalculatingTensorWeights, feed_dict={...}))
self.sess.run returns a tuple of all the weight tensors. However, an error pops up saying that weight2 and onwards must be supplied, i.e. the whole tuple goes into weight1.
How can I solve this? Basically, how can I create an instance of class parametersTheta with a tuple?
You can instantiate the class with the tuple expanded into arguments like this:
parametersTheta(*(weight1, weight2, ...))
An asterisk before a tuple expands it into the corresponding argument list.
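To make the unpacking concrete, here is a sketch with a hypothetical two-weight version of parametersTheta (the real class takes more arguments, but the mechanism is the same):

```python
# Hypothetical two-weight stand-in for parametersTheta, to show
# that *tuple spreads the elements across positional parameters.
class ParametersTheta:
    def __init__(self, weight1, weight2):
        self.weight1 = weight1
        self.weight2 = weight2

weights = (1.0, 2.0)               # e.g. the tuple returned by sess.run(...)
theta = ParametersTheta(*weights)  # same as ParametersTheta(1.0, 2.0)
print(theta.weight1, theta.weight2)  # 1.0 2.0
```

Without the asterisk, the whole tuple would be bound to weight1, which is exactly the error described in the question.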

Using Super to break class into separate classes (and separate files)

I am trying to break a large class into separate subclasses, which I then intend to break into separate files by function. I thought the code below would work, but it seems I have the class/subclass logic incorrect. My example code is:
class MyParentClass():
    def __init__(self, x):
        self.x = x

class SubClass(MyParentClass):
    def __init__(self, x):
        super().__init__(x)

    def test(self):
        print("Test inside of SubClass")

    def test2(self):
        print(self.x)
z = MyParentClass("hello")
z.test()
z.test2()
The end-goal is to have:
1) MyParentClass the main class that is called.
2) Have multiple SubClasses that exist in separate files (e.g. ACL, Policy, Routes in acl.py, policy.py, and routes.py)
I don't know if this is possible, but what I envision is:
z = MyParentClass("172.16.16.1")
# would exist in acl.py
z.acl("permit any any")
# would exist in policy.py
z.policy("permit any any")
# would exist in route.py
z.route("route 0/0 next-hop 172.16.16.2")
# would exist in MyParentClass
z.save()
Thanks in advance.
MyParentClass shouldn't be a parent class but a Registry, with a __call__() method that acts as a Factory Method.
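The registry/factory suggestion is one route. A simpler sketch of the stated goal (methods living in separate files but callable on one object) inverts the question's inheritance: the per-file classes become mixins that MyParentClass inherits from. Each mixin would live in its own file (acl.py, policy.py, ...); the names and print bodies here are illustrative:

```python
class AclMixin:        # would live in acl.py
    def acl(self, rule):
        print(f'acl: {rule}')

class PolicyMixin:     # would live in policy.py
    def policy(self, rule):
        print(f'policy: {rule}')

# the main class pulls the pieces together via multiple inheritance;
# in separate files this would be: from acl import AclMixin, etc.
class MyParentClass(AclMixin, PolicyMixin):
    def __init__(self, x):
        self.x = x

z = MyParentClass("172.16.16.1")
z.acl("permit any any")     # prints: acl: permit any any
z.policy("permit any any")  # prints: policy: permit any any
```

This fixes the bug in the question's example, where z was an instance of MyParentClass and so never saw the methods defined on SubClass: inheritance flows from parent to child, so methods must sit on (or above) the class you instantiate.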
