For export into a database dump, I need to create a table that is an exact clone of my model, but with an additional "summary" column.
Given that the model is concrete, not abstract, subclassing it fails, as
class AnnotatedModel(MyModel):
    summary = m.TextField(null=True, blank=True)
creates a new table with only the new field.
I have attempted to use metaclass inheritance instead, but I am stuck because of the model's Meta class and Django's ModelBase metaclass. Other attempts to clone the model completely with copy, deepcopy, etc. have also been unsuccessful. I have had some success using add_to_class, but I am not sure whether it is a documented user-level function, and since it modifies the class in place, I have not been able to produce two different, separate models.
The goal is to be able to run a loop, say
for x in MyModel.objects.using('input').all():
    y = cast_to_AnnotatedModelInstance(x)
    y.pk = None
    y.summary = Foo(x)
    y.save(using='output')
without modifying the original model, which is in a separate package. Ideally, I would prefer x to be instances of MyModel, then cast them to AnnotatedModel and save them.
At the moment, what I am doing is expanding the model with add_to_class:
from foo.bar.models import MyModel
MyModel.add_to_class('summary', m.TextField(null=True, blank=True))
then create the export database explicitly:
from django.db import connections as c

with c['output'].schema_editor() as editor:
    editor.create_model(MyModel)
and then loop as in the question, with using('input').defer("summary") to access the original model of the application:
for x in MyModel.objects.using('input').defer("summary").all():
    x.pk = None
    x.summary = Foo(x)
    x.save(using='output')
Note that because of the add_to_class, the model tries to read the summary column even in the original database; fortunately, it can be skipped using defer.
I've built a class to ask a user a question, based on a type.
class Question:
    def __init__(self, subject):
        self.subject = subject
        self.question = f"Enter the {subject} to be created. You may end this by typing 'DONE':\n"
        self.still_needed = True

    def ask_question(self):
        ans_list = []
        running = True
        while running:
            var = input(f"Enter {self.subject}?\n")
            if var.lower() == 'done':
                running = False
            else:
                ans_list.append(var)
        return ans_list
The idea is to have a question model, to create lists of items.
This seems to work well with the following code in main.
roles = Question(subject="role").ask_question()
This creates a list from the Question class and uses its method ask_question to generate the list. As far as I can tell, the object is then destroyed, as it's not saved to a variable.
My question, being new to Python and OOP, is: does this seem like a solid and non-confusing way, or should I refactor? If so, what does the community suggest?
MY OPINION
I guess it depends on you. For one, a main purpose of using a class is to create instances of it later on. Classes are objects, or "categories" as I like to call them, that you use when there are distinct types of instances in your project.
Given your code snippet, I can't really suggest anything; I don't know the usage of self.question and self.still_needed. However, if I were to base my opinion on just this part, roles = Question(subject="role").ask_question(), then I'd definitely go with using a function instead. As you've said,
As far as I can tell the object is then destroyed, as it's not saved to a variable.
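For illustration, a plain-function version of the same loop might look like this (a sketch; the names are my own, not from the original code):

    def ask_question(subject):
        """Collect answers until the user types 'done'."""
        answers = []
        while True:
            answer = input(f"Enter {subject}?\n")
            if answer.lower() == 'done':
                return answers
            answers.append(answer)

    roles = ask_question("role")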
ALTERNATIVE SOLUTION
Use decorators → the ones with the @ symbol.
In this case, @staticmethod is the way to go!
What are static methods? The staticmethod decorator is a way to create a function inside a class. Instead of becoming a method, it can be treated as a plain function (without the self parameter). This also means that a static method is bound to the class rather than to its objects; consequently, static methods do not depend on objects (hence, you don't need to create an object to use one). Example:
class SomeMathStuff:
    @staticmethod
    def AddSomeNumbers(iterable):
        return sum(iterable)

result = SomeMathStuff.AddSomeNumbers([1, 2, 3])
# result = 6
As you can see, I did not need to create an object; instead, I just called the method through its class. Word of warning: many Python programmers argue that this is un-Pythonic, but I wouldn't worry too much about it. Hell, even I use these methods sometimes. In my defense, this is a good and efficient way to organize your project: with this, you can make functionality available globally and "categorize" it into whatever classes you find suitable.
Anyway, this is all I have! I apologize if I misinformed you in any way.
ADDITIONAL INFORMATION ... in case I wasn't the best teacher
https://www.programiz.com/python-programming/methods/built-in/staticmethod
Difference between staticmethod and classmethod
https://softwareengineering.stackexchange.com/questions/171296/staticmethod-vs-module-level-function
So I have a nested many schema (e.g. Users) inside another schema (e.g. Computer). My input object to be deserialised by the schema is complex and does not allow for assignment, and modifying it to allow for assignment is impractical.
The input object (e.g. ComputerObject) itself does not contain a value called "Users", but nested a few objects deep there is a function that can get the users (e.g. ComputerObject.OS.Accounts.getUsers()), and I want the output of that function to be used as the value for that field in the schema.
Two possible solutions exist that I know of: I could either define the field as a fields.Method (calling the function there), or I could use a @post_dump function to call the function and add it to the final output JSON, since it provides both the initial object and the output JSON.
The issue with both of these is that the result then isn't serialised through the nested Users schema (which contains more nested schemas, and so on); the field would just be set to the return value of getUsers, which I don't want.
I have tried to define it in a @pre_dump so that it can then be serialised in the dump (note: this schema is used only for dumping and not for loading), but as that takes in the initial object, I cannot assign to it.
Basically, I have a thing I am trying to do and a bunch of hacky workarounds that could make it work, but not without breaking other things or missing out on the validation altogether, and seemingly no actual solution. Does anybody know how to do this properly?
For further info, the object being input is a complex Django model, which might give me some avenues I'm not aware of; my Django experience is somewhat lacking.
So I figured this out myself eventually:
Instead of managing the data-getting in the main schema, you can define the method used in the sub-schema using pre_dump with pass_many=True, so the following code works correctly:
from marshmallow import Schema, fields, pre_dump

class User(Schema):
    id = fields.UUID()

    @pre_dump(pass_many=True)
    def get_data(self, data, **kwargs):
        data = data.Accounts.getUsers()
        return data

class Computer(Schema):
    # The field needs to be called "OS" in order to correctly look in the
    # "OS" attribute for further data
    OS = fields.Nested(User, many=True, data_key="users")
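The dump then runs through the nested User schema as usual; assuming computer is the input ComputerObject, usage would look something like this (a sketch):

    result = Computer().dump(computer)
    # result["users"] is a list of dicts serialised through the User schema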
I am currently developing a piece of software where I have class instances that are generated from dictionaries. The dictionaries are structured as follows:
layer_dict = {
    "layer_type": "Conv2D",
    "name": "conv1",
    "kernel_size": 3,
    ...
}
Then, the following code is run:
def create_layer(layer_dict):
    LayerType = getattr(layers, layer_dict['layer_type'])
    del layer_dict['layer_type']
    return LayerType(**layer_dict)
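For example, with the layer_dict above and assuming the layers module defines Conv2D, usage would be (a sketch; I pass a copy because create_layer deletes the 'layer_type' key from its argument):

    layer = create_layer(dict(layer_dict))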
Now, I want to support the creation of new layer types (by subclassing the BaseLayer class). I've thought of a few ways to do this and thought I'd ask which way is best and why, as I don't have much experience developing software (I'm finishing an MSc in comp bio).
Method 1: Metaclasses
The first method I thought of was to have a metaclass that registers every subclass of BaseLayer in a dict and do a simple lookup of this dict instead of using getattr.
class MetaLayer(type):
    layers = {}

    def __init__(cls, name, bases, dct):
        if name in MetaLayer.layers:
            raise ValueError('Cannot have more than one layer with the same name')
        MetaLayer.layers[name] = cls
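For illustration, wiring this up would look something like the following sketch (reusing the BaseLayer and Conv2D names from above):

    class BaseLayer(metaclass=MetaLayer):
        pass

    class Conv2D(BaseLayer):
        pass

    # MetaLayer.layers is now {'BaseLayer': BaseLayer, 'Conv2D': Conv2D}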
Benefit: The metaclass can make sure that no two classes have the same name. The user doesn't need to think about anything but subclassing when creating new layers.
Downside: Metaclasses are difficult to understand and often frowned upon
Method 2: Traversing the __subclasses__ tree
The second method I thought of was to use the __subclasses__ method of BaseLayer to get a list of all subclasses, then create a dict with Layer.__name__ as keys and Layer as values. See example code below:
def get_subclasses(cls):
    """Returns all classes that inherit from `cls`."""
    subclasses = {
        sub.__name__: sub for sub in cls.__subclasses__()
    }
    subsubclasses = (
        get_subclasses(sub) for sub in subclasses.values()
    )
    subsubclasses = {
        name: sub for subs in subsubclasses for name, sub in subs.items()
    }
    return {**subclasses, **subsubclasses}
Benefit: Easy to explain how this works.
Downside: We might end up with two layers having the same name.
Method 3: Using a class decorator
The final method is my favourite as it doesn't hide any implementation details in a metaclass, and still manages to prevent multiple classes with the same name.
Here the layers module has a global variable named layers and a decorator named register_layer, which simply adds the decorated classes to the layers dict. See code below.
layers = {}

def register_layer(cls):
    if cls.__name__ in layers:
        raise ValueError('Cannot have two layers with the same name')
    layers[cls.__name__] = cls
    return cls
Benefit: No metaclasses and no way of having two layers with the same name.
Downside: Requires a global variable, which is often frowned upon.
So, my question is, which method is preferable? And more importantly, why?
Actually, that is the kind of thing metaclasses are designed for. As you can see from the options you stated above, it is the simplest and most straightforward design.
They are sometimes "frowned upon" because of a few things: (1) people don't understand them and don't care to understand them; (2) people misuse them when they are actually not needed; (3) they are hard to combine, so if any of your classes is to be used with a mixin that has a different metaclass (say abc.ABC), you also have to produce a combining metaclass.
Method 4: __init_subclass__
Now, that said, since Python 3.6 there is a new feature that can cover your use case without the need for metaclasses: the __init_subclass__ method:
it is called as a classmethod on the base class when subclasses of it are created.
All you need is to write a proper __init_subclass__ method on your BaseLayer class, and you get all the benefits you'd have from the metaclass implementation and none of the downsides.
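A minimal sketch of that idea, reusing the BaseLayer and Conv2D names from the question:

    class BaseLayer:
        layers = {}

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            if cls.__name__ in BaseLayer.layers:
                raise ValueError('Cannot have more than one layer with the same name')
            BaseLayer.layers[cls.__name__] = cls

    class Conv2D(BaseLayer):
        pass

    # Conv2D registered itself at class-creation time:
    assert BaseLayer.layers['Conv2D'] is Conv2D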
Like you, I like the class decorator approach as it is more readable.
You can avoid using a global variable by making the class decorator itself a class, and making layers a class variable instead. You can also avoid possible name collisions by joining the target class's name with its module name:
class register_layer:
    layers = {}

    def __new__(cls, target):
        cls.layers['.'.join((target.__module__, target.__name__))] = target
        return target
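Usage then looks the same as with a function decorator; a quick sketch, with Conv2D reused from the question:

    @register_layer
    class Conv2D:
        pass

    print(register_layer.layers)
    # {'__main__.Conv2D': <class '__main__.Conv2D'>}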
I am trying to create nested classes to perform a sum or multiplication of the arguments passed to each nested class.
The example below lets me perform the action within the class; however, I am unable to find any documentation that would help me with inheriting attributes from the parent class in the child.
Recently I came across an article which highlights that "nested classes can't access any members of their outer classes at compile-time". Is there a better way to pass values between classes? I tried using global variables, but I would like to avoid setting many global variables as I scale this logic to extract my entire datacenter's inventory, perform some calculations, and pass the results on to another class.
class Class1:
    firstnumber = 0

    def __init__(self, arg):
        self.firstnumber = arg

    class Class2:
        def __init__(self, arg):
            self.secondnumber = arg

        def sumit(self):
            return Class1.firstnumber + Class1.Class2.secondnumber
print(Class1(5).firstnumber)
print(Class1(6).Class2(4).secondnumber)
print(Class1(4).Class2(10).sumit())
I would like to perform calculations with
Class1(variable1).Class2(variable2).Class3(variable3).sum() or
Class1(variable1).Class2(variable2).Class3(variable3).multiple() and eventually be able to do following
Datacenter('DC1').GetServer('ServerName').GetStorageCapacity('NFS').Used()
Datacenter('DC1').GetServer('ServerName').GetStorageCapacity('NFS').Free()
http://momentaryfascinations.com/programming/bound.inner.classes.for.python.html
I may be wrong, but to my understanding anything you put between the class statement and the __init__ method is permanent and unchangeable. You shouldn't need to create separate classes for each number; create different instances of the same class.
class numbers:
    def __init__(self, arg):
        self.arg = arg

c1 = numbers(3)
c2 = numbers(5)
I don't know how you would add the arg variables together; maybe someone else can fill in what I'm missing.
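For what it's worth, once the values live on separate instances as above, combining them is plain attribute access:

    print(c1.arg + c2.arg)  # 3 + 5 == 8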
I'm trying to figure out how to serialize an object with Pickle to a save file. My example is an object called World and this object has a list (named objects) of potentially hundreds of instantiated objects of different class types.
The problem is that Pickle won't let me serialize the items within the World.objects list because they aren't instantiated as attributes of World.
When I attempt to serialize with:
with open('gsave.pkl', 'wb') as output:
    pickle.dump(world.objects, output, pickle.DEFAULT_PROTOCOL)
I get the following error:
_pickle.PicklingError: Can't pickle <class 'world.LargeHealthPotion'>:
attribute lookup LargeHealthPotion on world failed
So, my question is: what is an alternative way of storing the world.objects list items so that they are attributes of world rather than list items that don't get saved?
UPDATE
I think my issue isn't where the objects are stored, but rather that classes like LargeHealthPotion (and many others) are dynamically created within the World class by operations such as this:
def __constructor__(self, n, cl, d, c, h, l):
    # initialize super of class type
    super(self.__class__, self).__init__(name=n, classtype=cl, description=d,
                                         cost=c, hp=h, level=l)

# create the object class dynamically, utilizing __constructor__ for __init__ method
item = type(item_name,
            (eval("{}.{}".format(name, row[1].value)),),
            {'__init__': __constructor__})

# add new object to the global _objects object to be used throughout the world
self._objects[item_name] = item(obj_name, obj_classtype, obj_description,
                                obj_cost, obj_hp, obj_level)
When this finishes, I have a new object like <world.LargeHealthPotion object at 0x103690ac8>. I do this dynamically because I don't want to have to explicitly create hundreds of different classes for each type of object in my world. Instead, I create each class dynamically while iterating over the item names (with their stats) that I want to create.
This introduces a problem, though: when pickling, there is no static reference to the class with which to deconstruct (or reconstruct) the object, so it fails.
What else can I do? (Besides creating literal class references for each and every type of object I'm going to instantiate into my world.)
Pickle does not pickle classes; it instead relies on references to classes, which doesn't work if the class was dynamically generated. (This answer has the appropriate excerpt and bolding from the documentation.)
So pickle assumes that if your object is from a class called world.LargeHealthPotion, that name actually resolves to the class it will be able to use when unpickling; if it doesn't, you won't be able to reinitialize the object, since pickle doesn't know how to reference its class. There are a few ways of getting around this:
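A minimal, self-contained demonstration of the failure mode (and of the global-scope fix described below):

    import pickle

    cls = type('Dynamic', (), {})  # dynamically created; no module-level 'Dynamic' name
    try:
        pickle.dumps(cls())
    except pickle.PicklingError as e:
        print(e)  # attribute lookup Dynamic on __main__ failed

    globals()['Dynamic'] = cls     # make the name resolvable (quick and dirty)
    data = pickle.dumps(cls())     # now succeeds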
Define __reduce__ to reconstruct object
I'm not sure how to demo this method for you; I'd need much more information about your setup to suggest how to implement it, but I can describe it:
First you'd make a function or classmethod that can recreate one object from its arguments (probably the class name, instance variables, etc.). Then define __reduce__ on the object base class so that it returns that function along with the arguments to pass to it when unpickling.
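To give the flavor, here is a generic sketch of the idea (all names are hypothetical, not from your code; it assumes you keep a registry mapping class names to the dynamically created classes):

    CLASS_REGISTRY = {}  # item_name -> dynamically created class

    def recreate(class_name, state):
        # look the dynamic class up in the registry and rebuild the instance
        cls = CLASS_REGISTRY[class_name]
        obj = cls.__new__(cls)
        obj.__dict__.update(state)
        return obj

    class GameObjectBase:
        def __reduce__(self):
            # pickle stores the function plus its arguments, then calls
            # recreate(class_name, state) at load time
            return (recreate, (type(self).__name__, self.__dict__))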
Put the dynamic classes in the global scope
This is the quick and dirty solution. Assuming the class names do not conflict with other things defined in the world module, you could theoretically insert the classes into the global scope by doing globals()[item_name] = item_type, but I do not recommend this as a long-term solution since it is very bad practice.
Don't use dynamic classes
This is definitely the way to go, in my opinion. Instead of using the type constructor, define your own class named something like ObjectType that:
Is not a subclass of type, so the instances are pickle-able.
When called, constructs a new game object that holds a reference to the object type.
So, assuming you have a class called GameObject that takes cls=<ObjectType object>, you could set up the ObjectType class something like this:
class ObjectType:
    def __init__(self, name, description):
        self.item_name = name
        self.base_item_description = description
        # other qualities common to all objects of this type

    def __call__(self, cost, level, hp):
        # other qualities that are specific to each item
        return GameObject(cls=self, cost=cost, level=level, hp=hp)

Here I am using the __call__ magic method so that it uses the same notation as classes, cls(params), to create instances. The cls=self indicates to the (abstracted) GameObject constructor that the class (type) of the GameObject is based on the ObjectType instance self. It doesn't have to be a keyword argument, but I'm not sure how else to make coherent example code without knowing more about your program.
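Usage might then look like this (a sketch; the stats and GameObject internals are placeholders):

    large_health_potion = ObjectType('LargeHealthPotion', 'Restores a large amount of HP')
    potion = large_health_potion(cost=100, level=5, hp=50)  # a pickle-able GameObject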
Here I am using the __call__ magic method so it uses the same notation as classes cls(params) to create instances, the cls=self would indicate to the (abstracted) GameObject constructor that the class (type) of GameObject is based on the ObjectType instance self. It doesn't have to be a keyword argument, but I'm not sure how else to make a coherent example code without knowing more about your program.