Django REST Serializer permanently changed in init - python-3.x

I'm facing a weird issue with DRF: I have several serializers for which I want to display certain fields only under specific conditions, such as certain URL parameters being present in the request or the user having certain permissions.
To decouple my serializer's presentation logic from the business logic, I decided to add a conditional_fields attribute to the serializer's Meta class: it's a dict in which the keys are strings representing conditions, such as "SHOW_HIDDEN_FIELDS", and the values are lists of field names that need to be removed if the key isn't present in the serializer context. Then I override my viewsets' get_serializer_context method to get the desired values into the context.
I made a remove_unsatisfied_condition_fields method which does the following:
def remove_unsatisfied_condition_fields(self):
    conditional_fields = self.Meta.conditional_fields
    for condition, fields in conditional_fields.items():
        if not self.context.get(condition, False):
            for field in fields:
                self.fields.pop(field, None)
The serializers making use of it look like this:
class ExerciseSerializer(serializers.ModelSerializer):
    class Meta:
        model = Exercise
        fields = [
            "id",
            "text",
            "solution",
            "correct_choices",
        ]
        conditional_fields = {
            "EXERCISE_SHOW_HIDDEN_FIELDS": ["solution", "correct_choices"],
        }
Here comes the problem: if I manually call this method inside my serializers, in their __init__ method, like this:
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.remove_unsatisfied_condition_fields()
everything works fine.
However, if I make them inherit from a class that does it automatically, like this:
class ConditionalFieldsMixin:
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.remove_unsatisfied_condition_fields()

    def remove_unsatisfied_condition_fields(self):
        conditional_fields = self.Meta.conditional_fields
        for condition, fields in conditional_fields.items():
            if not self.context.get(condition, False):
                for field in fields:
                    self.fields.pop(field, None)
and make my serializers inherit from it, the following happens:
as soon as a request comes in that doesn't satisfy one of the serializer's conditional_fields conditions, the field appears to be "permanently" removed from the serializer: any subsequent requests that involve it, even ones whose context does satisfy the conditions for showing that field, are responded to without it. It is as if the serializer ceases to have that field forever; only redeploying my application makes it come back.
This is very weird and I have no idea why this only happens if the removal is done inside of a class in the inheritance chain of the serializer vs doing it in the serializer itself.
Does this have to do with some weird Pythonic inheritance rule I'm not aware of or am I missing something about serializers?
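For background: DRF builds each serializer instance's fields from a deep copy of the declared fields, so popping from self.fields should normally be instance-local. The symptom described, a pop that outlives the request, is what you get in plain Python whenever the mapping being popped is actually shared at class level (for example via a cached or reused serializer instance). A minimal framework-free sketch of that failure mode, with hypothetical class names:

```python
# Hypothetical demo (no DRF): if the fields mapping is shared at class level,
# popping from it in __init__ affects every later instance; a per-instance
# copy does not.

class SharedFields:
    fields = {"id": 1, "solution": 2}  # class attribute: shared by all instances

    def __init__(self, show_hidden):
        if not show_hidden:
            self.fields.pop("solution", None)  # mutates the shared class dict

class CopiedFields:
    _declared = {"id": 1, "solution": 2}

    def __init__(self, show_hidden):
        self.fields = dict(self._declared)  # fresh copy per instance
        if not show_hidden:
            self.fields.pop("solution", None)

SharedFields(show_hidden=False)
print("solution" in SharedFields(show_hidden=True).fields)  # False: gone for good

CopiedFields(show_hidden=False)
print("solution" in CopiedFields(show_hidden=True).fields)  # True: unaffected
```

This doesn't pinpoint where the sharing happens in the real app, but it shows why the field would vanish "forever" once any single request triggers the pop.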

Related

Django's Get Queryset as Generic Function

Is it possible to create a common get_queryset and use it for all the classes? If yes, what would be the drawbacks?
This one: def get_queryset(self):
Instead of defining it for each class, can I make it generic so it can be used for all the classes?
You can work with a mixin, for example one that filters the queryset to retrieve only records whose active field is set to True.
class MyMixin:
    def get_queryset(self, *args, **kwargs):
        return super().get_queryset(*args, **kwargs).filter(
            active=True
        )
Then we can mix this into other views:
class MyListAPIView(MyMixin, ListAPIView):
    # …
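The cooperative super() call is what makes this work for every view that mixes it in: Python's MRO routes the call from the mixin to the concrete base class. A framework-free sketch of the same chaining, using hypothetical classes and a plain list in place of a queryset:

```python
# Hypothetical stand-ins: Base plays the role of ListAPIView,
# EvenOnlyMixin the role of MyMixin.

class Base:
    def get_queryset(self):
        return [1, 2, 3, 4]

class EvenOnlyMixin:
    def get_queryset(self):
        # refine whatever the next class in the MRO returns
        return [x for x in super().get_queryset() if x % 2 == 0]

class View(EvenOnlyMixin, Base):
    pass

print(View().get_queryset())  # [2, 4]
```

The main drawback of sharing one get_queryset this way is that every consuming view silently inherits the filter, which can surprise readers of an individual view class.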

How to avoid instance attribute key defined outside __init__ when inheriting?

What is the best practice when it comes to overloading attributes you inherit from another class? My IDE and linters are going a bit bonkers over the fact that the attribute I'm overloading does not exist in __init__.
class MeleeCombatSession(Script):
    def at_script_creation(self):
        self.key = "melee_combat_session"  # Warning
        self.desc = "Session for melee combat."  # Warning
        self.interval = 5  # Warning
        self.persistent = True  # Warning
        self.db.characters = {}  # No Warning
        self.obj.ndb.meelee_combat_session = self  # No Warning
I can't really quote the inherited classes because there are about 5 or so classes being inherited here, where the key attribute, for example, is defined within some methods of other classes. But they all lead to this __init__:
def __init__(self, *args, **kwargs):
    typeclass_path = kwargs.pop("typeclass", None)
    super().__init__(*args, **kwargs)
    self.set_class_from_typeclass(typeclass_path=typeclass_path)
I've tried a number of things, but the way I'm supposed to define my MeleeCombatSession class is to set the key, description, and interval of this script, and whether it's persistent, which overloads attributes from the parent classes it inherits. I can't, for example, move these into its own __init__.
One suggestion was to super() it like:
super().__init__("melee_combat_session")
The code all works fine. There are no errors or issues. This is how you define the class in the engine I'm working in. Just want to avoid linting issues, which everyone is saying to ignore.
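If the goal is only to quiet the linter, one common framework-agnostic option is to declare the attributes at class level (with defaults or annotations), so that later method-level assignments are recognized overrides rather than attributes "defined outside __init__". A sketch under that assumption, with a stand-in for the engine's Script base class:

```python
class Script:
    """Hypothetical stand-in for the engine's base class."""

class MeleeCombatSession(Script):
    # Class-level declarations: linters now know these attributes exist.
    key: str = "melee_combat_session"
    desc: str = "Session for melee combat."
    interval: int = 5
    persistent: bool = True

    def at_script_creation(self):
        # These assignments now override declared attributes, so the
        # "instance attribute defined outside __init__" warnings disappear.
        self.key = "melee_combat_session"
        self.desc = "Session for melee combat."
        self.interval = 5
        self.persistent = True
```

Whether class-level defaults are safe depends on how the engine's metaclass treats them, so this is worth checking against the engine's documentation before adopting.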

add dynamic field to serializer class

In my serializer class, I have defined two properties, and a third property could be derived from those two. Please see the code below:
class ItemNameSerializer(NestedCreateUpdateMixin, ModelSerializer):
    nested_child_field_name = 'attribute_names'
    nested_child_serializer = AttributeNameSerializer

    attribute_names = AttributeNameSerializer(many=True)

    class Meta:
        model = ItemName
        fields = '__all__'
From the above code, we can see that
attribute_names = AttributeNameSerializer(many=True)
can be derived by
[nested_child_field_name] = nested_child_serializer(many=True)
So my question is:
can I add a dynamic field derived from other fields (to avoid writing redundant code)?
If yes, then how?
The possible solutions can be of two types:
A. overriding some ModelSerializer method.
B. a generalized solution for any Python class.
Please try to provide both types of solutions if possible (and maybe some other type?).
Well, I found the answer myself.
The serializer-specific answer:
It turns out Django REST framework initialises the fields from a deep copy of the declared fields (irrelevant here).
But you can override the __init__ method of the serializer and add the field to self.fields. In my case I did it in the NestedCreateUpdateMixin, where nested_child_field_name and nested_child_serializer are already available.
Please see the following code:
def __init__(self, *args, **kwargs):
    super(NestedCreateUpdateMixin, self).__init__(*args, **kwargs)
    self.fields[self.nested_child_field_name] = self.nested_child_serializer(many=True)
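For the generalized, framework-free variant the question also asked for (solution type B), the same idea can be sketched for any Python class: a base __init__ reads the two class attributes and derives the third. The names below mirror the question, but nothing here depends on DRF and the stand-in serializer class is hypothetical:

```python
class AttributeNameSerializer:
    """Hypothetical stand-in for the real nested serializer."""
    def __init__(self, many=False):
        self.many = many

class DerivedFieldMixin:
    # Subclasses set these two; the third "field" is derived at init time.
    nested_child_field_name = None
    nested_child_serializer = None

    def __init__(self):
        self.fields = {}
        if self.nested_child_field_name and self.nested_child_serializer:
            self.fields[self.nested_child_field_name] = \
                self.nested_child_serializer(many=True)

class ItemNameSerializer(DerivedFieldMixin):
    nested_child_field_name = "attribute_names"
    nested_child_serializer = AttributeNameSerializer

s = ItemNameSerializer()
print("attribute_names" in s.fields)  # True
```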

How to define the same field for load_only and dump_only params at the Marshmallow scheme?

I am trying to build a marshmallow schema to both load and dump data, and I get everything OK except for one field.
Problem description
(If you understand the problem, you don't have to read this).
For loading, its type is Decimal, and I used it like this before. Now I want to use this schema for dumping too, and my Flask API responds with: TypeError: Object of type Decimal is not JSON serializable. OK, I understand. I changed the type to Float. Then my legacy code started raising an exception while trying to save that field to the database (it accepts Decimal only). I don't want to change the legacy code, so I looked for a solution in the marshmallow docs and found the load_only and dump_only params. It seems like those are what I wanted, but here is my problem: I want to set them on the same field. So I wondered if I could just define the field twice and tried this:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)
    money = fields.Float(dump_only=True)
I was expecting a miracle, of course. Actually, I thought it would simply skip the first definition (or rather, re-define the field). What I got was the absence of the field altogether.
Workaround solution
So I tried another solution. I created another schema for dump and inherit it from the former schema:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)

class PaymentDumpSchema(PaymentSchema):
    money = fields.Float(dump_only=True)
It works. But I wonder if there's some another, native, "marshmallow-way" solution for this. I have been looking through the docs but I can't find anything.
You can use the marshmallow decorator @pre_load; in the decorated method you can do whatever you want and return your desired type.
from marshmallow import pre_load
Import it like this; in the decorated method you will get your payload and can change the type as per your requirement.
UPD: I finally found a good solution.
NEW SOLUTION
The trick is to define your field in load_fields and dump_fields inside __init__ method.
from marshmallow.fields import Integer, String, Raw
from marshmallow import Schema

class ItemDumpLoadSchema(Schema):
    item = Raw()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if not (self.only and 'item' not in self.only) and \
                not (self.exclude and 'item' in self.exclude):
            self.load_fields['item'] = Integer(missing=0)
            self.dump_fields['item'] = String()
Usage:
>>> ItemDumpLoadSchema().load({})
{'item': 0}
>>> ItemDumpLoadSchema().dump({'item': 0})
{'item': '0'}
Don't forget to define the field in the schema with some field type (Raw in my example); otherwise it may raise an exception in some cases (e.g. when using the only and exclude keywords).
OLD SOLUTION
A slightly perverted one, based on @prashant-suthar's answer. I named the load field with the suffix _load and implemented @pre_load, @post_load and error handling.
class ArticleSchema(Schema):
    id = fields.String()
    title = fields.String()
    text = fields.String()

class FlowSchema(Schema):
    article = fields.Nested(ArticleSchema, dump_only=True)
    article_load = fields.Int(load_only=True)

    @pre_load
    def pre_load(self, data, *args, **kwargs):
        if data.get('article'):
            data['article_load'] = data.pop('article')
        return data

    @post_load
    def post_load(self, data, *args, **kwargs):
        if data.get('article_load'):
            data['article'] = data.pop('article_load')
        return data

    def handle_error(self, exc, data, **kwargs):
        if 'article_load' in exc.messages:
            exc.messages['article'] = exc.messages.pop('article_load')
        raise exc
Why is the old solution not a good one?
It doesn't allow inheriting schemas with different handle_error methods defined, and you have to give the pre_load and post_load methods different names.
Pass the data_key argument to the field definition.
The documentation mentions that the data_key parameter can be used along with dump_only or load_only to have the same field with different functionality.
So you can write your schema as...
class PaymentSchema(Schema):
    decimal_money = fields.Decimal(data_key="money", load_only=True)
    money = fields.Float(dump_only=True)
This should solve your problem. I am using data_key for a similar problem in marshmallow with SQLAlchemyAutoSchema and it fixed my issue.
Edit
Note: the key in ValidationError.messages (error messages) will be decimal_money by default. You may tweak the handle_error method of the Schema class to replace decimal_money with money, but it is not recommended, as you may then not be able to differentiate between the error message fields yourself.
Thanks.
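The essence of the data_key approach can be illustrated without marshmallow: two internal field names share one external key, split by direction, so loading yields a Decimal while dumping emits a JSON-safe float. A hypothetical dict-driven sketch (field names and converters are made up for illustration):

```python
from decimal import Decimal

# internal name -> (external key, direction, converter)
FIELDS = {
    "decimal_money": ("money", "load", Decimal),
    "money": ("money", "dump", float),
}

def load(payload):
    # pick only load-direction fields, reading them by their external key
    return {
        name: conv(payload[key])
        for name, (key, direction, conv) in FIELDS.items()
        if direction == "load" and key in payload
    }

def dump(obj):
    # pick only dump-direction fields, writing them under the external key
    return {
        key: conv(obj[name])
        for name, (key, direction, conv) in FIELDS.items()
        if direction == "dump" and name in obj
    }

print(load({"money": "9.99"}))           # {'decimal_money': Decimal('9.99')}
print(dump({"money": Decimal("9.99")}))  # {'money': 9.99}
```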

Singleton only containing a dict, statically accessible

I want to provide a class that contains a dictionary that should be accessible all over my package. This class should be initialized by another class, which is a database connector.
From the database I retrieve the mapping, but I want to do this only once, on initialization of the database connector. Furthermore, this mapping should then be available to all other modules in my package without passing the instance of the database connector through all function calls.
I thought about using the Singleton pattern and tried some approaches from this SO post, but I can't find a working solution.
I tried it this way with a metaclass:
The mapping class:
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class CFMapping(metaclass=Singleton):
    def __init__(self, cf_mapping: dict):
        self._cf_mapping = cf_mapping

    @classmethod
    def get_cf_by_name(cls, name: str) -> str:
        return cls._cf_mapping.get(name)
The database connector
class DBConnector:
    def __init__(....):
        # some init logic, connection to db etc...
        self.cf_mapping = self.get_cf_mapping()  # just returning a dict from a rest call
Now I expect the mapping to be accessible via the DBConnector instance.
But in other scripts, where I don't have this instance, I would like to access the mapping just via a static/class method like this:
CFMapping.get_cf_by_name("someName")
# leads to AttributeError as CFMapping has no attribute _cf_mapping
Is there a way to get this construct to work the way I want it to, or is there a better approach for a problem like this?
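One simpler alternative to a Singleton metaclass, sketched here under the assumption that the mapping only needs to be populated once: store the dict in a class attribute that the connector fills at startup, and expose it through classmethods so no instance is ever needed elsewhere. The mapping value "cf_42" and the connector body are made up for illustration:

```python
class CFMapping:
    _cf_mapping: dict = {}  # filled once by the connector at startup

    @classmethod
    def initialize(cls, mapping: dict) -> None:
        # copy so later mutations of the source dict can't leak in
        cls._cf_mapping = dict(mapping)

    @classmethod
    def get_cf_by_name(cls, name: str) -> str:
        return cls._cf_mapping.get(name)

class DBConnector:
    def __init__(self):
        # stand-in for the REST call that fetches the mapping from the DB
        CFMapping.initialize({"someName": "cf_42"})

DBConnector()
print(CFMapping.get_cf_by_name("someName"))  # cf_42
```

This avoids the AttributeError in the question because the state lives on the class itself rather than on an instance that the classmethod can't see.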
