How do I effectively use SQLAlchemy with multiple DDD Repositories? - domain-driven-design

I have been trying to find some examples of how to implement the Repository pattern with SQLAlchemy. Specifically, implementing more than one Repository.
In the case of multiple Repositories, I believe each Repository would be best implemented by maintaining a separate SQLAlchemy session. However, I have been running into a problem trying to move an instance of an object bound to one session to another session.
First, does this make sense to do? Should each Repository maintain its own UoW separate from any other Repository or should it be considered safe to have the entire context share the same Session?
Second, what is the best way to detach an instance from one Session and bind it to another?
Third, are there any solid DDD Repository examples written with SQLAlchemy in mind?

I'm not familiar with the DDD Repository pattern, but below is an example showing how to move an object from one session to another:
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

metadata = MetaData()
Base = declarative_base(metadata=metadata, name='Base')

class Model(Base):
    __tablename__ = 'models'
    id = Column(Integer, primary_key=True)

engine1 = create_engine('sqlite://')
metadata.create_all(engine1)
engine2 = create_engine('sqlite://')
metadata.create_all(engine2)

session1 = sessionmaker(bind=engine1)()
session2 = sessionmaker(bind=engine2)()

# Set up a single object in the first repo.
obj = Model()
session1.add(obj)
session1.commit()
session1.expunge_all()

# Move the object from the first repo to the second.
obj = session1.query(Model).first()
assert session2.query(Model).count() == 0
session1.delete(obj)
# You have to flush before expunging, otherwise it won't be deleted.
session1.flush()
session1.expunge(obj)
obj = session2.merge(obj)
# An optimistic way to tie the two transactions together is to flush
# before committing.
session2.flush()
session1.commit()
session2.commit()
assert session1.query(Model).count() == 0
assert session2.query(Model).count() == 1
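On the first question: a common arrangement is one shared Session acting as the unit of work, injected into each Repository, rather than a Session per Repository. A minimal sketch of that arrangement, assuming SQLAlchemy 1.4+ (the class and method names here are illustrative, not from any DDD library):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class UserRepository:
    """A repository that borrows a shared Session instead of owning one."""
    def __init__(self, session):
        self.session = session

    def add(self, user):
        self.session.add(user)

    def get(self, user_id):
        return self.session.get(User, user_id)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()  # one unit of work, shared by all repositories

repo = UserRepository(session)
repo.add(User(name='alice'))
session.commit()
assert repo.get(1).name == 'alice'
```

With this shape, every repository participates in the same transaction, and the cross-session merge/expunge dance above is only needed when objects really must cross database boundaries.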

Related

Building a good class method

I've built a class to ask a user a question, based on a type.
class Question:
    def __init__(self, subject):
        self.subject = subject
        self.question = f"Enter the {subject} to be created. You may end this by typing 'DONE':\n"
        self.still_needed = True

    def ask_question(self):
        ans_list = []
        running = True
        while running:
            var = input(f"Enter {self.subject}?\n")
            if var.lower() == 'done':
                running = False
            else:
                ans_list.append(var)
        return ans_list
The idea is to have a question model for creating lists of items.
This seems to work well with the following code in main:
roles = Question(subject="role").ask_question()
This creates a list from the Question class and uses its ask_question method to generate the list. As far as I can tell, the object is then destroyed, as it's not saved to a variable.
My question, being new to Python and OOP, is: does this seem like a solid and non-confusing way, or should I refactor? If so, what does the community suggest?
MY OPINION
I guess it depends on you. For one, one of the main purposes of using a class is to create instances with it later on. Classes are objects, or "categories" as I like to call them, that you use when there are distinct types of instances in your project.
Given your code snippet, I can't really suggest anything, since I don't know the usage of self.question and self.still_needed. However, if I were to base my opinion on just this part: roles = Question(subject="role").ask_question(), then I'd definitely go with using a function instead. As you've said,
As far as I can tell the object is then destroyed, as it's not saved
to a variable.
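For illustration, the plain-function version suggested above might look like this (the input_fn parameter is my addition, so the function can be exercised without patching builtins.input):

```python
def ask_question(subject, input_fn=input):
    """Collect answers until the user types 'done' (case-insensitive)."""
    answers = []
    while True:
        answer = input_fn(f"Enter {subject}?\n")
        if answer.lower() == 'done':
            return answers
        answers.append(answer)
```

Called as roles = ask_question("role"), this behaves like the original class-based version but drops the unused self.question and self.still_needed state.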
ALTERNATIVE SOLUTION
Use decorators → the ones with the @ symbol
In this case, @staticmethod is the way to go!
What are static methods? The staticmethod decorator is a way to create a function in a class. So instead of becoming a method, it can be treated as a plain function (without the self parameter). This also means that a static method is bound to the class rather than to an instance. Consequently, static methods do not depend on objects (hence, you don't need to create an object to use them). Example:
class SomeMathStuff():
    @staticmethod
    def AddSomeNumbers(iterable):
        return sum(iterable)

result = SomeMathStuff.AddSomeNumbers([1, 2, 3])
# result = 6
As you can see, I did not need to create an object, instead I just needed to call its class to use it. Word of warning, most Python programmers argue that this is the un-Pythonic way, but I wouldn't worry too much about it. Hell, even I use these methods sometimes. In my defense, this is a good and efficient way to organize your project. With this, you can apply class methods globally and you can "categorize" them in certain classes you find suitable.
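For contrast with the @classmethod decorator mentioned in the links below: a classmethod receives the class itself as its first argument, so it can read and modify class state, while a staticmethod gets no implicit argument at all. A small illustrative example (names are mine):

```python
class Counter:
    count = 0

    @classmethod
    def increment(cls):
        # cls is Counter itself, so class-level state is reachable
        cls.count += 1
        return cls.count

    @staticmethod
    def describe():
        # no implicit argument; this is just a plain function in a namespace
        return "counts things"

assert Counter.increment() == 1
assert Counter.increment() == 2
assert Counter.describe() == "counts things"
```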
Anyway, this is all I have! I apologize if I misinformed you in any way.
ADDITIONAL INFORMATION ... in case I wasn't the best teacher
https://www.programiz.com/python-programming/methods/built-in/staticmethod
Difference between staticmethod and classmethod
https://softwareengineering.stackexchange.com/questions/171296/staticmethod-vs-module-level-function

Export django model to database adding extra fields

For export into a db dump, I need to create a table that is an exact clone of my model but with a "summary" column.
Given that the model is concrete, not abstract, subclassing it fails, as
class AnnotatedModel(MyModel):
    summary = m.TextField(null=True, blank=True)
creates a new table with only the new field.
I have attempted to use metaclass inheritance instead, but I am stuck because of the model.Meta subclass of Django's BaseModel. Other attempts to completely clone the model with copy, deepcopy, etc. have also been unsuccessful. I have had some success using add_to_class, but I am not sure it is a documented user-level function, and it modifies the class deeply, so I have not been able to produce two different, separated models.
The goal is to be able to run a loop, say
for x in MyModel.objects.using('input').all():
    y = cast_to_AnnotatedModelInstance(x)
    y.pk = None
    y.summary = Foo(x)
    y.save(using='output')
without modifying the original model, which is in a separate package. Ideally, I would prefer x to be objects of MyModel, then cast them to AnnotatedModel and save them.
At the moment, what I am doing is to expand the model with add_to_class:
from foo.bar.models import MyModel

MyModel.add_to_class('summary', m.TextField(null=True, blank=True))
then create the export database explicitly:
with c['output'].schema_editor() as editor:
    editor.create_model(MyModel)
and then loop as in the question, with using('input').defer("summary") to access the original model of the application:
for x in MyModel.objects.using('input').defer("summary").all():
    x.pk = None
    x.summary = Foo(x)
    x.save(using='output')
Note that because of the add_to_class, the model tries to read the summary column even in the original database; fortunately it can be skipped using defer.

Django Python 3.x - OneToOneField override delete() with ContentTypes

Scenario: there are different service types, e.g. clothes_washing, room_cleaning and room_maintenance.
Each of these services has common fields, like service_date for example.
Based on this scenario, I have a model for each service and a Service model with the common fields. The relation between the Service model and each service model is a OneToOneField.
I've tried to override the delete() function in the Service model following this answer, and it works for me, but only for one hard-coded service (like wheel in self.wheel.delete()). But what if I want to delete based on the service type? How can I achieve that?
my models.py:
class ClothesWashing(models.Model):
    # special fields for clothes_washing
    service = models.OneToOneField(Service, on_delete=models.DO_NOTHING, null=True)

class RoomCleaning(models.Model):
    # special fields for room_cleaning
    service = models.OneToOneField(Service, on_delete=models.DO_NOTHING, null=True)

class Service(models.Model):
    # common fields for all services

    def delete(self, *args, **kwargs):
        # here I want the "roomcleaning" attribute to be dynamic based on content type
        self.roomcleaning.delete()
        return super(self.__class__, self).delete(*args, **kwargs)
You can set the on_delete parameters to CASCADE:
class ClothesWashing(models.Model):
    # special fields for clothes_washing
    service = models.OneToOneField(Service, on_delete=models.CASCADE, null=True)

class RoomCleaning(models.Model):
    # special fields for room_cleaning
    service = models.OneToOneField(Service, on_delete=models.CASCADE, null=True)
The on_delete=… parameter [Django-doc] specifies what should happen when the item to which it refers is removed. So if Service is removed, and there is a ClothesWashing model that refers to it, then you can specify what to do.
By using CASCADE [Django-doc], you will remove the related ClothesWashing object as well, or as specified in the documentation:
Cascade deletes. Django emulates the behavior of the SQL constraint ON DELETE CASCADE and also deletes the object containing the ForeignKey.
It is better to implement it with these triggers, since methods like .delete() are not always called by the ORM when deleting in bulk. So Service.objects.all().delete() will delete all services, but will never call the .delete() method of your Service. By defining the triggers, you specify to Django what should happen with items that relate to it.
In this specific case, you perhaps might want to work with model inheritance [Django-doc]. Django can implement some boilerplate logic itself (like OneToOneFields to the parent model, etc.).
EDIT: If you want to delete the service when the given ClothesWashing, RoomCleaning, etc. is removed, you can override the .delete() method to delete that one too, for example with an abstract base class:
class ServiceBase(models.Model):
    service = models.OneToOneField(Service, on_delete=models.CASCADE, null=True)

    def delete(self, *args, **kwargs):
        service = self.service
        super().delete(*args, **kwargs)
        service.delete()

    class Meta:
        abstract = True

class ClothesWashing(ServiceBase):
    # …
    pass

class RoomCleaning(ServiceBase):
    # …
    pass
But if you use the ORM, some objects will likely never be removed, because of the bulk-delete behavior described above, which circumvents .delete().

Is it safe to rely on module caching by Nodejs for single instances

I use the following pattern when I need to make sure that I have a single instance of a class:
class DB {
    constructor() {
        // create db connection
    }
}

const myDB = new DB();
export default myDB;
So even if I import it in multiple other modules, I get the same instance. Is this a reliable pattern? I know that module caching comes with the file-name case-sensitivity caveat. But are there any edge cases that could cause my multiple imports to create multiple db objects?
I know that I can alternatively create a concrete singleton, with a getInstance() method but this pattern looks simpler.
Modules are cached by file name in Node. But caching is case-sensitive: require("./Bar") and require("./bar") return two different objects.
https://nodejs.org/api/modules.html#modules_module_caching_caveats

Plone 4 search members with extended profiles

There is a need to extend member data on Plone 4 with a certain schema and at the same time provide an efficient (that is, much better than linear) search over those profiles.
collective.examples.userdata seems to be an example of how to make user data The Right Way, but what about searches? Are there any efficient search solutions, for example using the catalog?
There is such a thing as membrane, which can map users to content, but it uses Archetypes and is quite an old product (maybe my impression is wrong).
Still, for example, mapping user data to Dexterity type instances could be fine.
The question is: is there any ready code out there, or will a custom solution be needed?
No, the only ready solution out there, as you said, is membrane. But IMO it's a complex and specific product so I don't think you really need it.
To reach your goal, you'll need a bit of development. More or less the way would be:
insert your users into the catalog
add all needed new indexes
create your custom search form with z3c.form
This is an overview (not a detailed howto) of an implementation:
A catalog tool done similarly to the reference_catalog from Archetypes. The most essential parts:
from Products.ZCatalog.ZCatalog import ZCatalog

class MemberdataCatalog(UniqueObject, ZCatalog):
    implements(IMemberdataCatalog)
    ...
    security.declareProtected(ManageZCatalogEntries, 'catalog_object')
    def catalog_object(self, obj, uid=None, idxs=[],
                       update_metadata=1, pghandler=None):
        w = obj
        if not IIndexableObject.providedBy(obj):
            wrapper = component.queryMultiAdapter((obj, self), IIndexableObject)
            if wrapper is not None:
                w = wrapper
        ZCatalog.catalog_object(self, w, w and str("/".join(w.getPhysicalPath())),
                                idxs, update_metadata, pghandler=pghandler)
(with all GenericSetup things, also can be done similarly to Archetypes)
Subscribers for IPrincipalCreatedEvent, IPrincipalDeletedEvent and IConfigurationChangedEvent (the latter needs event.context.__class__.__name__ in ('UserDataConfiglet', 'PersonalPreferencesPanel', 'UserDataPanel') to be handled; unfortunately, Plone has no specific events for profile data changes). See PAS for how those work and which parameters the event handlers receive.
A view /memberdata/username for the catalog to address and reindex those users. The "username" part is handled by bobo traversal and returns a wrapped user with the properties needed for indexes and metadata.
http://plone.org/products/collective.examples.userdata is a good guide on how to actually extend the user profile.
Apart from that, an adapter is needed
class IndexableAdapter(EnhancedUserDataPanelAdapter):
    implements(IIndexableObject)
    adapts(IMemberData, IMemberdataCatalog)

    def __init__(self, context, catalog):
        self.context = context
        self.userid = context.getId()
        self.catalog = catalog

    def getPhysicalPath(self):
        # path to the view, which "fakes" an object
        return make_physical_path(self.userid)

    def __getattr__(self, name):
        """ Proxying attribute accesses. """
        return getattr(self.context, name)

    # Specific indexer
    def SearchableTextIntra(self):
        ...
Here EnhancedUserDataPanelAdapter has been derived and extended from UserDataPanelAdapter. IMemberdataCatalog is the interface of the catalog.
It is important to put everything into metadata, even the width/height of the portrait, because using .getObject() made the whole thing hundreds of times (!) slower.
Group memberships were handled separately, because there are no events that signify changes in groups, which would be needed to reindex some or all members.
