Trying to understand dependency injection in Python 3

I am trying to learn the concept of "dependency injection" in Python. First, if anyone has a good reference, please point me to it.
As a project, I took the use case of changing logic and formatting based on options passed to the Linux command mtr.
The dependency injection client class is MtrRun. The initial dependency injection service is DefaultRgx (I plan to add a couple more). The injection interface is MtrOptions, and the injector class is just called Injector.
class MtrRun(MtrOptions):  # Dependency injection client
    def __init__(self, MtrOptions, options, out):
        self.MtrOptions = MtrOptions
        self.options = options
        self.out = out

    def mtr_options(self, options, out):
        return self.MtrOptions.mtr_options(options, out)

class DefaultRgx(MtrOptions):  # Dependency injection service
    def __init__(self, options):
        self.options = None

    def mtr_options(self, options, out):
        pass  # code abbreviated for clarity

class MtrOptions():  # Dependency injection interface
    def __init__(self, svc):
        self.svc = svc

    def mtr_options(self, options, out):
        return self.svc.mtr_options(options, out)

class Injector():  # Dependency injection injector
    def inject(self):
        MtrOptions = MtrOptions(DefaultRgx())
        mtr_result = MtrRun(MtrOptions)
This snippet will not pass lint. My IDE claims that the MtrOptions class passed into the injection client and service is not defined. When I try to resolve it, a new MtrOptions class is created, but the error persists. I am certain I just don't know what I am doing; conceptually, I admit, my grasp is weak. Help is appreciated.

So I messed up in several ways. First, I did not understand the declarative way to establish inheritance: nothing in my example actually derived from object. Second, order appears to matter: base classes need to appear before their children. Third, the injector class needs to include both sides of the classes being injected.
import os

class DefaultRgx(object):  # Dependency injection service
    def __init__(self, options):
        self.options = None

    def mtr_options(self, options, out):
        mtr_result = ['Do stuff the old way']
        return mtr_result

class MtrRun(DefaultRgx):  # Dependency injection client
    def __init__(self, host, count, options):
        self.count = count
        self.host = host
        self.options = options

    def mtr_module(self, host, count, options, out=None):
        mtr_result = super().mtr_options(options, out)
        return mtr_result

class MtrOptions(DefaultRgx):  # Dependency injection interface
    def mtr_options(self, options, out):
        mtr_result = ['I was injected instead of DefaultRgx']
        return mtr_result

class Injector(MtrOptions, MtrRun):  # Dependency injection injector
    pass

def main():
    mtr = Injector(os.getenv('HOST'), os.getenv('COUNT'), None)
    mtr_result = mtr.mtr_module(mtr.host, mtr.count, mtr.options)
This linted correctly. I have not run it yet, but conceptually that YouTube video really helped things click. Thank you so much.
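For contrast, a more conventional way to express dependency injection in Python uses composition rather than inheritance: the client receives the service through its constructor and relies only on its interface. The sketch below is illustrative; the GoogleRgx service and the option string are made-up stand-ins:

class DefaultRgx:
    def mtr_options(self, options, out):
        return ['Do stuff the old way']

class GoogleRgx:  # hypothetical second service, to show swappability
    def mtr_options(self, options, out):
        return ['Do stuff a different way']

class MtrRun:
    def __init__(self, service):
        self.service = service  # the dependency is injected, not inherited

    def run(self, options, out=None):
        return self.service.mtr_options(options, out)

# The "injector" is just whatever code wires client to service:
print(MtrRun(DefaultRgx()).run('-c 5'))  # ['Do stuff the old way']
print(MtrRun(GoogleRgx()).run('-c 5'))   # ['Do stuff a different way']

With this shape, swapping services never requires touching MtrRun or the inheritance order that caused trouble above.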

Related

Python: how to implement a decorator which has access to 'self' within a class?

I'm new to Python 3 and I'm working on a web-spider project in Python 3.
I want to implement a decorator like this:
class baseSpider:
    def __init__(self):
        self.registered_funcs = []

    @some_magic_decorator
    def parse_func_1(self, response):
        # body...

    @some_magic_decorator
    def parse_func_2(self, response):
        # body...
I wish the decorator could automatically add the parse_func_* methods into registered_funcs without explicitly declaring them in __init__, since this is just an abstract class.
At first, I wrote the code like this:
class baseSpider:
    registed_funcs = []  # I declare the funcs list on the class instead of in __init__

    @classmethod
    def my_bad_decorator(cls, func):
        cls.registed_funcs.append(func)
        return func
Later I realized this is wrong, since the functions are registered into the list on the class, not on the instance.
The key seems to be that I need to access the instance before it is created. I have no idea how!
Can anyone help?
(English is not my native language; if any error makes the question ambiguous, just tell me and I will fix it. Thank you.)
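One common pattern that fits this (a sketch; the register name is an assumption, not from the question) is to tag the functions at class-definition time and let the base class's __init__ collect the bound methods, so every instance gets its own list:

def register(func):
    func._registered = True  # tag the plain function on the class
    return func

class BaseSpider:
    def __init__(self):
        # getattr(self, name) returns *bound* methods, so the collected
        # callables already carry self; no per-method declaration needed.
        self.registered_funcs = [
            getattr(self, name) for name in dir(self)
            if getattr(getattr(self, name), '_registered', False)
        ]

class MySpider(BaseSpider):
    @register
    def parse_func_1(self, response):
        return response

print(MySpider().registered_funcs)  # [<bound method MySpider.parse_func_1 ...>]

The trick is that the decorator only marks the function; the per-instance registration happens in __init__, after the instance exists.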

What is the correct way to annotate types of an injected dependency in Python

Could anyone enlighten me as to the best way to type-annotate an injected dependency in Python such that a member is correctly identified by the type-hinting of an IDE?
Take the following example:
class Type: pass

class Type1(Type):
    def __init__(self, value):
        self.some_member = value

class Type2(Type):
    def __init__(self, value):
        self.other_member = value

class Base:
    def __init__(self, injected: Type):
        self.injected = injected

class Derived1(Base):
    def __init__(self, injected: Type1):
        super().__init__(injected)

class Derived2(Base):
    def __init__(self, injected: Type2):
        super().__init__(injected)
What I would like is for the IDE (VS Code/PyCharm etc.) to be able to provide type hinting support when accessing the members of the injected dependency:
instance1 = Derived1(Type1(5))
instance1.injected.some_member = 2 # IDE knows nothing about `some_member`
The docs imply the best way to address this would be using TypeVar:
from typing import TypeVar

MyType = TypeVar('MyType', bound=Type)

class Base:
    def __init__(self, injected: MyType):
        self._injected = injected
This unfortunately doesn't help. Any suggestions greatly appreciated.
Note that the linter itself is perfectly happy with the usage, so this seems more like a possible limitation of the type-hinting introspection in the IDE.
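A sketch of the approach that usually gives IDEs the concrete type (assuming only static hinting is needed): make Base generic in the injected type, so each subclass binds the type variable.

from typing import Generic, TypeVar

class Type: pass

class Type1(Type):
    def __init__(self, value):
        self.some_member = value

T = TypeVar('T', bound=Type)

class Base(Generic[T]):
    def __init__(self, injected: T):
        self.injected = injected  # the attribute's type is the bound T

class Derived1(Base[Type1]):  # binds T to Type1 for this subclass
    def __init__(self, injected: Type1):
        super().__init__(injected)

instance1 = Derived1(Type1(5))
instance1.injected.some_member = 2  # now resolved: injected is a Type1

A bare TypeVar on __init__ alone does not help because nothing ties the variable to the class; Generic[T] is what carries the binding from the subclass declaration to the attribute.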

How to avoid instance attribute key defined outside __init__ when inheriting?

What is the best practice when it comes to overriding attributes you inherit from another class? My IDE and linters are going a bit bonkers over the fact that the attributes I'm overriding do not exist in __init__.
class MeleeCombatSession(Script):
    def at_script_creation(self):
        self.key = "melee_combat_session"          # Warning
        self.desc = "Session for melee combat."    # Warning
        self.interval = 5                          # Warning
        self.persistent = True                     # Warning
        self.db.characters = {}                    # No Warning
        self.obj.ndb.meelee_combat_session = self  # No Warning
I can't really quote the inherited classes because there are about five classes being inherited here, and the key attribute, for example, is defined within methods of other classes. But they all resolve to this __init__:
def __init__(self, *args, **kwargs):
    typeclass_path = kwargs.pop("typeclass", None)
    super().__init__(*args, **kwargs)
    self.set_class_from_typeclass(typeclass_path=typeclass_path)
I've tried a number of things, but ideally the way I'm supposed to define my MeleeCombatSession class is to set the key, description, and interval of the script, and whether it is persistent, which overrides attributes of the parent classes it inherits from. I can't, for example, move these into an __init__ of its own.
One suggestion was to call super() like this:
super().__init__("melee_combat_session")
The code all works fine; there are no errors or issues at runtime. This is how you define the class in the engine I'm working with. I just want to avoid the linting issues, which everyone is telling me to ignore.
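If the warning is pylint's attribute-defined-outside-init (W0201), one low-impact option is to silence it for just that method. This sketch assumes pylint is the linter and stubs the engine's Script class so it runs standalone:

class Script:  # stand-in for the engine's base class
    pass

class MeleeCombatSession(Script):
    def at_script_creation(self):
        # pylint: disable=attribute-defined-outside-init
        # The engine calls this hook instead of __init__, so these
        # assignments are intentional configuration, not a design smell.
        self.key = "melee_combat_session"
        self.desc = "Session for melee combat."
        self.interval = 5
        self.persistent = True

Scoping the suppression to the method keeps the check active everywhere else in the codebase.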

Python: why do I need super().__init__() call in metaclasses?

I have got one question: why do I need to call super().__init__() in metaclasses? Since a metaclass is a factory of classes, I thought we don't need to call initialization to make objects of class Shop. Or does calling super().__init__ initialize the class itself? (My IDE says that I should call it, but without super().__init__ nothing bad happens; my class works without mistakes.)
Can you explain why?
Thanks in advance!
from collections import OrderedDict

class Descriptor:
    _counter = 0

    def __init__(self):
        self.attr_name = f'Descriptor attr#{Descriptor._counter}'
        Descriptor._counter += 1

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__[self.attr_name]

    def __set__(self, instance, value):
        if value > 0:
            instance.__dict__[self.attr_name] = value
        else:
            msg = 'Value must be > 0!'
            raise AttributeError(msg)

class Shop():
    weight = Descriptor()
    price = Descriptor()

    def __init__(self, name, price, weight):
        self.name = name
        self.price = price
        self.weight = weight

    def __repr__(self):
        return f'{self.name}: price - {self.price} weight - {self.weight}'

    def buy(self):
        return self.price * self.weight

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # <- this is that func. call
        for key, value in attr_dict.items():
            if isinstance(value, Descriptor):  # Here I rename the descriptor object's attr_name.
                value.attr_name = key

    @classmethod
    def __prepare__(metacls, name, bases):
        return OrderedDict()
You don't "need" to: if your code uses no other custom metaclasses, not calling super().__init__() in the metaclass will work just the same.
But if one needs to combine your metaclass with another through inheritance, it won't work "out of the box" without the super() call: the super() call is the way to ensure all methods in the inheritance chain are called.
And if at first it looks like metaclasses are extremely rare and combining them would likely never happen: quite a few libraries and frameworks have their own metaclasses, including Python's "abc" (abstract base classes), PyQt, ORM frameworks, and so on. If any metaclass under your control is well behaved, with proper super() calls in the __new__, __init__ and __call__ methods (if you override those), combining it with another metaclass into a working one can be done in a single line:
CompatibleMeta = type("CompatibleMeta", (Meta, type(OtherClassBase)), {})
This way, for example, if you want to use the mechanisms in your metaclass in a class that uses the ABCMeta functionality, you just do it: the __init__ method in your Meta will call the other metaclass's __init__. Otherwise the latter would not run, some subtle unexpected thing would not be initialized in your classes, and that could be a very hard bug to find.
On a side note: there is no need to declare __prepare__ in a metaclass if all it does is create an OrderedDict on Python 3.6 or newer: since that version, the dictionaries used as the "locals()" while executing class bodies are ordered by default. Also, if another metaclass you are combining with also has a __prepare__, there is no way to make that work automatically with super(): you have to check the code and decide which of the two __prepare__s should be used, or create a new mapping type with the features both metaclasses need.
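To make the one-liner concrete, here is a small runnable sketch (the Meta body is a simplified stand-in for the metaclass above) that combines it with Python's ABCMeta:

from abc import ABCMeta, abstractmethod

class Meta(type):
    def __init__(cls, name, bases, attr_dict):
        super().__init__(name, bases, attr_dict)  # cooperate with other metaclasses
        cls.tagged = name.lower()  # example per-class initialization

# Combine the two metaclasses in a single line, as described above
CompatibleMeta = type("CompatibleMeta", (Meta, ABCMeta), {})

class Service(metaclass=CompatibleMeta):
    @abstractmethod
    def run(self): ...

class Worker(Service):
    def run(self):
        return self.tagged

print(Worker().run())  # prints "worker": both Meta and ABCMeta did their jobs

Because Meta.__init__ calls super().__init__, both halves of CompatibleMeta participate in class creation; drop that call and the cooperation silently breaks.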

Python nose unit tests generating too many clients already

I'm using Python 3.3, Pyramid, SQLAlchemy, and psycopg2, with a test Postgres database for the unit tests. I have 101 unit tests set up for nose to run. On test 101 I get:
nose.proxy.OperationalError: (OperationalError) FATAL: sorry, too many clients already
It seems from the traceback that the exception is being thrown in
......./venv/lib/python3.3/site-packages/SQLAlchemy-0.8.2-py3.3.egg/sqlalchemy/pool.py", line 368, in __connect
connection = self.__pool._creator()
Perhaps tearDown() is not running after each test? Isn't the connection limit for PostgreSQL 100 at one time?
Here's my BaseTest class:
class BaseTest(object):
    def setup(self):
        self.request = testing.DummyRequest()
        self.config = testing.setUp(request=self.request)
        self.config.scan('../models')
        sqlalchemy_url = 'postgresql://<user>:<pass>@localhost:5432/<db>'
        engine = create_engine(sqlalchemy_url)
        DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
        DBSession.configure(bind=engine)
        Base.metadata.bind = engine
        Base.metadata.create_all(engine)
        self.dbsession = DBSession

    def tearDown(self):
        testing.teardown()
My test classes inherit from BaseTest:
class TestUser(BaseTest):
    def __init__(self, dbsession=None):
        if dbsession:
            self.dbsession = dbsession

    def test_create_user(self):
        ......
        ......
One of the test classes tests a many-to-many relationship, so in that test class I first create the records needed to satisfy the foreign key relationships:
from tests.test_user import TestUser
from tests.test_app import TestApp

class TestAppUser(BaseTest):
    def __init__(self, dbsession=None):
        if dbsession:
            self.dbsession = dbsession

    def create_app_user(self):
        test_app = TestApp(self.dbsession)
        test_user = TestUser(self.dbsession)
        test_app.request = testing.DummyRequest()
        test_user.request = testing.DummyRequest()
        app = test_app.create_app()
        user = test_user.create_user()
        ......
I'm passing the dbsession into the TestApp and TestUser classes...I'm thinking that is the source of the problem, but I'm not sure.
Any help is greatly appreciated. Thanks.
Pyramid has nothing to do with SQLAlchemy. There is nowhere in Pyramid's API where you would link any of your SQLAlchemy configuration in a way that Pyramid would actually care. Therefore, Pyramid's testing.tearDown() does not do anything with connections. How could it? It doesn't know they exist.
You're using scoped sessions with a unit test, which really doesn't make a lot of sense because your unit tests are probably not threaded. So now you're creating threadlocal sessions and not cleaning them up. They aren't garbage collected because they're threadlocal. You also aren't manually closing those connections so the connection pool thinks they're still being used.
Is there a reason you need the ZopeTransactionExtension in your tests? Are you using the transaction package in your tests, or pyramid_tm? In a test, if you don't know what something does then it shouldn't be there. You're calling create_all() from your setUp() method? That's going to be slow as hell, introspecting the database and creating tables on every test. Ouch.
class BaseTest(object):
    def setUp(self):
        self.request = testing.DummyRequest()
        self.config = testing.setUp(request=self.request)
        self.config.scan('../models')
        sqlalchemy_url = 'postgresql://<user>:<pass>@localhost:5432/<db>'
        self.engine = create_engine(sqlalchemy_url)
        Base.metadata.create_all(bind=self.engine)
        self.sessionmaker = sessionmaker(bind=self.engine)
        self.sessions = []

    def makeSession(self, autoclose=True):
        session = self.sessionmaker()
        if autoclose:
            self.sessions.append(session)
        return session

    def tearDown(self):
        for session in self.sessions:
            session.close()
        self.engine.dispose()
        testing.teardown()
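A hypothetical test built on that BaseTest (the User model and its fields are assumptions, not from the question) would look like this:

class TestUser(BaseTest):
    def test_create_user(self):
        # makeSession tracks the session, and tearDown closes it,
        # so the connection returns to the pool after every test.
        session = self.makeSession()
        user = User(name='alice')  # hypothetical model
        session.add(user)
        session.flush()
        assert user.id is not None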
