Can a class own a context manager in Python? - python-3.x

I have a class that manages some shared resources and currently provides get_X() and put_X() methods to access the resources. A better interface for this would be to use a context manager for each resource, as in:
with ResourceManager.X() as x:
    # do stuff
But my use case is a Qt5 widget class that grabs the resource when it gets configured and has to release it when the widget is destroyed:
class MyWidget(QtWidgets.QTableWidget):
    def conf(self):
        self.x = ResourceManager.get_x()

    def closeEvent(self, event):
        ResourceManager.put_x()
        super().closeEvent(event)
So is there a more Pythonic way, analogous to the context manager "with" construct, to keep a resource allocated for the lifetime of a class?
Note: Qt5 doesn't allow multiple inheritance (from more than one QObject-derived class)
Update:
The Qt5 UI file loader doesn't allow passing arguments to __init__, so I have added the conf() method to configure custom widgets after the UI files are loaded. That's why the ResourceManager.get_x() call isn't in __init__.
What I would like is to get rid of the put_x() call: the resource should be freed automatically by Python when the widget is deleted. There are many widgets and resources, and it's easy to forget a put_x() somewhere, so I don't want to balance the get/put calls manually.
Another concern is exceptions when multiple resources are used. If an exception happens after, say, the 3rd resource has been acquired, then only those 3 should be released. That's another thing I don't want to track manually.
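For illustration, here is a sketch of the behaviour I'm after, using contextlib.ExitStack (the ResourceManager.x()/.y() context managers are assumed here; they are not the existing get/put API):

from contextlib import ExitStack

class MyWidget(QtWidgets.QTableWidget):
    def conf(self):
        with ExitStack() as stack:
            self.x = stack.enter_context(ResourceManager.x())
            self.y = stack.enter_context(ResourceManager.y())
            # All acquisitions succeeded: move ownership out of the
            # with-block so the resources outlive it.
            self._resources = stack.pop_all()
        # If acquiring y raised, the with-block released x -- only
        # the resources acquired so far get put back.

    def closeEvent(self, event):
        self._resources.close()  # releases everything, in reverse order
        super().closeEvent(event)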

I've never used Qt before, but the first things that come to my mind are the __init__ and __del__ methods, called respectively when a class instance is created and destroyed.
You could try the following:
class MyWidget(QtWidgets.QTableWidget):
    def __init__(self):
        super().__init__()
        self.x = ResourceManager.get_x()

    def __del__(self):
        ResourceManager.put_x()
        # note: no super().__del__() here -- object does not define
        # __del__, so forwarding the call would raise AttributeError
My only concern is whether Qt actually deletes the instance once it's no longer used.
Try it and let me know

Related

Keep a class running in a python subprocess or thread or process

I am using Squish to automate a Qt-based GUI application. I look up Qt objects in the application recursively. Since this is time-intensive, I would like to cache objects once found for later reuse. I have the class below to maintain a cache of objects in a dictionary -
class ObjectCache:
    def __init__(self):
        self.object_store = {}

    @staticmethod
    def instance():
        if '_instance' not in ObjectCache.__dict__:
            ObjectCache._instance = ObjectCache()
        return ObjectCache._instance

    def set(self, object_name, obj):
        self.object_store[object_name] = obj

    def remove(self, object_name):
        del self.object_store[object_name]

    def exists(self, object_name):
        return object_name in self.object_store

    def get(self, object_name):
        return self.object_store[object_name]

    def get_all(self):
        return self.object_store
I have the decorator below for functions in my automation scripts to add/access/delete from this dictionary -
def object_caching_decorator(func):
    def wrapper(*args, **kwargs):
        object_cache = ObjectCache.instance()
        if object_cache.exists(func.__name__):
            try:
                if waitForObject(object_cache.get(func.__name__)):
                    return object_cache.get(func.__name__)
            except LookupError:
                object_cache.remove(func.__name__)
        obj = func(*args, **kwargs)
        object_cache.set(func.__name__, obj)
        return obj
    return wrapper
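For context, a lookup function in a test script is then decorated like this (the real name dictionary is just a made-up example):

@object_caching_decorator
def main_window():
    # Runs only on a cache miss; the returned object is cached
    # under the function's name ("main_window").
    return waitForObject({"type": "MainWindow", "visible": True})

table = main_window()  # first call looks up, later calls hit the cache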
One might ask why all scripts can't simply share this class object: the Squish tool resets the global symbol table before starting every test script, hence I need a way to persist this object.
How do I keep this class running so that the scripts running on another process (Squish runner) can access it seamlessly?
Each Squish test case gets executed in a new instance (process) of the squishrunner and the script interpreter hosted within.
The object references that Squish gives you in the test script are actually proxy objects that transparently (behind the scenes) access the actual object inside of the application process for you, without you having to do anything for this "magic" to happen (or becoming aware of it, most of the time). Caching/persisting these objects across test cases will not work and is not possible.
Also, caching object references is a notorious problem, as the lifetime of the objects referenced by these proxy objects may change if the AUT (Application Under Test) gets changed, or used in a different manner.
Instead of this, you should revisit the way that you look up objects. It is very likely that there is a better way that allows ad-hoc object lookup (as intended) that is fast enough. (If in doubt, I recommend to contact the vendor of Squish, since your maintenance contract or subscription of their product entitles you to technical support.)
Maybe you can transfer the object to its "real name" dictionary.
object_dictionary = objectMap.realName(object)
That way you don't need the volatile object and have a persistent real-name object you can transfer.
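Building on that, here is a sketch of what persisting the real names (rather than the proxy objects) between squishrunner processes could look like - the file name and helper functions are made up, and it assumes objectMap.realName() returns a plain string:

import json

CACHE_FILE = "object_name_cache.json"  # shared between test runs

def load_name_cache():
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except (IOError, ValueError):
        return {}

def save_real_name(key, obj):
    names = load_name_cache()
    names[key] = objectMap.realName(obj)  # a string, so it serializes fine
    with open(CACHE_FILE, "w") as f:
        json.dump(names, f)

A later test case can then call waitForObject() on the stored real name instead of repeating the recursive search.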

Need to pass arguments to method decorator from inherited class

I am working in PyCharm, and am trying to pass arguments to a pagination decorator which I used to decorate a method of a class.
I have created a class to abstract the attributes of a given resource the API returns. I also have a dictionary resource_vars where I keep track of the different variables that particular resource has been given by the API developer (for example, the name I want to give to that resource, what the pagination limit for that resource is, etc.).
class Resource:
    def __init__(self, resource, resource_vars):
        self.resource = resource
        self.resource_vars = resource_vars
        self.resource_name = resource_vars["name"]
        self.total_results = resource["total_results"] if "total_results" in resource else None
        self.max_items_allowed = resource_vars["items_per_page"] if "items_per_page" in resource_vars else None
I then have a second class, where I am trying to implement the factory design pattern, because every Resource returned has a certain set of conditionals and mutations that need to happen before inserting the object in the database.
class ResourceDispatcher(Resource):  # My understanding is that I am inheriting the Resource attributes here.
    def dispatch(self, resource, resource_name):
        dispatcher = get_dispatcher(resource_name)  # factory function defined outside the class.
        return dispatcher(resource)

    # PyCharm underlines total_results and max_items_allowed in red with
    # warning "unresolved reference total_results"
    @paginate(pages=total_results, items_per_page=max_items_allowed)
    def dispatch_object_1(...):
        ...
The warning pops up when I am trying to decorate the method dispatch_object_1.
I am wondering if it is at all possible to pass arguments to a decorator using attributes of an inherited class. If someone has any idea of how I could refactor to obtain this result I'd be happy to take it into account!
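The underlying issue is that decorators are applied when the class body executes, before any instance exists, so instance attributes like total_results simply aren't in scope there. The usual workaround is to pass attribute names and resolve them on self at call time; a sketch, where the paginate internals are invented:

import functools

def paginate(pages_attr, items_attr):
    """Look the values up on self when the method is called,
    not when the class body is executed."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            pages = getattr(self, pages_attr)
            items_per_page = getattr(self, items_attr)
            # ... drive the pagination loop here ...
            return func(self, *args, **kwargs)
        return wrapper
    return decorator

class ResourceDispatcher(Resource):
    @paginate(pages_attr="total_results", items_attr="max_items_allowed")
    def dispatch_object_1(self):
        ...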

Parent class to expose standard methods, child class to provide sub-methods to do the work

I'd like to set up a parent class that defines a standard interface and performs common things for all children instances. However, each child will have different specifics for how these methods get the job done. For example, the parent class would provide standard methods as follows:
class Camera():
    camera_type = None

    def __init__(self, save_to=None):
        self.file_loc = save_to

    def connect(self):
        self.cam_connect()
        with open(self.file_loc, 'w'):
            # do something common to all cameras
            pass

    def start_record(self):
        self.cam_start_record()
        # do something common to all cameras
Each of these methods refers to another method located only in the child. The child classes will have the actual details on how to perform the task required, which may include the combination of several methods. For example:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def __init__(self, host_ip='10.10.10.10', **kwargs):
        super(AmazingCamera, self).__init__(**kwargs)
        self.host_ip = host_ip

    def cam_connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # do a bunch of custom things including calling other
        # local methods to get the job done.

    def cam_start_record(self):
        print('Recording from {}'.format(self.host_ip))
        # do a bunch more things specific to this camera

    ### etc...
The outcome of the above provides an interface such as:
mycamera = AmazingCamera(host_ip='1.2.3.4', save_to='/tmp/asdf')
mycamera.connect()
mycamera.start_record()
I understand fully that I can simply override the parent methods, but in cases where the parent methods do other things like handling files and such I'd prefer not to have to do that. What I have above seems to work just fine so far, but before I continue creating this I'd like to know if there is a better, more Pythonic way to achieve what I'm after.
TIA!
I opted to keep the standard methods identical between the parent and child and minimize the use of child-specific helper methods. Just seemed cleaner.
As an example:
class Camera():
    camera_type = None

    def connect(self):
        with open(self.file_loc, 'w'):
            # do something common to all cameras
            pass
Then in the child I'm overriding the methods, but calling the method of the parent in the override as follows:
class AmazingCamera(Camera):
    camera_type = 'Amazing Camera'

    def connect(self):
        print('I are connectifying to {}'.format(self.host_ip))
        # call the parent's method
        super().connect()
        # do a bunch of custom things specific to
        # AmazingCamera
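A related option, if you want to guarantee that every child actually provides its hooks, is abc.abstractmethod - a forgotten override then fails at instantiation with a TypeError instead of at call time with an AttributeError. A minimal sketch:

from abc import ABC, abstractmethod

class Camera(ABC):
    def connect(self):
        self.cam_connect()  # child-specific part
        # ... do something common to all cameras ...

    @abstractmethod
    def cam_connect(self):
        """Each camera type must provide its own connection logic."""

class AmazingCamera(Camera):
    def cam_connect(self):
        print('Connecting...')  # camera-specific details go here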

APScheduler and Pyramid (Python)

I'm trying to use the wonderful APScheduler in a Pyramid API. The idea is to have a background job run regularly, while we still query the API for the result from time to time. Basically I use the job in a class as:
from apscheduler.schedulers.background import BackgroundScheduler

class MyClass(object):
    def __init__(self):
        self.current_result = 0
        scheduler = BackgroundScheduler()
        scheduler.start()
        scheduler.add_job(self.my_job, "interval", id="foo", seconds=5)

    def my_job(self):
        print("i'm updating result")
        self.current_result += 1
And outside of this class (a service for me), the API has a POST endpoint that returns the my_class instance's current result:
class MyApi(object):
    def __init__(self):
        self.my_class = MyClass()

    @view_config(request_method='POST')
    def my_post(self):
        return self.my_class.current_result
When everything runs, I see the prints and the incrementation of the value inside the service. But current_result stays at 0 when gathered from the POST.
From what I know of threading, I guess that the update I do is not on the same my_class object but on a copy passed to the thread.
One solution I see would be to update the variable in a shared intermediate (write to disk, or to a database). But I wondered if that would be possible to do in memory.
I managed to do exactly this in a regular script, or with one script and a very simple Flask API (no class for the API there), but I can't manage to make this logic work inside the Pyramid API.
It must be linked to some internals of Pyramid spawning my API endpoint on a different thread, but I can't put my finger on the problem.
Thanks!
=== EDIT ===
I have tried several things to solve the issue. First, the instance of MyClass used is initialized in another script, following a container pattern. That container is by default contained in all MyApi instances of Pyramid, and is supposed to contain all global variables linked to my project.
I also defined a global instance of MyClass just to be sure, and print its current_result value to compare:
global_my_class = MyClass()

class MyApi(object):
    def __init__(self):
        pass

    @view_config(request_method='POST')
    def my_post(self):
        print(global_my_class.current_result)
        return self.container.my_class.current_result
I checked in the debugger that MyClass is only instantiated twice during the API execution (once for the global variable, once inside the container).
So what I see in the logging is two values of current_result getting incremented, but at each call of my_post I only get 0s.
An instance of a view class only lives for the duration of the request - a request comes in, a view class is created, produces the result and is disposed of. As such, each instance of your view gets a new copy of MyClass() which is separate from the previous requests.
As a very simple solution you may try defining a global instance which will be shared process-wide:
my_class = MyClass()

class MyApi(object):
    @view_config(request_method='POST')
    def my_post(self):
        return my_class.current_result
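A variation that avoids the module-level global is to attach the shared instance to Pyramid's registry at startup and reach it through the request - a sketch, assuming the usual Configurator setup and that the view class stores the request:

# at application startup, e.g. in main()
config.registry.my_class = MyClass()

class MyApi(object):
    def __init__(self, request):
        self.request = request

    @view_config(request_method='POST')
    def my_post(self):
        return self.request.registry.my_class.current_result

Note that either way the instance is shared per process: if the WSGI server runs several worker processes, each worker increments its own copy.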

How to use the same context manager across different methods?

I am trying to implement a class which uses a Python context manager.
Though I understand the general concept of __enter__ and __exit__, I don't see how to use the same context manager across multiple code blocks.
For example, take the case below:
from contextlib import contextmanager

@contextmanager
def backupContext(input):
    try:
        yield xyz
    finally:
        revert(xyz)

class do_something:
    def __init__(self):
        self.context = contextVal

    def doResourceOperation_1(self):
        with backupContext(self.context) as context:
            do_what_you_want_1(context)

    def doResourceOperation_2(self):
        with backupContext(self.context) as context:
            do_what_you_want_2(context)
I am invoking the context manager twice. Suppose I want to do it only once, during __init__, and use the same context manager object for all my operations; then finally, when the object is deleted, I want the revert operation to run. How should I go about it?
Should I call __enter__ and __exit__ manually instead of using the with statement?
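Calling __enter__ and __exit__ by hand works, but contextlib.ExitStack does the same bookkeeping with less room for error. A sketch, reusing the backupContext manager from above:

from contextlib import ExitStack

class do_something:
    def __init__(self):
        self._stack = ExitStack()
        # Enter backupContext once; keep the yielded value around.
        self.context = self._stack.enter_context(backupContext(contextVal))

    def doResourceOperation_1(self):
        do_what_you_want_1(self.context)

    def doResourceOperation_2(self):
        do_what_you_want_2(self.context)

    def close(self):
        # Runs backupContext's finally-block (the revert) exactly once.
        self._stack.close()

close() still has to be called explicitly (or do_something can itself be made a context manager by delegating to the stack); tying the revert to garbage collection via __del__ is not reliable.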
