I am trying to implement a class which uses a Python context manager.
Though I understand the general concept of __enter__ and __exit__, I don't see how to use the same context manager across multiple code blocks.
For example, take the case below:
@contextmanager
def backupContext(input):
    try:
        yield xyz  # yield, not return, inside a @contextmanager
    finally:
        revert(xyz)
class do_something:
    def __init__(self):
        self.context = contextVal

    def doResourceOperation_1(self):
        with backupContext(self.context) as context:
            do_what_you_want_1(context)

    def doResourceOperation_2(self):
        with backupContext(self.context) as context:
            do_what_you_want_2(context)
I am invoking the context manager twice. Suppose I want to enter it only once, during __init__, use the same context manager object for all my operations, and finally perform the revert operation when the object is deleted. How should I go about that?
Should I call __enter__ and __exit__ manually instead of using the with statement?
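One option is to keep the context manager open for the object's lifetime by entering it in __init__ and closing it explicitly later; contextlib.ExitStack makes this tidy. A minimal sketch with a stand-in backup_context (all names here are illustrative, not from the original code):

```python
from contextlib import ExitStack, contextmanager

events = []  # records setup/operations/revert so the lifetime is visible

@contextmanager
def backup_context(value):
    # Hypothetical stand-in for the backupContext manager in the question.
    events.append(("setup", value))
    try:
        yield value
    finally:
        events.append(("revert", value))

class DoSomething:
    def __init__(self, context_val):
        # Enter the context manager once; ExitStack keeps it open.
        self._stack = ExitStack()
        self.context = self._stack.enter_context(backup_context(context_val))

    def do_resource_operation_1(self):
        events.append(("op1", self.context))

    def do_resource_operation_2(self):
        events.append(("op2", self.context))

    def close(self):
        # Runs the manager's finally-block (the revert) exactly once.
        self._stack.close()

d = DoSomething("xyz")
d.do_resource_operation_1()
d.do_resource_operation_2()
d.close()
print(events)
```

Calling close() from __del__ is possible but less reliable than an explicit call, since __del__ timing depends on garbage collection.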
Related
I am using Squish to automate a Qt-based GUI application. I look up Qt objects in the application recursively. Since that is time-intensive, I would like to cache objects once found for later reuse. I have the class below to maintain a cache of objects in a dictionary:
class ObjectCache:
    def __init__(self):
        self.object_store = {}

    @staticmethod
    def instance():
        if '_instance' not in ObjectCache.__dict__:
            ObjectCache._instance = ObjectCache()
        return ObjectCache._instance

    def set(self, object_name, obj):
        self.object_store[object_name] = obj

    def remove(self, object_name):
        del self.object_store[object_name]

    def exists(self, object_name):
        return object_name in self.object_store

    def get(self, object_name):
        return self.object_store[object_name]

    def get_all(self):
        return self.object_store
I have the decorator below for functions in my automation scripts to add to, access, and delete from this dictionary:
def object_caching_decorator(func):
    def wrapper(*args, **kwargs):
        object_cache = ObjectCache.instance()
        if object_cache.exists(func.__name__):
            try:
                if waitForObject(object_cache.get(func.__name__)):
                    return object_cache.get(func.__name__)
            except LookupError:
                object_cache.remove(func.__name__)
        obj = func(*args, **kwargs)
        object_cache.set(func.__name__, obj)
        return obj
    return wrapper
One might ask: why can't all scripts share this class object? Because the Squish tool resets the global symbol table before starting every test script, I need a way to persist this object.
How do I keep this class alive so that scripts running in another process (the Squish runner) can access it seamlessly?
Each Squish test case gets executed in a new instance (process) of the squishrunner and the script interpreter hosted within.
The object references that Squish gives you in the test script are actually proxy objects that transparently (behind the scenes) access the actual object inside the application process for you, without you having to do anything for this "magic" to happen (or even becoming aware of it, most of the time). Caching/persisting these objects across test cases will not work and is not possible.
Also, caching object references is a notorious problem, as the lifetime of the objects referenced by these proxy objects may change if the AUT (Application Under Test) gets changed, or is used in a different manner.
Instead of this, you should revisit the way you look up objects. It is very likely that there is a better way that allows ad-hoc object lookup (as intended) and is fast enough. (If in doubt, I recommend contacting the vendor of Squish, since your maintenance contract or subscription of their product entitles you to technical support.)
Maybe you can transfer the object to its "real name" dictionary:
object_dictionary = objectMap.realName(object)
That way you don't depend on the volatile object and have a persistent real-name object you can transfer.
I have a class that manages some shared resources and currently provides a get_X() and put_X() method to access the resources. A better interface for this would be to use a context manager for each resource as in
with ResourceManager.X() as x:
    # do stuff
But my use case is a Qt5 widget class that grabs the resource when it gets configured and has to release the resource when the widget is destroyed:
class MyWidget(QtWidgets.QTableWidget):
    def conf(self):
        self.x = ResourceManager.get_x()

    def closeEvent(self, event):
        ResourceManager.put_x()
        super().closeEvent(event)
So is there a more Pythonic way, analogous to the context manager's with construct, to keep a resource allocated for the lifetime of an instance?
Note: Qt5 doesn't allow multiple inheritance.
Update:
The Qt5 UI file loader doesn't allow passing arguments to __init__, so I have added the conf() method to configure custom widgets after the UI files are loaded. That's why ResourceManager.get_x() isn't in __init__.
What I would like is to get rid of the put_x() call. I want the resource to be automatically freed by Python when the instance is deleted. There are many widgets and resources, and it's easy to forget a put_x() somewhere, so I don't want to balance get/put calls manually.
Another concern is exceptions when multiple resources are used. If an exception happens after, say, the third resource, then only those three should be released. Another thing I don't want to track manually.
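contextlib.ExitStack handles exactly this partial-release case: each cleanup is registered right after its acquire, and on an exception only the cleanups registered so far run, in reverse order. A sketch with a hypothetical FakeResource standing in for the real ResourceManager resources:

```python
from contextlib import ExitStack

log = []  # records acquire/release order so the behaviour is visible

class FakeResource:
    # Hypothetical stand-in for a ResourceManager-managed resource.
    def __init__(self, name, fail=False):
        self.name = name
        if fail:
            raise RuntimeError("cannot acquire " + name)
        log.append(("get", name))

    def release(self):
        log.append(("put", self.name))

def acquire_all(names):
    with ExitStack() as stack:
        resources = []
        for name in names:
            r = FakeResource(name, fail=(name == "bad"))
            # Register the cleanup immediately after each successful acquire.
            stack.callback(r.release)
            resources.append(r)
        # On success, detach the cleanups from this with-block and hand
        # them to the caller, who closes the returned stack later
        # (e.g. in closeEvent).
        return resources, stack.pop_all()

try:
    acquire_all(["a", "b", "c", "bad"])
except RuntimeError:
    pass

print(log)  # the three acquired resources were released, in reverse order
```

In the widget case, the returned stack could be stored as self._stack and closed once in closeEvent, replacing the individual put_x() calls.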
I've never used Qt before, but the first things that come to mind are the __init__ and __del__ methods, called respectively when a class instance is created and destroyed.
You could try the following:
class MyWidget(QtWidgets.QTableWidget):
    def __init__(self):
        super().__init__()
        self.x = ResourceManager.get_x()

    def __del__(self):
        # no super().__del__() call: object does not define __del__
        ResourceManager.put_x()
My only concern is whether Qt actually deletes the instance once it is no longer used.
Try it and let me know.
Please consider the following code:
class Task1(TaskSet):
    @task
    def task1_method(self):
        pass

class Task2(TaskSet):
    @task
    def task2_method(self):
        pass

class UserBehaviour(TaskSet):
    tasks = [Task1, Task2]

class LoggedInUser(HttpUser):
    host = "http://localhost"
    wait_time = between(1, 5)
    tasks = [UserBehaviour]
When I execute the code above with just one user, the method task2_method never gets executed, only the method from Task1.
What can I do to make sure the code from both tasks gets executed for the same user?
I would like to do it this way because I want to separate the tasks into different files for better project organization. If that is not possible, how can I define tasks in different files in a way that gives me tasks for each of my application modules?
I think I got it. To solve the problem, I had to add a method at the end of each task set to stop its execution:
def stop(self):
    self.interrupt()
In addition to that, I had to change the inherited class to SequentialTaskSet so all tasks get executed in order.
This is the full code:
class Task1(SequentialTaskSet):
    @task
    def task1_method(self):
        pass

    @task
    def stop(self):
        self.interrupt()

class Task2(SequentialTaskSet):
    @task
    def task2_method(self):
        pass

    @task
    def stop(self):
        self.interrupt()

class UserBehaviour(SequentialTaskSet):
    tasks = [Task1, Task2]

class LoggedInUser(HttpUser):
    host = "http://localhost"
    wait_time = between(1, 5)
    tasks = [UserBehaviour]
Everything seems to be working fine now.
At first I thought this was a bug, but it is actually intended (although I don't really understand why it was implemented that way):
One important thing to know about TaskSets is that they will never
stop executing their tasks, and hand over execution back to their
parent User/TaskSet, by themselves. This has to be done by the
developer by calling the TaskSet.interrupt() method.
https://docs.locust.io/en/stable/writing-a-locustfile.html#interrupting-a-taskset
I would solve this issue with inheritance: define a base TaskSet or User class that has the common tasks, and then subclass it, adding the user-type-specific tasks/code.
If you define a base User class, remember to set abstract = True if you don't want Locust to run that user as well.
I'm trying to use the wonderful APScheduler in a Pyramid API. The idea is to have a background job run regularly, while we still query the API for the result from time to time. Basically I use the job in a class as:
class MyClass:
    def __init__(self):
        self.current_result = 0
        scheduler = BackgroundScheduler()
        scheduler.start()
        # schedule the bound method so it updates this instance
        scheduler.add_job(self.my_job, "interval", id="foo", seconds=5)

    def my_job(self):
        print("i'm updating result")
        self.current_result += 1
And outside of this class (a service, in my case), the API has a POST endpoint that returns the MyClass instance's current result:
class MyApi(object):
    def __init__(self):
        self.my_class = MyClass()

    @view_config(request_method='POST')
    def my_post(self):
        return self.my_class.current_result
When everything runs, I see the prints and the value incrementing inside the service. But current_result stays at 0 when gathered from the POST.
From what I know of threading, I guess the update I do is not on the same my_class object but on a copy passed to the thread.
One solution I see would be to update the variable in a shared intermediary (written to disk, or in a database). But I wondered if it would be possible to do this in memory.
I managed to do exactly this in a regular script, and with one script plus a very simple Flask API (no class for the API there), but I can't get this logic to work inside the Pyramid API.
It must be linked to some Pyramid internals spawning my API endpoint on a different thread, but I can't quite pin down the problem.
Thanks!
=== EDIT ===
I have tried several things to solve the issue. First, the instance of MyClass used is initialized in another script, following a container pattern. That container is by default contained in all MyApi instances of Pyramid, and is supposed to hold all global variables linked to my project.
I also defined a global instance of MyClass just to be sure, and print its current_result value to compare:
global_my_class = MyClass()

class MyApi(object):
    def __init__(self):
        pass

    @view_config(request_method='POST')
    def my_post(self):
        print(global_my_class.current_result)
        return self.container.my_class.current_result
Checking with the debugger, I see that MyClass is only instantiated twice during the API's execution (once for the global variable, once inside the container).
So what I see in the logs are two values of current_result getting incremented, but at each call of my_post I only get 0s.
An instance of the view class only lives for the duration of the request: a request comes in, a view class instance is created, produces the result, and is disposed of. As such, each instance of your view gets a new copy of MyClass(), separate from the previous requests.
As a very simple solution, you may try defining a global instance which will be shared process-wide:
my_class = MyClass()

class MyApi(object):
    @view_config(request_method='POST')
    def my_post(self):
        return my_class.current_result
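A library-free sketch of the difference (MyClass and the view class are stand-ins; no Pyramid required):

```python
class MyClass:
    def __init__(self):
        self.current_result = 0

# One module-level instance, created when the module is imported.
shared_instance = MyClass()

class MyView:
    # Mimics a Pyramid view class: a fresh instance per request.
    def __init__(self):
        self.my_class = MyClass()  # per-request copy -> always starts at 0

    def post_per_instance(self):
        return self.my_class.current_result

    def post_shared(self):
        return shared_instance.current_result

shared_instance.current_result = 42  # e.g. incremented by a background job

# Each "request" builds a new view; only the shared instance keeps state.
print(MyView().post_per_instance())  # 0
print(MyView().post_shared())        # 42
```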
I'm having trouble getting my head around assigning a function to a variable when the function takes arguments. The arguments appear to be required, but no matter what arguments I enter it doesn't work.
The scenario is that I'm creating my first GUI, which was designed in Qt Designer. I need the checkbox to be ticked before the accept button allows the user to continue.
Currently this is coded to let me know whether ticking the checkbox returns anything (which it does), but I don't know how to pass that result on to the next function, accept_btn. I thought the easiest way would be to create a variable, but it requires positional arguments, and that's where I'm stuck.
My code:
class MainWindow(QtWidgets.QMainWindow, Deleter_Main.Ui_MainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.setupUi(self)
        self.ConfirmBox.stateChanged.connect(self.confirm_box)
        self.Acceptbtn.clicked.connect(self.accept_btn)

    def confirm_box(self, state):
        if self.ConfirmBox.isChecked():
            print("checked")
        else:
            print("not checked")

    checked2 = confirm_box(self, state)

    def accept_btn(self):
        if checked2 == True:
            print("clicked")
        else:
            print("not clicked")

app = QApplication(sys.argv)
form = MainWindow()
form.show()
app.exec_()
The code fails at the 'checked2' line with the error:
NameError: name 'self' is not defined
I thought there might be other solutions for running this all within one function, but I can't seem to find a way while the line below is required:
self.ConfirmBox.stateChanged.connect(self.confirm_box)
I would especially appreciate it if anyone could help me understand exactly why I need the 'self' argument in the function and the variable.
Thanks in advance.
If you just need to enable a button when the checkbox is checked, it can be easily done within the signal connection:
self.ConfirmBox.toggled.connect(self.Acceptbtn.setEnabled)
QWidget.setEnabled requires a bool argument, which is the argument type passed on by the toggled signal, so the connection is very simple in this case.
Apart from this, there are some mistakes in your understanding of classes in Python: you seem to be thinking in a "procedural" way, which doesn't work well with general PyQt implementations or common Python usage. Code at class level runs only when the class is created, which is useful if you really need some processing done at that point, for example to define class attributes or adjust how some methods behave; but even then, the results become class attributes, inherited by every new instance.
The line checked2 = confirm_box(self, state) will obviously give you an error, since you are defining checked2 as a class attribute. This means its value is processed and assigned while the class is being created: at that point no instance of the class exists yet; Python just executes the code that is not part of the methods until it reaches the end of the class definition (its primary indentation). When it reaches the checked2 line, it tries to call the confirm_box method, but the names "self" and "state" do not exist yet, as they have not been defined in the class body, hence the NameError exception.
Conceptually, what you have done is something similar to this:
class SomeObject(object):
    print(something)
This wouldn't make any sense, since there is no "something" defined anywhere.
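A runnable illustration of this timing (names are illustrative):

```python
# Class bodies execute top to bottom at class-creation time,
# before any instance exists, so names used there must already be defined.
class Good:
    x = 2 * 3  # evaluated immediately, stored as a class attribute

print(Good.x)  # 6

caught = False
try:
    class Bad:
        y = some_undefined_name  # NameError raised while the class is built
except NameError:
    caught = True

print(caught)  # True: the error happens at definition time, not at call time
```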
self is a Python convention used for instance methods: it is the name commonly used to refer to the instance of a class; you could actually use any valid Python identifier instead.
The first argument of any instance method is always the reference to the instance; the only exceptions are methods decorated with classmethod or staticmethod, but that's another story. When you call a method on an instantiated class, the instance object is automatically bound to the first argument of the called method: self is the instance itself.
For example, you could create a class like this:
class SomeObject(object):
    def __init__(Me):
        Me.someValue = 0

    def setSomeValue(Myself, value):
        Myself.someValue = value

    def multiplySomeValue(I, multi):
        I.setSomeValue(I.someValue * multi)
        return I.someValue
But that would be a bit confusing...