I'm making a blackjack application using Kivy. I basically need to introduce a sort of delay, something like time.sleep, but of course it must not freeze the program. I saw that Kivy has Clock.schedule_* methods for scheduling actions. What I'd like to do is schedule multiple actions so that when the first action has finished, the second one runs, and so on. What's the best way to achieve this? Or is there something in the Clock module to perform multiple delays one after another?
This could be an example of what I need to do:
from functools import partial

from kivy.clock import Clock
from kivy.uix.boxlayout import BoxLayout

class Foo(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        for index_time, card in enumerate(cards, 1):  # 'cards' defined elsewhere
            # Schedule each action to run 1 sec after the previous one, and so on
            Clock.schedule_once(partial(self.function, card), index_time)

    def function(self, card, *args):
        self.add_widget(card)
First, I'm surprised that your question didn't get down-voted, since this is not supposed to be a place for opinion-based questions, so you shouldn't ask for the "best" way.
The Clock module doesn't have a specific method to do what you want. Obviously, you could do a list of Clock.schedule_once() calls, as your example code does. Another way is to have each function schedule its successor, but that assumes the functions will always be called in that order.
Anyway, there are many ways to do what you want. I have used a construct like the following:
import threading
from functools import partial
from threading import Thread
from time import sleep

from kivy.clock import Clock

class MyScheduler(Thread):
    def __init__(self, funcsList=None, argsList=None, delaysList=None):
        super(MyScheduler, self).__init__()
        self.funcs = funcsList
        self.delays = delaysList
        self.args = argsList

    def run(self):
        theLock = threading.Lock()
        theLock.acquire()  # held now, so the acquire below blocks until a function releases it
        for i in range(len(self.funcs)):
            sleep(self.delays[i])
            Clock.schedule_once(partial(self.funcs[i], *self.args[i], theLock))
            theLock.acquire()  # blocks until self.funcs[i] calls theLock.release()
It is a separate thread, so you don't have to worry about freezing your GUI. You pass it a list of functions to be executed, a list of argument tuples for those functions, and a list of delays (for a sleep before each function is executed). Note that Clock.schedule_once() schedules the execution on the main thread, and not all functions need to be executed on the main thread. Each function must accept an argument that is a Lock object, and must release that Lock when it completes. Something like:
def function2(self, card1, card2, theLock=None, *args):
    print('in function2, card1 = ' + str(card1) + ', card2 = ' + str(card2))
    if theLock is not None:
        theLock.release()
The MyScheduler class's __init__() method could use more checking to make sure it won't throw an exception when it is run.
Related
Apologies for my poor phrasing, but here goes.
I need to execute a function every thirty minutes while other tasks are running, but I have no idea how to do this, or how to phrase it for a Google search. My goal is to modify my script so that it operates (without a UI) like a task manager program with background services, programs, utils, etc.
I have tried to create this by timing each function and creating functions that execute other functions, but no matter what I try everything runs one step after another, like any ordinary script would.
An example of this would include the following.
def function_1():
    """Perform operations"""
    pass

def function_2():
    """Perform operations"""
    pass

def executeAllFunctions():
    function_1()
    function_2()
How can I initialize function_1 as a background task whilst function_2 is executed in a normal manner?
There is an excellent answer here.
The main idea is to run an async coroutine in a forever loop inside a thread.
In your case, you have to define function_1 as a coroutine, use a wrapper function to run it inside the thread, and then create the thread.
Here is a sample, heavily inspired by the answer in the link but adapted to your question.
import asyncio
import threading

@asyncio.coroutine
def function_1():
    while True:
        do_stuff()
        yield from asyncio.sleep(1800)  # 30 minutes

def wrapper(loop):
    asyncio.set_event_loop(loop)
    loop.run_until_complete(function_1())

def function_2():
    do_stuff()

def launch():
    loop = asyncio.new_event_loop()
    # daemon=True: the thread (and its loop) dies when the main program exits;
    # note that Thread objects have no exit() method
    t = threading.Thread(target=wrapper, args=(loop,), daemon=True)  # create the thread
    t.start()  # launch the thread
    function_2()  # runs in the main thread meanwhile
Assume I have a function Handle_Client:
def Handle_Client():
    print('Hello StackOverFlow user')
One way to call Handle_Client is by using a thread:
Client_Thread = Thread(target=Handle_Client, args=())
Client_Thread.start()
The second way:
Client_Thread = Handle_Client()
What is the difference in terms of memory and execution, or is it the same?
In terms of execution:
The first way you mentioned,
Client_Thread = Thread(target=Handle_Client, args=(1,))
Client_Thread.start()
When you create a Thread, you pass it a function and a tuple containing the arguments to that function. In this case, you're telling the Thread to run Handle_Client() and to pass it 1 as an argument (that assumes Handle_Client accepts a parameter; with your definition above, which takes none, you would use args=()).
The second way you mentioned is done by creating a class. For instance:
class Handle_Client(threading.Thread):
    def __init__(self, i):
        threading.Thread.__init__(self)
        self.h = i

    def run(self):
        print("Value sent", self.h)

Client_Thread = Handle_Client(1)
Client_Thread.start()
Here, the new class Handle_Client inherits from Python's threading.Thread class.
__init__(self [, args]): overrides the constructor.
run(): this is the section where you put your logic.
start(): the start() method starts a Python thread.
Because the Handle_Client class overrides the constructor, the base class constructor (threading.Thread.__init__()) must be invoked explicitly.
In terms of memory:
In the first method, you run a plain function as a separate thread, whereas in the second method you have to create a class, but you get more functionality.
That means it may take additional memory, since you get more functionality.
I am not 100% sure about it, but that's what I understood.
I am trying to display a loading gif on my PyQt5 QMainWindow while an intensive process is running. Rather than playing normally, the QMovie pauses. As far as I can tell, the event loop shouldn't be blocked as the intensive process is in its own QObject passed to its own QThread. Relevant code below:
QMainWindow:
class EclipseQa(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        self.initUI()

    def initUI(self):
        ...
        self.loadingMovie = QMovie("./loading.gif")
        self.loadingMovie.setScaledSize(QSize(149, 43))
        self.statusLbl = QLabel(self)
        self.statusLbl.setMovie(self.loadingMovie)
        self.grid.addWidget(self.statusLbl, 6, 2, 2, 2, alignment=Qt.AlignCenter)
        self.statusLbl.hide()
        ...

    def startLoadingGif(self):
        self.statusLbl.show()
        self.loadingMovie.start()

    def stopLoadingGif(self):
        self.loadingMovie.stop()
        self.statusLbl.hide()

    def maskDose(self):
        self.startLoadingGif()
        # Set up thread and associated worker object
        self.thread = QThread()
        self.worker = DcmReadWorker()
        self.worker.moveToThread(self.thread)
        self.worker.finished.connect(self.thread.quit)
        self.worker.updateRd.connect(self.updateRd)
        self.worker.updateRs.connect(self.updateRs)
        self.worker.updateStructures.connect(self.updateStructures)
        self.worker.clearRd.connect(self.clearRd)
        self.worker.clearRs.connect(self.clearRs)
        self.thread.started.connect(lambda: self.worker.dcmRead(caption, fname[0]))
        self.thread.finished.connect(self.stopLoadingGif)
        self.thread.start()

    def showDoneDialog(self):
        ...
        self.stopLoadingGif()
        ...
Worker class:
class DoseMaskWorker(QObject):
    clearRd = pyqtSignal()
    clearRs = pyqtSignal()
    finished = pyqtSignal()
    startLoadingGif = pyqtSignal()
    stopLoadingGif = pyqtSignal()
    updateMaskedRd = pyqtSignal(str)

    def __init__(self, parent=None):
        QObject.__init__(self, parent)

    @pyqtSlot(name="maskDose")
    def maskDose(self, rd, rdName, rdId, rs, maskingStructure_dict):
        ...
        self.updateMaskedRd.emit(maskedRdName)
        self.finished.emit()
For brevity, '...' indicates code that I figured is probably not relevant.
Your use of a lambda to call the slot when the thread's started signal is emitted is likely causing it to execute in the main thread. There are a couple of things you need to do to fix this.
Firstly, your pyqtSlot decorator does not declare the types of the arguments to the maskDose method. You need to update it so that it does. Presumably you also need to do this for the dcmRead method, which you call from the lambda but haven't included in your code. See the documentation for more details.
In order to remove the use of the lambda, you need to define a new signal and a new slot within the EclipseQa class. The new signal should be defined so that it emits the required number of parameters for the dcmRead method, with the types correctly specified (documentation for this is also in the link above). This signal should be connected to the worker's dcmRead slot (make sure to do it after the worker object has been moved to the thread, or else you might run into this bug!). The slot should take no arguments and be connected to the thread's started signal. The code in the slot should simply emit your new signal with the appropriate arguments to be passed to dcmRead (e.g. self.my_new_signal.emit(param1, param2)).
Note: You can check what thread any code is running in using the Python threading module (even when using QThreads) by printing threading.current_thread().name from the location you wish to check.
Note 2: If your thread is CPU bound rather than IO bound, you may still experience performance issues because of the Python GIL, which only allows a single thread to execute at any one time (it will swap between the threads regularly, though, so code in both threads will run, just maybe not at the performance you expect). QThreads (which are implemented in C++ and can theoretically release the GIL) do not help with this, because they are running your Python code, and so the GIL is still held.
I have a background thread that main calls; the background thread can open a number of different scripts, but occasionally it will get stuck in an infinite print loop like this.
In thing.py
from threading import Thread

import foo

thread_list = []

def main():
    thr = Thread(target=background)
    thr.start()
    thread_list.append(thr)

def background():
    getattr(foo, 'bar')()
    return
And then in foo.py
def bar():
    while True:
        print("stuff")
This is what it's supposed to do, but I want to be able to kill it when I need to. Is there a way for me to kill the background thread and all the functions it has called? I've tried putting flags in background to return when the flag goes high, but background is never able to check the flags since it's waiting for bar to return.
EDIT: foo.py is not my code, so I'm hesitant to edit it. Ideally I could do this without modifying foo.py, but if it's impossible to avoid, that's okay.
First of all, it is very difficult (if possible at all) to forcibly kill a thread from another thread, no matter what language you are using, because doing so could leave locks held and shared state inconsistent. So what you do instead is create a shared object which both threads can freely access, and set a flag on it.
But luckily in Python each thread has its own Thread object which we can use:
from threading import Thread

import foo

thread_list = []

def main():
    thr = Thread(target=background)
    thr.exit_requested = False
    thr.start()
    thread_list.append(thr)

def background():
    getattr(foo, 'bar')()
    return
And in foo:
import threading

def bar():
    th = threading.current_thread()
    # What happens when bar() is called from the main thread?
    # The commented code is not thread safe.
    # if not hasattr(th, 'exit_requested'):
    #     th.exit_requested = False
    while not th.exit_requested:
        print("stuff")
Although this will probably be hard to maintain and debug, so treat it more like a hack. A cleaner way would be to create a shared object and pass it into all the calls.
I have some independent threads that would probably gain some speed if I could turn them into processes.
I changed class x(Thread): with its def run(self) into class x(Process), but the Process doesn't seem to call run().
What's the correct syntax to set up the Process?
def __init__(self):
    Process.__init__(self, target=self.run, args=(self,))  # ???
You might just be missing a call to instance.start(), where instance is the variable you earlier set to an instance of your class x.
It's easier to answer if you provide a bit more (code) context. From what I see, you're mixing up two different ways to set up and start a new thread or process. This is not necessarily bad; it may be intentional. But if it's not, then you're typing more than you need to.
One way is:
p = Process(target=f, args=('bob',))
p.start()
p.join()
with f being any function. The first line sets up a new Process instance, the second line forks and thus starts the (sub)process, and p.join() waits for it to finish. This is the exact example from the documentation.
In the second use case, you subclass Process, and then you do not usually specify a target when calling the constructor. run() is the default method that is invoked in the child process when you call process.start().
class MySubProcess(multiprocessing.Process):
    def __init__(self, *args, **kwargs):
        super().__init__()
        # some more specific setup for your class using args/kwargs

    def run(self):
        # here's the code that is going to be run as a forked process
        pass
Then run with
p = MySubProcess(any_args_here)
p.start()
p.join()
If you do not need any arguments then there's no need to define an __init__ constructor for your subclass.
Both approaches allow you to switch between threading.Thread and multiprocessing.Process with very few code changes. Of course, the way data sharing works changes, and communication is different for threads and processes.