I have a Python program written for an Enigma 2 based Linux set-top box:
VirtualZap Python program for Enigma 2 based set top boxes
I want to automate the execution of the following function every minute:
def aktualisieren(self):
    self.updateInfos()
The function is defined at lines 436 and 437.
My problem is that the VirtualZap class contains only a constructor and no main method with the actual program run, so it is difficult to implement threads or coroutines. Is there any way to automate the execution of the aktualisieren function?
You can use APScheduler (the Advanced Python Scheduler):
from apscheduler.schedulers.background import BackgroundScheduler

# vz is your VirtualZap instance; a BackgroundScheduler runs in its own
# thread, so it will not block the rest of the application the way a
# BlockingScheduler would.
scheduler = BackgroundScheduler()
scheduler.add_job(vz.updateInfos, 'interval', minutes=1)
scheduler.start()
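Alternatively, since VirtualZap runs inside Enigma2, the platform's own eTimer can drive the update from within the class itself; a minimal sketch, assuming the classic eTimer callback API and that these lines go into the VirtualZap constructor:

from enigma import eTimer

# inside VirtualZap.__init__ (the attribute name is illustrative)
self.update_timer = eTimer()
self.update_timer.callback.append(self.updateInfos)
self.update_timer.start(60000)  # interval in milliseconds: fire once per minute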
I am measuring the metrics of an encryption algorithm that I designed. I have declared two functions, and a brief sample is as follows:
import sys, random, timeit, psutil, os, time
from multiprocessing import Process
from subprocess import check_output
pid = 0

def cpuUsage():
    global running
    while pid == 0:
        time.sleep(1)
    running = True
    p = psutil.Process(pid)
    while running:
        # memory_info().rss reports resident memory, not CPU usage
        print(f'PID: {pid}\t|\tMemory Usage: {p.memory_info().rss/(1024*1024)} MB')
        time.sleep(1)
def Encryption():
    global pid, running
    pid = os.getpid()
    myList = []
    for i in range(1000):
        myList.append(random.randint(-sys.maxsize, sys.maxsize) + random.random())
    print('Now running timeit function for speed metrics.')
    p1 = Process(target=metric_collector())
    p1.start()
    p1.join()
    number = 1000
    unit = 'msec'
    setup = '''
import homomorphic,random,sys,time,os,timeit
myList={myList}
'''
    enc_code = '''
for x in range(len(myList)):
    myList[x] = encryptMethod(a, b, myList[x], d)
'''
    dec_code = '''
for x in range(len(myList)):
    myList[x] = decryptMethod(myList[x])
'''
    time = timeit.timeit(setup=setup,
                         stmt=(enc_code + dec_code),
                         number=number)
    running = False
    print(f'''Average Time:\t\t\t {time/number*.0001} seconds
Total time for {number} Iters:\t\t\t {time} {unit}s
Total Encrypted/Decrypted Values:\t {number*len(myList)}''')
    sys.exit()
if __name__ == '__main__':
    print('Beginning Metric Evaluation\n...\n')
    p2 = Process(target=Encryption())
    p2.start()
    p2.join()
I am sure there's an implementation error in my code; I'm just having trouble grabbing the PID for the encryption method, and I'm trying to keep the overhead from other calls as small as possible so I can get an accurate reading of just the methods being called by timeit. If you know a simpler implementation, please let me know. Trying to figure out how to measure all of the metrics has been killing me softly.
I've tried acquiring the PID a few different ways, but I only want to measure performance while timeit is running. There's a good chance I'll have to break this out separately and run it that way (instead of multiprocessing) to evaluate the functions properly, I'm guessing.
There are at least three major problems with your code. The net result is that you are not actually doing any multiprocessing.
The first problem is here, and in a couple of other similar places:
p2 = Process(target=Encryption())
What this code passes to Process is not the function Encryption but the returned value from Encryption(). It is exactly the same as if you had written:
x = Encryption()
p2 = Process(target=x)
What you want is this:
p2 = Process(target=Encryption)
This code tells Python to create a new Process and execute the function Encryption() in that Process.
The second problem has to do with the way Python handles memory for Processes. Each Process lives in its own memory space and has its own local copy of every global variable, so you cannot set a global variable in one Process and have another Process see the change, as you are trying to do with pid. There are mechanisms for handling this important situation, documented in the multiprocessing module under the section titled "Sharing state between processes"; you have to use one of the approaches described there.
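For instance, a minimal sketch (variable names illustrative) that shares a PID between Processes through a multiprocessing.Value:

from multiprocessing import Process, Value
import os, time

def worker(shared_pid):
    shared_pid.value = os.getpid()  # written to shared memory, visible to the parent
    time.sleep(2)                   # stand-in for real work

if __name__ == '__main__':
    shared_pid = Value('i', 0)      # 'i' = a C int held in shared memory
    p = Process(target=worker, args=(shared_pid,))
    p.start()
    while shared_pid.value == 0:    # wait until the child publishes its PID
        time.sleep(0.1)
    print('child PID:', shared_pid.value)
    p.join()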
The third problem is this code pattern, which occurs for both p1 and p2.
p2 = Process(target=Encryption)
p2.start()
p2.join()
This tells Python to create a Process and start it, and then you immediately wait for it to finish, which means your current Process must stop until the new Process is done. You never allow two Processes to run at once, so there is no performance benefit; the whole point of multiprocessing is to run two things at the same time, which you never do. You might as well not bother with multiprocessing at all, since it is only making your life more difficult.
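If you do want two Processes to overlap, start both before joining either; a sketch using the question's own functions:

from multiprocessing import Process
# cpuUsage and Encryption as defined in the question
if __name__ == '__main__':
    p1 = Process(target=cpuUsage)   # note: no parentheses after the target
    p2 = Process(target=Encryption)
    p1.start()
    p2.start()                      # both Processes now run concurrently
    p1.join()
    p2.join()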
Finally, I am not sure why you decided to use multiprocessing in the first place. The functions that measure memory usage and execution time are almost certainly much faster than any method of synchronizing one Process to another, so if you are worried about errors introduced by the diagnostic functions themselves, multiprocessing is unlikely to make things better. Why not start with a simple program and see what results you get?
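In that spirit, a minimal single-process sketch (the function and names are illustrative) that times a statement with timeit and samples the process's memory with psutil, no multiprocessing involved:

import os
import timeit
import psutil

def measure(stmt, setup='pass', number=1000):
    proc = psutil.Process(os.getpid())
    rss_before = proc.memory_info().rss
    total = timeit.timeit(stmt=stmt, setup=setup, number=number)
    rss_after = proc.memory_info().rss
    print(f'average: {total / number:.6f} s over {number} runs')
    print(f'RSS: {rss_before / 2**20:.1f} MB -> {rss_after / 2**20:.1f} MB')

measure('sum(range(1000))')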
I am running test cases for a MATLAB based program. I have several hundred test cases to run, and since each test case uses a single core, I have been using multiprocessing, Pool, and map to do this work in parallel.
The program takes command line arguments, which I execute through a bash script. I have written code that creates a CSV file of the bash commands that need to be called for each test case. I read each test case from the CSV file into the variable testcase_to_run, which creates a set of individual lists (needed in this format to be fed into the map function, I believe).
I have a 12 core machine, so I run (12-1) instances at a time in parallel. I have noticed that with certain test cases and their arguments, not every test case gets run; I am seeing up to 20% of test cases just not being run (the bash script's first command is to create a new file to store results).
from multiprocessing import Pool
import subprocess

number_to_run_in_parallel = 11
testcase_to_run = ([testcase_1 arguments], [testcase_2 arguments], ....[testcase_250 arguments])

def execute_test_case(work_data):
    subprocess.call(work_data, shell=True)

def pool_handler(number_to_run_in_parallel):
    p = Pool(number_to_run_in_parallel)
    p.map(execute_test_case, testcase_to_run)

if __name__ == "__main__":
    pool_handler(number_to_run_in_parallel)
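One way to make silent failures visible (a sketch, not part of the original script) is to capture each command's exit code in the worker and report the non-zero ones:

from multiprocessing import Pool
import subprocess

def execute_test_case(work_data):
    # run the command and keep its exit status instead of discarding it
    result = subprocess.run(work_data, shell=True)
    return work_data, result.returncode

if __name__ == "__main__":
    # testcase_to_run built from the CSV file, as above
    with Pool(11) as p:
        for cmd, code in p.map(execute_test_case, testcase_to_run):
            if code != 0:
                print(f'FAILED (exit {code}): {cmd}')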
I want to read a file line by line and output each line at a fixed interval.
The purpose of the script is to replay some GPS log files while updating the time/date fields, as the software I'm testing rejects messages that are too far out from the system time.
I'm attempting to use apscheduler for this as I wanted the output rate to be as close to 10Hz as reasonably possible and this didn't seem achievable with simple sleep commands.
I'm new to Python so I can get a little stuck on the scope of variables with tasks like this. The closest I've come to making this work is by just reading lines from the file object in my scheduled function.
import sys, os
from apscheduler.schedulers.blocking import BlockingScheduler

def emitRMC():
    line = route.readline()
    if line == b'':
        route.seek(0)    # wrap back to the start of the file
        line = route.readline()
    print(line)

if __name__ == '__main__':
    route = open("testfile.txt", "rb")
    scheduler = BlockingScheduler()
    scheduler.add_executor('processpool')
    scheduler.add_job(emitRMC, 'interval', seconds=0.1)
    scheduler.start()
However, this doesn't really feel like the correct way of proceeding, and I'm also seeing each input line repeated 10 times at the output, which I can't explain.
The repetition was very consistent, and I thought it might be due to max_workers, although I've since set that to 1 without any impact.
I also changed the interval; with a 10 Hz rate, the 10x repetition felt like it could be more than a coincidence.
Usually when I get stuck like this it means I'm heading off in the wrong direction and need pointing toward a smarter approach, so all advice is welcome.
I found a simple solution here: Executing periodic actions in Python, with this code from Micheal Anderson, which works in a single thread.
import datetime, threading, time

def foo():
    next_call = time.time()
    while True:
        print(datetime.datetime.now())
        next_call = next_call + 1   # schedule relative to the previous tick, so no drift accumulates
        time.sleep(next_call - time.time())

timerThread = threading.Thread(target=foo)
timerThread.start()
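Adapted to the 10 Hz file-replay case, the same drift-free loop can drive the readline logic from the earlier attempt (a sketch combining the two snippets above; the file name is the one from the question):

import threading, time

def replay(path, interval=0.1):
    next_call = time.time()
    with open(path, "rb") as route:
        while True:
            line = route.readline()
            if line == b'':
                route.seek(0)               # wrap to the start of the log
                line = route.readline()
            print(line)
            next_call += interval           # fixed 10 Hz schedule, no cumulative drift
            time.sleep(max(0.0, next_call - time.time()))

timerThread = threading.Thread(target=replay, args=("testfile.txt",))
timerThread.start()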
I want parallel execution of tasks using the scheduler class from the sched module in Python 3. Please help me with this.
Here is my code.
import sched, time

s = sched.scheduler(time.time)

def print_hello():
    print('Hello')

def print_some_times():
    s.enter(1, 1, print_hello)
    s.run()
    print_some_times()

print_some_times()
When I run this code, it shows the error "RecursionError: maximum recursion depth exceeded while calling a Python object".
I also want to run another task using the above code.
Here is my problem: I'm using the APScheduler library to add scheduled jobs to my application. I have multiple jobs executing the same code at the same time, but with different parameters. The problem occurs when these jobs access the same method at the same time, which causes my program to work incorrectly.
I want to know if there is a way to lock a method in Python 3.4 so that only one thread may access it at a time. If so, could you please post a simple example? Thanks.
You can use the basic Python locking mechanism:
from threading import Lock

lock = Lock()
...

def foo():
    lock.acquire()
    try:
        ...  # only one thread at a time can execute this section
    finally:
        lock.release()  # always release the lock, even if an exception was raised
Or with a context manager:
def foo():
    with lock:
        ...  # only one thread at a time can execute this section
For more details see Python 3 Lock Objects and Thread Synchronization Mechanisms in Python.
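As a short self-contained demo (names are illustrative), two threads updating shared state through the lock always produce the full count:

import threading

lock = threading.Lock()
counter = 0

def update():
    global counter
    for _ in range(100000):
        with lock:  # without the lock, increments from the two threads could be lost
            counter += 1

threads = [threading.Thread(target=update) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000, because the lock makes each increment atomic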