Thread synchronized time read - python-3.x

I have multiple threads, each running an infinite while True loop without knowing of the others' existence.
Inside their respective loops I need them to check the time and do something based on it before the next iteration, something like this:
Thread:
while True:
    now = datetime.now()
    # do something
    time.sleep(0.2)
These threads are started in my main program like this:
Main:
t1.start()
t2.start()
t3.start()
...
...
while True:
    # main program does something
Onto the problem: I need all the running threads to receive the same time when they check for it.
I was thinking about creating a class with a lock on it and a variable to store the time: the first thread that acquires the lock saves the time so the following threads can read it. But this seems kind of a hacky way of doing things (plus I wouldn't know how to check when all the threads have read the time so it can be updated).
What would be the best way, if possible, to implement this?
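For what it's worth, the "class with a lock" idea sketched in the question is a workable pattern if you make it single-writer: one place (e.g. the main loop) publishes a timestamp once per tick, and the worker threads only ever read it, so every reader within the same tick sees the identical value. A minimal sketch, with all names illustrative:

```python
import threading
from datetime import datetime

class SharedClock:
    """Single-writer clock: one thread publishes, workers only read."""
    def __init__(self):
        self._lock = threading.Lock()
        self._now = datetime.now()

    def publish(self):
        # called from one place, once per tick
        with self._lock:
            self._now = datetime.now()

    def read(self):
        with self._lock:
            return self._now

clock = SharedClock()
clock.publish()                # the main loop would do this each iteration
seen = []
threads = [threading.Thread(target=lambda: seen.append(clock.read()))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(set(seen)) == 1)     # all workers saw the identical timestamp
```

Because only one thread ever writes, readers never have to coordinate with each other, which sidesteps the "how do I know when all threads have read it" problem entirely.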

Related

Python3 : can I lock multiple locks simultaneously?

I have multiple locks that lock different parts of my API.
To lock any method I do something like this :
import threading

class DoSomething:
    def __init__(self):
        self.lock = threading.Lock()

    def run(self):
        with self.lock:
            # do stuff requiring the lock here
And for most use cases this works just fine.
But I am unsure whether what I am doing when requiring multiple locks works or not:
import threading

class DoSomething:
    def __init__(self):
        self.lock_database = threading.Lock()
        self.lock_logger = threading.Lock()

    def run(self):
        with self.lock_database and self.lock_logger:
            # do stuff requiring both locks here
As it is, the code runs just fine, but I am unsure if it runs as I want it to.
My question is: are the locks being obtained simultaneously, or is the first one acquired and only then the second?
Is my previous code equivalent to the following?
with self.lock1:
    with self.lock2:
        # do stuff here
As it is, the code currently works, but since the chances of my threads requiring the same lock simultaneously are extremely low to begin with, I may end up with a massive headache to debug later.
I am asking because I am very uncertain how to test my code to ensure it works as intended; I am equally interested in having the answer and in knowing how I can test it myself (rather than have the end users test it for me).
Careful: with self.lock_database and self.lock_logger: does not acquire both locks. The expression self.lock_database and self.lock_logger evaluates to the second lock (lock objects are truthy), so only lock_logger is acquired. To acquire both, write with self.lock_database, self.lock_logger:, which is exactly equivalent to the nested form you showed: the first lock is acquired, and only then the second.
With the comma form, yes, you can do that, but beware of deadlocks. A deadlock occurs when one thread is unable to make progress because it needs to acquire a lock that some other thread is holding, but the second thread is unable to make progress because it wants the lock that the first thread already holds.
with self.lock_database, self.lock_logger: locks lock_database first and lock_logger second. If you can guarantee that any thread that locks them both will always lock them in that same order, then you're safe; a deadlock can never happen that way. But if one thread locks lock_database before trying to lock lock_logger, and some other thread tries to grab them both in the opposite order, that's a deadlock waiting to happen.
Looks easy. And it is, except...
...in a more sophisticated program, where locks are attached to objects that are passed around to different functions, it may not be so easy: one thread may call some foobar(a, b) function while another thread calls the same foobar() on the same two objects, except with the arguments swapped.
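A quick way to see the difference between the and form and the comma form is to check, inside each with-block, which locks are actually held, using Lock.locked(). A self-contained sketch:

```python
import threading

a = threading.Lock()
b = threading.Lock()

# `a and b` evaluates to `b` alone (lock objects are truthy),
# so this with-statement acquires only the second lock.
with a and b:
    only_b = (not a.locked()) and b.locked()

# The comma form acquires both, left to right.
with a, b:
    both = a.locked() and b.locked()

print(only_b, both)  # True True
```

This is exactly the kind of small probe you can use to test lock behaviour without waiting for end users to find the bug for you.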

Infinite loop caused by thread lock

I wrote a really simple piece of code to mimic a condition I encountered recently, but I could not quite understand why it happens. Could anyone explain in detail the reason for the infinite loop caused by the usage of the lock?
Test Code (that mimics the condition)
import threading

# Declare the lock
jobs_lock = threading.Lock()

while True:
    # Acquire the lock
    jobs_lock.acquire()
    print("Lock Acquired")
    if 1:
        continue
    else:
        print("useless else")
    jobs_lock.release()
Output
Lock Acquired
<cursor-blinking>
A naive way I could think of was to release the lock before each continue used in the original piece of code, which looks something like this:
import threading

# Declare the lock
jobs_lock = threading.Lock()

while True:
    # Acquire the lock
    jobs_lock.acquire()
    print("Lock Acquired")
    if 1:
        jobs_lock.release()
        continue
    else:
        print("useless else")
    jobs_lock.release()
Could anyone explain why exactly this behaviour occurs? Should the thread not know that it already holds the lock and proceed to execute the code?
When you use continue, further processing of the loop body is skipped and the control moves back to the top of the loop, i.e. to the loop condition (in your case it is while True:).
Once the loop body begins execution again, it tries to acquire a lock that was already acquired in the previous iteration. Your thread therefore blocks forever: the lock is never released, because the example never reaches the section of the loop that releases it. This is a deadlock condition, and it suspends your code's execution.
It's worth noting that plain locks are not smart enough to realise they are being acquired by the thread that already holds them. A wrapper could keep a map of locks and the threads that have acquired them to prevent such conditions; in fact, the standard library's threading.RLock does essentially this, allowing the same thread to acquire the lock multiple times (it must then release it the same number of times).
Your second solution correctly solves the deadlock problem.
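An alternative to sprinkling release() before every continue is to let a with-statement manage the lock: it releases the lock on every exit path from the block, including continue, break, return, and exceptions. A small sketch of the same loop shape, bounded so it terminates:

```python
import threading

jobs_lock = threading.Lock()
processed = []

for job in range(3):
    # The with-statement releases the lock on every exit path,
    # including `continue`, so the next iteration can re-acquire it.
    with jobs_lock:
        processed.append(job)
        if job == 0:
            continue  # lock is released before the next iteration
print(processed)  # [0, 1, 2]
```

This removes the whole class of "forgot to release before an early exit" bugs that caused the hang above.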

Multithreading dropping many tasks

I want to do a simple job. I have a list of n elements and want to split it into two smaller lists, then use threading to perform a simple calculation and append the results to a new list. I've written some test code and it seems to work fine when I have a small number of elements (say 3,000). But when the list is larger (30,000), over 12-20k tasks are being dropped and the appends just don't go through.
I've read a lot about what constitutes threadsafe, and queueing. I believe it has something to do with that, but even after experimenting with Lock() I still seem to be unable to get a threadsafe Thread.
Can someone point me in the right direction? Cheers.
# Separate the thread workload
a_genes = genes[0:count_seperator]
b_genes = genes[count_seperator:genes_count]

class GeneThread(Thread):
    def __init__(self, genelist):
        Thread.__init__(self)
        self.genelist = genelist

    def run(self):
        for gene in self.genelist:
            total_reputation = 0
            for local_snp in gene:
                user_rsid = rsids[0]
                if user_rsid is None:
                    continue
                rep = "B"
                # If the multiplier is 0, don't waste time calculating
                if not rep or rep == "G" or rep == "U":
                    continue
                importance = 1
                weighted_reputation = importance * mul[rep]
                zygosity = "homozygous_minor"
                if rep == "B":
                    weighted_reputation *= z_mul[zygosity]
                # Apply the spread amplifier: raise the score to the power of the spread number
                rep_square = pow(spread, weighted_reputation)
                total_reputation += rep_square
            try:
                with lock:
                    UserGeneReputation.append(total_reputation)
            except:
                pass
start_time = time.time()
# Create new threads
gene_thread1 = GeneThread(genelist=a_genes)
gene_thread2 = GeneThread(genelist=b_genes)
gene_thread1.daemon, gene_thread2.daemon = True, True
# Start the new threads
gene_thread1.start()
gene_thread2.start()
print(len(UserGeneReputation))
print("--- %s seconds ---" % (time.time() - start_time))
You have, broadly speaking, two choices with threads. You can have them be autonomous, do their work, and then terminate themselves quietly. Or you can have them be managed by some other thread that monitors their lifetime and knows when they're done. You have a design that absolutely requires the second option (how else will you know when you have all the results you need?), yet you've chosen the first (set them for self-termination and not monitored).
Don't make the threads daemon threads. Instead, wait for both threads to finish after you start them. That's not the most sophisticated or elegant solution, but it's the one everyone learns first.
The problem with this approach is that it makes your code dependent on how work is assigned to threads. That can cause performance problems: you wind up creating and destroying a thread every time you want to know when some work is done, and the only way you can know the work is done is by waiting for the thread. Ideally, you would treat threads as an abstraction that gets work done somehow, and code that has to wait for work would wait on the work itself (through some synchronization associated with the work) rather than on the thread. That way you can be flexible about which thread does which work, and you don't have to keep creating and destroying threads every time you need to assign work.
But everyone learns the create/join method. And sometimes it really is the best choice. Even when you use other methods, you likely still have an outer create/join to create the threads in the first place and, typically, ensure they cleanly finish to shut down your program in an orderly way.
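Applied to the question's code, the fix is to drop the daemon flags and call join() on both threads before reading the shared list. A stripped-down, self-contained version of that pattern, with the worker logic simplified to a toy calculation:

```python
import threading

results = []
lock = threading.Lock()

def work(items):
    for x in items:
        with lock:                 # serialize appends to the shared list
            results.append(x * 2)

t1 = threading.Thread(target=work, args=([1, 2, 3],))
t2 = threading.Thread(target=work, args=([4, 5, 6],))
t1.start()
t2.start()
t1.join()                          # wait for both workers to finish...
t2.join()
print(len(results))                # ...so all 6 results are present
```

With daemon threads and no join(), the main thread's print runs immediately and the interpreter may exit while the workers are still mid-list, which is exactly the "dropped tasks" the question observed.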

Why is my thread the MainThread?

I'm creating 5 threads for handling various tasks such as reading from sensors (Raspberry Pi), TCP connections and recently recording audio (pyAudio).
I am instantiating all threads in main() identically e.g.:
def main():
    global network_thread
    network_thread = threading.Thread(name="NET-CONN", target=network_thread_run, args=())
    network_thread.start()

if __name__ == '__main__':
    main()
I keep a global reference so I can kill the threads at shutdown with join().
Now, I have added thread #5:
global audio_thread
audio_thread = threading.Thread(name="AUDIO", target=audio_thread_run(), args=())
audio_thread.start()
...but my logging indicates it's running on the MainThread. I also double-checked inside the audio_thread_run() function and it is indeed running on MainThread:
if threading.current_thread() is threading.main_thread():
    logger.warning("Audio thread is the same as MainThread!")
Why is this thread running on the MainThread? Have I hit a limit on the Pi?
Let's have a look at the two places where you create threads, modified slightly so they'll fit on one line, and with white-space inserted so they line up:
net_thread = threading.Thread(name="NET", target=net_run , args=())
aud_thread = threading.Thread(name="AUD", target=aud_run(), args=())
# Hmmm, what's this suspicious-looking thing here? ---->^^
Enough fun :-) The problem is that you are actually calling the audio_thread_run() function directly from your main thread and presumably, if it ever returned, you would then try to use the result as a callable to start a thread.
If you actually got rid of the thread start stuff altogether, it would boil down to the much simpler:
audio_thread_run()
which will very much run that function from the context of the main thread.
What you need to do is remove the parentheses so it matches what you've done with the network threads:
audio_thread = threading.Thread(name="AUDIO", target=audio_thread_run, args=())
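The difference is easy to demonstrate: passing the function object defers the call to the new thread, while adding () calls it immediately in the caller. A self-contained sketch that records where each call actually runs:

```python
import threading

def where_am_i(out):
    is_main = threading.current_thread() is threading.main_thread()
    out.append("main" if is_main else "worker")

calls = []

# Bug: where_am_i(calls) runs right here, in the main thread;
# its return value (None) becomes the target, so the thread itself does nothing.
threading.Thread(name="BROKEN", target=where_am_i(calls)).start()

# Correct: pass the function object; the call happens in the new thread.
t = threading.Thread(name="WORKER", target=where_am_i, args=(calls,))
t.start()
t.join()

print(calls)  # ['main', 'worker']
```

Note that args=(...) is also the right place for the arguments; the target itself must stay uncalled.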

join() threads without holding the main thread in python

I have a code that calls to threads over a loop, something like this:
def SubmitData(data):
    # creating the relevant command to execute
    command = CreateCommand(data)
    subprocess.call(command)

def Main():
    while True:
        # generating some data
        data = GetData()
        MyThread = threading.Thread(target=SubmitData, args=(data,))
        MyThread.start()
Obviously, I don't use join() on the threads.
My question is: how do I join() those threads without making the main thread wait for them?
Do I even need to join() them? What will happen if I don't?
Some important points:
the while loop is supposed to run for a very long time (a couple of days)
the command itself is not very long (a few seconds)
I'm using threading for performance, so if someone has a better idea, I would like to try it out.
Popen() doesn't block. Unless CreateCommand() blocks, you could call SubmitData() in the main thread:
from subprocess import Popen

processes = []
while True:
    processes = [p for p in processes if p.poll() is None]  # keep only the running ones
    processes.append(Popen(CreateCommand(GetData())))       # start a new one
Do I even need to join() them? What will happen if I don't?
No. You don't need to join them. All non-daemonic threads are joined automatically when the main thread exits.
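Since the question is open to alternatives: a ThreadPoolExecutor caps the number of concurrent workers and does all the joining for you when its with-block exits. A sketch, where submit_data is an illustrative stand-in for the question's CreateCommand/subprocess work:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_data(data):
    # stand-in for building and running the real command
    return data * data

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(submit_data, i) for i in range(5)]
    results = [f.result() for f in futures]
print(sorted(results))  # [0, 1, 4, 9, 16]
```

For a loop that runs for days, the pool also prevents unbounded thread creation: work queues up when all workers are busy instead of spawning a new thread per iteration.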