Maya threading causing crash - multithreading

I've started writing an autosave script for the script editor (using Maya 2014), but it's really unstable and can crash if something happens at the same time as a save. I then realised crashes happen even when nothing is being saved, so I tried to find the actual problem, and ended up with barely any code left but still able to replicate it.
My idea for the code is to run a background thread that loops and backs up the scripts at an interval, but checks a value every second to make sure it hasn't been paused or cancelled (cancelled stops the loop).
I presume the problem is something to do with how background threads work in Maya, as it can crash when loading/closing the script editor window, or when switching tabs in the render settings window (at least with Mental Ray selected, since it seems to take longer loading tabs than the default renderer). I presume there are other ways to trigger it, but those are the ones that were really easy to find.
After getting it down to just time.sleep() in a while loop, it really doesn't make sense to me why it should cause a crash. I also tried a different sleep function that busy-waits with while time.time() < startTime + 1, to make sure it wasn't the time module, but it still caused crashes.
Here is the cut-down code if anyone wants to try it. Once you start the thread with AutoSave.start(), if you continuously load and close the script editor window, you should eventually get a runtime error (R6025 pure virtual function call). It may take multiple attempts, but it always seems to happen eventually.
import threading, time
import pymel.core as pm

class AutoSaveThread(object):
    def __init__(self):
        thread = threading.Thread(target=self.run, args=())
        thread.daemon = True
        thread.start()

    def run(self):
        while True:
            time.sleep(1)
            print "Open and close the script editor enough times and this will crash"

class AutoSave:
    @classmethod
    def start(cls):
        AutoSaveThread()
I have a dozen or so tabs open, so loading/closing takes a bit longer than if I had none; this could potentially increase the time window in which crashes can happen.
For the record, here is the bit of code built into Maya that always runs whenever the script editor window is closed. I thought the crash might have something to do with my modified version of it saving at the same time as this one, but it still crashes with nothing happening in the loop.
global proc syncExecuterBackupFiles(){
    global string $gCommandExecuter[];
    global string $executerBackupFileName;

    if(`optionVar -q saveActionsScriptEditor`) {
        // clear the script editor temp dir first before writing temp files
        string $scriptEditorTempDir = (`internalVar -userPrefDir` + "scriptEditorTemp/");
        string $tempFiles[] = `getFileList -folder $scriptEditorTempDir`;
        string $file;
        for ($file in $tempFiles) {
            sysFile -delete ($scriptEditorTempDir + $file);
        }

        // save all the executer control text to files
        int $i = 0;
        for($i = 0; $i < size($gCommandExecuter); $i++) {
            cmdScrollFieldExecuter -e -storeContents $executerBackupFileName $gCommandExecuter[$i];
        }
    }
}

Try wrapping your call to print in pymel.mayautils.executeDeferred or maya.utils.executeDeferred so that it is executed on the main UI thread.
if you continuously load and close the script editor window, you should eventually get a runtime error (that says R6025 pure virtual function call). It may take multiple attempts, but it always seems to eventually happen.
I was able to confirm this behavior on Maya 2012, and I doubt it's version specific.
My bet is that your test call to print is what is actually causing Maya to crash. Even though print is normally just a Python statement, Maya has some sort of hook into it that updates the Script Editor's output window (and potentially the Command Response bar) with the string you are printing, and both of those run on the main UI thread.
From the Autodesk Knowledge article "Python and threading":
Maya API and Maya Command architectures are not thread-safe. Maya commands throw an exception if they are called outside the main thread, and use of the OpenMaya API from threads other than the main one has unforeseen side effects.
By passing your print statement to pymel.mayautils.executeDeferred I've (at least thus far, who knows with Maya ;-) ) been unable to cause a crash.
import threading, time
import pymel.core as pm
import pymel.mayautils  # like maya.utils, for executeDeferred

# Set to False at any time to allow your threads to stop
keep_threads_alive = True

def wrapped_print():
    print "Opening and closing the script editor shouldn't make this crash\n"

class AutoSaveThread(object):
    def __init__(self):
        thread = threading.Thread(target=self.run)
        thread.start()

    def run(self):
        while keep_threads_alive:
            time.sleep(1)
            pymel.mayautils.executeDeferred(wrapped_print)
...
The only side effect of wrapping specifically a print statement is that it no longer echoes to the Command Response bar. If preserving that behavior is important to you, just use pymel.mel.mprint instead.
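For example, a minimal sketch of that variant, reusing the imports from the snippet above and assuming pm.mel.mprint behaves as described (the helper name is hypothetical):
def wrapped_mel_print():
    # Like wrapped_print above, but pm.mel.mprint also echoes to the
    # Command Response bar, not just the output window.
    pm.mel.mprint("Autosave thread is still running\n")

# Defer it to the main thread exactly as before:
# pymel.mayautils.executeDeferred(wrapped_mel_print)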

import threading
import time
import maya.utils as utils

run_timer = True
run_num = 0

def example(interval=10):
    global run_timer

    def your_function_goes_here():
        global run_num  # declared here, since this inner function does the increment
        print "hello", run_num
        run_num += 1

    while run_timer:
        time.sleep(interval)
        utils.executeDeferred(your_function_goes_here)

t = threading.Thread(None, target=example, args=(1,))
t.start()

# stop:
# run_timer = False

Related

How to force os.stat to re-read file stats for the same path

I have code that is architecturally close to what's posted below (unfortunately I can't post the full version because it's proprietary). I have a self-updating executable and I'm trying to test this feature. We assume that the full path to this file will be in A.some_path after executing input. My problem is that the assertion fails, because on the second call os.stat still returns the previous file stats (I suppose it assumes nothing could have changed, so re-reading is unnecessary). I have tried launching this manually, and the self-updating works completely fine: the file really is removed and recreated, and its stats change. Is there any guaranteed way to force os.stat to re-read the file stats for the same path, or an alternative way to make this work (other than recreating the A object)?
from pathlib import Path
import unittest
import os

class A:
    some_path = Path()

    def __init__(self, _some_path):
        self.some_path = Path(_some_path)

    def get_path(self):
        return self.some_path

class TestKit(unittest.TestCase):
    def setUp(self):
        pass

    def check_body(self, a):
        some_path = a.get_path()
        modification_time = os.stat(some_path).st_mtime
        # Launching self-updating executable
        self.assertTrue(modification_time < os.stat(some_path).st_mtime)

    def check(self):
        a = A(input('Enter the file path\n'))
        self.check_body(a)

def Tests():
    suite = unittest.TestSuite()
    suite.addTest(TestKit('check'))
    return suite

def main():
    tests_suite = Tests()
    unittest.TextTestRunner().run(tests_suite)

if __name__ == "__main__":
    main()
I have found the origin of the problem: I launched the self-update via os.system, which waits until the process is done. But first, during the self-update we launch several detached processes and should actually wait until all of them have ended; and second, even the signal that the process has ended doesn't mean the OS has completely released the file, so it looks like at the assertTrue we are not yet done with all our routines. For my task I simply used sleep, but a proper solution should analyze the existing processes in the system and wait for them to finish, or at least make several attempts with a wait between them.
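A minimal sketch of that retry idea (the helper name is hypothetical; the timeout and polling interval are arbitrary):
import os
import time

def wait_for_mtime_change(path, old_mtime, timeout=30.0, interval=0.5):
    # Poll os.stat until the file's mtime moves past old_mtime, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.stat(path).st_mtime > old_mtime:
            return True
        time.sleep(interval)
    return False

# In check_body this could replace the immediate assertion:
# self.assertTrue(wait_for_mtime_change(some_path, modification_time))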

Can the UI interact while the program loops through several tasks

I'm developing a program that handles many tasks in a sequence. I call this mode "AutoMode". But I need to let the user take control and start using the program manually, through menu options, and finally choose "AutoMode" again.
How can I interrupt "AutoMode" without stopping the program's execution? I guess input() will keep the program waiting forever for the user to make an input, and the rest of the code will stop executing?
Any suggestions?
I found a solution that solves this issue, but it seems to need several keyboard interactions.
In the example below this results in many keypresses before the while loop enters the if statement and takes the program back to the main menu (a callback-based alternative is sketched after the example).
import keyboard
import time

my_counter = 1
while True:
    my_counter += 1
    if keyboard.is_pressed('h'):
        print("Letter H was pushed")
        time.sleep(2)
        break
    print("Something is going on... ", my_counter)
    time.sleep(0.5)

print("Simulate user interupt and return to main menu...")

How to redirect the stdout of a multiprocessing.Process

I'm using Python 3.7.4, and I have created two functions: the first one executes a callable using multiprocessing.Process and the second one just prints "Hello World". Everything seems to work fine until I try redirecting the stdout; doing so prevents me from getting any printed values during the process execution. I have simplified the example as much as possible, and the code below is the smallest version that still shows the problem.
These are my functions:
import io
import multiprocessing
from contextlib import redirect_stdout

def call_function(func: callable):
    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=lambda: queue.put(func()))
    process.start()
    while True:
        if not queue.empty():
            return queue.get()

def print_hello_world():
    print("Hello World")
This works:
call_function(print_hello_world)
The previous code works and successfully prints "Hello World"
This does not work:
with redirect_stdout(io.StringIO()) as out:
    call_function(print_hello_world)
print(out.getvalue())
With the previous code I do not get anything printed in the console.
Any suggestion would be very much appreciated. I have been able to narrow the problem down to this point, and I think it is related to the process ending after the io.StringIO() is already closed, but I have no idea how to test my hypothesis, and even less how to implement a solution.
This is the workaround I found. It seems that if I use a file instead of a StringIO object I can get things to work (note the file has to be opened in "w+" mode and rewound before reading back what was captured):
with open("./tmp_stdout.txt", "w+") as tmp_stdout_file:
    with redirect_stdout(tmp_stdout_file):
        call_function(print_hello_world)
    tmp_stdout_file.seek(0)
    stdout_str = ""
    for line in tmp_stdout_file.readlines():
        stdout_str += line
    stdout_str = stdout_str.strip()
print(stdout_str)  # This variable will have the captured stdout of the process
Another thing that might be important to know is that the multiprocessing library buffers the stdout, meaning that the prints only get displayed after the function has executed or failed. To solve this you can force stdout to flush when needed within the function that is being called; in this case, that would be inside print_hello_world. (I actually had to do this for a daemon process that needed to be terminated if it ran for more than a specified time.)
sys.stdout.flush() # This will force the stdout to be printed
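For instance, a minimal sketch of that flush applied inside the function being called (same print_hello_world as above; in Python 3 the flush=True argument to print is equivalent):
import sys

def print_hello_world():
    print("Hello World")
    sys.stdout.flush()  # push any buffered output out before the child process exits
    # or, equivalently: print("Hello World", flush=True)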

Why do I get NSAutoreleasePool double release when using Python/Pyglet on OS X

I'm using Python 3.5 and Pyglet 1.2.4 on OS X 10.11.5. I am very new to this setup.
I am trying to see if I can use event handling to capture keystrokes (without echoing them to the screen) and return them to the main program one at a time via separate invocations of the pyglet.app.run method. In other words, I am trying to use Pyglet event handling as if it were a callable function for this purpose.
Below is my test program. It sets up the Pyglet event mechanism and then calls it four times. It works as desired but causes the system messages shown below.
import pyglet
from pyglet.window import key

event_loop = pyglet.app.EventLoop()
window = pyglet.window.Window(width=400, height=300, caption="TestWindow")

@window.event
def on_draw():
    window.clear()

@window.event
def on_key_press(symbol, modifiers):
    global key_pressed
    if symbol == key.A:
        key_pressed = "a"
    else:
        key_pressed = 'unknown'
    pyglet.app.exit()

# Main Program
pyglet.app.run()
print(key_pressed)
pyglet.app.run()
print(key_pressed)
pyglet.app.run()
print(key_pressed)
pyglet.app.run()
print(key_pressed)
print("Quitting NOW!")
Here is the output, with blank lines inserted for readability. The first message is different and appears even if I comment out the four calls to pyglet.app.run. The double release messages do not occur after every call to event handling and do not appear in a consistent manner from one test run to the next.
/Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5 "/Users/home/PycharmProjects/Test Event Handling/.idea/Test Event Handling 03B.py"
2016-07-28 16:49:59.401 Python[11419:4185158] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/8q/bhzsqtz900s742c17gkj_y740000gr/T/org.python.python.savedState
a
2016-07-28 16:50:02.841 Python[11419:4185158] *** -[NSAutoreleasePool drain]: This pool has already been drained, do not release it (double release).
2016-07-28 16:50:03.848 Python[11419:4185158] *** -[NSAutoreleasePool drain]: This pool has already been drained, do not release it (double release).
a
a
2016-07-28 16:50:04.632 Python[11419:4185158] *** -[NSAutoreleasePool drain]: This pool has already been drained, do not release it (double release).
a
Quitting NOW!
Process finished with exit code 0
Basic question: Why is this happening and what can I do about it?
Alternate question: Is there a better way to detect and get a user's keystrokes without echoing them to the screen? I will be using Python and Pyglet for graphics, so I was trying this using Pyglet's event handling.
Try playing with this simple example. It uses the built-in pyglet event handler to send the key pressed to a function that can then handle it. It shows that pyglet.app itself is the loop; you don't need to create another one.
#!/usr/bin/env python
import pyglet

class Win(pyglet.window.Window):
    def __init__(self):
        super(Win, self).__init__()

    def on_draw(self):
        self.clear()
        # display your output here....

    def on_key_press(self, symbol, modifiers):
        if symbol == pyglet.window.key.ESCAPE:
            exit(0)
        else:
            self.do_something(symbol)
        # etc....

    def do_something(self, symbol):
        print(symbol)
        # here you can test the input and then redraw

window = Win()
pyglet.app.run()

Refresh PyGTK window every minute?

I have a PyGTK app that is supposed to be a desktop monitor of some data source. I have it almost complete, but there is just this one problem of auto-refreshing.
In my program I want it to fetch data from the database and refresh the window every minute. Here's what I have for the refresh function (it refreshes once per second now for testing):
def refresh(self):
    cnxn = pyodbc.connect(r'Driver={SQL Server};Server=IL1WP0550DB;Database=Customer_Analytics;Trusted_Connection=yes;')
    cursor = cnxn.cursor()
    cursor.execute("SELECT * FROM TestGroup_test_group_result")
    data = []
    while 1:
        row = cursor.fetchone()
        if not row:
            break
        #data.append([row.TestGroupName, row.PWF, row.Expires, row.TestGroupID])
        data.append([str(datetime.now()), row.PWF, row.Expires, row.TestGroupID])
    cnxn.close()
    self.fill_text(data)
    threading.Timer(1, self.refresh).start()
Using this function I can update my window, but it only works when I drag my window around. When I put a series of print statements around, it looks like it is only executing the script when the window is moving.
Anyone know how to fix it?
Additional info: I realize that it only processes the refresh when there is a signal.
With GTK you need to make sure your widgets are only updated from the main thread. You can do this by using a timeout function with gtk.timeout_add() or gtk.timeout_add_seconds(). If you use python's threading functions the widgets are not updated from the main thread and so it does not work. For timeouts of greater than one second you should use gtk.timeout_add_seconds() as it avoids unnecessary wake ups where the timing isn't that critical.
Using gtk.timeout_add(1000, self.refresh) instead of threading.Timer(1, self.refresh).start() did the trick :)
I don't know why, though.
python3:
from gi.repository import GObject

def refresh_data(self):
    # [do stuff here]
    if self.running:
        GObject.timeout_add(1000, self.refresh_data)
Set self.running to False to stop.
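A minimal sketch of the same idea written against GLib's timeout API directly (assuming PyGObject; refresh_data and self.running are the names from the snippet above). Returning True from the callback keeps the timeout registered, so it does not need to be re-added on every call:
from gi.repository import GLib

def start_refresh(self):
    GLib.timeout_add_seconds(60, self.refresh_data)  # once per minute

def refresh_data(self):
    # ...fetch from the database and update the widgets here...
    return self.running  # True = keep the timeout, False = stop refreshing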
