I have a Tcl proc called run_expect that I use to run a basic Tcl Expect flow: spawn <device>, send <cmd>, expect <string>. Now I need to run this code from 2 threads running in parallel. Here is what I attempted:
When I wrote a multi-threaded proc that simply calls run_expect, I got an "unknown command run_expect" error from the thread's context/scope.
I then took the implementation of run_expect and put it in the thread itself, but hit another issue: the thread doesn't seem to see the Expect library either, and complains with: "invalid command name "spawn"".
Next I tried doing package require Expect from the thread itself, but got a "Segmentation fault: 11" error.
I also tried updating the thread's ::auto_path variable to match the main interpreter's, but that didn't make the package require work either (::thread::send -async [lindex $tids 0] [list set ::auto_path $::auto_path]).
Is there any way to call an already existing proc from a thread?
If not, is moving the code into the thread the right solution? And how can I get the thread to know about the packages / commands that are loaded?
Each thread in Tcl is almost totally isolated from all other threads. They don't share any commands (including procedures) or variables. The easiest way to manage things in multiple threads is to put the code for each thread into its own Tcl file and to tell the worker threads to source that file as part of starting up.
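A minimal sketch of that pattern with the Thread package (worker.tcl and do_work are hypothetical names; note the caveat below about Expect itself):

    package require Thread

    # Each worker thread starts with a fresh, empty interpreter, so it must
    # load its own packages and define its own procs -- here by sourcing a
    # (hypothetical) worker.tcl that contains the shared code.
    set tid [thread::create {
        source worker.tcl
        thread::wait            ;# keep the thread alive, waiting for work
    }]

    # Ask the worker to run a proc that exists in *its* interpreter.
    thread::send -async $tid {do_work "some argument"} result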
However…
The Expect package is not thread safe; you've provided clear evidence of that. I don't know the details of why this is so. This means that if you want multithreaded expecting, your easiest approach is to use several processes instead. Tcl's good at managing subprocesses via asynchronous pipelines, and when everything is designed to work that way, you don't need to use Expect in the parent to manage things. You could also use the comm package to do the communications with the subprocesses.
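For example, a rough sketch of driving a helper process through an asynchronous pipeline (expect_worker.tcl, on_readable and the device path are hypothetical; the worker does the actual spawn/send/expect in its own process and writes results to stdout):

    set chan [open |[list tclsh expect_worker.tcl /dev/ttyUSB0] r+]
    fconfigure $chan -blocking 0 -buffering line
    fileevent $chan readable [list on_readable $chan]

    proc on_readable {chan} {
        if {[gets $chan line] >= 0} {
            puts "worker: $line"
        } elseif {[eof $chan]} {
            close $chan
        }
    }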
Related
I have code running in Python 3.7.4 which forks off multiple processes. I believe I'm hitting a known issue (issue6721: https://github.com/python/cpython/issues/50970). I set up the child process to send "progress reports" through a pipe to the parent process and noticed that sometimes a log statement doesn't get printed and that the code gets stuck in a deadlock.
After reading issue6721, I still don't understand why the parent might hold a logging Handler lock after a log statement has finished executing (i.e. the line that logs has executed and execution has moved on to the next line of code). I totally get that, in the context of C++, the compiler might rearrange instructions; I don't fully understand this in the context of Python. In C++ I can use barrier instructions to stop the compiler from moving instructions past a point. Is there something similar that can be done in Python to avoid having a held lock copied into the child process?
I have seen solutions using "atfork", a library that appears to be unmaintained (so I can't really use it).
Does anyone know a reliable and standard solution to this problem?
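For illustration only, a minimal sketch (not from this thread) of one mitigation that is sometimes used: register fork hooks with os.register_at_fork (Python 3.7+) so the forking thread owns the handler lock at the moment of fork() and releases it on both sides afterwards; the child then never inherits a lock held by some other thread. The logger/handler names here are placeholders.

    import logging
    import os

    logger = logging.getLogger("worker")
    handler = logging.StreamHandler()
    logger.addHandler(handler)

    def acquire_logging_locks():
        # Take the handler lock so no other thread holds it when fork() happens.
        # Real code would do this for every handler in use.
        handler.acquire()

    def release_logging_locks():
        handler.release()

    # Python 3.7+: these hooks run around every fork() made by this process.
    os.register_at_fork(
        before=acquire_logging_locks,
        after_in_parent=release_logging_locks,
        after_in_child=release_logging_locks,
    )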
I am trying to check response messages in a Perl program that takes requests through the Amazon API and returns responses. How can I run a parallel fork as a single thread in Perl? I'm using the LWP::UserAgent module and I want to debug the HTTP requests.
As a word of warning - threads and forks are different things in Perl. Very different.
However, the long and short of it is: you can't, at least not trivially - a fork is a separate process. A fork actually happens when you run -any- external command in Perl; it's just that by default Perl sits and waits for that command to finish and return its output.
However, if you've got access to the code, you can amend it to run single-threaded - sometimes that's as simple as reducing the parallelism with a config parameter. (Quite often, in fact: debugging parallel code is a much more complicated task than debugging sequential code, so getting it working before running it in parallel is really important.)
You might be able to embed a waitpid into your primary code so you've only got one thing running at once. Without a code example though, it's impossible to say for sure.
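A rough sketch of that waitpid idea (the request list and the child's body are placeholders):

    use strict;
    use warnings;

    my @requests = ('req1', 'req2', 'req3');    # placeholder work items

    for my $req (@requests) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ($pid == 0) {
            # Child: perform the single request here (e.g. via LWP::UserAgent),
            # then exit with a status the parent can inspect.
            exit 0;
        }

        waitpid($pid, 0);    # parent blocks until this child has finished
    }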
When I run QWebFrame::evaluateJavaScript(scriptSource) from the main thread, everything seems to work just fine. But when I try to run it from a different thread I get "SyntaxError: Parse error", even when I'm trying to run trivial code like 1+1;.
Can somebody explain why this occurs and whether this is the expected behavior?
Is it possible to use QWebKit in a thread other than the main thread?
P.S.: I am running Qt 4.8.
I don't know much about QWebFrame or Qt, but the following should hold true.
In the simplest terms, it's a GUI application, and all the actions have to be done in the main thread. If you have multiple threads, you will have to find a way to channel the call to the main GUI loop thread - the main thread, in your case.
One of the main reasons is thread-local storage that the application could be using internally. If you execute the function from another thread, that thread-local storage may not be set up.
For GTK, most (all?) of the WebKit calls have to be channelled through the GTK idle hook so that they get executed in the proper thread. There should be something equivalent in Qt.
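In Qt terms, one way to channel the call is to give the GUI thread an object with an invokable slot and queue calls to it from the worker thread. A rough sketch (class and method names are illustrative), assuming the QWebFrame lives in the GUI thread:

    #include <QObject>
    #include <QString>
    #include <QWebFrame>

    class ScriptRunner : public QObject {
        Q_OBJECT
    public:
        explicit ScriptRunner(QWebFrame *frame, QObject *parent = 0)
            : QObject(parent), m_frame(frame) {}

    public slots:
        void runScript(const QString &source) {
            // Runs in the thread that owns this object (the GUI thread).
            m_frame->evaluateJavaScript(source);
        }

    private:
        QWebFrame *m_frame;
    };

    // From the worker thread, queue the call onto the GUI thread:
    //   QMetaObject::invokeMethod(runner, "runScript", Qt::QueuedConnection,
    //                             Q_ARG(QString, "1+1;"));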
In Linux, to get a backtrace you can use the backtrace() library call, but it only returns the backtrace of the current thread. Is there any way to get a backtrace of some other thread, assuming I know its TID (or pthread_t) and can guarantee that it is sleeping?
It seems that the libunwind (http://www.nongnu.org/libunwind/) project can help. The problem is that it is not supported on CentOS, so I prefer not to use it.
Any other ideas?
Thanks.
I implemented that myself here.
Initially, I wanted to implement something similar to what is suggested here, i.e. somehow getting the top frame pointer of the thread and unwinding it manually (the linked source is derived from Apple's backtrace implementation, so it might be Apple-specific, but the idea is generic).
However, to do that safely (and the source above is not safe, and may even be broken anyway), you must suspend the thread while you access its stack. I searched around for different ways to suspend a thread and found this, this and this. Basically, there is no really good way. The common hack, also used by the HotSpot Java VM, is to use signals, sending a custom signal to your thread via pthread_kill.
So, as I would need such a signal hack anyway, I can make it a bit simpler and just call backtrace inside the signal handler, which is executed in the target thread (as also suggested here by sandeep). This is basically what my implementation does.
If you are also interested in printing the backtrace, i.e. getting some useful debugging information (function name, source filename, source line number, ...), read here about an extended backtrace_symbols based on libbfd, or just see the source here.
Signal handling together with backtrace can solve your problem.
If you have the TID of the thread, you can raise a signal for that thread and call backtrace in the handler. Since the handler executes in that particular thread, the backtrace it produces is exactly the output you need.
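A minimal sketch of that signal-based approach on Linux/glibc (SIGUSR1 and the function names are arbitrary choices; note that backtrace() and backtrace_symbols_fd() are not formally async-signal-safe, which is part of why this is a hack):

    #define _GNU_SOURCE
    #include <execinfo.h>
    #include <pthread.h>
    #include <signal.h>
    #include <unistd.h>

    static void bt_handler(int sig)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);  /* runs on the target thread's stack */
        (void)sig;
    }

    void dump_thread_backtrace(pthread_t target)
    {
        struct sigaction sa;
        sa.sa_handler = bt_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        pthread_kill(target, SIGUSR1);  /* handler executes in the target thread */
        usleep(100000);                 /* crude wait for the handler to finish */
    }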
gdb provides these facilities for debugging multi-thread programs:
automatic notification of new threads
‘thread thread-id’, a command to switch among threads
‘info threads’, a command to inquire about existing threads
‘thread apply [thread-id-list] [all] args’, a command to apply a command to a list of threads
thread-specific breakpoints
‘set print thread-events’, which controls printing of messages on thread start and exit.
‘set libthread-db-search-path path’, which lets the user specify which libthread_db to use if the default choice isn't compatible with the program.
So just go to the required thread in GDB with the command 'thread thread-id'.
Then run 'bt' in that thread's context to print its backtrace.
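For example (thread number 2 is just an illustration):

    (gdb) info threads
    (gdb) thread 2
    (gdb) bt
    (gdb) thread apply all bt

The last command, 'thread apply all bt', prints a backtrace for every thread at once.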
I have an existing utility application, let's call it util.exe. It's a command-line tool which takes inputs from the command line and creates a file on disk - let's say an image file.
I want to use this within another application by running util.exe. However, it needs to be synchronous, so that the file is known to exist when processing continues.
e.g. (pseudocode):
    bool CreateImageFile(params)
    {
        // ret is util.exe's program exit code
        int ret = runprocess("util.exe", params);
        return ret == 0;
    }
Is there a single Win32 API call that will run the process and wait until it ends? I looked at CreateProcess, but it returns as soon as the process has started; I looked at ShellExecute, but that seems a bit ugly even if it were synchronous.
There's no single API, but this is actually a more interesting general question for Win32 apps. You can use CreateProcess or ShellExecuteEx and then WaitForSingleObject on the process handle. GetExitCodeProcess at that point will give you the program's exit code. See here for simple sample code.
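A minimal sketch of that sequence (error handling abbreviated; the function name and command line are illustrative):

    #include <windows.h>
    #include <string>

    bool RunAndWait(const std::wstring &commandLine, DWORD &exitCode)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};

        // CreateProcess may modify the command-line buffer, so pass a writable copy.
        std::wstring cmd = commandLine;
        if (!CreateProcessW(NULL, &cmd[0], NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            return false;

        WaitForSingleObject(pi.hProcess, INFINITE);   // block until the process exits
        GetExitCodeProcess(pi.hProcess, &exitCode);

        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
        return true;
    }

    // Usage:
    //   DWORD code;
    //   if (RunAndWait(L"util.exe input.txt out.png", code) && code == 0) { ... }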
However this blocks your main thread completely, and can give you serious deadlock problems under some Win32 messaging scenarios. Let's say the spawned exe does a broadcast sendmessage. It can't proceed until all windows have processed the message - but you can't proceed because you're blocked waiting for it. Deadlock. Since you're using purely command line programs this issue probably doesn't apply to you though. Do you care if a command line program hangs for a while?
The best general solution for normal apps is probably to split a process launch-and-wait off onto a thread and post a message back to your main window when the thread runs to completion. When you receive the message, you know it is safe to continue, and there are no deadlock issues.
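A rough sketch of that approach (WM_APP_UTIL_DONE and the thread function are hypothetical names; RunAndWait is the helper sketched above):

    #define WM_APP_UTIL_DONE (WM_APP + 1)

    DWORD WINAPI LaunchAndNotify(LPVOID param)
    {
        HWND hwndMain = (HWND)param;
        DWORD exitCode = 1;
        RunAndWait(L"util.exe input.txt out.png", exitCode);
        // Tell the main window the file is ready (or the tool failed).
        PostMessage(hwndMain, WM_APP_UTIL_DONE, (WPARAM)exitCode, 0);
        return 0;
    }

    // Started from the main thread with:
    //   CreateThread(NULL, 0, LaunchAndNotify, (LPVOID)hwndMain, 0, NULL);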
A process handle is a waitable object, AFAIK. This is exactly what you need.
However, I'd recommend against blocking like that. Starting a process on Windows may be slow, and waiting for it will block your UI. Consider a PeekMessage loop with 50 ms wait timeouts when doing this from a Windows application.
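A rough sketch of that loop, assuming hProcess is the handle returned by CreateProcess:

    MSG msg;
    // Wait in 50 ms slices; pump pending messages in between so the UI stays responsive.
    while (WaitForSingleObject(hProcess, 50) == WAIT_TIMEOUT)
    {
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }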