Wait and waitln only work once even when using flushrecv - teraterm

I am writing a macro for Tera Term to test a microcontroller that is connected to a COM port. I want the macro to pause and wait for a user prompt, but when I use a wait command it works for the first prompt and does not work for the second.
I have tried to use different keys, CR, F1, even alpha keys to trigger the prompt, but it won't wait at all.
clearscreen 0
dispstr 'INSTRUCTIONS_1'
dispstr #13
flushrecv
dispstr 'INSTRUCTIONS_2'
wait #13
flushrecv
sendln 'COMMAND_1'
mpause 250
sendln 'COMMAND_2'
mpause 250
sendln 'COMMAND_3'
mpause 250
sendln 'COMMAND_4'
dispstr 'INSTRUCTIONS_3'
wait #13
flushrecv
sendln 'COMMAND_5'
sendln 'COMMAND_6'
dispstr 'INSTRUCTIONS_4'
wait #13
I expect the macro to display instructions for whoever does the testing in the future, and then pause until they hit Enter once they have performed the instructions. The commands are issued to the microcontroller, and the microcontroller does its thing while the macro waits for the user to connect things like an oscilloscope or a continuity checker. The mpause commands are there to give the microcontroller a bit of time to write to memory and execute the command.
What actually happens is the first two sets of instructions show up and wait for a carriage return. Then the rest of the macro runs without pausing.
EDIT: I found a workaround using yesnobox and messagebox instead of waiting for a keystroke.

As far as I understand it, the wait command only matches what arrives on the terminal screen. So if the instructions have a carriage return in them, the wait command can fire immediately because it has already received that carriage return, depending on how quickly or how synchronously it is processed relative to the dispstr command.
The yesnobox would be a good workaround, but since people will be reading the instructions anyway, I'd be curious whether you couldn't just add a short pause (2-3 seconds) after your dispstr commands and before your wait commands.
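For reference, a minimal sketch of what the yesnobox/messagebox workaround can look like (the prompt text, titles, and COMMAND_1 are placeholders, not the original macro): each instruction step becomes a dialog, so the macro no longer depends on characters arriving over the serial line at all.
clearscreen 0
yesnobox 'Connect the oscilloscope, then press Yes to continue.' 'Step 1'
if result=0 end                ; user pressed No: abort the macro
sendln 'COMMAND_1'
mpause 250                     ; give the microcontroller time to act
messagebox 'Testing finished. Press OK to close.' 'Done'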

Related

How can a forked node process send data to a terminal or to the parent on exit?

I am dealing with an odd problem which I couldn't find the answer to online, nor through a lot of trial and error.
In a multi-process cluster, forked worker processes can run arbitrarily long commands, but the parent process listens for keepalive messages sent by the workers and kills workers that are stuck for longer than X seconds.
Worker processes can asynchronously communicate with the rest of the world (using http, or process.send ipc communication), but on exit, I'd like to be able to communicate some things (typically, queued logs or error details).
Most online documentation for process.on('exit', handler) shows console.log being used in the handler; however, it seems that forked processes don't inherit a normal stdout, and console.log isn't a direct tty there, it's a stream (the IPC stream, I presume?).
Because of this, the process exit handler doesn't let me use console.log to log extra lines (or if it does, I'm not sure where these lines end up).
I tried various combinations of fork options (silent/not silent, non-default stdio options like inherit), using fs.write to write to a tty or to a real file, and using process.send, but in no case was I able to get the on-exit handler to log anywhere visible.
How can I get the forked process to successfully log on exit?
A small additional point: all this testing is on Unix-like systems (macOS, Amazon Linux, ...), and both parent and child processes are started with --trace-sigint so that we can get at least the top 10 stack frames of the interrupted process on exit. These frames do make it out to the terminal successfully.
This was a bit of a misunderstanding about how SIGINT is handled, and I believe that it's impossible to accomplish what I want here, but I'd love to hear if someone else found a solution.
Node has its own SIGINT handler which is "more powerful" than custom SIGINT handlers - typically it interrupts infinite loops, which is extremely useful in the case where code is blocked by long-running operations.
Node allows one-upping its own SIGINT debugging capabilities by passing a --trace-sigint flag, which captures the last frames of execution.
If I understood this correctly, there are 4 cases with different behavior:
1. No custom handler, event loop blocked: the process is terminated without any further code execution (and --trace-sigint can give a few stack traces).
2. No custom handler, event loop not blocked: normal exit flow, the process.on('exit') event fires.
3. Custom handler, event loop blocked: nothing happens until the event loop unblocks (if it does), then normal exit flow.
4. Custom handler, event loop not blocked: normal exit flow.
This happens regardless of the way the process is started, and it's not a problem about pipes or exit events - in the case where the event loop is blocked and the native signal handler is in place, the process terminates without any further execution.
It would seem like there is no way to both get a forced process exit during a blocked event loop, AND still get node code to run on the same process after the native interruption to recover more information.
Given this, I believe the best way to recover information from the stuck process is to stream data out of it before it freezes (sounds obvious, but brings a lot of extra considerations in production environments).
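As a sanity check of cases 1 and 3, a throwaway script along these lines (hypothetical, not from the original post) makes the difference visible: run it as node --trace-sigint demo.js and press Ctrl+C during the busy loop, then repeat with the handler argument.
// demo.js -- block the event loop for ~30s, optionally with a custom SIGINT handler
if (process.argv[2] === 'handler') {
  process.on('SIGINT', () => console.log('custom SIGINT handler ran'));
}

process.on('exit', () => {
  // case 1: never prints (the native handler terminates the process)
  // case 3: prints only after the loop finishes and the handler has run
  console.log('exit handler ran');
});

const start = Date.now();
while (Date.now() - start < 30000) { /* event loop is blocked here */ }
console.log('busy loop finished');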

Using the Debugger Effectively in Multithreaded Code (SBCL) [duplicate]

One of my threads entered the debugger. I want to switch to it, inspect the stack trace, choose a restart, etc. How can I do that?
I am using bordeaux-threads.
If you use SLIME, it should work automatically. Otherwise it depends on your implementation. In SBCL, (SB-THREAD:RELEASE-FOREGROUND) should let the other thread use the terminal.
SBCL manual, 12.8 Sessions/Debugging
Within a single session, threads arbitrate between themselves for the user's attention. A thread may be in one of three notional states: foreground, background, or stopped. When a background process attempts to print a repl prompt or to enter the debugger, it will stop and print a message saying that it has stopped. The user at his leisure may switch to that thread to find out what it needs. If a background thread enters the debugger, selecting any restart will put it back into the background before it resumes. Arbitration for the input stream is managed by calls to sb-thread:get-foreground (which may block) and sb-thread:release-foreground.

print to linux prompt from background process

I was playing on my virtual machine with some exploit-learning tricks when I came across this script that printed two lines and then exited to the prompt, and after 10 seconds it printed into my prompt like this:
[!] Wait for the "Done" message (even if you'll get the prompt back).
user@ubuntu:~/tests$ [+] Done! Now run ./exp
How is this possible? Is clone() involved or something like that?
The program informs you that you should wait for the "Done" message even if you get the prompt back earlier.
This is because some other process is running, detached, in the background.
The process you started has finished, which is why you are getting the prompt back. But it spawned another (background) process, e.g. via fork() or some other mechanism. By the time you get your prompt back, that other process is still running, and you are told to wait for it to finish.
When it does, it prints "Done" to the standard output (stdout) it inherited from its parent -- which is (by default) the same terminal you used to start the initial process.
Not the smoothest design -- the main process could wait for the spawned process to finish before giving you the prompt back, since it is apparently important that the other process finishes before you carry on. Perhaps the author didn't know how to do that. ;-)
The process responsible for printing the messages is running in the background (a background process).
In general, running a process in the background detaches only stdin; stdout and stderr are still linked to the parent shell, so all of its output is still visible on the terminal.
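A minimal sketch of that behaviour, here in Python for brevity (the messages are copied from the question; the original exploit was presumably not Python): the parent exits immediately, so the shell prints its prompt, while the forked child keeps the inherited stdout and writes "Done" onto it ten seconds later.
import os, sys, time

print('[!] Wait for the "Done" message (even if you\'ll get the prompt back).')

pid = os.fork()
if pid == 0:                       # child: keeps running after the parent exits
    time.sleep(10)                 # stand-in for the real background work
    print("[+] Done! Now run ./exp")
    sys.exit(0)
# parent: falls off the end right away, so the shell prompt comes back before "Done"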

Having intercommunicating asynchronous processes in wxPython

I am working on a big project that makes performance a high priority. I have a little bit of experience using wxPython to create windows and dialog boxes for software, but I have no experience in getting processes to work in parallel during the course of a single program.
So basically, what I want to accomplish is the following:
I want one main class that controls the high level program. It sets up a configuration either from a config file or from user input. This much I have accomplished on my own.
I need PROCESS #1 to read in a file and a list of commands, execute the commands, and then pass the modified file to PROCESS #2 (this requires that PROCESS #2 is ready to accept new input.) Once the file is passed, PROCESS #1 would begin work on the next set of inputs and wait for PROCESS #2 to finish before the cycle repeats.
PROCESS #2 takes input from PROCESS #1 and writes output to a log file. Once the output is complete, it waits for the next set of output from PROCESS #1.
I know how to use wxTimers and the events associated with them, but what I have found is that a timer event will not fire if the program is otherwise occupied (like in the middle of a method).
I have seen threads about "threading" and "Pool", but the terminology tends to go over my head, and I haven't gotten any of that sort of stuff to work.
If anybody can point me in the right direction, I would be greatly appreciative.
If you use threads, then I think this would be fairly easy to do. Here's what I would suggest:
Create a button (or some other widget) to execute process #1 in a thread. The thread itself will run BOTH processes. Here's some pseudo-code that might help:
# this is in your thread code:
result = self.call_process_1(args)
self.call_process_2(result)
This will allow you to start another process #1/#2 pair with a new set of commands every time you press the button. Since the two processes are encapsulated in the thread, the GUI doesn't have to wait for process #2 to finish. You will probably need to write to separate logs for the logs to make sense, but you can label them with a timestamp and a thread number or a uuid.
Depending on how many of these processes you need to run, you might need to look into setting up a cluster that's driven with Celery or some such. But I think this is a good starting place.
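Expanding that pseudo-code into a runnable sketch (names like call_process_1 and the "commands.txt" argument are placeholders, not the asker's real code): each button press starts one worker thread that runs process #1 and then process #2, so the wx event loop (and its timers) never blocks.
import threading
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Pipeline")
        button = wx.Button(self, label="Run")
        button.Bind(wx.EVT_BUTTON, self.on_run)

    def on_run(self, event):
        # daemon thread so it won't keep the app alive on exit
        threading.Thread(target=self.worker, daemon=True).start()

    def worker(self):
        result = self.call_process_1("commands.txt")  # read file, run commands
        self.call_process_2(result)                   # write the log output

    def call_process_1(self, path):
        return f"results for {path}"                  # stand-in for the real work

    def call_process_2(self, result):
        print(result)                                 # stand-in for the logging

if __name__ == "__main__":
    app = wx.App()
    MainFrame().Show()
    app.MainLoop()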

Perl: How to add an interrupt handler so one can control a code executed by mpirun via system()?

We use a cluster with Perceus (Warewulf) software to do some computing. This software package has a wwmpirun program (a Perl script) that prepares a hostfile and executes mpirun:
# ...
system("$mpirun -hostfile $tmp_hostfile -np $mpirun_np #ARGV");
# ...
We use this script to run a math program (CODE) on several nodes. CODE is normally supposed to be stopped by Ctrl+C, which gives a short menu with options: status, stop, and halt. However, when running under MPI, pressing Ctrl+C kills CODE outright, with loss of data.
The developers of CODE suggest a workaround - the program can be stopped by creating a file named stop%s, where %s is the name of the task-file being executed by CODE. This lets us stop it, but we cannot get the status of the calculation. Sometimes it takes a really long time, and getting this function back would be much appreciated.
What do you think - is the problem in CODE or in mpirun?
Can one find a way to communicate with CODE when it is executed by mpirun?
UPDATE1
In a single (non-MPI) run, one gets the status of the calculation by pressing Ctrl+C and choosing the status option in the provided menu by entering s. CODE prints the status information to STDOUT and continues the calculation.
"we cannot get status of calculation" - what does that mean? do you expect to get the status somehow but are not? or is the software not designed to give you status?
Your system call doesn't redirect standard error/out anywhere; is that where the status is supposed to appear? In that case, catch it by opening a pipe, or by redirecting to a log and having the wrapper read the log.
Also, you're not processing the return code by evaluating the return value of the system call - that may be another way the program communicates.
Your Ctrl+C problem might be because Ctrl+C is caught by the Perl wrapper, which dies, instead of by CODE, which has a nice Ctrl+C interrupt handler. The solution might be to add an interrupt handler to the wrapper script - see Perl Cookbook Recipe 16.18 for $SIG{INT}, or http://www.wellho.net/resources/ex.php4?item=p216/sigint ; you may want to have the Perl wrapper catch Ctrl+C and send the INT signal to the CODE it launched.
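A minimal sketch of that idea (untested, and reusing the variable names from the wwmpirun snippet above): replace the system() call with fork/exec so the wrapper keeps control, install a $SIG{INT} handler, and forward Ctrl+C to the MPI job instead of letting the wrapper die.
# sketch: forward Ctrl+C from the Perl wrapper to the launched MPI job
my $pid = fork();
if ($pid == 0) {
    exec("$mpirun -hostfile $tmp_hostfile -np $mpirun_np @ARGV")
        or die "exec failed: $!";
}
$SIG{INT} = sub { kill 'INT', $pid };   # forward Ctrl+C to mpirun/CODE
waitpid($pid, 0);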
