EASY68K Trap task #12 Echo - 68000

Just a simple question. When looking up Trap Tasks in the help file, Trap task #12 is different in the sense that it gives you an option to turn off the keyboard "echo". But it doesn't explain what an "echo" is.
My questions are:
What is a keyboard echo?
What application would this trap task be used for?

Keyboard echo means that when the user provides input via the keyboard, the characters they type are displayed as they are typed (they are echoed).
You would use this trap task when you don't want the input to be echoed - the classic example is a password prompt, where the typed characters should not appear on screen. If you have no such use case in your program, you simply don't need this trap task.
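The same idea shows up outside EASY68K wherever passwords are read. A minimal bash sketch of echo control (stty is the terminal-level analogue; EASY68K's own mechanism is the trap task itself):
printf "Password: "
stty -echo          # turn keyboard echo off, like trap task #12
read password
stty echo           # turn echo back on
printf "\n"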

Related

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that pretty much a second after such a command I get a note that my process has been stopped. If I do
$ jobs
I get the list with my example process and a little note "Stopped". Is it really stopped and not working at all in the background? How exactly does it work? I'm getting mixed info from the Internet.
In Linux and other Unix systems, a job that is running in the background but still has its stdin associated with its controlling terminal (i.e. the window it was run in) will be sent a SIGTTIN signal when it tries to read from that terminal. By default this stops the program completely, pending the user bringing it to the foreground (fg %job or similar) so that input can actually be given to it. To avoid the program being paused in this way, you can do one of the following:
Make sure the program's stdin channel is no longer associated with the terminal, by redirecting it either to a file with appropriate contents for the program to read, or to /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which removes the association with the program's stdin. This will cause a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup"), which normally terminates it, but that can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, the third option is probably the best: it prevents both of the above signals (as well as a couple of others) and saves the output for you to inspect for any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
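You can see the stop happen with a quick experiment (assumes an interactive bash session with job control):
cat &        # cat immediately tries to read the terminal from the background
jobs         # reports: [1]+  Stopped   cat  (it was sent SIGTTIN)
fg %1        # bring it to the foreground; typing now reaches cat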
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg %job_number.
From what I can gather:
Background jobs are blocked from reading the user's terminal. When one tries to do so, it is suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue, but because they want to check whether the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
Just enter fg to bring the job to the foreground; that will resolve the error you would otherwise get when you try to exit.

Controlling multiple background process from a shell on an embedded Linux

Currently I am working with an embedded system running Linux. I need to run multiple applications at the same time, and I would like them to run through one script. A colleague had already implemented this using a wrapper script and return codes.
wrapperScript.sh $command > output_log.txt &
wrapperScript.sh $command2 > output_log2.txt &
But a problem arises when exiting the applications. Normally, all the applications on the embedded system require the user to press q to exit. But when the wrapper script gets a kill signal or a user signal, rather than doing that, it just kills the process. This is dangerous because the wrapper script assumes the application has the proper facilities to deal with the kill signal (which is not always the case, and leads to memory leaks and unwanted socket connections). I have looked into automation programs such as expect, but since I am using an embedded board, I am unable to get expect for it. Is there a way, in the bash shell or in embedded C, to deal with multiple processes and have one single program automatically send the q keystroke to them?
I would also like to be able to keep logs of each application's output.
EDIT:
Solution:
Okay, I found the answer to the problem: Expect is the way to go about it in this situation. There is a serious limitation in that it might be slower, but the trade-off is not bad here. I decided to use the Expect scripting language to implement the solution. The trade-offs:
Pros:
* Precise control over the embedded application
* Can make the process interactive to the user
* Can deal with multiple processes
Cons:
* Performance is slow
Use a pipe
Make the command read input from a named pipe. You'll then be able to send it commands from anywhere.
mkfifo command1.ctrl
{ "$command1" <command1.ctrl >command1.log 2>&1;
rm command1.ctrl; } &
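Then, whenever you want the program to quit, write the keystroke into the pipe from any shell (drop the \n if the program expects a bare q):
printf 'q\n' > command1.ctrl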
Use screen
Run your applications inside the Screen program. You can run all your commands in separate windows in a single instance of screen (you'll save a little memory that way). You can specify the commands to run from a Screen configuration file:
sessionname mycommands
screen -t command1 command1
screen -t command2 command2
To terminate a program, use
screen -S mycommands -p 1 -X stuff 'q
'
where 1 is the number of the window to send the input to (each screen clause in the configuration file starts a window). The text after stuff is input to send to the program; note the presence of a newline after the q (some applications may require a carriage return instead; you can get one with stuff "q$(printf \\015)" if your shell isn't too feature-starved). If your command expects a q with no newline at all, just stuff q.
For logging, you can use Screen's logging feature, or redirect the output to a file as before.
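Putting it together, a possible sketch (assuming the configuration above is saved as mycommands.screenrc):
screen -d -m -c mycommands.screenrc                    # start both windows, detached
screen -S mycommands -p 1 -X stuff "q$(printf '\r')"   # send q + carriage return to window 1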

How to find out when process exits in Linux?

I can't find a good way to find out when a process exits in Linux. Does anyone have a solution for that?
One approach I can think of is checking the process list periodically, but that is not instant and is pretty expensive (you have to loop over all processes each time).
Is there an interface for doing that on Linux? Something like waitpid, except something that can be used from unrelated processes?
Thanks,
Boda Cydo
You cannot wait for an unrelated process, just children.
A simpler polling method than checking the process list: if you have permission, you can use the kill(2) system call to "send" signal 0.
From the kill(2) man page:
If sig is 0, then no signal is sent, but error checking is still performed; this can be used to check for the existence of a process ID or process group ID.
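In shell form that becomes a cheap polling loop (assumes $pid holds the PID in question and that you have permission to signal it):
while kill -0 "$pid" 2>/dev/null; do    # signal 0: existence check, nothing sent
    sleep 1
done
echo "process $pid has exited"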
Perhaps you can start the program together with another program, the second one doing whatever it is you want to do when the first program stops, such as sending a notification.
Consider this very simple example:
sleep 10; echo "finished"
sleep 10 is the first process, echo "finished" the second one (though echo is usually a shell builtin, I hope you get the point)
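In a shell script the same pattern usually goes through wait, which also illustrates the first answer's point that you can only wait for your own children (myprogram is a hypothetical placeholder):
./myprogram &        # must be a child of this shell
wait $!              # blocks until that specific PID exits
echo "finished - send a notification here"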
Another option is to have the process open an IPC object such as a Unix domain socket; your watchdog process can detect when the process quits, since the socket will be closed immediately.
If you know the PID of the process in question, you can check if /proc/$PID exists. That's a relatively cheap stat() call.
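For example (with the caveat that PIDs can be reused, so treat this as a hint rather than a guarantee):
if [ -d "/proc/$pid" ]; then echo "still running"; else echo "exited"; fi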

Perl: How to add an interrupt handler so one can control a code executed by mpirun via system()?

We use a cluster with the Perceus (warewulf) software to do some computing. This software package has a wwmpirun program (a Perl script) to prepare a hostfile and execute mpirun:
# ...
system("$mpirun -hostfile $tmp_hostfile -np $mpirun_np @ARGV");
# ...
We use this script to run a math program (CODE) on several nodes. CODE is normally stopped by pressing Ctrl+C, which brings up a short menu with options: status, stop, and halt. When running under MPI, however, pressing Ctrl+C kills CODE outright, with loss of data.
The developers of CODE suggest a workaround - the program can be stopped by creating a file named stop%s, where %s is the name of the task-file CODE is executing. This lets us stop the program, but we cannot get the status of the calculation. It sometimes takes a really long time, and getting this function back would be much appreciated.
What do you think - the problem is in CODE or mpirun?
Can one find a way to communicate with CODE executed by mpirun?
UPDATE1
In a single (non-MPI) run, one gets the status of the calculation by pressing Ctrl+C and choosing the status option from the menu by entering s. CODE prints the status information to STDOUT and continues the calculation.
"we cannot get status of calculation" - what does that mean? do you expect to get the status somehow but are not? or is the software not designed to give you status?
Your system call doesn't redirect standard error/output anywhere. Is that where the status is supposed to appear? If so, catch it by opening a pipe, or by redirecting to a log and having the wrapper read the log.
Also, you're not processing the return code by evaluating the return value of the system call - that may be another way the program communicates.
Your Ctrl+C problem might be because Ctrl+C is caught by the Perl wrapper, which dies, instead of by CODE, which has a nice Ctrl+C interrupt handler. The solution might be to add an interrupt handler to mpirun - see Perl Cookbook Recipe 16.18 for $SIG{INT}, or http://www.wellho.net/resources/ex.php4?item=p216/sigint ; you may want to have the Perl wrapper catch Ctrl+C and send the INT signal to the CODE process it launched.
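A rough shell sketch of that forwarding idea (the Perl version would install a $SIG{INT} handler to the same effect; code_binary is a hypothetical stand-in for the real launch of CODE):
#!/bin/bash
./code_binary &                    # launch the wrapped program
child=$!
trap 'kill -INT "$child"' INT      # on Ctrl+C, forward SIGINT to the child
# a trapped signal makes wait return early, so retry until the child is gone
while kill -0 "$child" 2>/dev/null; do
    wait "$child"
done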

How does commands-piping work in *NIX?

When I do this:
find . -name "pattern" | grep "another-pattern"
Are the processes, find and grep, spawned together? My guess is yes. If so, then how does this work?:
yes | command_that_prompts_for_confirmations
If yes is continuously sending 'y' to stdout and command_that_prompts_for_confirmations reads 'y' whenever it's reading its stdin, how does yes know when to terminate? Because if I run yes alone without piping its output to some other command, it never ends.
But if piping doesn't spawn all the processes simultaneously, then how does yes know how many 'y's to output? It's a catch-22 for me here. Can someone explain how this piping works in *NIX?
From the wikipedia page: "By itself, the yes command outputs 'y' or whatever is specified as an argument, followed by a newline, until stopped by the user or otherwise killed; when piped into a command, it will continue until the pipe breaks (i.e., the program completes its execution)."
yes does not "know" when to terminate. However, at some point writing "y" to stdout will cause an error because the other process has finished; that causes a broken pipe, and yes terminates.
The sequence is:
1. the other program terminates
2. the operating system closes the pipe
3. yes tries to output a character
4. an error happens (broken pipe)
5. yes terminates
Yes, (generally speaking) all of the processes in a pipeline are spawned together. With regard to yes and similar situations, a signal is passed back up the pipeline to indicate that the reader is no longer accepting input. Specifically, SIGPIPE; details here and here. Lots more fun information on *nix pipelining is available on Wikipedia.
You can see the SIGPIPE in action if you interrupt a command that doesn't expect it, as you get a Broken Pipe error. I can't seem to find an example that does it off the top of my head on my Ubuntu setup, though.
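One quick way to make the SIGPIPE visible is bash's PIPESTATUS array (bash-specific):
yes | head -n 3            # head exits after 3 lines, breaking the pipe
echo "${PIPESTATUS[0]}"    # prints 141 = 128 + 13, i.e. yes was killed by SIGPIPE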
Other answers have covered termination. The other facet is that yes will only output a limited number of y's - there is a buffer in the pipe, and once that is full yes will block in its write request. Thus yes doesn't consume infinite CPU time.
The stdout of the first process is connected to the stdin of the second process, and so on down the line. "yes" exits when the second process is done because it no longer has a stdout to write to.
