Trap command in UNIX/Linux

How does the trap command work in this code?
trap "ignore" 2
ignore()
{
main
}
main()
{
trap "main" 2
while [ 1 ]
do
echo -e "\t\t\t1.Add\n\t\t\t2.Remove\n\t\t\t3.Edit\n\t\t\t4.Search\n\t\t\t5.Display\n\t\t\t6.Exit"
echo "Enter the option"
read option
case $option in
1)echo "You take add option";;
2)echo "You take Remove option";;
3)echo "You take Edit option";;
4)echo "You take Search option";;
5)echo "You take Display option";;
6)exit;;
*)echo "Invalid Option"
esac
done
}
main
If the above script is executed, the Ctrl+C signal is trapped and the main function is called, but this happens only the first time. On the first Ctrl+C, main is called; on the second Ctrl+C, main is not executed.

That's how signals work in UNIX. When you send a SIGINT by pressing Ctrl+C, the signal handler is called, i.e. in your case the main function. But while the handler is processing the signal, all subsequent signals are blocked until the handler returns. In your case the handler never returns, so your program can't react to subsequent SIGINT signals. It's not a good idea to call a signal handler recursively, and a signal handler shouldn't do too much work either; it should process the signal and return as soon as it can.
Also, note that symbolic signal names such as SIGINT should be used instead of the raw numbers, for better portability. You can get the list using the kill -l or trap -l commands.
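As an illustration only (not the original menu script), here is a minimal sketch of a handler that does a little work and returns, written against the symbolic name INT:
#!/bin/sh
# Minimal sketch: the handler prints a message and returns, so every
# subsequent Ctrl+C is handled again; INT is used instead of the number 2.
on_int()
{
    echo "Caught SIGINT - back to the menu"
}
trap "on_int" INT

while true
do
    echo "Enter the option (6 to exit)"
    read option
    if [ "$option" = "6" ]; then
        exit 0
    fi
    echo "You chose: $option"
done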

Related

How does prctl set_pdeathsig work in Perl?

perl-5.24.0 on RH7
I'd like a forked process to kill itself when it determines that its parent has died. I've read that I can use Linux::Prctl's set_pdeathsig() to do that, but my test of this doesn't seem to work.
#!/usr/bin/env perl
use strict;
my $pid = fork();
die if not defined $pid;
if($pid == 0) {
    do_forked_steps();
}
print "====PARENT===\n";
print "Hit <CR> to kill parent.\n";
my $nocare = <>;
exit;
sub do_forked_steps {
    system("/home/dgauthie/PERL/sub_fork.pl");
}
And sub_fork.pl is simply...
#!/usr/bin/env perl
use strict;
use Linux::Prctl;
Linux::Prctl::set_pdeathsig(1);
sleep(300);
exit;
(I believe passing "1" to set_pdeathsig means SIGHUP. But I also tried "9". Same results.)
When I run the first script, I can see both procs using ps in another window. When I hit <CR> in the script to kill it, I can see that proc go away, but the second one, the forked process, remains.
What am I doing wrong?
You have three processes, not two, because system forks. They are:
1. The parent process in the parent script ($pid != 0), which waits on <> and calls exit.
2. The child process, created by fork in the parent script, which calls system. system forks and then waits for its child to exit before returning.
3. The child process created by system, which execs your child script, calls prctl, and sleeps.
When you press enter, process #1 dies, but process #2 does not, and since process #2 is the parent of process #3, the PDEATHSIG is never invoked.
Changing system to exec in your first script, so that a third process isn't created, causes the PDEATHSIG to fire in your toy problem, but without more information it isn't clear if that's suitable in the "real world" version of what you're trying to do.
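For reference, a sketch of that change in the first script (same path as in the question):
sub do_forked_steps {
    # exec replaces the forked child with the child script itself, so no
    # third process is created and set_pdeathsig sees the real parent die.
    exec("/home/dgauthie/PERL/sub_fork.pl")
        or die "exec failed: $!";
}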

Spawn Expect from a Perl thread

I am working on a script which needs to spawn an Expect process periodically (every 5 minutes) to do some work. Below is the code that spawns an Expect process and does some work. The main process of the script is doing other work at all times (for example, it may wait for user input), so I am calling this function spawn_expect in a thread that keeps calling it every 5 minutes, but the issue is that Expect is not working as expected.
If, however, I replace the thread with another process, that is, if I fork and let one process take care of spawning Expect while the other process does the main work of the script (for example, waiting at a prompt), then Expect works fine.
My question is: is it possible to have a thread spawn an Expect process, or do I have to resort to using a separate process for this work? Thanks!
sub spawn_expect {
    my $expect = Expect->spawn($release_config{kinit_exec});
    my $position = $expect->expect(10,
        [qr/Password.*: /, sub {my $fh = shift; print $fh "password\n";}],
        [timeout => sub {print "Timed out";}]);
    # if this function is run via a process, $position is defined; if it is run via a thread, it is not
    ...
}
Create the Expect object beforehand (not inside a thread) and pass it to a thread:
my $exp = Expect->spawn( ... );
$exp->raw_pty(1);
$exp->log_stdout(0);
my ($thr) = threads->create(\&login, $exp);
my @res = $thr->join();
# ...
sub login {
    my $exp = shift;
    my $position = $exp->expect( ... );
    # ...
}
I tested with multiple threads, where one uses Expect with a custom test script and returns the script's output to the main thread. Let me know if I should post these (short) programs.
When the Expect object is created inside a thread, it fails for me too. My guess is that in that case it can't set up its pty the way it normally does.
Given the clarification in a comment, I'd use fork for the job though.
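A rough sketch of that fork-based approach, using the question's own names (kinit_exec and the password prompt come from the question; %release_config is stubbed here with a placeholder value):
use strict;
use warnings;
use Expect;

my %release_config = (kinit_exec => '/usr/bin/kinit');   # placeholder path

$SIG{CHLD} = 'IGNORE';   # don't leave zombie children behind

sub spawn_expect {
    my $pid = fork();
    die "fork failed: $!" if not defined $pid;
    return if $pid;   # parent returns immediately to its main work

    # child: Expect gets its own process, so the pty setup is not shared
    my $expect = Expect->spawn($release_config{kinit_exec});
    my $position = $expect->expect(10,
        [qr/Password.*: /, sub { my $fh = shift; print $fh "password\n"; }],
        [timeout => sub { print "Timed out"; }]);
    exit(defined $position ? 0 : 1);
}

# call spawn_expect() from the periodic (every 5 minutes) part of the script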

Parallel processing through a Tcl script

I need a solution for parallel processing in Tcl (on Windows).
I tried with threads, but I am still not able to achieve the desired output.
To simplify my requirement, I am giving a simple example as follows.
Requirement:
I want to run notepad.exe without affecting my current flow of execution. From the main thread, control should go to the called thread, start notepad.exe, and come back to the main thread without closing Notepad.
Tried (Tcl script):
package require Thread
set a 10
proc test_thread {b} {
    puts "in procedure $b"
    set tid [thread::create] ;# Create a thread
    return $tid
}
puts "main thread"
puts [thread::id]
set ttid [test_thread $a]
thread::send $ttid {exec c:/windows/system32/notepad.exe &}
puts "end"
Output I am getting:
Notepad runs without showing any log.
When I close the Notepad application, I get the following output:
main thread
tid0000000000001214
in procedure 10
end
Desired output:
main thread
tid0000000000001214
in procedure 10
---->> control should go to the thread and run notepad.exe without affecting the main thread's flow.
<<-------
end
So kindly help me solve this issue, and if there is any other approach apart from threads, let me know.
You're using a synchronous thread::send. It's the version that is most convenient for when you want to get a value back, but it does wait. You probably should be using the asynchronous version:
thread::send -async $ttid {exec c:/windows/system32/notepad.exe &}
# ^^^^^^ This flag here is what you need to add
However, it is curious that the exec call is behaving as you describe at all; the & at the end should make it effectively asynchronous anyway, unless there's some sort of nasty interaction with how Windows is interpreting asynchronous subprocess creation in this case.
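For completeness, a sketch of the question's script with only that flag changed:
package require Thread

set a 10

proc test_thread {b} {
    puts "in procedure $b"
    set tid [thread::create] ;# Create a worker thread
    return $tid
}

puts "main thread"
puts [thread::id]
set ttid [test_thread $a]

# -async returns immediately: the worker thread launches notepad.exe in the
# background while the main thread carries straight on to "end".
thread::send -async $ttid {exec c:/windows/system32/notepad.exe &}

puts "end"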

How to exit in Node.js

What is the command that is used to exit? (i.e. terminate the Node.js process)
Call the global process object's exit method:
process.exit()
From the docs:
process.exit([exitcode])
Ends the process with the specified code. If omitted, exit with a 'success' code 0.
To exit with a 'failure' code:
process.exit(1);
The shell that executed node should see the exit code as 1.
Just a note that using process.exit([number]) is not recommended practice.
Calling process.exit() will force the process to exit as quickly as
possible even if there are still asynchronous operations pending that
have not yet completed fully, including I/O operations to
process.stdout and process.stderr.
In most situations, it is not actually necessary to call
process.exit() explicitly. The Node.js process will exit on its own if
there is no additional work pending in the event loop. The
process.exitCode property can be set to tell the process which exit
code to use when the process exits gracefully.
For instance, the following example illustrates a misuse of the
process.exit() method that could lead to data printed to stdout being
truncated and lost:
// This is an example of what *not* to do:
if (someConditionNotMet()) {
    printUsageToStdout();
    process.exit(1);
}
The reason this is
problematic is because writes to process.stdout in Node.js are
sometimes asynchronous and may occur over multiple ticks of the
Node.js event loop. Calling process.exit(), however, forces the
process to exit before those additional writes to stdout can be
performed.
Rather than calling process.exit() directly, the code should set the
process.exitCode and allow the process to exit naturally by avoiding
scheduling any additional work for the event loop:
// How to properly set the exit code while letting
// the process exit gracefully.
if (someConditionNotMet()) {
    printUsageToStdout();
    process.exitCode = 1;
}
From the official nodejs.org documentation:
process.exit(code)
Ends the process with the specified code. If omitted, exit uses the 'success' code 0.
To exit with a 'failure' code:
process.exit(1);
If you're in a Unix terminal or Windows command line and want to exit the Node REPL, either...
Press Ctrl + C twice, or
type .exit and press Enter, or
press Ctrl + D at the start of a line (Unix only)
From the command line, .exit is what you want:
$ node
> .exit
$
It's documented in the REPL docs. REPL (Read-Eval-Print-Loop) is what the Node command line is called.
From a normal program, use process.exit([code]).
It depends on the reason why you want to exit the Node.js process, but in any case process.exit() is the last option to consider. A quote from the documentation:
It is important to note that calling process.exit() will force the
process to exit as quickly as possible even if there are still
asynchronous operations pending that have not yet completed fully,
including I/O operations to process.stdout and process.stderr.
In most situations, it is not actually necessary to call
process.exit() explicitly. The Node.js process will exit on its own
if there is no additional work pending in the event loop. The
process.exitCode property can be set to tell the process which exit
code to use when the process exits gracefully.
Let's cover the possible reasons why you might want to exit a Node.js process, and why you should avoid process.exit():
Case 1 - Execution complete (command line script)
If the script has reached its end and the Node interpreter doesn't exit, it indicates that some async operations are still pending. It's wrong to force process termination with process.exit() at this point; it's better to try to understand what is holding your script back from exiting in the expected way. And when you have settled this, you can use process.exitCode to return any result to the calling process.
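A minimal sketch of that pattern (doWork is a made-up stand-in for the script's real logic):
// Report failure via process.exitCode and let the event loop drain on its own.
function doWork() {
    return false;   // stand-in: pretend the work failed
}

if (!doWork()) {
    console.error('work failed');
    process.exitCode = 1;   // recorded now, used when node exits naturally
}
// no process.exit() here - pending async operations and stdout writes finish first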
Case 2 - Termination because of external signal (SIGINT/SIGTERM/other)
For example, you may want to gracefully shut down an Express app. Unlike a command line script, an Express app keeps running indefinitely, waiting for new requests. process.exit() is a bad option here because it will interrupt all requests that are in the pipeline, and some of them might be non-idempotent (UPDATE, DELETE). The client will never know whether those requests completed on the server side, which can cause data inconsistency between client and server. The only good solution is to tell the HTTP server to stop accepting new requests and to wait for the pending ones to finish with server.close():
var express = require('express');
var app = express();
var server = app.listen(80);

process.on('SIGTERM', function () {
    server.close(function () {
        console.log("Finished all requests");
    });
});
If it still doesn't exit - see Case 1.
Case 3 - Internal error
It's always better to throw an error: you'll get a nicely formatted stack trace and an error message, and upper levels of code can always decide whether they can handle the error (catch) or let it crash the process. On the other hand, process.exit(1) will terminate the process silently, with no chance to recover. That may be the only "benefit" of process.exit(): you can be sure the process will be terminated.
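A tiny sketch of the difference (loadConfig and the apiKey check are made up for illustration):
// Hypothetical internal-error check.
function loadConfig() {
    return {};   // pretend apiKey is missing
}

const config = loadConfig();
if (!config.apiKey) {
    // Preferred: the stack trace shows where it failed and callers may catch it.
    throw new Error('missing apiKey in configuration');
    // Avoid: process.exit(1) would terminate silently with no chance to recover.
}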
REPL (command line)
Press ctrl + c twice
Type .exit and press enter
Script File
process.exit(code)
Node normally exits with code 0 when no more async operations are pending.
process.exit(1) should be used to exit with a failure code. This allows us to infer that Node didn't close gracefully and was forced to close.
There are other exit codes, such as:
3 - Internal JavaScript parse error (very, very rare)
5 - Fatal error in the V8 JavaScript engine
9 - Invalid argument
For full list see node exit codes
I have an application which I wanted to:
Send an email to the user
Exit with an error code
I had to hook process.exit(code) to an exit event handler, or else the mail will not be sent since calling process.exit(code) directly kills asynchronous events.
#!/usr/bin/nodejs
var mailer = require('nodemailer');
var transport = mailer.createTransport();

var mail = {
    to: 'Dave Bowman',
    from: 'HAL 9000',
    subject: 'Sorry Dave',
    html: 'Im sorry, Dave. Im afraid I cant do <B>THAT</B>.'
}

transport.sendMail(mail);

//process.exit(1);
process.on('exit', function() { process.exit(1); });
As @Dominic pointed out, throwing an uncaught error is better practice than calling process.exit([code]):
process.exitCode = 1;
throw new Error("my module xx condition failed");
Press Ctrl + C twice
or .exit.
>
(To exit, press ^C again or type .exit)
>
To exit
let exitCode = 1;
process.exit(exitCode)
Useful exit codes
1 - Catchall for general errors
2 - Misuse of shell builtins (according to Bash documentation)
126 - Command invoked cannot execute
127 - “command not found”
128 - Invalid argument to exit
128+n - Fatal error signal “n”
130 - Script terminated by Control-C
255* - Exit status out of range
From code you can use process.exit([errorcode]) where [errorcode] is an optional integer (0 is the default to indicate success).
If you're using the Read Eval Print Loop (REPL), you can use Ctrl + D, or type .exit
Alternatively, on Windows or Linux you can use Ctrl + C, Ctrl + C
On Mac the command is Ctrl + Z, Ctrl + Z
Adding
process.exit(1);
will do the trick for you.
I was able to get all my node processes to die directly from the Git Bash shell on Windows 10 by typing taskkill -F -IM node.exe - this ends all the node processes on my computer at once. I found I could also use taskkill //F //IM node.exe. Not sure why both - and // work in this context. Hope this helps!
Open the command line terminal where the Node application is running and press Ctrl + C.
If you want to exit a Node.js application from code:
process.exit(); // graceful termination
process.exit(1); // non graceful termination
As process is a global object, you don't need to import any module. The following functions exit or kill the current Node process:
process.exit(code)
process.kill(process.pid)
process.abort()
If you want to exit from a Node.js application, then write
process.exit(1)
in your code
Exiting in Node.js happens in two ways:
Calling process.exit() explicitly.
Or, when the Node.js event loop is done with all tasks and there is nothing left to do; then the application exits automatically.
How does it work?
If you want to force the execution loop to stop the process, you can use the global object process, which is an instance of EventEmitter. When you call process.exit(), you actually emit the exit event, which ends all tasks immediately, even if asynchronous operations are still pending.
process.exit() takes an exit code (an integer) as a parameter. The code 0 is the default and means exiting with 'success', while the code 1 means exiting with 'failure'.
import mongoose from 'mongoose'
import dotenv from 'dotenv'
import colors from 'colors'
import users from './data/users.js'
import products from './data/products.js'
import User from './models/userModel.js'
import Product from './models/productModel.js'
import Order from './models/orderModel.js'
import connectDB from './config/db.js'

dotenv.config()
connectDB()

const importData = async () => {
    try {
        await Order.deleteMany()
        await Product.deleteMany()
        await User.deleteMany()
        const createdUsers = await User.insertMany(users)
        const adminUser = createdUsers[0]._id
        const sampleProducts = products.map(product => {
            return { ...product, user: adminUser }
        })
        await Product.insertMany(sampleProducts)
        console.log('Data Imported!'.green.inverse)
        process.exit() // success and exit
    } catch (error) {
        console.log(`${error}`.red.inverse)
        process.exit(1) // error and exit
    }
}
So here I'm populating some collections in a database. In the try block, if I don't get any errors, we exit with success, so we call process.exit() with nothing as the parameter.
If there's an error, we need to exit with an unsuccessful status, so we pass 1 as the parameter, like this: process.exit(1).
Extra: here, by exiting we mean exiting that particular Node.js program. For example, if this code were in a file called dbOperations.js, then process.exit would exit and not run any code that follows after process.exit.
Ctrl+C to terminate the present process.
Ctrl+C twice to exit the REPL shell.
You may use the process.exit([code]) function.
If you want to exit without a 'failure', use code 0:
process.exit(0);
To exit with a 'failure' code 1, you may run:
process.exit(1);
The 'failure' codes are specific to the application, so you may use your own conventions for them.
If you're on Windows, go to Task Manager, then go to Processes, look for a process called "node", right-click it, and then click the "End Process" option.

Why doesn't SIGINT get caught here?

What's going on here? I thought SIGINT would be sent to the foreground process group.
(I think, maybe, that system() is running a shell which is creating a new process group for the child process? Can anyone confirm this?)
% perl
local $SIG{INT} = sub { print "caught signal\n"; };
system('sleep', '10');
Then hit Ctrl+D and then Ctrl+C immediately, and notice that "caught signal" is never printed.
I feel like this is a simple thing... is there any way to work around this? The problem is that running a bunch of commands via system means holding Ctrl+C until all iterations are completed (because Perl never gets the SIGINT), which is rather annoying...
How can this be worked around? (I have already tested using fork() directly and understand that it works; this is not an acceptable solution at this time.)
UPDATE: please note, this has nothing to do with "sleeping", only the fact that the command takes some arbitrarily long amount of time to run, considerably more than the Perl around it. So much so that pressing Ctrl+C gets sent to the command (as it's in the foreground process group?) and somehow never gets sent to Perl.
From perldoc system:
Since SIGINT and SIGQUIT are ignored during the execution of system,
if you expect your program to terminate on receipt of these signals you will need to arrange to do so yourself based on the return value.
#args = ("command", "arg1", "arg2");
system(#args) == 0
or die "system #args failed: $?"
If you'd like to manually inspect system's failure, you can check all possible failure
modes by inspecting $? like this:
if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? & 127) {
    printf "child died with signal %d, %s coredump\n",
        ($? & 127), ($? & 128) ? 'with' : 'without';
}
else {
    printf "child exited with value %d\n", $? >> 8;
}
Alternatively, you may inspect the value of ${^CHILD_ERROR_NATIVE} with the W*() calls from the POSIX module
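Following that advice, one way to make the surrounding Perl terminate is to check, after each system call, whether the child was killed by signal 2 (SIGINT); a sketch:
#!/usr/bin/perl
# Sketch: system() ignores SIGINT in the parent, so inspect $? after each
# command and bail out if the child was killed by SIGINT.
use strict;
use warnings;

for my $i (1 .. 5) {
    system('sleep', '10');
    if (($? & 127) == 2) {    # signal 2 is SIGINT
        print "child was interrupted, stopping\n";
        exit 130;             # conventional "terminated by Ctrl+C" status
    }
}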
I don't quite get what you're trying to achieve here... but have you tried simply comparing to:
perl -wle'local $SIG{INT} = sub { print "caught signal"; }; sleep 10;'
Can you explain what effect you're trying to go for, and why you are invoking the shell? Can you simply call into the external program directly without involving the shell?
