Bash trap unset from function - linux

Namely, regarding http://fvue.nl/wiki/Bash:_Error_handling#Set_ERR_trap_to_exit:
Why is it necessary to set -o errtrace to make a trap that is set/unset from a function call work?
#!/usr/bin/env bash
function trapit {
    echo 'trapped in a box'
}
function setTrap {
    trap 'trapit' ERR
}
function unsetTrap {
    trap - ERR
}
function foo_init {
    fooOldErrtrace=$(set +o | grep errtrace)
    set -o errtrace
    trap 'echo trapped' ERR # Set ERR trap
}
function foo_deinit {
    trap - ERR              # Reset ERR trap
    eval $fooOldErrtrace    # Restore `errtrace' setting
    unset fooOldErrtrace    # Delete global variable
}
# foo_init
setTrap
echo 'set'
false
echo 'unset'
# foo_deinit
unsetTrap
false

According to the bash man page, functions do not inherit the ERR trap unless the errtrace flag is turned on. I don't know why the ERR trap can't be inherited by default, but... that is how it is for now :)
You can test this behaviour with my sample code:
#!/usr/bin/env bash
trapit () {
    echo 'some error trapped'
}
doerr1 () {
    echo 'I am the first err generator and I return an error status to the caller'
    return 1
}
doerr2 () {
    echo 'I am the second err generator and I produce an error inside my code'
    fgrep a /etc/motttd
    return 0
}
[[ $1 ]] && set -o errtrace
trap trapit ERR
doerr1
doerr2
echo 'We will produce an exception in the main program...'
cat /etc/ftab | fgrep a
echo "OK, that's done, you see it :)"
If you pass any parameter to this script, the errtrace flag is turned on and you will see that the error is caught when doerr2 tries to do something awful.
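For what it's worth, set -E is shorthand for set -o errtrace. A minimal sketch of the inheritance difference (my own illustration, not from the linked wiki):
#!/usr/bin/env bash
# The ERR trap reports where it fired; FUNCNAME is empty at the top level
trap 'echo "ERR trap fired (context: ${FUNCNAME[0]:-main})"' ERR

boom() {
    false   # fails inside a function body
}

boom    # without errtrace: the trap fires only for the failed top-level call
set -E  # same as: set -o errtrace
boom    # now the trap also fires inside the function, where false failed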

Related

How do I pass the result from a waited npm script to a bash script?

In my npm script I have the following:
#!/usr/bin/env node
import { main } from './main';
import { CONFIG } from '../config';

(async () => {
    const res = await main(CONFIG);
    process.stdout.write(res.join('\n'));
    return res;
})();
Now I want to do some stuff in a bash script depending on what has been returned. My attempt does not work properly:
npm run update-imports &
PID=$!
UpdateResult=$(wait $PID)
if [ -z "$UpdateResult" ]; then
    echo "No imports updated, committing changes"
else
    echo "Check the following files:\n ${UpdateResult}"
    exit 1
fi
In short - if nothing or empty string returned - proceed with executing script, otherwise - exit script with warning.
How do I make it work?
In bash, wait returns the exit status of the process, not the standard output as you expect. You can use process.exit(value) in the Node script to set that status.
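If you go the exit-status route instead, a minimal sketch (assuming your Node script calls process.exit(1) when files need attention) could be:
npm run update-imports
status=$?   # exit status propagated from process.exit() in the Node script
if [ "$status" -eq 0 ]; then
    echo "No imports updated, committing changes"
else
    echo "Imports changed, check your files"
    exit "$status"
fi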
If you want to capture and process the standard output of node program, see the answer to question: How do I set a variable to the output of a command in Bash?
This should do the work:
UpdateResult=$(npm run update-imports)
if [ -z "$UpdateResult" ]; then
    echo "No imports updated, committing changes"
else
    echo -e "Check the following files:\n ${UpdateResult}"
    exit 1
fi
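Note that a bare echo does not expand \n, hence the -e above. If you prefer something portable across shells, printf does the same job:
printf 'Check the following files:\n%s\n' "${UpdateResult}"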

How to implement mount with golang

Can I use Go to implement the mount function in Linux, i.e. mount a path passed in from the front end onto a local path? Such as:
add_iptables "${shared_file_path}"
if [[ "x$domain" == "xnoDomain" ]]
then
expect > /dev/null 2>&1 <<EOF
set timeout 1
//
spawn /usr/bin/mount -t cifs -o nodev,nosuid,noexec,username=${user_name} ${shared_file_path} ${local_path}
expect {
"Passwor*:" {send "${local_pws}\n"}
}
expect eof
catch wait result
exit [lindex \$result 3]
EOF
else
expect > /dev/null 2>&1 <<EOF
set timeout 1
spawn /usr/bin/mount -t cifs -o nodev,nosuid,noexec,domain=${domain},username=${user_name} ${shared_file_path} ${local_path}
expect {
"Passwor*:" {send "${local_pws}\n"}
}
expect eof
catch wait result
exit [lindex \$result 3]
EOF
You could use Go to wrap a system call to pretty much anything you want.
For instance, nanobox-io/nanobox does this in util/provider/dockermachine_mount_windows.go (extract of a larger function):
// ensure cifs/samba utilities are installed
cmd = []string{"sh", "-c", setupCifsUtilsScript()}
if b, err := Run(cmd); err != nil {
    lumber.Debug("cifs output: %s", b)
    return fmt.Errorf("cifs:%s", err.Error())
}

// mount!
// mount -t cifs -o sec=ntlmssp,username=USER,password=PASSWORD,uid=1000,gid=1000 //192.168.99.1/<path to app> /<vm location>
source := fmt.Sprintf("//192.168.99.1/nanobox-%s", appID)

// mfsymlinks,
config, _ := models.LoadConfig()
additionalOptions := config.NetfsMountOpts

// since the mount command inserts the user into the command string with
// single quotes, we need to escape any single quotes from the real
// username. As the command will be running in bash, the actual escape
// sequence is a bit tricky. Each ' will be replaced with '"'"'.
escapedUser := strings.Replace(user, "'", "'\"'\"'", -1)
opts := fmt.Sprintf("nodev,sec=ntlmssp,user='%s',password='%s',uid=1000,gid=1000", escapedUser, pass)
if additionalOptions != "" {
    opts = fmt.Sprintf("%s,%s", additionalOptions, opts)
}

cmd = []string{
    "sudo",
    "/bin/mount",
    "-t",
    "cifs",
    "-o",
    opts,
    source,
    host,
}
lumber.Debug("cifs mount cmd: %v", cmd)
if b, err := Run(cmd); err != nil {
    lumber.Debug("mount output: %s", b)
    return fmt.Errorf("mount: output: %s err:%s", b, err.Error())
}
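Incidentally, the quote-escaping trick described in that comment is plain shell behavior and easy to check from bash (hypothetical username, not from the repository):
user="o'brien"
escaped=$(printf %s "$user" | sed "s/'/'\"'\"'/g")
echo "$escaped"                    # o'"'"'brien
bash -c "echo 'user: $escaped'"    # prints: user: o'brien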

Is it possible to roll a dynamically-generated Javascript array (Node) into a Bash array?

So I'm currently rolling a few different env variables into a Docker container using the following syntax:
Node script:
process.env['VAR1'] = 'someArbitraryValue';
process.env['VAR2'] = 'anotherArbitraryValue';
which then execs a bash script that looks like this:
params=()
[[ ! -z "$VAR1" ]] && params+=(-e "VAR1=$VAR1")
[[ ! -z "$VAR2" ]] && params+=(-e "VAR2=$VAR2")
docker run "${params[@]}"
That works just fine since I know the names of those env variables in advance and I can just hardcode the bash command to grab their values and insert them into params. However, what I'd like to be able to do is allow for a dynamically-generated list of variables to be added to the params list.
In other words, I run some function that returns an array that looks like:
var myArray = ['VAR3=somevalue', 'VAR4=anothervalue']
and is then passed into params by iterating through its contents and appending them. Since you can't set an array as an env variable in Bash, I'm not exactly sure if this is possible.
Is there a way to perform this operation, or am I out of luck?
If I'm not missing anything, yes; using child_process.execFile() (also see execFileSync()), you can pass elements of myArray as positional parameters to the bash script and do whatever you want with them in there.
const { execFile } = require('child_process');

// define "myArray" about here

const child = execFile('./myscript.sh', myArray, (error, stdout, stderr) => {
    if (error) {
        throw error;
    }
    console.log(stdout);
});
// ...
#!/bin/bash -
params=()
for param; do
    params+=(-e "${param}")
done
docker run "${params[@]}"
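To check the bash side in isolation, you can invoke the script directly with hypothetical values:
./myscript.sh 'VAR3=somevalue' 'VAR4=anothervalue'
# the loop above turns that into: docker run -e VAR3=somevalue -e VAR4=anothervalue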

Can a shell function behave as a command when it comes to failure status?

For an executable that can fail with a nonzero exit code, one can do:
executable && echo "succeed" || echo "failure"
How to do this with a shell function?
myfunction() {
    executable arg1 arg2
}
myfunction && echo "succeed" || echo "failure"
From the bash manual:
When executed, the exit status of a function is the exit status of the last command executed in the body.
In other words, shell functions behave exactly as you have demonstrated in your question. For example, given:
myfunction() {
    false
}
Running:
myfunction && echo success || echo failed
Results in:
failed
On the other hand, if we have:
myfunction() {
    true
}
Running the same command returns success.
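If the function runs cleanup commands after the one whose status matters, capture and return that status explicitly so the cleanup does not clobber it (a minimal sketch reusing the names from the question):
myfunction() {
    executable arg1 arg2
    local status=$?    # remember the status before anything else runs
    # ... cleanup commands would go here ...
    return "$status"
}
myfunction && echo "succeed" || echo "failure"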

Perl script not executing some external calls

I have written this Perl script to automate my wireless connections:
#!/usr/bin/perl
use strict;

my @modes = ("start", "stop");
my $mode = $modes[0];
my $kill_command = "sudo kill -TERM ";

sub check_args
{
    if ($#ARGV != 0)
    {
        print(STDERR "Wrong arguments\n");
        print(STDERR "Usage: ./wicd.pl start|stop\n");
        exit();
    }
    my @aux = grep(/^$ARGV[0]$/, @modes);
    if (!@aux)
    {
        print(STDERR "Unknown argument\n");
        print(STDERR "Usage: ./wicd.pl start|stop\n");
        exit();
    }
    $mode = $ARGV[0];
}

check_args();
my @is_wicd_running = `ps -A | grep wicd`;
# START
if ($mode eq $modes[0])
{
    if (!@is_wicd_running)
    {
        system("gksudo ifconfig wlan0 down");
        system("sudo macchanger -r wlan0");
        system("sudo wicd");
    }
    my @is_wicd_gui_running = grep(/wicd-client/, @is_wicd_running);
    if (!@is_wicd_gui_running)
    {
        system("gksudo wicd-gtk &");
    }
}
# STOP
else
{
    for (@is_wicd_running)
    {
        my @aux = split(/ /, $_);
        system("$kill_command$aux[1]");
    }
    system("sudo ifconfig wlan0 down");
}
The problem is that macchanger and sudo ifconfig wlan0 down are not executing (only those...). The weird thing is that those calls do execute when calling the script through the Perl debugger (perl -d). I thought this could be a timing problem and added some sleep() calls before those calls, but no change. I also tried plain system() calls, with no change.
EDIT: Stranger still, I've found that the script runs properly as perl wicd.pl, while ./wicd.pl does not (it runs, but has the problem described above). I've attached the whole script. The Perl interpreter in the shebang header is the same one that which perl returns.
Any clues? Thanks in advance!
More information may help, along with ensuring that a \n always ends the output line. Try running your commands within:
sub runx
{
    my $x = 0;
    foreach my $cmd ( @_ )
    {
        my $out = qx($cmd 2>&1);    # capture stdout and stderr
        $x = $?;
        $out =~ s/\s*$/\n/s;        # ensure a single trailing newline
        printf "%s (0x%x):\n%s", $cmd, $x, $out;
        last if $x;
    }
    return $x;
}
No time to run this code this morning and I can't delete my prior comment. But sometimes running a which command can also assure your command is on the PATH.
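For instance, a quick check from an interactive shell shows whether each external command resolves on the PATH your script actually sees (names taken from the script above):
which sudo gksudo macchanger ifconfig   # a missing name prints nothing and which exits nonzero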
