Linux LVM lvs command fails from cron Perl script but works from cron directly

I am trying to run lvs from a Perl script and parse its output:
my $output = `lvs --noheadings --separator : --units m --nosuffix 2>&1`;
my $result = $?;
if ($result != 0 || length($output) == 0) {
    printf STDERR "Get list of LVs failed (exit result: %d): %s\n",
        $result, $output;
    exit(1);
}
printf "SUCCESS:\n%s\n", $output;
When I run the above script from a terminal window, it runs fine. If I run it via cron, it fails:
Get list of LVs failed (exit result: -1):
Note the complete lack of output (stdout + stderr).
If I run the same "lvs --noheadings --separator : --units m --nosuffix" command directly in cron, it runs and outputs just fine.
If I modify the perl script to use open3() I also get the same failure with no output.
If I add "-d -d -d -d -d -v -v -v" to the lvs command to enable verbose/debug output, I see that output when running the script from a terminal, but there is still nothing when run via cron/perl.
I'm running this on RHEL 7.2 with /usr/bin/perl (5.16.3)
Any suggestions?

According to perldoc system, "Return value of -1 indicates a failure to start the program or an error of the wait(2) system call (inspect $! for the reason)." So the reason there's no output is because lvs isn't being started successfully.
Given the usual nature of cron-related problems, I'd say the most likely reason it's failing to run would be that it's not on the $PATH used by cron. Try specifying the full path and, if that doesn't work, check $! for the operating system's error message.
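A quick way to confirm this (a sketch; file paths are illustrative): capture cron's environment with a temporary crontab entry, then compare it against your interactive shell's:

* * * * * env > /tmp/cron-env.txt

$ diff <(sort /tmp/cron-env.txt) <(env | sort)

Cron typically runs with a minimal PATH such as /usr/bin:/bin, which does not include the /sbin and /usr/sbin directories where lvs usually lives.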

Try using the absolute path to lvs:
my $output = `/sbin/lvs --noheadings --separator : --units m --nosuffix 2>&1`;
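If you're not sure where the binary lives on your system (on RHEL it is typically under /sbin or /usr/sbin), your interactive shell can tell you:

$ command -v lvs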


Why does this bash command's return value change?

What I actually want to do
Save a command's output and check its return status.
The solution?
After some googling I found basically the same answer here on Stack Overflow as well as on Ask Ubuntu and Unix & Linux Stack Exchange:
if output=$(command); then
    echo "success: $output"
fi
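For a command that reports failure through its exit status, this pattern behaves as expected (a sketch using ls on a path that is assumed not to exist):

if output=$(ls /nonexistent 2>&1); then
    echo "success: $output"
else
    echo "failed: $output"    # this branch runs, because ls exits non-zero
fi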
Problem
When trying out this solution with command info put, the if clause is executed even if the actual command fails, and I can't explain why.
I tried to check the return value $? manually, and it seems like the var=$(...) assignment changes the return value:
$ info put
info: No menu item 'put' in node '(dir)Top'
$ echo $?
1
$ command info put
info: No menu item 'put' in node '(dir)Top'
$ echo $?
1
$ var=$(command info put)
info: No menu item 'put' in node '(dir)Top'
$ echo $?
0
$ var=$(command info put); echo $?
info: No menu item 'put' in node '(dir)Top'
0
The behavior is also the same when using backticks (`...`) instead of $(...).
So why does that general solution not work in this case?
And how can I change/adapt the solution to make it work properly?
My environment/system
I'm working on Windows 10 with WSL2 Ubuntu 20.04.2 LTS:
$ tmux -V
tmux 3.0a
$ echo $SHELL
/bin/bash
$ /bin/bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
$ info --version
info (GNU texinfo) 6.7
When trying out this solution with command info put, the if clause is executed even if the actual command fails, and I can't explain why.
Indeed, info exits with 0, when output is not a terminal and there's an error.
// texinfo/info.c
if ((!isatty (fileno (stdout))) && (user_output_filename == NULL))
  {
    user_output_filename = xstrdup ("-");
    ...
  }
...
// in called get_initial_file()
asprintf (error, _("No menu item '%s' in node '%s'"),
          (*argv)[0], "(dir)Top");
...
if (user_output_filename)
  {
    if (error)
      info_error ("%s", error);
    ...
    exit (0); // exits with 0!
  }
References: https://github.com/debian-tex/texinfo/blob/master/info/info.c#L848 , https://github.com/debian-tex/texinfo/blob/master/info/info.c#L277 , https://github.com/debian-tex/texinfo/blob/master/info/info.c#L1066 .
why does that general solution not work in this case?
Because the command's behavior changes when its output is redirected away from a terminal.
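You can see this directly (assuming the same GNU texinfo 6.7 as above):

$ info put; echo $?              # stdout is a terminal
info: No menu item 'put' in node '(dir)Top'
1
$ info put > /dev/null; echo $?  # stdout is not a terminal
info: No menu item 'put' in node '(dir)Top'
0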
how to change/adapt the solution to make it work properly?
You could simulate a tty - https://unix.stackexchange.com/questions/157458/make-program-in-a-pipe-think-it-has-tty , https://unix.stackexchange.com/questions/249723/how-to-trick-a-command-into-thinking-its-output-is-going-to-a-terminal .
You could grab the command's stderr and check whether it is non-empty or matches some regex.
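A minimal sketch of the stderr approach (the redirection order matters: 2>&1 first sends stderr into the command substitution, then >/dev/null discards stdout):

errors=$(command info put 2>&1 >/dev/null)
if [[ -n "$errors" ]]; then
    echo "info failed: $errors" >&2
fi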
You could also contact the texinfo developers, let them know about what I think is a bug, and perhaps submit a patch, so it would be like exit (error ? EXIT_FAILURE : EXIT_SUCCESS);.
Instead of checking the exit status of the command, I ended up saving the output and simply checking whether there is any output that can be used for further processing (in my case, piping into less):
my_less () {
    output=$("$@")
    if [[ -n "$output" ]]; then
        printf '%s' "$output" | less
    fi
}
Even with the bug in info, my function now works, since the bug only affects the command's exit status. Its error messages are written to stderr as expected, so I'm relying on that behavior.
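For example:

my_less ls -l /etc    # there is output, so it is piped into less
my_less info put      # captured stdout is empty; the error still goes to stderr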

Is it possible to see live output of a command run by execute_process?

I'm running a time-consuming bash script:
execute_process(
    COMMAND "bash" "slow_script.sh"
    WORKING_DIRECTORY ${INSTALL_SCRIPT_DIR}
    ERROR_VARIABLE ERROR_MESSAGE
    RESULT_VARIABLE ERROR_CODE)
and I want to see its progress. I tried showing an xterm window:
execute_process(
    COMMAND "xterm" "-e" "slow_script.sh"
    WORKING_DIRECTORY ${INSTALL_SCRIPT_DIR}
    ERROR_VARIABLE ERROR_MESSAGE
    RESULT_VARIABLE ERROR_CODE)
It works, but seems ugly.
Is it possible to show the script's output in CMake's output while the script is executing?
You could probably use one of the standard /dev devices as OUTPUT_FILE.
The following CMake example worked with a quick test on my Ubuntu machine:
cmake_minimum_required(VERSION 2.4)
project(TestExecuteProcessToStdOut NONE)

execute_process(
    COMMAND "${CMAKE_COMMAND}" -E echo "Test"
    ERROR_VARIABLE ERROR_MESSAGE
    RESULT_VARIABLE ERROR_CODE
    OUTPUT_FILE "/proc/self/fd/0"
)
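To try it yourself (execute_process runs at configure time, so the text appears while CMake configures):

$ mkdir build && cd build
$ cmake ..    # "Test" is printed straight to the terminal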
References
Difference between FILE * "/dev/stdout" and stdout
Unix & Linux: echo or print /dev/stdin /dev/stdout /dev/stderr

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed; the hang occurs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Dest OS: Solaris 10 8/07
Any idea how to fix this?

Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
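For example, a sketch (the log path is illustrative; without explicit redirection, nohup writes to nohup.out):

nohup ./tctl start test </dev/null >/tmp/tctl.log 2>&1 &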
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1

Condor job - running shell script as executable

I’m trying to run a Condor job where the executable is a shell script which invokes certain Java classes.
Universe = vanilla
Executable = /script/testingNew.sh
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue
Here’s the content of the /script/testingNew.sh file –
(Just because I’m getting the error, I have removed the Java commands for now)
#!/bin/sh
inputfolder=/n/test_avp/test-modules/data/json
srcFolder=/n/test_avp/test-modules
logsFolder=/n/test_avp/test-modules/log
libFolder=/n/test_avp/test-modules/lib
confFolder=/n/test_avp/test-modules/conf
twpath=/n/test_avp/test-modules/normsrc
dataFolder=/n/test_avp/test-modules/data
scriptFolder=/n/test_avp/test-modules/script
locFolder=/n/test_avp/test-modules/locfiles
bakUpFldr=/n/test_avp/test-modules/backupCurrent
cd $inputfolder
filename=`date -u +"%Y%m%d%H%M"`.txt
echo $filename $(date -u)
mkdir $bakUpFldr/`date -u +"%Y%m%d"`
dirname=`date -u +"%Y%m%d"`
flnme=current_json_`date -u +"%Y%m%d%H%M%S"`.txt
echo DIRNameis $dirname Filenameis $flnme
cp $dataFolder/current_json.txt $bakUpFldr/`date -u +"%Y%m%d"`/current_json_$filename
cp $dataFolder/current_json.txt $filename
mkdir $inputfolder/`date -u +"%Y%m%d"`
echo Creating Directory $(date -u)
mv $filename $filename.inprocess
echo Created Inprocess file $(date -u)
Also, here’s the error log from Condor –
000 (424639.000.000) 09/09 16:08:18 Job submitted from host: <135.207.178.237:9582>
...
001 (424639.000.000) 09/09 16:08:35 Job executing on host: <135.207.179.68:9314>
...
007 (424639.000.000) 09/09 16:08:35 Shadow exception!
Error from slot1@marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
012 (424639.000.000) 09/09 16:08:35 Job was held.
Error from slot1@marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
Code 6 Subcode 8
...
Can anyone explain what's causing this error, and how to resolve it?
The testingNew.sh script runs fine on the Linux box if executed separately on a network machine.
Thx a lot!! - GR
The cause, in our case, was the shell script using DOS line endings instead of Unix ones.
The Linux kernel will happily try to feed the script not to /bin/sh (as you intend) but to /bin/sh followed by a trailing carriage return. (Do you see that trailing carriage return character? Neither do I, but the Linux kernel does.) That interpreter doesn't exist, so, as a last resort, it tries to execute the script as a binary executable, which fails with the given error.
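To check for and strip the carriage returns (a sketch; dos2unix may not be installed everywhere, so sed is shown as a fallback):

$ file testingNew.sh    # reports "... with CRLF line terminators" if affected
$ dos2unix testingNew.sh
$ sed -i 's/\r$//' testingNew.sh    # alternative if dos2unix is unavailable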
You need to specify input as:
input = /dev/null
Source: Submitting a job to Condor

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages, e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though top shows that myscript.sh is currently executing and therefore presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for ((i=0; i<10; i++)); do
    echo $i of 9
    sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the programs run by your subscripts use a large output buffer? I know when I've run Python scripts before, stdout has a fairly big buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent at the end of topscript.sh? Is it just that you don't see the output while the processes run?
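That kind of buffering is easy to demonstrate (a sketch, assuming python3 is available): stdio output is typically line-buffered at a terminal but block-buffered when writing to a pipe or file, so it arrives in bursts:

# at a terminal: one line per second; through the pipe: everything at once on exit
python3 -c '
import time
for i in range(3): print(i); time.sleep(1)
' | cat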
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms; it ships with the expect package and may require installation.
I hope this helps.
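If unbuffer isn't available, stdbuf from GNU coreutils is a possible alternative (it adjusts the stdio buffering of the dynamically linked programs it launches, and child processes inherit the setting):

stdbuf -oL topscript.sh > /var/log/topscript.log 2>&1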
