Is there a way to execute an if statement if I see a specific output? For example, when the console says "bad interpreter permission denied" I want to execute a command like "dos2unix file_name"?
So the logic would be like the following:
if (output is "bad interpreter permission denied")
{
    send "dos2unix file_name"
}
This is an expect script.
Edit:
Could I do something like this in an expect script?
if [ $(grep -cim1 '^M$' lruload.sh) -eq 1 ]; then
    send "dos2unix filename"
fi
When you say "execute the command", I assume you mean executing it in the shell. You can use the exec command for this purpose.
I'm not sure what you are interacting with (telnet, ftp, bash, or something else). In any case, you will be sending a command and expecting a prompt.
send "command 1"
expect "prompt"
send "command 2"
expect {
    timeout { puts "timeout happened" }
    "bad interpreter permission denied" {
        set result [exec dos2unix <filename>]
    }
}
# if you need to interact further with the application, continue using 'send' and 'expect'
You have the result variable to store the dos2unix output.
How about this logic/pseudo-code?
cmdText=$(myCmd param1 param2 2>&1)
if [ "$cmdText" = "bad interpreter permission denied" ]; then
    dos2unix file_name
fi
The permission denied text probably went to stderr, not stdout, so the redirect 2>&1 lumps both of them together, making the test simple.
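One caveat: the actual shell error usually contains more than that exact phrase (it normally includes the script path and the interpreter), so a strict equality test may never match. A hedged variation of the same idea, matching on the phrase instead of comparing the whole string:
cmdText=$(myCmd param1 param2 2>&1)
case "$cmdText" in
    *"bad interpreter"*)
        dos2unix file_name
        ;;
esac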
Related
I am writing a shell script to install my application. I have a number of commands in my script, such as copy, unzip, move, if and so on. I want to know about the error if any of the commands fails. Also, I don't want to return exit codes other than zero.
Order of script installation(root-file.sh):-
./script-to-install-mongodb
./script-to-install-jdk8
./script-to-install-myapplicaiton
Sample script file:-
cp sourceDir destinationDir
unzip filename
if [ true]
// success code
if
I want to know, via a variable or a message, if any command in my scripts failed in root-file.sh.
I don't want to write code to check the status of every command. Sometimes the cp or mv command may fail due to an invalid directory. At the end of the script execution, I want to know whether all commands executed successfully or whether there was an error.
Is there a way to do it?
Note: I am using plain shell (sh), not bash.
# The status of your last command is stored in the special variable $?; you can save it into a variable of your own with export var=$?
unzip filename
export unzipStatus=$?
./script1.sh
export script1Status=$?
if [ "$unzipStatus" -eq 0 ] && [ "$script1Status" -eq 0 ]
then
    echo "Everything successful!"
else
    echo "unsuccessful"
fi
Well, as you are using a shell script to achieve this, there's not much external tooling, so the default $? should be of help. You may want to check the return value between the scripts. The code will look like this:
./script_1
retval=$?
if [ "$retval" -eq 0 ]; then
    echo "script_1 successfully executed ..."
else
    echo "script_1 failed with error exit code!"
    exit 1
fi
./script_2
Lemme know if this added any value to your scenario.
Exception handling in Linux shell scripting can be done as follows:
command || fallback_command
If you have multiple commands then you can do
(command_one && command_two) || fallback_command
Here fallback_command can be an echo, logging details to a file, etc.
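For example, a rough sketch using the copy and unzip steps from the question (the install-errors.log name is just an assumption for illustration):
# log a message if either step fails, without aborting the rest of the script
(cp -r sourceDir destinationDir && unzip filename) || echo "copy/unzip step failed" >> install-errors.log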
I don't know if you have tried putting set -x at the top of your script to see detailed execution.
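Roughly, that would look like this in root-file.sh; with set -x the shell prints each command (with its expanded arguments) before running it, so it is easy to see which one failed:
#!/bin/sh
set -x
./script-to-install-mongodb
./script-to-install-jdk8
./script-to-install-myapplicaiton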
I want to give my 2 cents here. Run your script like this:
sh root-file.sh 2> errors.txt
Then grep for error patterns in errors.txt:
grep -e "root-file.sh: line" -e "script-to-install-mongodb.sh: line" -e "script-to-install-jdk8.sh: line" -e "script-to-install-myapplicaiton.sh: line" errors.txt
The output of the above grep command will display the commands that had errors, along with their line numbers. Let's say the output is:
test.sh: line 8: file3: Permission denied
You can then go and check the line number that had the issue (here it is 8), for example by jumping to that line in vi.
This can also be automated: extract the specific line from your shell script, i.e. the line that had the issue (here, line 8):
head -8 test1.sh | tail -1
hope it helps.
When I try to send a message to all terminals with echo "some message" > /dev/pts/* it works fine. But when I do the same thing in a bash script, this error occurs: myscript.sh: line 2: /dev/pts/*: Permission denied. This happens even when I give myscript.sh the highest privileges. What can I do to make it work?
read msg
echo $msg > /dev/pts/*
Did you have a look at the wall command?
See http://linux.die.net/man/1/wall
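A minimal sketch of that suggestion, rewriting the original two-line script so the message goes through wall, which broadcasts its standard input to every logged-in terminal:
#!/bin/bash
# read the message and broadcast it with wall instead of writing to /dev/pts/* directly
read msg
echo "$msg" | wall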
You need privileges to do this, but a workaround is described here:
How do I broadcast messages to all bash terminal in python using wall command with stdin?
Usually on Unix systems you can suppress command output by redirecting STDOUT and/or STDERR to a file or /dev/null. But what if you need to pass content to a piped command via STDIN in a bash script?
The example below should make clear what is meant. It's just an example, though - I'm not looking for a solution to this specific command, but to that kind of situation in general. Sadly there are numerous situations where you want to suppress output in a script but need to pass content via STDIN, because the command has no switch to submit the information another way.
My "problem" is that I wrote a function to execute commands with proper error handling and in which I would like to redirect all output produced by the executed commands to a log file.
Example problem:
[18:25:35] [V] root#vbox:~# echo 'test' |read -p 'Test Output' TMP &>/dev/null
[18:25:36] [V] root#vbox:~# echo $TMP
[18:25:36] [V] root#vbox:~#
Any ideas on how to solve my problem?
What user000001 is saying is that all commands in a bash pipeline are executed in subshells. So, when the subshell handling the read command exits, the $TMP variable disappears too. You have to account for this and either:
avoid subshells (for example with a here-string or process substitution; see the sketch after this list)
do all your work with variables in the same subshell
echo test | { read value; echo subshell $value; }; echo parent $value
use a different shell
$ ksh -c 'echo test | { read value; echo subshell $value; }; echo parent $value'
subshell test
parent test
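For completeness, a small bash-only sketch of the first option, keeping read in the current shell with a here-string or process substitution:
# here-string: read runs in the current shell, so TMP survives
read TMP <<< 'test'
echo "$TMP"        # prints: test
# process substitution works the same way for command output
read TMP < <(echo 'test')
echo "$TMP"        # prints: test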
I have a shell script which writes all output to a logfile and the terminal; this part works fine. But when I execute the script, a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee, see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"
I'm having a very strange error.
I run a Perl script which executes Linux commands. They are executed like this:
my $err = `cp -r $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1`;
myLog("$err");
And $err is empty, which means the command didn't return an error. (Right?)
I tried to execute the Linux command with exec "" or system(), but with no success.
I tried to change the path. Same.
Also, I tried to run only the cp command in a new Perl script. It works there, but not in my full Perl script.
In this Perl script, some commands are working and some are not.
The script was working yesterday, but not anymore this morning. No changes were made in the meantime.
I tried a lot of things, I would be glad if anybody has an idea.
EDIT:
The server had a lot of unterminated processes. Cleaning those up solved the problem.
So the problem is related to another application, but I'll improve the logging thanks to your comments.
Small problem: you are NOT capturing STDERR, so you won't see the error (you are also not checking the $? return code).
You should do
my $err = `cp -r $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1 2>&1`;
to redirect STDERR to STDOUT, or use one of the modules for running commands.
Large problem:
You should not run system commands from Perl for which Perl-native modules exist. In this case: File::Copy::Recursive module.
You can also roll your own directory copied from File::Copy.
Are you using backticks? Add -v to the cp command to see something on STDOUT, redirect STDERR to STDOUT, and check the command's exit code rather than the error message on STDERR.
What about printing out the command output right after the execution?
my $err = `cp -rv $HTML /tssobe/www/tstweb/$subpath/$HTMLDIR1 2>&1`;
my $exitcode = $? >> 8;
warn "Output: $err\nexitcode: $exitcode\n";
It would be better to use qx. Check this: http://www.perlmonks.org/?node_id=454715
You may also want to quote arguments that may contain shell special characters, including spaces. Since the shell does word splitting on the string given to it, if $HTML contains a space, cp would get more arguments than you expect. Perl has a very simple mechanism for that: \Q...\E. Here is how you do it:
my $err = `cp -r \Q$HTML\E \Q/tssobe/www/tstweb/$subpath/$HTMLDIR1\E 2>&1`;
Anything except alphanumeric characters will be backslash-escaped before being passed to the shell, so cp receives exactly two arguments regardless of what is in those variables.