shell script can't see files in remote directory - linux

I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
    echo "$filename found."
    ## process files
else
    echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (let's call this script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought the difference may be that script2 is piped into bash whereas my original script is being run with zsh... but considering that the same command, run verbatim on the server under its default zsh shell, works correctly... I'm stumped.
:( any help would be greatly appreciated!

I suspect that when you execute your approach 1, it is the local shell that expands "$(<serverStatusReport.sh)", not the remote one. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also in the PATH on the local host?
What I do not understand is why you get this message instead of an error message.
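A quick way to see which side performs the expansion (user and host are the question's placeholders; a minimal sketch assuming nothing beyond standard ssh):
# $(hostname) is expanded by the LOCAL shell before ssh ever connects,
# so the remote side ends up echoing the local machine's name
ssh -t <user>@<host> "echo $(hostname)"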

Related

Bash script : Storing command with spaces and arguments in variable and then executing

Been banging my head against the wall for a couple hours so time to call in the experts. Writing a small script to run some reports on one of my office's systems and I was asked to take care of a Bash script for it. The program called "auto_rep" takes various options such as "-t" to run one task (to generate one type of report) and a "-1" to exit after one task. The options are separated by spaces when running the command from command-line. The command works directly from command line but I cannot get it to work from a script...
Below is the snippet of code causing me issues:
cmd=$(auto_rep -t createfin1report -1)
echo "running ${cmd} command..."
echo
eval $cmd
The problem is when I run the script, only the "auto_rep" part of the command (from $cmd variable) is run; basically running the program without any options. And it creates tons of reports without the "-t createfin1report -1" part of the command (yikes!). Glad I only tried it on our test system.
Anyone have any tips to help me out? Is my approach way off? BTW - I had tried just storing the command in a non-array (cmd="auto_rep -t createfin1report -1") and that was causing me other headaches, with "command not found" errors :)...
Thanks in advance!
Save the output to an array, then execute the array:
declare -a cmd
cmd=( $(auto_rep -t createfin1report -1) )
echo Executing: "${cmd[@]}"
"${cmd[@]}"
Please make sure the output is a valid command, and that arguments containing spaces are correctly double-quoted.
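If the intent was to store the command itself rather than its output, the usual bash idiom is to put the words of the command into the array directly; a minimal sketch (command name and options taken from the question):
# each array element stays a separate argument, so no eval is needed
cmd=(auto_rep -t createfin1report -1)
echo "running: ${cmd[*]} ..."
"${cmd[@]}"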

How to handle errors in shell script

I am writing a shell script to install my application. I have a number of commands in my script, such as copy, unzip, move, if, and so on. I want to know about the error if any of the commands fails. Also I don't want to send exit codes other than zero.
Order of script installation (root-file.sh):
./script-to-install-mongodb
./script-to-install-jdk8
./script-to-install-myapplicaiton
Sample script file:
cp sourceDir destinationDir
unzip filename
if [ true ]; then
    # success code
fi
I want to know, via a variable or a message, if any command in the scripts called from root-file.sh failed.
I don't want to write code to check the status of every command. Sometimes cp or mv may fail due to an invalid directory. At the end of script execution, I want to know whether all commands executed successfully or whether there was an error. Is there a way to do it?
Note: I am using plain sh, not bash.
The status of your last command is stored in the special variable $?. You can save it into a named variable with export var=$?:
unzip filename
export unzipStatus=$?
./script1.sh
export script1Status=$?
if [ "$unzipStatus" -eq 0 ] && [ "$script1Status" -eq 0 ]
then
    echo "Everything successful!"
else
    echo "unsuccessful"
fi
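To avoid a separate check after every command, a minimal sketch in POSIX sh (command names taken from the question) is to fold each exit status into one overall flag:
overall=0
cp sourceDir destinationDir || overall=1
unzip filename || overall=1
./script1.sh || overall=1
# report once, at the end of the run
if [ "$overall" -eq 0 ]; then
    echo "Everything successful!"
else
    echo "unsuccessful"
fi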
Well, as you are using a shell script for this, there's not much external tooling available, so the default $? should be of help. You may want to check the return value between steps. The code will look like this:
./script_1
retval=$?
if [ "$retval" -eq 0 ]; then
    echo "script_1 successfully executed ..."
else
    echo "script_1 failed with error exit code $retval!"
    exit "$retval"
fi
./script_2
Lemme know if this added any value to your scenario.
Exception handling in Linux shell scripting can be done as follows:
command || fallback_command
If you have multiple commands then you can do:
(command_one && command_two) || fallback_command
Here fallback_command can be an echo or log details in a file etc.
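For example, a minimal sketch of a fallback that records the failure (the log file name is illustrative):
# redirection applies only to the fallback echo
(cp sourceDir destinationDir && unzip filename) || echo "install step failed at $(date)" >> install.log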
I don't know if you have tried putting set -x on top of your script to see detailed execution.
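For a one-off run you can get the same trace without editing the script, e.g.:
sh -x root-file.sh 2> trace.txt    # the trace goes to stderr; trace.txt is illustrative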
Want to give my 2 cents here. Run your script like this:
sh root-file.sh 2> errors.txt
Then grep for the error patterns in errors.txt:
grep -e "root-file.sh: line" -e "script-to-install-mongodb.sh: line" -e "script-to-install-jdk8.sh: line" -e "script-to-install-myapplicaiton.sh: line" errors.txt
The output of the above grep command will show the commands which had errors, along with their line numbers. Let's say the output is:
test.sh: line 8: file3: Permission denied
You can then go straight to that line number (here it is 8) in vi and see which command had the issue.
Grabbing the offending line can also be automated, here for line 8:
head -8 test.sh | tail -1
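An equivalent one-liner, for reference:
sed -n '8p' test.sh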
hope it helps.

Using SSH to execute a command - $PATH is fine but running perl scripts fails?

Going from a Linux host to another Linux host
Say I run:
ssh user@server ' . /etc/profile; /path/to/myScript.pl'
I always get errors involving scripts within that perl script like...
/path/to/otherscript.sh was not found No Such File or Directory
...even though it's obviously there. Running this script locally on "server" works just fine. What's also confusing is that the output of...
ssh user@server ' . /etc/profile; echo $PATH'
...looks EXACTLY the same as echo $PATH when running on "server" locally.
Any ideas as to why this is not working? I do not have permissions to modify the perl script to always include the complete path to the files listed.
If it's useful, this is running with a shebang of #!/usr/bin/env perl - reading up on it now, would this alter my path?
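One hedged guess worth testing: if myScript.pl invokes otherscript.sh by a relative path, the failure depends on the working directory rather than on $PATH. A quick check, using the placeholder paths from the question:
# run the perl script from its own directory, as effectively happens
# when you test it locally on "server"
ssh user@server '. /etc/profile; cd /path/to && ./myScript.pl'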

Can PSFTP execute loops?

I've searched a lot on the internet but haven't been able to find any useful info on this yet. Does PSFTP not allow you to run constructs like 'IF' and 'WHILE' at all?
If it does, please let me know the syntax; I'm tired of banging my head against it. Annoyingly, PuTTY allows these commands but psftp doesn't seem to, even though both are from the same family. I really hope there is a solution to this!
PSFTP isn't a language. It's just an SFTP client. SFTP itself is just a protocol for moving files between computers. If you have SFTP set up on the remote computer then it suggests that you have SSH running (since SFTP generally comes bundled with the SSH server install).
You can do a test in a bash shell script, for instance, to see if the file exists on the remote server, then execute your psftp command based on the result. Something like:
#!/bin/bash
# test if file exists on remote system
fileExists=$(ssh user@yourothercomputer "test -f /tmp/foo && echo 'true' || echo 'false'")
if $fileExists; then
psftp <whatever>
fi
You can stick that whole mess in a loop or whatevs. What's happening here is that we are sending a command test -f /tmp/foo && echo 'true' || echo 'false' to the remote computer to execute. The stdout of the command is returned and stored in the variable fileExists. Then we just test it.
If you are on Windows you could convert this to a batch script and use plink.exe to send the command, kind of like they do here. Or maybe just plop Cygwin on your computer with an SSH and SFTP client and use what's above.
The big take-away here is that you will need a separate scripting environment to do the loop and run psftp based on a test.
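A minimal sketch of such a loop (the host, remote paths, and psftp batch file name are all illustrative):
for f in /tmp/foo /tmp/bar; do
    # ssh exits with the status of the remote test command
    if ssh user@yourothercomputer "test -f $f"; then
        psftp user@yourothercomputer -b get_files.txt    # -b runs a psftp batch script
    fi
done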

Linux/Unix Scripting - strangest behaviour ever in a few lines - variable set but empty

I can tell you this is the craziest thing I have seen in a long time.
I have this (part of) sh script running on CentOS 5.4:
# Check GOLD_DIR
echo $GOLD_DIR"<--"
#export GOLD_DIR=/share/apps/GOLD_Suite/GOLD    <-- uncommenting this line works!!
if [ "X$GOLD_DIR" = "X" ] ; then
    echo "ERROR: GOLD_DIR is (probably) not set on host ${HostName}" >> ${3}
    exit 1
fi
And this gives the following output:
/share/apps/GOLD_Suite/GOLD<--
Waiting for 5 seconds ..... Testing output
The test script did spawn a job (i.e. PVM ran OK),
but errors were detected in the test script output
on the host machine: Below is the output
ERROR: GOLD_DIR is (probably) not set on host xxx.yyy.local
As you can see, the GOLD_DIR variable is set (the script finds it, as shown by the output with "<--" appended)! If I uncomment the export of the GOLD_DIR variable in the script code (first snippet), everything works.
EDIT: GOLD_DIR is exported in /etc/profile (using export GOLD_DIR=/share/apps/GOLD_Suite/GOLD)
Any ideas why?
Note1: I don't know if this is important but this is a spawn script on PVM.
Note2: The script is written for sh (#!/bin/sh) but I am using bash...
Edit3: I GOT IT TO WORK BUT I DON'T KNOW WHY! - What I did was change the hostname (with sudo hostname abc) to the name of the machine I ssh into (e.g. abc). Before, PVM was listing the full name of the machine, abc.mycompany.local. Note that both abc.mycompany.local and abc are the same machine.
So the var is set. If you just do export GOLD_DIR instead of the commented line (without setting the value), will it work?
Also, is this an isolated case? Is the shell actually bash on that CentOS box? Try using [[ ]] to check what's going wrong.
I believe that it might have something to do with the non-interactive nature of the job. Jobs run from shell scripts won't necessarily source /etc/profile, so they might not be picking up your ${GOLD_DIR} variable. (Unless you've explicitly changed its behavior, bash will only source /etc/profile for a login shell.)
Try adding:
. /etc/profile
in the beginning of your script just to see if that changes anything. If not, when you echo the error statement, add in ${GOLD_DIR} somewhere to see if the variable is still available in that statement.
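A quick way to see the difference, run from an environment that has not already inherited the variable (e.g. the PVM-spawned job; the output values are illustrative):
# a non-login, non-interactive sh does not read /etc/profile
sh -c 'echo "${GOLD_DIR:-unset}"'            # likely prints "unset"
sh -c '. /etc/profile; echo "$GOLD_DIR"'     # prints the exported value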
