I am trying to run a script remotely using ssh, and it needs to use some parameters from the remote server. I kept all the parameters in the file temp/test/test.prm on the remote server. I am getting an error saying "invoke.sh: line 20: . /temp/test/test.prm: No such file or directory"
See below for a sample script. I have very basic knowledge of scripting, so please point me in the right direction.
#!/bin/sh
Param1=$1
Param2=$2
ssh usr@Server1
. ${Param1}/Client/scripts/Sample1.prm
cd $prmHome/$prmSetPath
ls | sed '/\.log$/d' > $prmHome/$prmScript/Filelist.txt
cd $prmHome/$prmScript
while read LINE
do
    ExportFilName=$LINE
    ./conversion.sh $prmHome/tmp_export/convert_$Param2.csv $prmHome/$prmSetPath/'$ExportFilName'
done < Filelist.txt
rm -rf $prmHome/$prmScript/Filelist.txt
exit 0
Content of Sample1.prm
prmHome=/iis/home
prmSetPath=/export/set
PrmScript=/Client/scripts
I have tried the same thing through the command line after connecting to the remote server using ssh, and it works, but when I try to do the same through a script (invoke.sh) it throws the "no such file or directory" error.
UPDATE
This is unclear and will not work!
ssh usr@Server1
. ${Param1}/Client/scripts/Sample1.prm
As mentioned, you should use the
ssh usr@Server1 ". ${Param1}/Client/scripts/Sample1.prm"
format first.
And secondly, what do you expect the following command to do?
. ${Param1}/Client/scripts/Sample1.prm
Notice that there is a space between . and the path; . is a synonym for source. So also check that Sample1.prm contains valid shell commands.
It looks like you are not running any shell command on the remote host, but only on your local host.
How exactly are you running the ssh command?
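As an aside, if the goal is to run the whole parameter-driven block remotely, a heredoc is one way to send several lines in a single ssh session. A minimal sketch reusing the paths from the question (with an unquoted EOF delimiter, ${Param1} is expanded locally before sending, while the \$ escapes keep prmHome and prmSetPath for remote expansion):
ssh usr@Server1 /bin/sh <<EOF
. ${Param1}/Client/scripts/Sample1.prm
cd \$prmHome/\$prmSetPath
ls
EOF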
The code snippet in your example is poorly formatted.
The general structure should look like this, e.g.
ssh user1@server1 date
or
ssh user1@server1 'df -H'
Please revise your invocation script, or fix the formatting in the question.
Update:
If you want to execute your code on the remote server via ssh, you can do the following:
Create a separate file my_script.sh for the code you want to run remotely, and paste in the following:
#!/bin/bash
function my_function() {
    prmHome=$1
    prmSetPath=$2
    prmScript=$3
    # $Param2 from the original script is undefined on the remote side,
    # so it is passed in explicitly as a fourth parameter
    param2=$4
    cd "$prmHome/$prmSetPath" || return 1
    ls | sed '/\.log$/d' > "$prmHome/$prmScript/Filelist.txt"
    cd "$prmHome/$prmScript" || return 1
    while read -r LINE
    do
        ExportFilName=$LINE
        ./conversion.sh "$prmHome/tmp_export/convert_$param2.csv" "$prmHome/$prmSetPath/$ExportFilName"
    done < Filelist.txt
    rm -f "$prmHome/$prmScript/Filelist.txt"
    return 0   # return, not exit, so the sourcing shell is not terminated
}
Then you can call your function remotely by sourcing the file on the remote server (double quotes are used here so that the local $Param2 expands before the command is sent):
ssh usr@Server1 ". my_script.sh; my_function '/iis/home' '/export/set' '/Client/scripts' '$Param2'"
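Note that my_script.sh must already exist on the remote server for the dot command to find it; a simple way (assuming the remote home directory is where it should land) is to copy it over first:
scp my_script.sh usr@Server1:~/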
That's it :)
This code will not work:
ssh usr@Server1
. ${Param1}/Client/scripts/Sample1.prm
Change it like this (the double quotes let your local shell expand ${Param1} before the command string is sent to the remote host):
ssh usr@Server1 ". ${Param1}/Client/scripts/Sample1.prm"
Going from a Linux host to another Linux host
Say I run:
ssh user@server ' . /etc/profile; /path/to/myScript.pl'
I always get errors involving scripts within that Perl script, like:
/path/to/otherscript.sh was not found No Such File or Directory
even though it's obviously there. Running this script locally on "server" works just fine. What's also confusing is that the output of
ssh user@server ' . /etc/profile; echo $PATH'
looks EXACTLY the same as echo $PATH when run on "server" locally.
Any ideas as to why this is not working? I do not have permission to modify the Perl script to always include the complete path to the files listed.
If it's useful, this is running with a shebang of #!/usr/bin/env perl - reading up on it now; would this alter my path?
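One hedged guess, since the question doesn't show the Perl script: a non-interactive ssh command starts in the remote user's home directory, so if myScript.pl refers to otherscript.sh by a relative path it will not be found even though it exists. A quick way to see the working directory the script actually runs in:
ssh user@server ' . /etc/profile; pwd; /path/to/myScript.pl'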
I want to execute a shell script on a remote machine, and I achieved this using the below command:
ssh user@remote_machine "bash -s" < /usr/test.sh
The shell script executed properly on the remote machine. Now I have made some changes in the script to get some values from a config file. The script contains the below lines:
#!/bin/bash
source /usr/property.config
echo "testName"
property.config:
testName=xxx
testPwd=yyy
Now if I run the shell script on the remote machine, I get a "no such file" error since /usr/property.config is not available on the remote machine.
How do I pass the config file along with the shell script to be executed on the remote machine?
The only way you can reference your config file and still run your script is to put the config file at the required path on the remote machine. There are two ways to do it:
If the config is almost always fixed and you don't need to change it, create the config once on the remote machine, put its absolute path in your script, and make sure the user running the script has permission to access it.
If you need to ship your config file every time you run the script, simply scp the file over before you call the script:
scp property.config user@remote_machine:/usr/property.config
ssh user@remote_machine "bash -s" < /usr/test.sh
Edit
As requested, if you want to do it all in a single line, this is how it can be done:
property.config
testName=xxx
testPwd=yyy
test.sh
#!/bin/bash
# do not source /usr/property.config here; it is prepended to the stream below
echo "$testName"
Now you can run your command as John has suggested:
ssh user@remote_machine "bash -s" < <(cat /usr/property.config /usr/test.sh)
Try this:
ssh user@remote_machine "bash -s" < <(cat /usr/property.config /usr/test.sh)
Then your script should not source the config internally.
Second option, if all you need to pass are environment variables:
There are a few techniques described here: https://superuser.com/questions/48783/how-can-i-pass-an-environment-variable-through-an-ssh-command
My favorite one is perhaps the simplest:
ssh user@remote_machine VAR1=val1 VAR2=val2 bash -s < /usr/test.sh
This of course means you'll need to build up the environment variable assignments from your local config file, but hopefully that's straightforward.
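For instance, a minimal sketch of building the assignments from the config file (assuming property.config contains only simple KEY=value lines, with no spaces or quoting in the values):
# collect KEY=value lines into one space-separated string
ENV_ASSIGNMENTS=$(grep -E '^[A-Za-z_][A-Za-z_0-9]*=' /usr/property.config | tr '\n' ' ')
# the remote shell parses the assignments before starting bash -s
ssh user@remote_machine "$ENV_ASSIGNMENTS bash -s" < /usr/test.sh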
I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
echo "$filename found."
##process files
else
echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (let's call it script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought that the difference may be that script2 is piped into bash whereas my original script is being run with zsh... but considering that running the same command verbatim on the server, with its default zsh shell, returns correctly... I'm stumped.
:( any help would be greatly appreciated!
I guess that when executing your approach 1, it is the local shell that expands "$(<serverStatusReport.sh)", not the remote one. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also present at that path on the local host?
What I do not understand is why you get this message instead of an error message.
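A quick way to confirm which shell ends up interpreting the command string (zsh sets ZSH_VERSION, bash sets BASH_VERSION):
ssh -t <user>@<host> 'echo "zsh: $ZSH_VERSION / bash: $BASH_VERSION"'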
Below is my current .bat content. I run it from the Windows cmd prompt. It connects to the remote Linux server and prompts me for a password, but after I enter the password and log in as remotehost, the Linux server won't run my ls command. Please help.
#echo off
ssh remotehost@10.1.1.10
ls
You really should do man ssh, as this is explained there (and you could also do an internet search to get an answer).
But, to answer your question anyway: you should put all the commands you want to run on the remote machine on the same line as the actual ssh command. For example, to run a directory listing and grep all files containing "foo", do: ssh <user>@<host> 'ls|grep foo'.
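Applied to the .bat file from the question, that would look like this:
#echo off
ssh remotehost@10.1.1.10 "ls"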
I hinted in my comment to @Sami Laine that it is possible to have the code in a batch file. This is what it would look like:
#echo off
setlocal
:: Run the end of this file in remote computer
more +8 %0 | plink user@remote.compu.ter "tr -d '\r'| bash"
endlocal
exit /b 0
:: remote bash stuff to be bootstrapped
pwd
ls -h
I'm using plink because that's what I have installed, but it should work with most flavors of ssh too. It also works with ksh and zsh, and probably with tcsh, csh, etc. This can sometimes be useful, and the same technique can be used for a lot of things. Be careful with the +8 offset value: more +8 %0 skips the leading lines of the batch file itself, so it has to point at the right line.
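For reference, the same bootstrap line with a stock ssh client instead of plink would look like this (an untested sketch based on the claim above):
more +8 %0 | ssh user@remote.compu.ter "tr -d '\r'| bash"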
I'm trying to do something like this: I need to take backups from 4 blades, and all of them should be stored under /home/backup/esa, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But I see that only the data from the node on which I execute the command is being copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do
    echo "Creating backup for node ${node}";
    ssh $node source /etc/profile.d/bkUp.sh;
    asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up into a directory on the remote host).
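A quick sanity check on a given node (df -T reports the filesystem type, which should show nfs for a shared mount):
ssh "$node" 'df -T /home/backup/esa'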
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
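A minimal illustration of why (the space in the node name is purely hypothetical):
node="pl 1"
ssh $node hostname     # unquoted: ssh treats "pl" as the host and "1" as the command
ssh "$node" hostname   # quoted: the whole value is passed as one host argument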
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed, and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. The esaBackup command is then run on the local machine, and it won't see anything that bkUp.sh set up.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.
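Putting that back into the loop from the question (keeping the asBackup name used in the original script):
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do
    echo "Creating backup for node ${node}"
    ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
done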