How to run a shell script located on a Linux server from a Windows environment?

I am trying to run a shell script located on a Linux server from Windows. The shell script does two things:
Run a sed command to replace text in a .sql file in the same directory.
Run the .sql file with sqlplus.
The shell script:
#!/bin/sh
arg1=$1
arg2=$2
arg3=$(echo $arg1 | tr '[:lower:]' '[:upper:]')
arg4=$(echo $arg2 | tr '[:lower:]' '[:upper:]')
echo $arg1
echo $arg2
echo $arg3
echo $arg4
sed -i "s/$arg3/$arg4/g" sequence.$arg1.sql
sqlplus $arg2/$arg2@MYDB <<EOF
@sequence.$arg1.sql
exit;
EOF
(My database is located on the same Linux server.)
1) The script runs correctly when I log in to the server via MobaXterm:
1. Connect to the server with my userID.
2. Set my_env.
3. cd to the shell script's directory.
4. Run the script with ./myscript.sh and its arguments.
2) The same shell script runs successfully via a .cmd file, manually:
I create a Windows script test.cmd on my Windows PC. In the .cmd file I have the line:
plink.exe -ssh userID@Server
After the console window pops up, I repeat steps 2 to 4 above and the script runs successfully.
What I am failing to do is automate the whole process.
Here is the line in my .cmd file which I attempted:
plink.exe -ssh userID@Server /myfilepath/myscript.sh %arg1% %arg2%
I can see the arguments being passed correctly via the multiple echo lines in the shell script. However, the script fails to locate the .sql file.
Error log:
/mypath/myscript.sh[1]: !/bin/sh^M not found [No such file or directory]
myarg1value
myarg2value
:No such file or directory[myarg1value]
/mypath/myscript.sh[12]: sqlplus: not found [No such file or directory]
I also tried the following, but unfortunately with the same result:
plink.exe -ssh userID@Server -m command.txt
where command.txt contains:
. my_env
cd /filepath/
./myscript.sh %arg_with_actual_value%
I do not know why it is not working, especially when approach 2) works and the script is relatively simple.
Am I assuming something incorrectly about plink (paths, variables, etc.)?
Is Cygwin the only way out? I would rather not rely on yet another tool, as I have been using plink.
EDIT: While the line
sed -i "s/$arg3/$arg4/g" sequence.$arg1.sql
fails when run from the .sh, I can run it from the .cmd file itself via:
plink.exe -ssh userID@Server sed -i "s/%arg3%/%arg4%/g" /myfilepath/sequence.%arg1%.sql
Hence I suspect the problem is that the .sh file does not have the required environment to run (environment variables, PATH, etc.).

This is not a full solution, but it partially fixes the issue, thanks to Martin Prikryl's and Mofi's input:
In command.txt, the following need to be set:
ORACLE_SID
ORACLE_HOME
PATH
Once these are set, sqlplus and sed work normally. However, passing values from the .cmd through plink to the Linux shell script still has an issue: each variable arrives with its assigned value plus some unreadable characters. In this case,
sqlplus $arg2/$arg2@MYDB
fails to log in because arg2 contains extra characters, and
@sequence.$arg1.sql
also fails, as it tries to open two files, one called sequence.myvalue and another called "%s". I suspect the variable carries some unreadable newline character, most likely the Windows carriage return (the ^M visible in the error log above).
EDIT: Fixed. We can apply the same treatment as with sed and run sqlplus directly from plink, instead of passing values into a .sh script on Linux:
plink.exe -ssh userID@Server sqlplus %arg2%/%arg2%@MYDB @/myfilepath/sequence.%arg1%.sql
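For completeness, here is roughly what the final test.cmd ends up looking like, with both remote commands issued directly through plink. The server name and argument values are placeholders, and arg3/arg4 (the uppercased forms of arg1/arg2) are assumed to be set earlier in the .cmd; none of this is spelled out in the question:
@echo off
rem Both commands run remotely; no .sh file or command.txt is involved.
plink.exe -ssh userID@Server sed -i "s/%arg3%/%arg4%/g" /myfilepath/sequence.%arg1%.sql
plink.exe -ssh userID@Server sqlplus %arg2%/%arg2%@MYDB @/myfilepath/sequence.%arg1%.sql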

Related

How to run a shell script without typing bash (bash command error: mapfile not found)

I am using mapfile -t to read the contents of a text file and assign them to an array.
In Jenkins it works fine: the console output shows the steps and the commands being executed. When I try to run it in a local console, for example PuTTY, it prints:
mapfile: not found [No such file or directory]
I know that mapfile is a bash builtin, and I am able to run the program if I type bash first and then execute the script. Is there any way I can avoid typing bash in order to run the program? I included #!/bin/bash -x at the top of the script, but it still displays the same error. The reason I don't want to type bash before executing the script is that, run that way, it does not show the errors when the script dies, does not display the error handling in the script, and does not display output as the commands run.
Open a new file called script in a text editor and type your program in:
#!/bin/bash -x
set -e
item=$1
if [ "$item" = '-database' ]; then
    mapfile -t DATA < "$DATA_FILES"
fi
Save the file, make it executable with chmod u+x script, and then run
./script "-database"
to run it.
That's it.
However, that script will print nothing as written; mapfile only assigns the file's lines into the DATA array.
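To actually see the captured lines, you could add a print at the end of the script; printf over the array is standard bash, not something from the original answer:
printf '%s\n' "${DATA[@]}"
This prints each captured line on its own line and confirms that mapfile populated DATA as expected.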

Errant error output behavior with perl execute bash scripts on a remote machine

I have this line in my perl script that sshes into a remote machine and executes a bash script:
system("ssh -t remotemachine /dir/dir/bashscript");
In my bash script I redirect standard error (file descriptor 2), as in some commands 2>> error.txt, to capture any errors I may encounter, and I want this error.txt to be written in the same folder where the bash script is stored.
My problem is that when I ssh into the machine and run the program from the terminal, the errors are captured and written to error.txt, but if I run the program from my perl script, the program runs and the errors are not captured.
Any suggestions?
Use the full path for the capture file:
some commands 2>> /dir/dir/error.txt
Otherwise the file will be created relative to the current working directory, which for a command run over ssh is the user's $HOME.
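If the goal is for the script to always write next to itself no matter where it is launched from, another common pattern (an alternative sketch, not from the original answer; some_command is a placeholder) is to cd to the script's own directory first:
#!/bin/bash
# Change to the directory containing this script before doing anything else.
cd "$(dirname "$0")" || exit 1
some_command 2>> error.txt   # relative paths now resolve next to the script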

Sed not working right when executing shell script (.sh file) via a batch file using plink but works perfectly when executed manually using ssh (putty)

I am having an issue getting sed to work correctly. I have a shell script (.sh file) on a remote server running CentOS Linux, which I want to execute using a batch file and plink. When I execute the shell script using PuTTY or any other ssh client (e.g. ./backup.sh) it works fine, but when I execute it via the batch file, sed throws an error message.
Here is how sed is used in the relevant part of the shell script, named backup.sh:
#!/bin/bash
date=$(date +%s)
sed -i -e '1i'$date'\' backup/backups.txt
Here is my batch file
@echo off
cd C:\Program Files (x86)\Putty
plink user@my.server.com -pw passwd /the/directory/backup.sh
Here is the error message sed throws:
sed: can't read backup/backups.txt: No such file or directory
It looks like you are executing the script by hand as ./backup.sh, whereas via plink/ssh it is run as /the/directory/backup.sh. That is fine in itself, but it suggests the two are not run from the same working directory, so the relative path backup/backups.txt does not refer to the same place.
You can try changing to the correct directory as the first step in the script (via cd), or referring to the absolute, full pathname of backups.txt instead.
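A minimal version of the first suggestion, assuming backups.txt lives under /the/directory (inferred from the plink command, not stated outright):
#!/bin/bash
# Make relative paths resolve from the script's directory, however it is invoked.
cd /the/directory || exit 1
date=$(date +%s)
sed -i -e '1i'$date'\' backup/backups.txt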

Command NOT found when called from inside bash script

I have an application named puppet installed on my Linux box at /usr/test/bin/puppet.
This is how .bash_profile looks:
export PATH=/usr/test/bin
If I run the command puppet apply from the console it works fine, but when I call puppet from inside a bash script, it says command not found:
#!/bin/bash
puppet apply x.pp
Any ideas on what is wrong?
.bash_profile is loaded only if bash is invoked as a login shell (bash -l or from a real tty); at least in Debian-based distributions, bash in a virtual tty (for example under xterm, gnome-terminal, etc.) is invoked as an interactive shell.
Interactive shells load their configuration from ~/.bashrc.
bash manpage:
~/.bash_profile
The personal initialization file, executed for login shells
~/.bashrc
The individual per-interactive-shell startup file
Shell scripts don't load any of these.
You can check which files are opened by any program with strace:
strace ./s.sh 2>&1 | grep -e stat -e open
Possible solutions:
You can export the variable at the beginning of every script:
#!/bin/bash
export PATH=$PATH:...
Or you can have another file with the desired variables and source it from any script that needs them:
/etc/special_vars.sh:
export PATH=$PATH:...
script:
#!/bin/bash
. /etc/special_vars.sh
puppet ...
Configure the PATH in ~/.bashrc, ~/.bash_profile, and ~/.profile for the user running the script (sub-processes inherit the environment variables), to have some guarantee that the user can run the script from different environments and shells (some Bourne-compatible shells other than bash do load ~/.profile).
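A concrete instance of the first solution above, filling in the "..." with the path from the question:
#!/bin/bash
# Extend PATH for this script only, then call puppet as usual.
export PATH=$PATH:/usr/test/bin
puppet apply x.pp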
Maybe the export of PATH is wrong?
export PATH=$PATH:/usr/test/bin
You could try using an alias, like so
in your .bash_profile:
alias puppet='bash puppet.fileextension'
you can also do
alias puppet='bash path/to/puppet.fileextension'
which will let you run the script from anywhere in Terminal.
EDIT:
OP has stated in the comments that there will be two different systems running, and he asked how to check the file path to the bash file.
If you do
#!/bin/bash
runPuppet(){
    # Run whichever copy of the puppet script exists on this system.
    if [ -e path/to/system1/puppet.fileextension ]
    then
        bash path/to/system1/puppet.fileextension "$1" "$2"
    elif [ -e path/to/system2/puppet.fileextension ]
    then
        bash path/to/system2/puppet.fileextension "$1" "$2"
    fi
}
runPuppet apply x.pp
and change the runPuppet input to whatever you'd like.
To clarify/explain:
-e checks whether the file exists.
$1 and $2 are the first two input parameters, respectively.

Bash Shell script in unix to rsh to another machine and perform command

I'm trying to write a script that rsh's over to a unix machine and executes certain commands on certain files. It should:
1. rsh over to the machine (in this case a machine called uk01ff200)
2. search for a directory on the machine
3. within that directory, search for files starting with core
4. execute a command on those files (if they exist), which then creates new files
SCRIPT SO FAR:
#!/bin/bash
#
MACHINE=uk01ff200
DIRECTORY=/var/core
rsh $MACHINE "cd /var/core"
for file in `ls -1 core.*`
do stack_dump $file
done
When I do this manually on the command line it works: if I rsh over to the machine, cd to the directory, and type in the for loop, it runs (so I know the for-loop syntax is correct). So I don't know where I'm going wrong with my script.
The problem is that rsh $MACHINE "cd /var/core" runs the cd in its own remote session, after which the for loop runs locally. What I would do instead, using a here-document so the whole block executes on the remote machine:
#!/bin/bash
#
MACHINE=uk01ff200
DIRECTORY=/var/core
rsh $MACHINE bash <<'EOF'
cd /var/core
for file in core.*
do stack_dump "$file"
done
EOF
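One detail worth calling out in this answer: the quoted delimiter <<'EOF' stops the local shell from expanding core.* and $file, so both are evaluated by bash on the remote machine, which is where the files actually live.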
