How to run multiple scripts on a remote machine - Linux

I have to connect remotely to a gateway (running on Linux), which contains a couple of executable files (signingModule.sh and taxModule.sh).
Now I want to write one script on my desktop which will connect to that gateway and run signingModule.sh and taxModule.sh in two different terminals.
I have written the code below:
ssh root@10.138.77.150   # to connect to gateway
sleep 5
cd /opt/swfiscal/signingModule   # path of both modules
./signingModule   # executable
Through this code I am able to connect to my gateway, but after connecting nothing happens.
Second attempt:
source configPath                                    # file where I have given the path of both modules
cd $FCM_SCRIPTS                                      # variable in which I have stored the path of the modules
ssh root@10.138.77.150 'sh -' < startSigningModule   # to connect and run one module
As the output of this I am getting:
-source: configPath: file not found
Please help me work this out. Thanks in advance.
Notes:
I can copy my files onto that gateway if required.
Gnome-Terminal and other alternatives to it do not work on my gateway.

ssh root@10.138.77.150 "cd /opt/swfiscal/signingModule && ./signingModule"
The line source configPath doesn't work because you need to specify the full path to the file.
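For example, assuming configPath sits in the same directory as the script that sources it (a hypothetical layout), you could resolve it relative to the script instead:

source "$(dirname "$0")/configPath"   # full path to configPath, derived from the script's own location
cd "$FCM_SCRIPTS"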

You can pass several commands to ssh to run them in sequence, but I prefer a different solution: I keep whole scripts locally, and running them remotely means:
Using scp to copy my script to the remote system
Using ssh to then run the script on the remote system
The big advantage here: there is always potential for getting things wrong (for example, quoting) when giving commands directly to ssh. But when you put everything into a script, you have full control over what is going to happen. You can put things like "set -e" into your script to improve error handling ...
(and of course, you can also automate the two steps listed above, as sketched below!)
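A minimal sketch of that automation, assuming a local script named signing-task.sh (a placeholder name) and the gateway address from the question:

#!/bin/bash
set -e                                  # abort on the first failed command
SCRIPT=signing-task.sh                  # hypothetical local script
HOST=root@10.138.77.150                 # gateway from the question
scp "$SCRIPT" "$HOST:/tmp/$SCRIPT"      # step 1: copy the script over
ssh "$HOST" "bash /tmp/$SCRIPT"         # step 2: run it remotely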

Related

Cannot Connect to Linux Oracle Database with Perl Script after connecting with PuTTY

I have the following problem:
I currently connect to one of our Linux servers using PuTTY on my Windows 10 machine. If I use a ‘standard’ PuTTY connection I have no problem: I can log in and run my Perl script to access an Oracle database on the Linux server. However, recently I have set up a new PuTTY connection (copied from the working connection above). The only difference from the original is that I have entered the following in the section Connection->SSH->Remote command of the PuTTY configuration window:
cd ../home/code/project1/scripts/perl ; /bin/bash
(I have done this so I arrive directly in the folder containing all my scripts.)
I can still log into the server with no problems and it takes me straight to the folder that contains my Perl scripts. However, when I run the script to access the Oracle database I get the following error:
DBI connect('server1/dbname','username',...) failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at PerlDBFile1.pl line 10.
impossible de se connecter à server1 / dbname at PerlDBFile1.pl line 10, <DATA> line 1.
In addition, if I run the env command on the server, the variable $ORACLE_HOME is not listed. (If I run the same env command on the server over the standard PuTTY connection, $ORACLE_HOME is present.)
Just to note: Running any other Perl script on the server (that does NOT access the Oracle database) through either of the PuTTY sessions I have created works with no problems.
Any help much appreciated.
When you set a remote command in PuTTY, the .bash_profile in your default $HOME directory is not executed. This is why you are getting the error.
To resolve it, either place a copy of .bash_profile in your perl directory, or add a command that sources .bash_profile to the remote command.
OK, I have the solution! Thanks to everyone who replied.
Basically, I originally had the command:
cd ../home/code/project1/scripts/perl ; /bin/bash (See original post)
To get it to work I replaced the above with
cd ../home/code/project1/scripts/perl; source ~/.bash_profile; /bin/bash
I also tried:
cd ../home/code/project1/scripts/perl; /bin/bash; source ~/.bash_profile
But that did NOT work (the interactive /bin/bash runs first, so the source command would only execute after that shell exits).
Hope this helps someone.

Run a command on a remote server

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux and I want to run a command on one of the servers, specify which other servers it should run on, and have it do exactly the same thing on the specified servers. Puppet alone couldn't help, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
    echo "$server:"
    ssh username@$server mkdir /etc/dir/test123
done > logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
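A variant of the same idea in case the server list lives in a file (servers.txt is a placeholder); BatchMode makes ssh fail immediately instead of prompting if key authentication is not set up:

while read -r server; do
    echo "$server:"
    ssh -o BatchMode=yes "username@$server" mkdir /etc/dir/test123
done < servers.txt > logfile 2>&1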
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: started
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
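For reference, a minimal inventory sketch with placeholder hostnames, plus the command to apply the playbook above (both filenames are hypothetical):

# inventory -- one hostname per line, INI format
server1.example.com
server2.example.com

ansible-playbook -i inventory libvirt.yml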
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

Is there a way to run a local bash function over ssh

I've got a script that needs to run a whole bunch of commands on a remote server. I was wondering if there was a way to call a local bash function during an ssh session. My current code triggers a "command not found" response, which presumably means that it's running the function as a Unix command on the remote server. Is there a way to make it expand the function?
ssh host.domain << EOF
runMemberSetup 1
EOF
I realize the obvious answer is to do away with the function and paste its contents into the here document, so it is worth mentioning that there are a lot of these ssh calls to various servers; it would look ugly and be rather massive if I had to paste the function's contents into each here document.
I think you should copy the script, or install the binary "runMemberSetup", on the remote host. If it can be run on the remote server from the remote server itself, then you can run it through ssh locally.
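If copying files to every server is not practical, another common pattern (an alternative to the answer above, and assuming the remote login shell is bash) is to ship the function definition itself along with the call:

# declare -f prints the local definition of runMemberSetup;
# the remote shell then defines the function and runs it with argument 1.
ssh host.domain "$(declare -f runMemberSetup); runMemberSetup 1"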

How to terminate an application based on a file existence check in Tcl on Linux

I run T-Plan Robot, which connects to my Windows machine and executes some script.
On successful execution of the script, I export the generated XML file via pscp to my Linux machine.
T-Plan Robot acts as third-party freeware to pass commands via cmd on the Windows machine.
This takes place by running a simple batch file in T-Plan Robot. However, the script which sends commands to Windows disconnects itself after an explicitly declared number of timeout seconds.
I want to write Tcl code which launches this application on the Linux machine and, once the command has produced a successful outcome as an XML file which has been received on the Linux machine, checks whether the XML file exists in the specified directory and terminates the application at that moment. I want this because the next code section will parse the received XML report and perform other actions.
I think there should be some command in Tcl which kills a process/service in any environment; here I need to do that on Linux.
Thanks in advance.
To kill a process on the same Linux machine, provided you've got permission to do so (i.e., you're running as the same user), you do either:
package require Tclx
kill $processId
Or:
exec kill $processId
(The former doesn't require an external command, since it performs the syscall directly, but the latter doesn't need the Tclx package.)
Both of these require the process ID.
To test if a file exists, use file exists like this:
if {[file exists $theFilename]} {
    puts "Woohoo! $theFilename is there!"
}
To kill something on a remote machine, you need to send a command to run to that machine. Perhaps like:
exec ssh $remoteMachine kill $remotePID
Getting the $remotePID can be “interesting” and may require some considerable thought in your whole system design. If calling from Windows to Linux, replace ssh with plink. If going the other way, you're talking about doing:
exec ssh $remoteMachine taskkill /PID $remotePID
This can get very complicated, and I'm not sure if the approach you're taking right now is the right one.
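That said, a minimal Tcl sketch of the asker's flow, putting the pieces above together (the report path and the process ID are placeholders that depend on how the application is launched):

package require Tclx

set reportFile /path/to/report.xml    ;# hypothetical location of the exported XML
while {![file exists $reportFile]} {
    after 1000                        ;# poll once per second
}
kill $robotPid                        ;# $robotPid must have been captured at launch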

Execute command on remote server via ssh

I am attempting to execute a command on a remote Linux server via an ssh command from a local server, like this:
ssh myremoteserver 'type ttisql'
where ttisql is an executable on the path of my remote machine.
The result of running this is:
bash: line 0: type: ttisql: not found
When I simply connect first and do:
ssh myremoteserver
and then enter the command:
[myuser@myremoteserver ~]$ type ttisql
I get back the path of the ttisql exe as I would expect.
The odd thing is that when I execute the first command in my beta environment it works as expected and returns the path of the exe. In the beta scenario, machine A is connecting to remote machine B but both machines are onsite and the ssh command connects to the remote machine quickly.
The problem is encountered in our production environment when machine A is local and machine B is offsite and the ssh command takes a second or two to connect.
The only difference I can see is the time it takes the production ssh to connect. The path on the remote system is correct since the command works if entered after the initial connection.
Can anyone help me understand why this simple command would work in one environment and not the other? Could the problem be related to the time it takes to connect via ssh?
Your PATH is set up differently when your shell is interactive (when you are logged in on the server) and when it is not (running commands through ssh).
Look into the rc files used by your shell, for example .bashrc, .bash_profile, .profile (it depends on your system). If you set PATH in the right place, then ttisql will work when you run it via ssh.
Another solution is to use the absolute path of ttisql; then it will not depend on your PATH setup.
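For example (the path is a placeholder for wherever ttisql is actually installed):

ssh myremoteserver '/full/path/to/ttisql'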
The environment can be different in a non-interactive session (ssh command) from an interactive session (ssh, then command). Try echo $PATH in both cases.
ssh myremoteserver 'echo $PATH'
vs
ssh myremoteserver
[myuser@myremoteserver ~]$ echo $PATH
If they differ, look in all startup scripts for behavior that differs based on $PS1 or $-.
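For instance, many default .bashrc files start with a guard like this, so everything below it (including PATH additions) is skipped when the shell is non-interactive:

case $- in
    *i*) ;;        # interactive shell: keep reading .bashrc
    *) return ;;   # non-interactive (e.g. ssh host cmd): stop here
esac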
