Is there a way to run a local bash function in ssh - Linux

I've got a script that needs to run a whole bunch of commands on a remote server. I was wondering if there is a way to call a local bash function during an ssh session. My current code triggers a "command not found" response, which presumably means it's trying to run the function name as a command on the remote server. Is there a way to make it expand the function?
ssh host.domain << EOF
runMemberSetup 1
EOF
I realize the obvious answer is to do away with the function and paste its contents into the here document, but it's worth mentioning that there are a lot of these ssh calls to various servers, so the script would look ugly and become rather massive if I had to paste the function's contents into each here document.

I think you should copy the script (or install a binary) "runMemberSetup" to the remote host. If it can be run on the remote server from the remote server itself, then you can invoke it over ssh from your local machine.
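A minimal sketch of that approach, assuming the function has been saved into a standalone script file called runMemberSetup.sh (the remote path is also just an assumption):
scp runMemberSetup.sh host.domain:/tmp/runMemberSetup.sh   # copy the local script to the remote host first
ssh host.domain 'bash /tmp/runMemberSetup.sh 1'            # then run it there with its argument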

Related

How to run multiple scripts on a remote machine

I have to connect remotely to a gateway (running on a Linux platform), on which I have a couple of executable files (signingModule.sh and taxModule.sh).
Now I want to write a script on my desktop which will connect to that gateway and run signingModule.sh and taxModule.sh in two different terminals.
I have written the code below:
ssh root@10.138.77.150 # to connect to the gateway
sleep 5
cd /opt/swfiscal/signingModule #path of both modules
./signingModule #executable.
With this code I am able to connect to my gateway, but after connecting nothing happens.
Second attempt:
source configPath # file where I have given the path of both modules
cd $FCM_SCRIPTS # variable in which I have stored the path of the modules
ssh root@10.138.77.150 'sh -' < startSigningModule # to connect and run one module
As output of this I am getting:
-source: configPath: file not found
Please help me work this out. Thanks in advance.
Notes:
I can copy-paste my files onto that gateway if required.
Gnome-Terminal and other alternatives to it do not work on my gateway.
ssh root@10.138.77.150 "cd /opt/swfiscal/signingModule && ./signingModule"
The line source configPath doesn't work because you need to specify the full path to the file.
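For example (the absolute path here is only a placeholder for wherever configPath actually lives):
source /opt/swfiscal/scripts/configPath # hypothetical full path instead of the bare file name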
You can pass several commands to ssh to run them in sequence, but I prefer a different solution: I keep whole scripts locally, and running them remotely means:
Using scp to copy my script to the remote system
Using ssh to then run the script on the remote system
The big advantage here: there is always a potential for getting things wrong (for example: quoting) when directly giving commands to ssh. But when you put everything into a script, you have exact/full control over what is going to happen. You can put things like "set -e" into your script to improve error handling ...
(and of course, you can also automate the two steps listed above!)
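A minimal sketch of those two steps wrapped into one small script (the script name and host come from the question; the remote path is an assumption):
#!/bin/bash
# run-remote.sh: copy a local script to the remote host, then execute it there
set -e                              # abort on the first error
script=startSigningModule           # local script to ship (name taken from the question)
host=root@10.138.77.150             # remote account and address from the question
scp "$script" "$host:/tmp/$script"  # step 1: copy the script across
ssh "$host" "bash /tmp/$script"     # step 2: run it on the remote machine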

How to execute a system command on a remote Linux server using CGI and Perl's Net::OpenSSH module?

EDIT: Read this first: what I was trying to do here is the result of extreme tunnel vision; the post might be amusing, but it is not informative. You don't need to SSH into your own server to execute a command. What was I even thinking...
The title pretty much says it all. I want to host a CGI website on a Linux server (Debian, if it matters) and, when a button is clicked, perform a system command on the server itself. I'm doing this through Perl and its Net::OpenSSH module.
Here is the problem. I can run the script through the terminal on the server itself, but only if I use sudo. It doesn't matter if the command is simply "ls". Unsurprisingly, when clicking on a button on a website which calls the module, it doesn't work either.
Here is my code:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
print("Content-type: text/html\n\n");
print("TEST");
my $ssh = Net::OpenSSH->new('localhost', user => 'myusername', password => 'mypassword');
$ssh->system("ls") or die "ERROR: " . $ssh->error;
print("TEST2");
When running it in the terminal using sudo, the script prints out TEST, then lists the folders in my home directory (Desktop, Documents, etc) and finally, TEST2.
When I'm not using sudo, it prints only TEST and after that this error message:
ERROR: unable to establish master SSH connection: the authenticity of
the target host can't be established; the remote host public key is
probably not present on the '~/.ssh/known_hosts' file at
opensshtest.pl line 13.
I'm not using SSH keys at all, I'm trying to supply the username and password by hardcoding them into the script.
Also, when opened in a browser, it only prints out the first TEST.
Any help would be appreciated.
It's me again, the guy who posted the question. It's funny how I spent hours trying to make this work, and then stumbled upon a solution maybe an hour after posting the question here. But here it is:
I've added master_opts => [-o => "StrictHostKeyChecking=no"] as an additional argument to the creation of the Net::OpenSSH object (the line with user, password, etc.).
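For reference, StrictHostKeyChecking is a standard OpenSSH client option (master_opts is passed through to the underlying ssh master process), so you can try the same thing from a shell first, with the user and host from the question:
ssh -o StrictHostKeyChecking=no myusername@localhost ls   # accept the unknown host key instead of aborting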

The usage of scp and ssh

I'm a newbie to Linux and am trying to set up passphrase-less ssh. I'm following the instructions at this link: http://wiki.hands.com/howto/passphraseless-ssh/.
The above link says: "One often sees people using passphrase-less ssh keys for things like cron jobs that do things like this:"
scp /etc/bind/named.conf* otherdns:/etc/bind/
ssh otherdns /usr/sbin/rndc reload
which is dangerous because the key that's being used here is being offered root write access, when it need not be.
I'm kind of confused by the above commands.
I understand the usage of scp, but for ssh, what does "ssh otherdns /usr/sbin/rndc reload" mean?
"the key that's being used here is being offered root write access."
Can anyone also help explain this sentence in more detail? Based on my understanding, the key is the public key generated by one server and copied to otherdns. What does "being offered root write access" mean?
It means to run a command on a remote server.
The syntax is
ssh <remote> <cmd>
so in your case
ssh otherdns /usr/sbin/rndc reload
is basically 4 parts:
ssh: run the ssh executable
otherdns: the remote server; no user is given, so the default user is used (the same one as is currently logged in, or the one configured in ~/.ssh/config for this remote machine; an example follows below)
/usr/sbin/rndc is a program on the remote server to be run
reload is an argument to the program to be run on the remote machine
So in plain words, your command means:
run the program /usr/sbin/rndc with the argument reload on the remote machine otherdns
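If you do want a different user than your local one, you can give it explicitly on the command line (the user name here is only an example):
ssh admin@otherdns /usr/sbin/rndc reload   # same command, but logging in as the user admin
The same effect can be configured per host with a User entry in ~/.ssh/config.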

Running git from node.js as a child process?

I am attempting to write a generic command-runner in Node.js; however, that's not massively important.
My setup is as follows:
I have a list of string commands that are executed using child_process.exec one after the other.
I want to run git from one of these commands, specifically a pull.
The location I am pulling from requires SSH authentication. HTTPS is not an option.
My private key is passphrased.
I am currently using keychain to manage ssh-agent.
When running git pull from the command line, it succeeds. When running my application as the logged-in user, it succeeds. However, when running my application using forever, it fails.
The error I receive is Permission denied (publickey).. I have tried calling keychain as part of my command, but I cannot get it to recognise the credentials.
How can I fix this?
My mistake was taking the contents of .bash_profile and using that to set up keychain from my exec.
What I needed to do was:
. $HOME/.keychain/$HOSTNAME-sh; git pull
I found this out by looking up examples of how to use keychain with bash scripts.
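For background, keychain writes a small per-host file under ~/.keychain that exports the agent environment; sourcing it is what lets a non-interactive child process find the already-running agent. A sketch of what the exec string boils down to (assuming keychain has been started for this host):
. "$HOME/.keychain/$HOSTNAME-sh"   # exports SSH_AUTH_SOCK (and SSH_AGENT_PID) into this shell
git pull                           # git's ssh invocation can now reach the unlocked key in the agent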

How to terminate an application based on file existence check in Tcl on Linux environment

I run T-Plan Robot, which connects to my Windows machine and executes a script.
On successful execution of the script, I export the generated xml file via pscp to my Linux machine.
T-Plan Robot acts as third-party freeware to pass commands via cmd on the Windows machine.
This takes place by running a simple batch file in T-Plan Robot. However, the script which sends the command to Windows disconnects itself after an explicitly declared timeout in seconds.
I want to write Tcl code which launches this application on the Linux machine. Once the command has produced a successful outcome as an xml file and it has been received on the Linux machine, the code should check whether the xml file exists in the specified directory and terminate the application at that moment. I want this because the next code section will parse the received xml report and perform other actions.
I think there should be some facility in Tcl which kills a process/service in any environment; here I need to do that on Linux.
Sincerely waiting for a reply. Thanks in advance.
To kill a process on the same Linux machine, provided you've got permission to do so (i.e., you're running as the same user), you do either:
package require Tclx
kill $processId
Or:
exec kill $processId
(The former doesn't require an external command, as it does the syscall directly, but the latter doesn't need the Tclx package.)
Both of these require the process ID.
To test if a file exists, use file exists like this:
if {[file exists $theFilename]} {
puts "Woohoo! $theFilename is there!"
}
To kill something on a remote machine, you need to send a command to run to that machine. Perhaps like:
exec ssh $remoteMachine kill $remotePID
Getting the $remotePID can be “interesting” and may require some considerable thought in your whole system design. If calling from Windows to Linux, replace ssh with plink. If going the other way, you're talking about doing:
exec ssh $remoteMachine taskkill /PID $remotePID
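For the Linux-target case, if the remote PID isn't known up front, one possibility (assuming pgrep/pkill are installed on the remote host and the process name is distinctive; the host and process names below are only placeholders) is to kill by name over the same ssh channel, run from a shell or via exec in Tcl:
ssh otherhost pkill -f myRobotApp   # kill the remote process by a distinctive name instead of by PID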
This can get very complicated, and I'm not sure if the approach you're taking right now is the right one.

Resources