execute part of shell script on remote machine - linux

I am logging into a remote machine through a shell script (by placing an ssh command in the script).
After the ssh command, the remaining lines of the script are executed on the current machine rather than on the remote machine. How can I make the rest of the script's lines execute on the remote machine?
Let's say this is my script:
ssh username@ip-address
ls
whoami
----
The lines after ssh should execute on the remote machine rather than on the current machine. How can I achieve this?

One possible solution would be to use a heredoc as in the following example:
$ ssh example.foo.com -- <<##
> ls /etc/
> cat /etc/passwd
> ##
Basically everything between the ## on the first line and the last line will be executed on the remote machine.
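Applied to the script from the question, this would look roughly like (same syntax as above, hostname taken from the question):
$ ssh username@ip-address -- <<##
> ls
> whoami
> ##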
You could also use the contents of a file by either reading the contents of the file into a variable:
$ MYVAR=`cat ~/foo.txt`
$ ssh example.foo.com -- <<##
> $MYVAR
> ##
or by simply performing the same action inside the heredoc:
$ ssh example.foo.com -- <<##
> `cat ~/foo.txt`
> ##

Is your login passwordless?
If yes, you can just use a pipe to execute the script on the remote machine,
like:
cat myshellscript.sh | ssh blah@blah.com -q
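The same thing can be done without cat by redirecting the script straight into ssh; this variant runs it with bash on the remote side:
ssh -q blah@blah.com 'bash -s' < myshellscript.sh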

Use the ssh command with the -t option. For example:
ssh -t myremotehost 'uptime'
name@myremotehost's password:
10:14:14 up 91 days, 21:20, 5 users, load average: 0.20, 0.35, 0.36

ssh -t user@remotehost 'uptime'
user@remotehost's password:
23:35:33 up 2:05, 3 users, load average: 0.79, 0.52, 0.60
Connection to remote closed.
When you specify -t, it allocates a terminal on the remote machine and executes the command.
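The -t option matters most when the remote command needs a terminal. For example, a full-screen program such as top (assuming it is installed on the remote host) will refuse to start over a plain ssh command but runs fine with -t:
ssh -t user@remotehost top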

Related

Using SSH with expected user input for picking an environment on the server

I am working on a script in Linux in which I need to SSH to another server.
I am using this syntax: ssh user@ip
When doing a manual SSH, after typing "ssh user@ip" and waiting a few seconds, a second prompt appears asking which environment to choose, e.g. 1 to 4.
(4) is the number of the environment I will be choosing.
But when doing ssh user@ip 4, I get an error (bash: command not found).
Here is what the prompt looks like when using ssh (screenshot not included here).
Without knowing what happens after you select the environment, it's probably something like:
echo 4 | ssh user@ip
This feeds 4 to whatever is prompting for the environment, then exits.
If you need to stay connected, then something like
echo 4 | ssh -tt user@ip "ksh -l"
ksh -l could be replaced by some other shell or command, such as vi foo.txt (when that command exits, the connection closes).
The above is similar to this previous question:
Run ssh and immediately execute command

Running bash script over SSH [duplicate]

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both Machine A and Machine B. My script is on Machine A, and it will run some of my code on the remote machine, Machine B.
The local and remote computers can be either Windows or Unix-based systems.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
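If the script expects arguments, they can be appended after bash -s (arg1 and arg2 here are just placeholders); bash passes them to the script as positional parameters:
ssh root@MachineB 'bash -s' -- arg1 arg2 < local_script.sh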
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands that require user input. (Note the quoted heredoc delimiter.)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that a heredoc just sends text on stdin; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works: ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case, echo is the command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values (thanks to local command-line evaluation), which defines the environment variables on the remote server and then executes the bash -s command using those variables. Voila.
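For example (the values hello and world are made up; the prompts follow the style of the earlier examples):
$ ARG1=hello ARG2=world
$ ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
> echo $ARG1 $ARG2
> ENDSSH
hello world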
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the .ssh directory of the user's home on hostB. That will allow passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running something on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in backticks will cause errors.
The command below will solve this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script, and you are running under the bash shell with bash also available on the remote side, you may use declare to transfer local context to the remote host. Define variables and functions containing the state that will be transferred to the remote side. Define a function that will be executed on the remote side. Then, inside a here document read by bash -s, you can use declare -p to transfer the variable values and declare -f to transfer function definitions to the remote side.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and the functions are properly transferred. You may just write the script locally; usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for short scripts and is safe - it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!@#$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
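To see why the quoting is safe, note what declare -p prints for the first variable above; this exact text is what the remote bash receives and parses:
$ declare -p somevar
declare -- somevar="spaces or other special characters"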
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote Linux machine using plink or ssh.
It will work even if the script has multiple lines, as long as the remote machine is Linux.
However, if you are trying to run a batch script located on a local
Linux/Windows machine, your remote machine is Windows, and the script consists
of multiple lines, then
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines using the "&&" separator, as suggested in
https://stackoverflow.com/a/8055390/4752883, in your local_script.bat file:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs: https://stackoverflow.com/a/2732991/4752883 with:
`plink root@MachineB -m local_script.bat`
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch
script that encapsulates the plink command as well, as pointed out here by
@Martin: https://stackoverflow.com/a/32196999/4752883
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This expect script does SSH into a target remote machine and runs some command on it. Don't forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
If it's just one script, the above solutions are fine.
I would set up Ansible to do the job. It works in the same way (Ansible uses SSH to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
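As a minimal sketch (the inventory file name hosts.ini and the group name all are just examples), Ansible's script module copies a local script to each remote host and runs it there:
ansible all -i hosts.ini -m script -a "./localscript.sh"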
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does, this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands and it supports multiple lines using the escape character '\n'.
example :
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>#<host> 'bash -s'
Don't forget to add bash -s.
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.
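If the script is not executable yet, you can set the permission over SSH as well before running it:
[user@machineA]$ ssh user@machineB "chmod +x /home/user/path/script"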

establish ssh connection and execute command remotely [duplicate]

I wish to run a script on the remote system and then stay there.
Running the following script:
ssh user@remote logs.sh
This does run the script, but afterwards I am back on my host system. I need to stay on the remote one. I tried:
ssh user@remote logs.sh;bash -l
It somewhat solves the problem, but it is still not exactly like a fresh login with the command:
ssh user@remote
Alternatively, it would be better if I could include something in my script that would open a bash terminal in the same directory where the script was running. Please suggest.
Try this:
ssh -t user@remote 'logs.sh; bash -l'
The quotes are needed to pass both commands to ssh. The -t option forces a pseudo-tty allocation.
Discussion
Consider:
ssh user@remote logs.sh;bash -l
When the shell parses this line, it splits it into two commands. The first is:
ssh user@remote logs.sh
This runs logs.sh on the remote machine. The second command is:
bash -l
This opens a login shell on the local machine.
The quotes were added above to prevent the shell from splitting up the commands this way.

SSH, run process and then ignore the output

I have a command that will SSH and run a script after SSH'ing. The script runs a binary file.
Once the script is done, I can type any key and my local terminal goes back to its normal state. However, since the process is still running on the machine I SSH'ed into, any time it logs to stdout I see it in my local terminal.
How can I ignore this output without monkey-patching it on my local machine by redirecting it to /dev/null? I want to keep the output on the machine I am SSH'ing into, and I want to leave the SSH session altogether after the process starts. I can redirect it to /dev/null on that machine, however.
This is an example of what I'm running:
cat ./sh/script.sh | ssh -i ~/.aws/example.pem ec2-user@11.111.11.111
The contents of script.sh looks something like this:
# Some stuff...
# Run binary file
./bin/binary &
Solved it with ./bin/binary &>/dev/null &
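With that change applied, the relevant part of script.sh would look like the sketch below (adding nohup is my own optional suggestion, in case the binary should also survive the remote session ending):
# Some stuff...
# Run binary file, keeping its output on the remote machine
./bin/binary &>/dev/null &
# or, optionally: nohup ./bin/binary &>/dev/null &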
Copy the script to the remote machine and then run it remotely. The following commands are executed on your local machine.
$ scp -i /path/to/sshkey /some/script.sh user@remote_machine:/path/to/some/script.sh
# Run the script in the background on the remote machine and pipe the output to a logfile. This will also exit from the SSH session right away.
$ ssh -i /path/to/sshkey \
user@remote_machine "/path/to/some/script.sh &> /path/to/some/logfile &"
Note, logfile will be created on the remote machine.
# View the log file while the process is executing
$ ssh -i /path/to/sshkey user@remote_machine "tail -f /path/to/some/logfile"

Why does executing a command over SSH without a visual terminal use a different PATH location?

When executing an SSH session that simply launches a command instead of actually connecting you, it appears as though my PATH environment variable differs from when I connect to the SSH session normally, and it's missing the location of my binaries for bash commands. Why is this, and how can I avoid it?
A normal connection (ssh root@host)
yields a PATH env of
PATH='/sbin;/usr/sbin;/proc/boot'
An ssh that executes a command without connecting to the terminal directly (ssh root@host ls) yields "ls: command not found". Upon further inspection, the PATH environment variable is missing /proc/boot, and thus the location of the ls binary.
The PATH env of this 'non terminal' session yields:
PATH='/usr/sbin;/sbin'
but NOT /proc/boot, so it can't find standard commands like ls, mkdir, etc.
Why is this? How can I get my proper PATH when simply executing a command over SSH, but not connecting directly to a displayed terminal?
Source the .profile of the remote server before running commands:
ssh user@host ". ~/.bash_profile; $command"
#!/bin/bash
dets () {
sleep 1;
echo $1
sleep 1
}
dets "$1" | ssh -T username#ipaddress
Try using the above script, passing the command you want to execute to it, e.g. ./sshscr "ls". This disables pseudo-tty allocation (-T) and then executes the command through the dets function with the commands passed.
This is actually a feature. When you use a terminal ssh session, you get an interactive login session. So the sshd daemon starts your login shell (the one declared in /etc/passwd) as a login shell. The profile files are read and initialize various environment parameters, and you can then start entering commands - for old dinosaurs it is the rlogin mode, for younger folks it is just login mode.
When you pass a remote command directly on the ssh line, none of the above occurs. The sshd daemon just sets up a default environment and launches the command - it is the rsh mode for dinosaurs or command mode for younger ones.
How to fix:
The best way is to not rely on the PATH when you pass commands directly in the ssh line:
ssh root@host /bin/ls
Alternatively, you can pass commands to an interactive shell (assuming bash on linux):
echo 'ls' | ssh root@host "bash -i"
But beware it is just an interactive shell, not a login shell: the ~/.bashrc will be read, but not ~/.profile nor ~/.bash_profile
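A further variant (my own suggestion, assuming bash is available on the remote host as in the example above) is to request a login shell explicitly, so that /etc/profile and ~/.bash_profile or ~/.profile are read before the command runs:
ssh root@host "bash -l -c 'ls'"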
