Expect utility is not working when executing from Jenkins - Linux

We have a Unix script that uses the expect utility for interactive execution. The script works well when we run it directly on the Unix server.
If we run the same script from Jenkins, it fails.
Below is the script:
var="xxxxx"
expect -c "
spawn sudo cp /abcd/sjws/config/obj.conf /abcd/sjws/config/obj.conf_jenkins
expect {
"Password:" { send $var\r;interact }
}
exit
"
Below is the output when we run it from Jenkins:
spawn sudo cp /abcd/sjws/config/obj.conf /abcd/sjws/config/obj.conf_jenkins
Password:
Password:
Build step 'Execute shell' marked build as failure
Finished: FAILURE

Since Jenkins doesn't run the script from an actual terminal, interact can't work: there is no user on the other end to interact with.
You can replace interact with
expect eof
wait
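For reference, the question's script with that change applied looks like this (a sketch, untested; the paths and the Password: prompt are taken from the question):
var="xxxxx"
expect -c "
    spawn sudo cp /abcd/sjws/config/obj.conf /abcd/sjws/config/obj.conf_jenkins
    expect \"Password:\"
    send \"$var\r\"
    expect eof   ;# wait for the spawned cp to finish
    wait         ;# reap it so the exit status is collected
"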

Please use var='xxxxx' instead of var="xxxxx" (single quotes prevent the shell from expanding any special characters in the password). Hope this helps.

It seems that Jenkins doesn't support the interactive command model.
Try the sshpass tool to pass the password in a non-interactive way.
Install sshpass, then:
$ sshpass -p "password" {your command}
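For example (a sketch; the user, host, and remote command are placeholders). Note that sshpass answers ssh's own password prompt; it will not answer a sudo prompt inside the remote command:
sshpass -p "$PASSWORD" ssh user@host 'cp /abcd/sjws/config/obj.conf /abcd/sjws/config/obj.conf_jenkins'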

This question is very similar to: Jenkins not waiting on call to a bash script with expect. For those encountering a similar problem, especially when using the su utility to change the user, be aware that Jenkins closes the SSH connection as soon as the user is changed; you cannot keep issuing commands as the new user. Design your script so that you send the command directly, such as:
spawn su - username -c "yoursinglewordcommand"
and then the rest of the expect script as usual (I was also successful with the interact command).
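A minimal expect sketch built around that spawn line (the username, password, and command are placeholders):
#!/usr/bin/expect -f
spawn su - username -c "yoursinglewordcommand"
expect "Password:"
send "yourpassword\r"
expect eof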

You may use the SSH Steps plugin. This plugin provides steps such as sshScript, which executes a given script (file) on a remote node and returns the output; sshCommand, which executes a given command on a remote node and returns the output; and also sshPut, sshGet, and sshRemove.
Add this to your Jenkinsfile:
node {
    def remote = [:]
    remote.name = 'test'
    remote.host = 'test.domain.com'
    remote.user = 'root'
    remote.password = 'password'
    remote.allowAnyHosts = true
    stage('Remote SSH') {
        sshCommand remote: remote, command: "cp /abcd/sjws/config/obj.conf /abcd/sjws/config/obj.conf_jenkins"
    }
}

Related

Using ssh to log in to a Linux terminal from Windows and run a command in a logged-in shell

First of all, this may seem like a duplicate question, but I have searched Stack Overflow and various other forum sites and still haven't managed to find a solution.
A few example forum posts I have reviewed, to show I've done my research before asking:
https://superuser.com/questions/130443/remotely-run-script-on-unix-get-output-locally
https://linuxconfig.org/executing-commands-remotely-with-ssh-and-output-redirection
https://zaiste.net/posts/few-ways-to-execute-commands-remotely-ssh/
https://unix.stackexchange.com/questions/474533/get-output-of-this-command-from-another-server-via-ssh
Run ssh and immediately execute command
https://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
There are hundreds more, but I won't include them all.
I essentially need a shell script to open a command prompt on Windows, log in to a remote Linux system, and run a command.
I am aware this can be done with the following:
start cmd /k ssh user@host ls
But the problem with the above is that the ssh connection is closed upon completion of the task.
I am also aware I can keep the ssh connection open by adding:
bash -l
in some cases.
For my use case, I need to run a launch file for ROS (Robot Operating System), and for this I need to see the output from the command.
And when attempting to run roslaunch launchFile.launch (in place of ls above):
start cmd /k ssh user#host "roslaunch launchFile.launch"
the command prompt returns
bash: roslaunch: command not found
I've obviously sanitised the specific name of my launch file but
roslaunch launchFile.launch
runs perfectly if I login to the linux PC first:
ssh user@host
then run the command.
I have achieved this exact use case on macOS, but I now need to reimplement the same solution on Windows:
osascript -e 'tell app "Terminal"
do script "ssh quantum#172.23.199.1 \n
roslaunch launchFile.launch"
end tell'
Thanks in advance for any help or advice.
Try this:
start cmd /k ssh user@host "/full/path/to/roslaunch launchFile.launch; exec /bin/bash"

Running bash script over SSH [duplicate]

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and machine B. My script is on machine A, and it will run some of my code on the remote machine, machine B.
The local and remote computers can each be either Windows or a Unix-based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
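If the script takes arguments, the same idea extends naturally: anything after -s becomes a positional parameter of the script (arg1 and arg2 here are placeholders):
ssh root@MachineB 'bash -s -- arg1 arg2' < local_script.sh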
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter.)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that the heredoc just sends stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user#host> ssh user2#host2 "echo \$HOME"
prints out /home/user2
while
user#host> ssh user2#host2 "echo $HOME"
prints out /home/user
Another example:
user#host> ssh user2#host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension of YarekT's answer, combining inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works: ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case, echo is the command we're running; everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we set ARG1 and ARG2 to local values and send everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values thanks to local command-line evaluation, which defines the environment variables on the remote server; the bash -s command then runs the script using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the user's ~/.ssh directory on hostB. That allows passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
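With OpenSSH, the usual way to set that up is (run on hostA):
ssh-copy-id user@hostB
which appends your public key to ~/.ssh/authorized_keys on hostB, creating the file with the right permissions if needed.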
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine; the server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of setting up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running anything on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command inside the backticks will cause errors.
The command below solves this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and meant to be embedded inside your script, and you are running under a bash shell with bash also available on the remote side, you may use declare to transfer the local context to the remote host. Define the variables and functions containing the state that will be transferred, plus a function that will be executed on the remote side. Then, inside a here document read by bash -s, you can use declare -p to transfer the variable values and declare -f to transfer the function definitions.
Because declare takes care of the quoting and is parsed by the remote bash, the variables are properly quoted and the functions are properly transferred. You can just write the script locally; usually I write one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short script and is safe; it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
    mkdir -p "$1"
}
work() {
    another_func "$somevar"
    touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if you're trying to run a script on a remote Linux machine using plink or ssh, and it works even if the script has multiple lines.
However, if you are trying to run a batch script located on a local Linux/Windows machine, your remote machine is Windows, and the script consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work: only the first line of the script will be executed. This is probably a limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple, consisting of only a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator, as follows, in your local_script.bat file (https://stackoverflow.com/a/8055390/4752883):
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by @JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883):
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch script which encapsulates the plink command, as pointed out here by @Martin (https://stackoverflow.com/a/32196999/4752883):
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This script SSHes into a target remote machine and runs some commands there. Don't forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
If it's a single script, the above solutions work fine.
For anything bigger, I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It is more structured and maintainable.
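As a sketch of what that looks like, Ansible's built-in script module copies a local script to the remote hosts and runs it there (the "webservers" inventory group is a placeholder):
ansible webservers -m script -a "./localscript.sh"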
It is unclear whether the local script uses locally set variables, functions, or aliases.
If it does, this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines using the '\n' escape.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Don't forget to add bash -s.
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to machine B using scp:
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then just run the script:
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given the script executable permission.

'su' by using 'script' in Docker returns different results compared to the standard environment

I need to run certain commands via su, including the password, in one line.
I found a solution and it works in a standard environment (Ubuntu) (more about the solution here):
{ sleep 1; echo password; } | script -qc 'su -l user -c id' /dev/null | tail -n +2
But I am faced with the problem that this solution is not suitable in a Docker container environment:
script terminates the command without waiting for the echo, and as a result I get:
su: Authentication failure
Any help is much appreciated.
Passing the password for su via stdin is problematic for various reasons; the biggest one is probably that your password will end up in the shell history.
You could instead:
Call the entire script as the specific user, and thus enter the password manually
Use sudo with the appropriate NOPASSWD sudoers configuration (see the sketch below)
In your case you are using Docker, so you could just set the USER in your Dockerfile
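A sketch of the last two options ("appuser" is a placeholder, and the sudoers rule assumes a single allowed command is enough):
# option 2: a sudoers drop-in granting passwordless sudo for one command
echo 'appuser ALL=(ALL) NOPASSWD: /usr/bin/id' > /etc/sudoers.d/appuser
chmod 440 /etc/sudoers.d/appuser
# option 3: in the Dockerfile, run as that user directly
USER appuser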

Run remote command from script

I need to run a shell script from within Jenkins to commit changes after making the build. Deploying the build to the remote server is not a problem, so the new build is there; all I need to do is commit it.
For that I need to log in to that remote server over ssh from a shell script, and so far it is okay:
#!/usr/bin/expect -f
spawn ssh myusername@url
expect "password:"
send "mypassword\r"
interact
So now when I am logged in, I want to run a few commands: cd /path/to/repository; svn commit -m "Some change log"
I tried something like:
#!/usr/bin/expect -f
spawn ssh -o "LocalCommand cd /path/to/repository" myusername@url
expect "password:"
send "mypassword\r"
But it just doesn't work, as I have no idea how to do it properly.
If anyone knows how, please let me know.
The remote server is running Linux, and Jenkins runs on OS X.
I found a solution with just expect:
#!/usr/bin/expect -f
spawn ssh myusername@url
expect "password:"
send "mypassword\r"
expect "some server prompt"
send "cd /path/to/repository\r"
send "svn commit -m 'Some change log'\r"
EDIT:
This solution seems to work only intermittently when committing changes.
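If the intermittent failures come from expect exiting before svn finishes, making the script wait for the prompt after each command and for EOF at the end may help. A sketch, keeping the question's placeholders:
#!/usr/bin/expect -f
set timeout 60
spawn ssh myusername@url
expect "password:"
send "mypassword\r"
expect "some server prompt"
send "cd /path/to/repository && svn commit -m 'Some change log'\r"
expect "some server prompt"
send "exit\r"
expect eof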

Getting environment from .profile while executing a command through ssh

I am new to the Linux/Unix world. I would like to trigger a make on a remote machine. For this purpose I created a little script on the remote machine which I want to execute via ssh.
The script looks something like this:
echo "loading .profile"
. ~/.profile
echo "profile loaded"
echo "starting gmake"
cd ~/helloWorld/
gmake all
I invoke the script with following ssh command:
ssh user@remotehost "cd ~/helloWorld && ./myscript.sh"
When I execute this command, my machine connects to the remote machine. It tells me that the profile is loaded, and then I have to press [CTRL]+[D] to continue with the script. So it seems that the . ./myscript.sh command creates something like a new terminal. I don't want this behaviour; I would like to use the ssh command to automate the building process without having to close the terminal manually. Is there a way to do this?
Thanking you in anticipation,
John
Sourcing with . (or source) does not invoke a new shell; it runs the commands in the script in the current shell. Something else is going wrong.
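One guess worth testing: something run from .profile is reading stdin, which ssh leaves connected (hence the Ctrl+D). Redirecting stdin away, or explicitly allocating a TTY, are two quick checks:
ssh user@remotehost "cd ~/helloWorld && ./myscript.sh < /dev/null"
ssh -t user@remotehost "cd ~/helloWorld && ./myscript.sh"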
