VSTS SSH task uses STDERR - azure-pipelines-build-task

I am configuring a release step in VSTS to update a database, using the SSH task (https://learn.microsoft.com/fr-fr/vsts/build-release/tasks/deploy/ssh) to run our script that updates MongoDB.
The script works just fine, but somehow all of its output goes to STDERR.
Run: Inline Script
Arguments:
cd /home/ubuntu/Project/root/Deploy
dos2unix sync_mongo.sh
sh ./mongosync.sh
Here is the beginning of the step log:
2018-01-18T17:39:55.7603461Z dos2unix sync_mongo.sh
2018-01-18T17:39:55.7603748Z sh ./mongosync.sh
2018-01-18T17:39:55.7604695Z Trying to setup SSH connection to ********@****:22
2018-01-18T17:39:57.5259303Z Successfully connected.
2018-01-18T17:39:59.7115141Z tr -d '\015' <"./sshscript_1516297195734" > "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.0197880Z chmod +x "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.2617249Z "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.5124617Z ##[error]dos2unix:
2018-01-18T17:40:00.5124929Z
2018-01-18T17:40:00.5125475Z ##[error]converting file sync_mongo.sh to Unix format ...

It turns out that many Unix commands use stderr to show progress.
The solution was to discard the stderr output:
dos2unix sync_mongo.sh 2> /dev/null
sh ./mongosync.sh 2> /dev/null
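Alternatively, if you would rather keep the progress messages visible in the log instead of discarding them, merging stderr into stdout should also stop the task from flagging them as errors (an untested sketch of the same two commands):

dos2unix sync_mongo.sh 2>&1
sh ./mongosync.sh 2>&1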

Related

remote wget command with & symbol doesn't behave as expected

Here are some test results:
I run commands on my localhost and try to execute them on the remote host 11.160.48.88.
Command 1:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme"
expect:
the file is downloaded and renamed to wgetReadme
result:
works as expected
Command 2:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme&"
I simply add an & at the end of the command because I want it to run in the background.
result:
the file wgetReadme is empty on the remote server; I don't know why
Command 3:
To test whether Command 2 can run on the remote server at all, I try running the command directly on the server 11.160.48.88:
wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme &
result:
Some wget transfer messages are printed to stdout, and the file is downloaded to wgetReadme. It works correctly.
Command 4:
I want to figure out whether it is the SIGHUP signal that kills the subprocess, and I found two pieces of evidence proving it is not.
I found this question, and I try running this on the remote server 11.160.48.88:
$ shopt | grep hup
huponexit off
So the subprocess will not receive SIGHUP when ssh exits.
I try running another command to prove it:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O - 2>&1 > wgetReadme&"
result:
The file can be downloaded to the target file correctly.
My question is: why does Command 2 not work as expected?
Backgrounded jobs in an ssh session can cause the shell to hang on logout, due to a race condition that occurs when two or more processes access shared data and try to change it at the same time. You can solve the problem by redirecting all three I/O streams, e.g. > /dev/null 2>&1 &. The nohup command is also useful in your case: it is a POSIX command to ignore the HUP (hangup) signal, and the HUP signal is, by convention, the way a terminal warns dependent processes of logout. So I would change your code in the following way:
ssh -f 11.160.48.88 "sh -c 'nohup wget https://raw.githubusercontent.com/mirror/wget/master/README -O - > wgetReadme 2>&1 &'"
You can read more at https://en.wikipedia.org/wiki/Nohup
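A quick way to see the effect of those redirections (my own sketch, reusing the host from the question): without them the ssh client waits for the background job's output streams to close, while with all three streams detached it returns immediately.

# hangs for ~30 seconds: the background job inherits the session's
# stdout/stderr and keeps the channel open
ssh 11.160.48.88 "sleep 30 &"
# returns immediately: all three streams are detached from the session
ssh 11.160.48.88 "sleep 30 > /dev/null 2>&1 < /dev/null &"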
& is a bash special character which makes a process run in the background. ssh will then no longer capture the output of the command when you run it remotely.
You should escape it with \ to be able to run your command.
In your example:
wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme\&
Regards

Can't get remote ssh stdout output in cron, but in my terminal it works

I ran into an issue last week that is driving me crazy. I wrote a bash script which makes a remote ssh connection to Akamai and then performs a simple 'ls'. I want to redirect the 'ls' stdout output to a given file.
While the script itself works like a charm when run manually, it does not when it runs via cron. The cron job runs as root, and each command works as expected except the ssh command. My system is Gentoo Linux and cron is the old but gold vixie-cron.
To reduce the 200 LOC, I put the basics here, which alone (as a single script) are enough to demonstrate the problem.
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>') > '/root/listing.txt'
Even in ssh's -vvv debug mode I can see that everything works... except that I get no stdout output.
Then I tried something else that I found in another posting on the internet:
#!/bin/bash
PATH='/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin'
#set -x
shopt -s lastpipe
exec 2>log.out
(ssh -T -i <path to key> -o HostKeyAlgorithms=+ssh-dss -o StrictHostKeyChecking=no <account@example.com> 'ls -r <path>' </dev/zero) > '/root/listing.txt'
The drawback here is that I start an ssh session that I can't close, and I guess that's due to /dev/zero.
Another approach was to pipe the ssh command's subshell through tee... this worked for a short time (and why not anymore?!).
Now I'm clueless and need help. Cron has its own PATH, uses bash, etc. Curiously, my boss did this successfully with Java (and he hates bash...).
Any explanation and helpful tips are greatly welcome.
I have the same issue: I made a script for cron, and it gets output from a remote SSH host.
If I run the script manually, it works as it should. But when cron runs it, I get just a part of the remote output.
I can't figure out why it's happening.
#!/bin/sh
pass=123
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
filestring=$(echo "$filelist" | grep -Po "(\S+\s\S+\s+\d+\s\d{2}:\d{2}:\d{2}\s\d{4})\slist0\.lst")
filedate=${filestring% list0.lst}
echo $filedate
filestamp=$(date -d "$filedate" +"%s")
echo $filestamp
When I capture the echo output in a file via cron, the date starts from 0:00:00 and the date field (echo $filedate) is empty. But when I run it manually, I get a normal date with time...
It really bothers me.
Help?
I found the solution: add "-tt" to the ssh command, and all the output goes into the variable.
filelist=$(sshpass -p "$pass" ssh -q -tt -o StrictHostKeyChecking=no user@"10.10.10.10" "list")
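The likely reason this helps: -tt forces pseudo-terminal allocation even when ssh has no controlling terminal of its own, which is exactly the situation under cron, and some programs only produce their full output when they detect a terminal.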

Crontab not recognising command

I have a bash script which I want to run as a cron job.
It works fine except one command.
I redirected its stderr to capture the error, and found out that the error it shows is that the command is not recognized.
It is a root crontab.
Both the current user and root execute the command successfully when I type it in the terminal.
Even the script executes the command when I run it through the terminal.
Startup script:
#!/bin/bash
sudo macchanger -r enp2s0 > /dev/null
sudo /home/deadpool/.logkeys/logger.sh > /dev/null
logger.sh:
#!/bin/bash
dat="$(date)"
echo " " >> /home/deadpool/.logkeys/globallog.log
echo $dat >> /home/deadpool/.logkeys/globallog.log
echo " " >> /home/deadpool/.logkeys/globallog.log
cat /home/deadpool/.logkeys/logfile.log >> /home/deadpool/.logkeys/globallog.log
cat /dev/null > /home/deadpool/.logkeys/logfile.log
cat /dev/null > /home/deadpool/.logkeys/error.log
logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log
error.log:
/home/deadpool/.logkeys/logger.sh: line 10: logkeys: command not found
Remember that cron runs with a different environment than your user account or root does, and it might not include the path to logkeys in its PATH. You should try the absolute path for logkeys (find it with "which logkeys" from your user) in your script. Additionally, I recommend looking at this answer on Server Fault about running scripts as if they were running from cron, for when you need to find out why something works for you interactively but not in a job.
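A minimal sketch of both fixes (the path /usr/local/bin/logkeys is an assumption; use whatever "which logkeys" prints on your machine):

#!/bin/bash
# Option 1: give cron a fuller PATH at the top of logger.sh
PATH="/usr/local/bin:/usr/sbin:$PATH"
# Option 2: call logkeys by its absolute path (assumed location)
/usr/local/bin/logkeys --start --output /home/deadpool/.logkeys/logfile.log 2> /home/deadpool/.logkeys/error.log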

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed. The execution hangs after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Dest OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process (or multiple child processes) spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
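If the started process still dies when the session closes, the nohup variant mentioned above is a reasonable fallback (a sketch, keeping the same placeholder log path):

# detach the daemon from the SSH session entirely
nohup ./tctl start test </dev/null >/some/log/file/path.log 2>&1 &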

how to write a bash shell script to ssh to remote machine and change user and export a env variable and do other commands

I have a webservice that runs on multiple different remote Red Hat machines. Whenever I want to update the service, I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced code. I find it too tedious to log in to the remote machines one by one and run that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time, in one place, and update all machines". I run this shell script on my local machine. But it doesn't seem to work: it only executes the first command, "sudo -u webservice_username -i", as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The "export P4USER=myname" is for the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know only the first command is executed? Well, because after I enter the password for ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there. I mean, it never returns.
As suggested by a commenter, I added "-t" to ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd into "dir" (it says "cd: dir: No such file or directory"), and it also says "p4: command not found". So it looks like the sudo -u command executed with no effect, and the export command either did not execute or executed with no effect.
A detailed local log file is like below:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
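That local helper can be as small as this sketch (restart_service.sh and /path/to/script are placeholder names, not from the question):

#!/bin/sh
# push the latest restart script, then invoke it remotely
scp ./restart_service.sh myname@remotehost1:/path/to/script
ssh myname@remotehost1 'sh /path/to/script'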
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and then runs the rest of the commands; it does not execute sudo and then run the following commands inside it. This has to do with the context in which you're running the series of commands: all of them get executed in the shell of myname@remotehost1, and all sudo -u webservice_username -i does is start a shell for webservice_username without actually running any commands in it.
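If the goal is to run those commands as webservice_username in one shot, they need to be handed to sudo as the command to execute. A sketch (reusing the question's names; the trailing tail -f is dropped on purpose, since it would keep the session from returning):

ssh -t myname@remotehost1 "sudo -u webservice_username sh -c 'export P4USER=myname; cd dir && p4 sync && cd bin && ./prog --domain=config_file restart'"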
Really, the best solution here is what bta said: write a script, rsync/scp it to the destination, and then run it using sudo.
The export command simply does not work with ssh like this. What you want to do is modify the remote ~/.bashrc, which is sourced each time you log in over ssh.
