String is not getting converted properly when setting it from an ssh command

This is my actions file, which runs the ssh command to connect to my workstation with the given parameters and call the deployer.sh file.
MOUNT_ECR_LOGIN="-v /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login"
ACTIONS="${WORKSTATION_EC2} MOUNT_ECR_LOGIN=$MOUNT_ECR_LOGIN ./deployer.sh"
which gets expanded into the following command at run time:
ssh -t -t -q ec2-user@networkba-bastion ssh -q -t ec2-user@workstation MOUNT_ECR_LOGIN=-v /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login ./deployer.sh
Below is the error:
bash: /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login: No such file or directory
I am setting this variable for the deployer.sh file, which runs the docker run command.
Unfortunately, MOUNT_ECR_LOGIN is assigned only -v and not the full string with the file information. Am I supposed to escape the spaces here, or is some other solution needed?

Update
The issue is due to argument splitting: because of the two ssh invocations, the arguments must be quoted twice.
MOUNT_ECR_LOGIN=$'\'"-v /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login"\''
ACTIONS="${WORKSTATION_EC2} MOUNT_ECR_LOGIN=$MOUNT_ECR_LOGIN ./deployer.sh"
The previous answer was that a quick fix could be to use arrays:
ACTIONS=("${WORKSTATION_EC2}" "SOURCE_REGISTRY=$SOURCE_REGISTRY" "DEPLOYMENT_NAME=$DEPLOYMENT_NAME" "MOUNT_ECR_LOGIN=$MOUNT_ECR_LOGIN" ./deployer.sh)
and then use "${ACTIONS[@]}"
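As a sketch of how the two ideas combine (placeholder host names, not tested against a real bastion): the array keeps each argument as one word on the local side, and printf '%q' adds the extra layer of quoting the second remote shell needs.

```shell
# Hypothetical placeholder values standing in for the real workstation settings.
WORKSTATION_EC2="ec2-user@workstation"
MOUNT_ECR_LOGIN="-v /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login"

# Array form: each element stays a single word on the local side.
ACTIONS=("$WORKSTATION_EC2" "MOUNT_ECR_LOGIN=$MOUNT_ECR_LOGIN" ./deployer.sh)

# Because the command still passes through two remote shells, each argument
# also needs a layer of quoting for them; printf '%q' adds it mechanically.
REMOTE_ARGS=$(printf '%q ' "MOUNT_ECR_LOGIN=$MOUNT_ECR_LOGIN" ./deployer.sh)
echo "ssh -t -t -q bastion ssh -q -t ${ACTIONS[0]} $REMOTE_ARGS"
```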

Related

Reading environment variables with ad-hoc ansible command

I am trying to run a simple ad-hoc ansible command on various hosts to find out if a directory exists.
The following command runs and returns exists, but it does not print the environment variable before it:
ansible all -i hosts.list -m shell -a "if test -d /the/dir/; then echo '$HOSTNAME exists'; fi"
Can anyone please tell me why only "exists" is returned instead of "HOSTNAME exists"?
Because it's included in double quotes ("), your $HOSTNAME is being interpreted by your local shell. You probably want to write instead:
ansible all -i hosts.list -m shell -a \
'if test -d /the/dir/; then echo "$HOSTNAME exists"; fi'
In most cases you will want to use single quotes (') for your argument to -a when using the shell module, to prevent your local shell from expanding variables, etc., that are intended to be expanded on the remote host.
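The same local-versus-remote expansion can be seen without ansible at all; in this sketch, bash -c stands in for the remote shell (HOSTNAME_DEMO is a made-up variable for the demonstration):

```shell
HOSTNAME_DEMO=localbox

# Double quotes: the local shell expands the variable before the inner shell runs.
outer=$(bash -c "echo $HOSTNAME_DEMO")

# Single quotes: the literal $HOSTNAME_DEMO reaches the inner shell, which
# expands it from its own environment.
inner=$(HOSTNAME_DEMO=remotebox bash -c 'echo $HOSTNAME_DEMO')

echo "$outer / $inner"
```

With double quotes the inner shell only ever sees the already-expanded value; with single quotes it does the expansion itself, which is what you want for $HOSTNAME on the remote host.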

Copy file from remote server through lftp

I'm new to Linux scripting. I want to copy a file from a remote server to the current (client) server; the required cert and key files are already installed on my server. The commands below work when I execute them individually in sequence, but after integrating them into a .sh script they don't!
--My Script--
lftp -u username,xxx -p 2121 remoteServer.net;
set ssl:cert-file /abc/def/etc/User_T.p12;
set ssl:key-file abc/def/etc/User_T.p12.pwd;
lftp -e 'set net:timeout 10; get /app/home/atm/feed.txt -o /com/data/';
man lftp:
-f script_file
Execute commands in the file and exit. This option must be used
alone without other arguments (except --norc).
-c commands
Execute the given commands and exit. Commands can be separated
with a semicolon, `&&' or `||'. Remember to quote the commands
argument properly in the shell. This option must be used alone
without other arguments (except --norc).
Use "here document" feature of the shell:
lftp <<EOF
set...
open...
get...
EOF
Thanks lav for your suggestion. I found that my script was not executing the second line, so I added a here document starting with << SCRIPT and ending with SCRIPT, and removed all semicolons. It's working now.
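Putting the pieces together, the script from the question could be rewritten with a here document so that all commands run inside one lftp session (paths and credentials are the ones from the question; untested sketch, since it needs a live server):

```shell
# All commands between <<EOF and EOF are fed to a single lftp process, so the
# ssl settings apply to the same session that performs the get.
lftp -u username,xxx -p 2121 remoteServer.net <<EOF
set ssl:cert-file /abc/def/etc/User_T.p12
set ssl:key-file abc/def/etc/User_T.p12.pwd
set net:timeout 10
get /app/home/atm/feed.txt -o /com/data/
EOF
```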

run a remote bash script with arguments with ssh

I am unable to run a remote shell script located on the "admin" server with arguments.
ssh koliwada#admin "~/bin/addautomaps $groupentry $homeentry $ticket"
"groupentry" and "homeentry" are as follows
user1:*:52940:OWNER-user1
user1 -rw,intr,hard,rsize=32768,wsize=32768 basinas01:/ifs/basinas01/home/&
The script is located at ~/bin/addautomaps on the admin server.
I see the error,
tput: No value for $TERM and no -T specified
I also see that the arguments are not passed correctly.
I tried using "ssh -t ..." as well, but that doesn't work.
Answering your questions in reverse order (or most serious to least serious).
Your problem with the arguments (with spaces) not being passed correctly is that while you are quoting the command string locally, you aren't quoting the arguments when they are actually run on the remote machine.
That is you are generating a single string with the variables expanded but nothing tells the remote system not to split the expanded values on spaces.
The fix for that is that you need to quote the arguments inside the command for the remote shell as well as the entire string for ssh.
My answer here might help explain some (it is a similar issue).
The tput "issue" is likely just a warning that you can probably ignore if you don't care about the colorized/stylized/etc. output that tput is likely being used to create. You could also try forcing a value for $TERM on the remote side like ssh ... "export TERM=dumb; ..." or something like that to silence it.
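One mechanical way to add that inner layer of quoting is printf '%q', which escapes a value so a remote shell reads it back as a single word. A sketch using the sample values from the question (the ssh line is shown as a comment, since no remote host is available here):

```shell
groupentry='user1:*:52940:OWNER-user1'
homeentry='user1 -rw,intr,hard,rsize=32768,wsize=32768 basinas01:/ifs/basinas01/home/&'

# printf '%q' escapes spaces, * and & so each value survives the remote
# shell's word splitting as one argument.
cmd=$(printf '~/bin/addautomaps %q %q' "$groupentry" "$homeentry")
echo "$cmd"
# The quoted command string is then safe to hand to ssh:
#   ssh koliwada@admin "$cmd"
```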

scp multiple remote files while entering password once

I'm trying to write a script to copy multiple files (in multiple directories) from a remote host to my local machine.
My script is (more or less) as follows:
path1="/home/db/primary/*.xml"
path2="/tmp/log_*"
copyto="/home/pathtodesktop/Desktop/temp"
mkdir $copyto
scpcommand="scp -o StrictHostKeyChecking=no root@$address:\"$path1 $path2\" $copyto"
echo $scpcommand
$scpcommand
When I run the script, I get the following output:
scp -o StrictHostKeyChecking=no root@SERVER:"/home/db/primary/*.xml /tmp/log_*" /home/pathtodesktop/Desktop/temp
sh: syntax error: unterminated quoted string
cp: cannot stat '/tmp/log_*"': No such file or directory
The output of the echo is as expected. But when I copy the output above and run the command manually in the terminal, it works as expected with no errors.
So the ultimate question is, what am I doing wrong? The command seems to work fine when run manually in the terminal. Where is my syntax error?
Thanks for your help!
Adding set -f will prevent the wildcards in your paths from being expanded locally (although you may run in to other issues with spaces/special characters).
(You can re-enable wildcards afterwards with set +f)
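A sketch of where set -f fits in the script (the scp line is left as a comment because it needs a live host; the echo shows that the wildcards now survive unquoted expansion):

```shell
path1="/home/db/primary/*.xml"
path2="/tmp/log_*"
copyto="$HOME/Desktop/temp"
address="SERVER"   # hypothetical host name

set -f   # stop the local shell from expanding the wildcards
# With globbing off, the patterns reach the remote side intact and the remote
# shell expands them there:
#   scp -o StrictHostKeyChecking=no root@"$address":"$path1 $path2" "$copyto"
echo scp root@"$address":$path1 $path2 "$copyto"
set +f   # turn globbing back on for the rest of the script
```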

shell script, for loop, ssh and alias

I'm trying to do something like this: I need to take a backup from 4 blades, and all of them should be stored under the /home/backup/esa location, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But I see that only the data from the node on which I execute the command gets copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup for node ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is a NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After it finishes, the remote connection is closed, and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. The esaBackup command is then run on the local machine, so it won't see anything that you set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.
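The fix in miniature, with bash -c standing in for ssh "$node" (a hypothetical local demo, since no remote node is available here):

```shell
node="sc-1"
remote() { bash -c "$1"; }   # stand-in; the real script uses: ssh "$node" "$1"

# One double-quoted string carries BOTH commands to the inner shell, so the
# second command sees everything the first one set up. Note \$BACKUP_DIR is
# escaped so the *inner* shell expands it, while ${node} expands locally.
out=$(remote "BACKUP_DIR=/home/backup/esa/${node}; echo \$BACKUP_DIR")
echo "$out"
```

If the two commands were not in one string, the variable assignment would happen in the inner shell and the echo would run in the outer one, reproducing the original bug.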