Bash variable inside a third remote server - Linux

I need to pass a variable through to a third Linux system; here is the chain:
from my laptop > docker server > a container.
#!/bin/bash
domain=$1
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 /bin/bash -c curl -Is $domain"
Of course the variable reaches only the docker server, but not the container.

The first option to test is to pass $domain as an environment variable to your docker run command:
docker run -it --rm -e "domain=$domain" 931967fb3e32 /bin/bash -c 'curl -Is $domain'
(note the use of -it, to be sure to have a tty in an interactive session)
If curl somehow doesn't pick up the right value (you can test it by replacing /bin/bash -c 'curl -Is $domain' with /bin/bash -c 'echo $domain'), wrap it in a script (which means your image must include that script).
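If you go the script route, a minimal sketch of such a wrapper (the name and path /usr/local/bin/check-domain.sh are made up; the image would have to ship it):
#!/bin/bash
# check-domain.sh - expects the domain variable passed in with -e
curl -Is "$domain"
and then:
docker run --rm -e "domain=$domain" 931967fb3e32 /usr/local/bin/check-domain.sh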
As discussed in the comments, it seems to work without the bash -c:
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 curl -Is $domain"
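For reference, that works because the whole remote command is in double quotes, so $domain is expanded on the laptop before ssh sends anything; the container just receives the already-substituted value. A sketch of the full script under that assumption:
#!/bin/bash
domain=$1
# $domain is expanded locally; the remote docker run sees the literal value
ssh -i "$SSH_KEY" docker@10.10.10.10 "docker run --rm=true 931967fb3e32 curl -Is $domain"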

Related

Why am I unable to cd within a docker container?

I would like to automatically execute tasks inside a docker container. The task should be run inside a specific, mounted directory. To do this, I am using this command:
docker run --rm -v /a/dir/on/my/host:/tmp some_container /bin/bash -c "cd /tmp/dir/inside/volume && echo \"$PWD\""
followed by the actual task, which I omit for brevity.
$PWD should give me /tmp/dir/inside/volume, but it prints /a/dir/on/my/host. Why is that?
$PWD is expanded by the host shell before your container runs. Use single quotes instead of double quotes to defer expansion. It is also simpler to use --workdir or -w instead of cd ... && and a subshell.
docker run --rm -v /a/dir/on/my/host:/tmp some_container /bin/bash -c 'cd /tmp/dir/inside/volume && echo "$PWD"'
or, simpler:
docker run --rm -v /a/dir/on/my/host:/tmp -w /tmp/dir/inside/volume some_container pwd
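If the omitted task is a script sitting inside the mounted volume, -w also lets it run with the working directory already set; a sketch with a hypothetical run_task.sh:
docker run --rm -v /a/dir/on/my/host:/tmp -w /tmp/dir/inside/volume some_container bash run_task.sh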

Executing a command inside docker shows wrong $PATH

I am trying to run a bash command inside a docker container from the host:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "rosbag"
/bin/bash: rosbag: command not found
So I tried:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo \$PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
But when I run docker interactively:
$ docker exec -it -u weiss apollo_dev /bin/bash
weiss@docker$ echo $PATH
/usr/local/cuda-8.0/bin:/home/tmp/ros/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Any reason why I am getting different results for $PATH?
This path is most likely set in your .bashrc file, and that file is not loaded when the shell is non-interactive (see https://www.gnu.org/software/bash/manual/bash.html#Bash-Startup-Files).
So /bin/bash will load it, but /bin/bash -c will not.
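Two common workarounds, sketched with the names from the question: force an interactive shell so ~/.bashrc is read, or source it explicitly first (the latter assumes the .bashrc does not bail out early for non-interactive shells):
$ docker exec -it -u weiss apollo_dev /bin/bash -ic "rosbag"
$ docker exec -it -u weiss apollo_dev /bin/bash -c "source ~/.bashrc && rosbag"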
Pay attention to where $PATH is expanded. Without escaping, the host shell substitutes its own $PATH before docker exec even runs:
$ docker exec -it -u weiss apollo_dev /bin/bash -c "echo $PATH"
To have the variable expanded inside the container, either escape the $ (as in the question) or use single quotes, which make the escaping unnecessary:
$ docker exec -it -u weiss apollo_dev /bin/bash -c 'echo $PATH'
The single quotes are the key.

Bash - combine 2 ssh calls into 1 (with optional and mandatory commands)

I have a script with 2 ssh commands. The first uses SSH to log into a remote server and delete docker images.
ssh person@someserver.com 'set -x &&
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)'
Note the use of ; to separate commands (we don't care if one or more of the commands fail).
The 2nd ssh command uses SSH to log into the same server, grab a docker compose file and run docker.
ssh person@someserver.com 'set -x &&
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
Note the use of && to separate commands (this time we do care whether one of the commands fails, since we grab the exit code, i.e. exitCode=$?).
I don't like the fact I have to split this into 2 so my question is can these 2 sections of bash commands be combined into a single SSH call (with both ; and && combinations)?
Although it is possible to pass a set of commands as a simple single-quoted string, I wouldn't recommend that, because:
internal quotation marks have to be escaped
it is difficult to read (and maintain!) code that looks like one long string in a text editor
I find it better to keep the scripts in separate files, then pass them to ssh as standard input:
cat script.sh | ssh -T user@host -- bash -s -
Execution of several scripts is done in the same way. Just concatenate more scripts:
cat a.sh b.sh | ssh -T user@host -- bash -s -
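Applied to this question, the cleanup commands (joined with ;) and the deploy commands (joined with &&) could simply live in two files and keep their own separators; the file names below are made up:
cat cleanup.sh deploy.sh | ssh -T person@someserver.com -- bash -s -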
If you still want to use a string, use a here document instead:
ssh -T user@host -- <<'END_OF_COMMANDS'
# put your script here
END_OF_COMMANDS
Note the -T option. You don't need pseudo-terminal allocation for non-interactive scripts.
An alternative is a single call that mixes both separators: ; for the cleanup commands (failures tolerated) and && for the deploy commands (failures abort):
ssh person@someserver.com 'set -x;
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
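Since ssh returns the exit status of the last remote command, the exit-code capture from the original script still works after the combined call; a minimal sketch:
ssh person@someserver.com 'set -x; ...'   # the combined commands as above
exitCode=$?
echo "Remote commands exited with ${exitCode}"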

Ksh script: How to remain in ssh and continue the script

For my script I want to ssh into a remote host, remain on the remote host after the script ends, and also have the directory changed on the remote host when the script ends.
#!/bin/ksh
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD'
The PWD is changed correctly before the second ssh. The reason for the second ssh is that it leaves me on the correct remote host when the script ends, but it does not retain the directory change; the commands I put after it never execute.
Does anyone have any ideas?
Just launch a shell at the end of the command list:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; echo $PWD; ssh -q -X mylogin@myremotemachine; cd $HOME/bin/folder2; echo $PWD; ksh'
If you want the shell to be a login one (i.e. one that reads .profile), use exec -l:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec -l ksh'
If the remote server uses an old ksh release that doesn't support the exec -l builtin and if bash or ksh93 is available, here is a workaround:
ssh -t -X mylogin@myremotemachine 'cd $HOME/bin/folder1; exec bash -c "exec -l ksh"'

How to write a bash script which automates entering a docker container and doing other things?

I want to implement a bash script which enters a running docker container and does some things:
# cat docker.sh
#!/bin/bash -x
docker exec -it hammerdb_net8 bash
cd /data/oracle/tablespaces/
pwd
Executing the script in a terminal:
# ./docker.sh
+ docker exec -it hammerdb_net8 bash
[root@npar1 /]#
The output shows that it only logs into the docker container; the other operations are not executed.
Is there any method to automate entering a docker container and doing other things?
You can use bash -c:
docker exec -it hammerdb_net8 bash -c 'cd /data/oracle/tablespaces/; pwd; ls'
For running a series of commands, use a here-doc in bash:
docker exec -i hammerdb_net8 bash <<'EOF'
cd /data/oracle/tablespaces/
pwd
ls
EOF
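Note the quoted 'EOF': it stops the host shell from expanding anything inside the here-doc, so variables are resolved inside the container. If a host-side value needs to be injected, one option is to leave the delimiter unquoted; TABLESPACE_DIR below is a made-up variable for illustration:
TABLESPACE_DIR=/data/oracle/tablespaces
docker exec -i hammerdb_net8 bash <<EOF
cd $TABLESPACE_DIR
pwd
ls
EOF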
