Using Variable in Shell script as input for another command - linux

#!/bin/bash
sudo docker-compose -f /home/administrator/compose/docker-compose.yml up --build -d
OUTPUT=$(docker ps | grep 'nginx_custom' | awk '{ print $1 }')
echo $OUTPUT
sudo docker $OUTPUT nginx -s reload
This is the ID that gets printed correctly in the console.
6e3b3aa3fbc4
This command works fine.
docker exec 6e3b3aa3fbc4 nginx -s reload
However, the variable does not seem to get passed to the command here:
sudo docker $OUTPUT nginx -s reload
I am quite unfamiliar with the shell. How do I pass the variable to a command other than just echo?

Add set -x to the script and see what happens.
You can also get rid of grep and incorporate the pattern into awk:
#!/bin/bash
set -x
sudo docker-compose -f /home/administrator/compose/docker-compose.yml up --build -d
OUTPUT=$(docker ps | awk '/nginx_custom/{ print $1 }')
echo $OUTPUT
sudo docker $OUTPUT nginx -s reload
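Running the script with set -x also makes the actual problem visible: compared with the working docker exec 6e3b3aa3fbc4 nginx -s reload, the traced line is missing the exec subcommand. A sketch of the corrected call (quoting the variable, assuming exactly one container matches):
sudo docker exec "$OUTPUT" nginx -s reload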

Related

Execute a remote command over SSH with remote evaluation

I'm trying to run a command over SSH and want the evaluation of an expression in the command to happen on the remote machine.
I'm trying to run this:
ssh -A username@ip "sudo docker exec -it "$(docker ps | grep 'some' | awk '{ print $1 }')" python manage.py shell"
but the expression $(docker ps | grep 'some' | awk '{ print $1 }') is not evaluated correctly on the remote machine when I use the ssh command.
To confirm, if I first ssh into the remote machine, and then run sudo docker exec -it "$(docker ps | grep 'some' | awk '{ print $1 }')" python manage.py shell, it does evaluate correctly and gives me a shell successfully. I just cannot make it work directly from my local machine as a part of an argument to the ssh command.
What can I do to make it work as a part of the ssh command?
The problem with the command below is that I get a "Pseudo-terminal will not be allocated because stdin is not a terminal." message from my terminal (iTerm) and do not get a shell as I expect after running it.
ssh -A username@ip <<'EOL'
name="$(docker ps | grep 'some' | awk '{ print $1 }')"
docker exec -it $name python manage.py shell
EOL
You need to escape all the characters that need to be interpreted by the remote shell like so:
ssh -A username@ip "sudo docker exec -it \"\$(docker ps | grep 'some' | awk '{ print \$1 }')\" python manage.py shell"
This way the quotes around the docker exec argument, as well as the $ sign, are sent unchanged, and the remote shell performs the expansion.
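One way to sanity-check the escaping is to echo the same double-quoted argument locally; the \$(...) and the inner \" come out literally, confirming they will only be evaluated on the remote side:
echo "sudo docker exec -it \"\$(docker ps | grep 'some' | awk '{ print \$1 }')\" python manage.py shell"
which prints:
sudo docker exec -it "$(docker ps | grep 'some' | awk '{ print $1 }')" python manage.py shell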

Can't run bash file inside ZSH

I've added a call to a bash script in my .zshrc and tried different ways to run it every time I open a new terminal window or source .zshrc, but no luck.
FYI: it was working fine in .bashrc.
Here is the .zshrc snippet:
#Check if ampps is running
bash ~/ampps_runner.sh & disown
Different approach:
#Check if ampps is running
sh ~/ampps_runner.sh & disown
Another approach:
#Check if ampps is running
% ~/ampps_runner.sh & disown
None of the above approaches worked (each is supposed to start an app named ampps, but under zsh the app never starts).
Note: it was working fine before switching from bash to zsh, so it is not a permission or syntax problem.
Update: content of ampps_runner.sh
#! /usr/bin/env
echo "########################"
echo "Checking for ampps server to be running:"
check=$(pgrep -f "/usr/local/ampps" )
#[ -z "$check" ] && echo "Empty: Yes" || echo "Empty: No"
if [ -z "$check" ]; then
echo "It's not running!"
cd /usr/local/ampps
echo password | sudo -S ./Ampps
else
echo "It's running ..."
fi
(1) I believe ~/ampps_runner.sh is a bash script, so its first line should be
#!/bin/bash
or
#!/usr/bin/bash
not
#! /usr/bin/env
(2) Then, the call in zsh script (~/.zshrc) should be:
~/ampps_runner.sh
(3) Note: ~/ampps_runner.sh should be executable. Make it executable:
$ chmod +x ~/ampps_runner.sh
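Putting the three points together, a minimal sketch (assuming the script lives in your home directory, as in the question):
$ head -1 ~/ampps_runner.sh
#!/bin/bash
$ chmod +x ~/ampps_runner.sh
Then the call in ~/.zshrc stays a plain command:
~/ampps_runner.sh & disown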
The easiest way to run bash temporarily from a zsh terminal is to
exec bash
or just
bash
Then you can run commands you previously could only run in bash. An example
help exec
To exit
exit
Now you are back in your original shell
If you want to know your default shell
echo $SHELL
or
set | grep SHELL=
If you want to reliably know your current shell
ps -p $$
Or if you want just the shell name you might use
ps -p $$ | awk "NR==2" | awk '{ print $4 }' | tr -d '-'
And you might put that last one in a function for later; just know that it is only available if it has been sourced in the current shell.
whichShell(){
  # basename strips the directory part; tr -d '/bin/' would delete every '/', 'b', 'i' and 'n' character instead
  local defaultShell=$(basename "$SHELL")
  echo "Default: $defaultShell"
  local currentShell=$(ps -p $$ | awk "NR==2" | awk '{ print $4 }' | tr -d '-')
  echo "Current: $currentShell"
}
Call the function to see your results:
whichShell
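On a machine whose login shell is bash but where the terminal currently runs zsh, the output would look something like this (hypothetical values):
Default: bash
Current: zsh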

Shell script: execute 2 commands and keep first running

I'm trying to write a shell script for my docker image where:
an MS SQL server is started
the database setup happens
However, with my current script the SQL Server instance stops as soon as the data import is done. Could anyone point out what I'm doing wrong?
#!/bin/bash
database=myDB
wait_time=30s
password=myPw
exec /opt/mssql/bin/sqlservr &
echo importing data will start in $wait_time...
sleep $wait_time
echo importing data...
/opt/mssql-tools/bin/sqlcmd -S 0.0.0.0 -U sa -P $password -i ./init.sql
for entry in "table/*.sql"
do
echo executing $entry
/opt/mssql-tools/bin/sqlcmd -S 0.0.0.0 -U sa -P $password -i $entry
done
for entry in "data/*.csv"
do
shortname=$(echo $entry | cut -f 1 -d '.' | cut -f 2 -d '/')
tableName=$database.dbo.$shortname
echo importing $tableName from $entry
/opt/mssql-tools/bin/bcp $tableName in $entry -c -t',' -F 2 -S 0.0.0.0 -U sa -P $password
done
I did not see any clear mistakes in your shell script. I would just advise the following:
Try running the server from the script's current shell, without exec:
/opt/mssql/bin/sqlservr &
Put some echo statements in both loops to check what is going on there.
Hope this helps.
It seems I needed to add set -m to resolve this.
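A minimal sketch of that fix (job control via set -m, plus an explicit wait so the script, and with it the container, stays alive as long as sqlservr runs; paths, password, and timings are the ones from the question):
#!/bin/bash
set -m
database=myDB
wait_time=30s
password=myPw
/opt/mssql/bin/sqlservr &                 # start SQL Server in the background
sleep $wait_time
/opt/mssql-tools/bin/sqlcmd -S 0.0.0.0 -U sa -P "$password" -i ./init.sql
# ... the table/*.sql and data/*.csv import loops from the question go here ...
wait                                      # block until sqlservr exits, keeping the container up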

Problems with fish shell and ssh remote commands

I use fish shell on my desktop.
We run many servers with nginx inside docker. I've tried to create a function so I can SSH to a server and then get a shell inside the docker container.
The problem is that fish complains about the $ in the command, even though the command is meant to be executed on the remote server (running bash), not on my machine running fish. I've simplified the script to make it easier to see.
config.fish snippet
function ssh-docker-nginx
ssh -t sysadmin#10.10.10.10 "sudo bash && docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash"
end
Fish error:
$(...) is not supported. In fish, please use '(docker)'.
~/.config/fish/config.fish (line 59): ssh -t sysadmin@10.10.10.10 "sudo bash && docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash"
^
from sourcing file ~/.config/fish/config.fish
called during startup
Is there a way to get fish to ignore this?
You'll want to single-quote that argument.
Inside double quotes (") fish tries to expand everything that starts with a $, so it sees the $( and prints that error. It will also see the $1 in your awk argument and expand it.
And when you want single quotes to reach the called command (like here, where the awk argument should stay single-quoted because it will go through bash's expansion), you need to escape the quotes with \.
Try
ssh -t sysadmin@10.10.10.10 'sudo bash && docker exec -it $(docker ps | grep -i nginx | awk \'{print $1}\') bash'
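One way to see exactly what fish will send before running it is to echo the same single-quoted argument; the \' escapes are fish's syntax for a literal single quote inside single quotes:
echo 'sudo bash && docker exec -it $(docker ps | grep -i nginx | awk \'{print $1}\') bash'
which prints the command exactly as bash on the server will receive it:
sudo bash && docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash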
Thanks for the great advice and the tip above about single/double quotes. Unfortunately the escaped quotes in awk did not play nicely when passed through ssh.
After trying various options, I settled on this approach (which needed a forced tty):
function ssh-docker-nginx
cat docker-bash.sh | ssh -t -t sysadmin@10.10.10.10
end
# docker-bash.sh
#!/bin/bash
sudo chmod 777 /var/run/docker.sock
sudo docker exec -it $(docker ps | grep -i nginx | awk '{print $1}') bash

Bash variable inside third remote server

I need to pass a variable through to a third Linux system; here is the scheme:
From my laptop > docker server > a container:
#!/bin/bash
domain=$1
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 /bin/bash -c curl -Is $domain"
Of course, the variable only reaches the docker server, not the container.
The first option to test is to pass $domain as an environment variable to your docker run command:
docker run -it --rm -e "domain=$domain" 931967fb3e32 /bin/bash -c curl -Is $domain
(note the use of -it, to be sure to have a tty in an interactive session)
If the curl somehow doesn't pick up the right value (you can test it by replacing /bin/bash -c curl -Is $domain with /bin/bash -c echo $domain), wrap it in a script (which means your image should include that script).
As discussed in the comments, it seems to work without the bash -c:
ssh -i $SSH_KEY docker@10.10.10.10 "docker run --rm=true 931967fb3e32 curl -Is $domain"
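A quick way to confirm that the local shell has already expanded $domain before it reaches the docker server is to prefix the remote command with echo (example.com here is just a stand-in value):
domain=example.com
ssh -i "$SSH_KEY" docker@10.10.10.10 "echo docker run --rm=true 931967fb3e32 curl -Is $domain"
# prints: docker run --rm=true 931967fb3e32 curl -Is example.com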
