Kubectl with Bash: argument is always passed in lowercase and not CamelCase - linux

Consider the Bash code:
function dropMyDB() {
    kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "$2";"
}
dropMyDB $1 "myTableNameInCamelCase"
When I execute the code it produces:
ERROR: relation "mytablenameincamelcase" does not exist
command terminated with exit code 1
Which means that the table name is not passed in its CamelCase form.
How can we fix this?

First
Escape your "$2", because it sits inside another pair of double quotes: the shell removes the inner quotes, so psql never sees them, and PostgreSQL folds unquoted identifiers to lowercase.
postgres -c "truncate table "$2";"
# to
postgres -c "truncate table $2;"
# or, keeping the double quotes so PostgreSQL preserves the identifier's case
postgres -c "truncate table \"$2\";"
Second
You can verify that the issue is not Bash by echoing the command instead of running it:
function dropMyDB() {
    echo "kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c \"truncate table \"$2\";\""
}
dropMyDB $1 "myTableNameInCamelCase"
Then
chmod +x script.sh
./script.sh foo
kubectl -n foo exec -ti foo-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "myTableNameInCamelCase";"

IMO it's not kubectl's fault:
fun(){ k exec aaaaaaaaaaaaa -it -- echo "$1"; }
fun AdsdDasfsFsdsd
AdsdDasfsFsdsd
But probably psql's; try it like this:
... psql -d MYDBNAME -U postgres -c "truncate table '$2';"
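What actually bites here is PostgreSQL's identifier folding: unquoted identifiers are lowercased, while double-quoted identifiers keep their case (single quotes would make it a string literal, not an identifier). A rough sketch, assuming a reachable database and using an illustrative table name:
psql -d MYDBNAME -U postgres -c 'CREATE TABLE "MyCamelTable" (id int);'
psql -d MYDBNAME -U postgres -c 'truncate table MyCamelTable;'     # ERROR: relation "mycameltable" does not exist
psql -d MYDBNAME -U postgres -c 'truncate table "MyCamelTable";'   # works: the quoted identifier keeps its case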

Related

SSH remote execution - How to declare a variable inside EOF block (Bash script)

I have the following code in a bash script:
remote_home=/home/folder
dump_file=$remote_home/my_database_`date +%F_%X`.sql
aws_pem=$HOME/my_key.pem
aws_host=user@host
local_folder=$HOME/db_bk
pwd_stg=xxxxxxxxxxxxxxxx
pwd_prod=xxxxxxxxxxxxxxx
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as $i FROM $i" ; done'
EOF
My while loop is not working because the "i" variable ends up empty. Can anyone give me a hand, please? I would like to understand how to handle data in such cases.
The local shell is "expanding" all of the $variable references in the here-document, but AIUI you want $i to be passed through to the remote shell and expanded there. To do this, escape (with a backslash) the $ characters you don't want the local shell to expand. I think it'll look like this:
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
You can test this by replacing the ssh -i $aws_pem $aws_host command with just cat, so it prints the here-document as it'll be passed to the ssh command (i.e. after the local shell has done its parsing and expansions, but before the remote shell has done its). You should see most of the variables replaced by their values (because those have to happen locally, where those variables are defined) but $i passed literally so the remote shell can expand it.
BTW, you should double-quote almost all of your variable references (e.g. ssh -i "$aws_pem" "$aws_host") to prevent weird parsing problems; shellcheck.net will point this out for the local commands (along with some other potential problems), but you should fix it for the remote commands as well (except $i, since that's already double-quoted as part of the SELECT command).
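A minimal, self-contained version of that cat test (the variable values below are made up just so it runs standalone):
remote_home=/home/folder
db_to_bk=db.example.internal
pwd_prod=secret
cat << EOF
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do echo "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
The output shows $pwd_prod, $db_to_bk and $remote_home already expanded by the local shell, while \$i comes out as a literal $i, ready for the remote shell to expand inside the loop.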

Sql in bash script (postgres)

I have a command in a bash script for renaming a database.
It works, for example:
psql -U $User -t -A -q -c 'ALTER DATABASE "Old_Name" RENAME TO "New_Name"'
But if I do this:
O_Name='Old_Name'
N_Name='New_Name'
psql -U $User -t -A -q -c 'ALTER DATABASE "$O_Name" RENAME TO "$N_Name"'
It doesn't work; I think SQL gets $O_Name, not Old_Name.
How do I pass the value of a Bash variable to SQL?
Single quotes don't allow variable expansion. Use double quotes instead (and escape the nested quotes), like:
psql -U $User -t -A -q -c "ALTER DATABASE \"$O_Name\" RENAME TO \"$N_Name\""
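A quick way to see the difference without touching a database is to echo both variants and look at what psql would receive:
O_Name='Old_Name'
N_Name='New_Name'
echo 'ALTER DATABASE "$O_Name" RENAME TO "$N_Name"'       # single quotes: $O_Name stays literal
echo "ALTER DATABASE \"$O_Name\" RENAME TO \"$N_Name\""   # double quotes: the variables are expanded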

bash list postgresql databases over ssh connection

I am doing some work on a remote PostgreSQL database.
When I log into the server, this command works in bash:
$ psql -c "\l"
Remote login over ssh is possible using:
ssh user#server -C "cd /tmp && su postgres -c psql"
But why doesn't it work from this command?
ssh user#server -C " cd /tmp && su postgres -c psql -c '\l' "
→ bash: l: command not found
This works (and so does "psql -l"), but I don't understand why I have to use the backslash three times here:
ssh user@server -C " cd /tmp && su postgres -c 'psql -c \\\l' "
Use several levels of quoting:
ssh user#server -C "cd /tmp && su postgres -c 'psql -c \"\\l\"'"
The double backslash is not strictly necessary, since \l is not a recognized escape sequence.
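One way to follow the three layers is to replace ssh (and the server) with a local echo and peel them off one at a time; a rough sketch:
echo "cd /tmp && su postgres -c 'psql -c \"\\l\"'"
# prints: cd /tmp && su postgres -c 'psql -c "\l"'
# The remote shell would strip the single quotes, and the shell started by su
# then strips the double quotes and hands psql the single argument \l.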

shell script is not working when it is called using cron scheduler

I have this sh script:
SPTH="/home/db_backup/log/log_"$(date +'%d-%m-%Y %H:%M')".txt"
echo "===DUMP started at $(date +'%d-%m-%Y %H:%M:%S')===" >> "$SPTH";
DBNAME="test"
PGPASSWORD=postgres psql -U postgres -d postgres -c "CREATE DATABASE $DBNAME";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public RENAME TO public_old";
for entry in /home/SQL/*.*
do
    echo "$entry"
    SUBSTRING=$(echo $entry| cut -d'/' -f 4)
    #echo $SUBSTRING
    SCHEMA=$(echo $SUBSTRING| cut -d'.' -f 1)
    echo $SCHEMA
    echo "===================== CREATING public SCHEMA =====================" >> "$SPTH";
    PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "Create Schema public";
    echo "===================== RESTORING DATA OF $SCHEMA =====================" >> "$SPTH";
    PGPASSWORD=postgres psql -U postgres -d $DBNAME -f "$entry";
    echo "===================== DELETING SCHEMA $SCHEMA=====================" >> "$SPTH";
    PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "DROP SCHEMA $SCHEMA CASCADE";
    echo "===================== RENAMING SCHEMA PUBLIC TO $SCHEMA=====================" >> "$SPTH";
    PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public RENAME TO $SCHEMA";
    echo "===================== RENAMING SQL FILE =====================" >> "$SPTH";
    mv /home/SQL/{$SUBSTRING,DONE--$SUBSTRING};
    echo "===================== DONE =====================" >> "$SPTH";
done
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public_old RENAME TO public";
echo "===DUMP finished at $(date +'%d-%m-%Y %H:%M:%S')===" >> "$SPTH";
When I run this script with the command
sh script.sh
it works fine, doing what I expect and running every command. But when I run it from the cron scheduler, the Postgres-related commands are not executed. I also want a log of every command in this shell script.
I want to run this script using the cron scheduler.
Where am I going wrong?
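As a hedged sketch of the usual first checks when a script works from a terminal but not from cron: cron starts the script with a minimal environment (PATH is often just /usr/bin:/bin), so psql may simply not be found, and nothing captures the script's own output. A crontab entry along these lines (the paths and schedule are illustrative) covers both:
# illustrative crontab: set PATH explicitly and capture stdout/stderr of the run
PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * /bin/sh /home/db_backup/script.sh >> /home/db_backup/log/cron_run.log 2>&1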

How to store output of sudo -S su -c <user> <command> to any variable

I am trying to execute the following command, but the output is not coming out as required.
var=$(echo "<password>"|sudo -S su -l <user> -c "<command>")
Please help if anyone can?
Expected Result:
var=$(echo "<password>"|sudo -S su -l <user> -c "pwd")
echo $var
/home/bhushan
$:
Actual Result:
echo $var
$:
You can use backticks
var=`sudo -S su -l <user> -c "<command>"`
or the $(command) syntax
var=$(sudo -S su -l <user> -c "<command>")
(keep in mind though that if <command> doesn't print anything, $var will be empty)
You can work around it by storing the output of the command in a file, changing its permissions so that all users can read it, and then loading it from the file in a follow-up command:
sudo -S sh -c "<command> > /tmp/sudocmd.out && chmod 644 /tmp/sudocmd.out"
var=$(cat /tmp/sudocmd.out)
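A minimal runnable illustration of that workaround, with id -un standing in for the real command (the password is piped to sudo -S as in the question):
echo "<password>" | sudo -S sh -c 'id -un > /tmp/sudocmd.out && chmod 644 /tmp/sudocmd.out'
var=$(cat /tmp/sudocmd.out)
echo "$var"    # prints root, since the stand-in command ran as root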
