SQL in a bash script (Postgres) - Linux

I have a command in a bash script that renames a database.
It works, for example:
psql -U $User -t -A -q -c 'ALTER DATABASE "Old_Name" RENAME TO "New_Name"'
But if I do this:
O_Name='Old_Name'
N_Name='New_Name'
psql -U $User -t -A -q -c 'ALTER DATABASE "$O_Name" RENAME TO "$N_Name"'
it doesn't work; I think SQL receives the literal string $O_Name instead of Old_Name.
How do I pass the value of a bash variable to SQL?

Single quotes don't allow variable expansion. Use double quotes instead (and escape the nested quotes), like this:
psql -U $User -t -A -q -c "ALTER DATABASE \"$O_Name\" RENAME TO \"$N_Name\""
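A quick way to see the difference without touching a database is to build both strings and echo them (a sketch; Old_Name/New_Name are placeholder values):

```shell
O_Name='Old_Name'
N_Name='New_Name'

# Single quotes: bash passes the text through literally, $O_Name and all.
sql_literal='ALTER DATABASE "$O_Name" RENAME TO "$N_Name"'

# Double quotes: bash expands the variables; the escaped inner
# double quotes survive for SQL identifier quoting.
sql_expanded="ALTER DATABASE \"$O_Name\" RENAME TO \"$N_Name\""

echo "$sql_literal"    # ALTER DATABASE "$O_Name" RENAME TO "$N_Name"
echo "$sql_expanded"   # ALTER DATABASE "Old_Name" RENAME TO "New_Name"
```

Whatever `echo` prints here is exactly the string psql would receive as its `-c` argument.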

Related

Kubectl with Bash command is always passed in LowerCase and not CamelCase

Consider the Bash code:
function dropMyDB() {
kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "$2";"
}
dropMyDB $1 "myTableNameInCamelCase"
When I execute the code it produces:
ERROR: relation "mytablenameincamelcase" does not exist
command terminated with exit code 1
Which means that the table name is not being passed in its CamelCase form.
How can we fix this?
First
Escape your "$2", because it is nested inside another pair of double quotes:
postgres -c "truncate table "$2";"
# to
postgres -c "truncate table $2;"
# or
postgres -c "truncate table \"$2\";"
Second
You can verify that the issue is not bash:
function dropMyDB() {
echo "kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c \"truncate table \"$2\";\""
}
dropMyDB $1 "myTableNameInCamelCase"
Then
chmod +x script.sh
./script.sh foo
kubectl -n foo exec -ti foo-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "myTableNameInCamelCase";"
IMO it's not kubectl's fault:
fun(){ k exec aaaaaaaaaaaaa -it -- echo "$1"; }
fun AdsdDasfsFsdsd
AdsdDasfsFsdsd
It is more likely on the psql side. Note that in SQL, single quotes create a string literal, not an identifier, so a case-sensitive table name needs escaped double quotes:
... psql -d MYDBNAME -U postgres -c "truncate table \"$2\";"
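You can check what the shell will actually hand to psql without a cluster at hand (the table name below is just an example): double quotes in SQL quote an identifier and preserve its case, while unquoted identifiers are folded to lowercase by Postgres.

```shell
tbl='myTableNameInCamelCase'

# Escaped inner double quotes keep the CamelCase identifier intact
# all the way through to the SQL parser:
sql="truncate table \"$tbl\";"
echo "$sql"    # truncate table "myTableNameInCamelCase";
```

If the echoed statement shows the name without its double quotes, Postgres will lowercase it and report `relation "mytablenameincamelcase" does not exist`.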

SSH remote execution - How to declare a variable inside EOF block (Bash script)

I have the following code in a bash script:
remote_home=/home/folder
dump_file=$remote_home/my_database_`date +%F_%X`.sql
aws_pem=$HOME/my_key.pem
aws_host=user@host
local_folder=$HOME/db_bk
pwd_stg=xxxxxxxxxxxxxxxx
pwd_prod=xxxxxxxxxxxxxxx
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as $i FROM $i" ; done'
EOF
My while loop is not working because the "i" variable ends up empty. Can anyone give me a hand, please? I would like to understand how to handle data in such cases.
The local shell is "expanding" all of the $variable references in the here-document, but AIUI you want $i to be passed through to the remote shell and expanded there. To do this, escape (with a backslash) the $ characters you don't want the local shell to expand. I think it'll look like this:
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
You can test this by replacing the ssh -i $aws_pem $aws_host command with just cat, so it prints the here-document as it'll be passed to the ssh command (i.e. after the local shell has done its parsing and expansions, but before the remote shell has done its). You should see most of the variables replaced by their values (because those have to happen locally, where those variables are defined) but $i passed literally so the remote shell can expand it.
BTW, you should double-quote almost all of your variable references (e.g. ssh -i "$aws_pem" "$aws_host") to prevent weird parsing problems; shellcheck.net will point this out for the local commands (along with some other potential problems), but you should fix it for the remote commands as well (except $i, since that's already double-quoted as part of the SELECT command).
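The two expansion behaviours can be demonstrated locally, with `cat` standing in for `ssh` (a sketch; no remote host involved). An unquoted here-document delimiter lets the local shell expand variables unless they are backslash-escaped; quoting the delimiter (`<<'EOF'`) suppresses all local expansion:

```shell
i='local_value'

# Unquoted delimiter: the local shell expands $i, but \$i survives.
cat <<EOF
expanded: $i
escaped: \$i
EOF

# Quoted delimiter: nothing is expanded locally.
cat <<'EOF'
untouched: $i
EOF
```

Replacing `cat` with the real `ssh ... <<EOF` behaves the same way: the lines printed here are exactly what the remote shell receives.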

BCP Command on Shell (.sh file)

I have a .sh script that does this:
bcp "EXEC SPName" queryout "test.csv" -k -w -t"," -S "$server" -U "$user" -P "$pass"
The variables $server, $user and $pass are read from an external config file.
The problem is that the variables don't work: I always get a connection timeout. For example, the same command works fine with the values hard-coded:
bcp "EXEC SPName" queryout "test.csv" -k -w -t"," -S "TEST" -U "admin" -P "admin"
How can I make the command dynamic?
I found the problem: I was reading the variables from an external JSON file created on Windows, and each value had a trailing "\r" (carriage return), so the command could not execute.
How I solved it:
sed -i 's/\r//g' YourFile.json
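A minimal reproduction of the problem (the file name here is made up): a value read from a CRLF-terminated file carries an invisible trailing carriage return, which you can strip per value with `tr` instead of rewriting the whole file with sed:

```shell
printf 'admin\r\n' > config_value.txt    # simulate a Windows-edited file

raw=$(head -n 1 config_value.txt)
echo "${#raw}"                           # 6: the invisible \r is still there

clean=$(head -n 1 config_value.txt | tr -d '\r')
echo "${#clean}"                         # 5: just "admin"
```

Passing `$raw` as a server name or password is what produces the mysterious timeout: the server `"TEST\r"` does not exist.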

How to work with nzsql in Netezza

I'm completely new to Netezza. I've connected to the Netezza server through PuTTY and need to run an nzsql command in the Linux terminal, but when I type nzsql it says command not found. Can someone tell me how to get started with nzsql and execute queries?
Thanks in advance.
You need to install the Netezza client (NzClient) to run nzsql from a staging machine. Please read the following link:
http://bajajvarun.blogspot.in/2014/02/install-netezza-client-on-ubuntu.html
Most likely the nzsql command is not on your path.
http://pic.dhe.ibm.com/infocenter/ntz/v7r0m3/index.jsp?topic=%2Fcom.ibm.nz.adm.doc%2Fr_sysadm_nzsql_command.html indicates the location of the commands, so if you are on the Netezza host the command is expected to be in /nz/kit/bin.
Does typing "/nz/kit/bin/nzsql" find the command? If so, add that directory to your path. If not, check with someone who can run the command to see what "which nzsql" shows, and add that directory to your path.
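If the binary is there, appending the kit directory to your PATH (assuming the standard /nz/kit/bin location; adjust to your install) makes nzsql resolvable from anywhere:

```shell
# Add Netezza's client binaries to the search path for this session;
# put the same line in ~/.bashrc to make it permanent.
export PATH="$PATH:/nz/kit/bin"

# Once installed there, this prints the full path to nzsql:
command -v nzsql
```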
If you want nzsql command examples, try something like:
nzsql -host -d -u -pw -c "select * from tablename" -o /root/home/outputfilename.txt;
nzsql -host -d -u -pw -c "select * from tablename" -F "|" -A -q -t | gzip > /root/home/outputfilename.txt.gz;
nzsql -host -d -u -pw -c 'insert into tablename values (1 ,2 )' -o /root/home/outputfilename.txt;
http://dwbitechguru.blogspot.ca/2014/11/extract-load-migrate-filesdata-from.html
or use them from unix scripts:
#!/bin/sh
# Unix script to drop & truncate a Netezza table
# enter your database name and table name below
dbname=exampledb
tblname=exampletbl
echo "Dropping table $tblname"
# use the line below to drop a table
nzsql $dbname -c "drop table $tblname"
# use the line below to truncate a table
nzsql $dbname -c "truncate table $tblname"
http://dwbitechguru.blogspot.ca/2014/12/how-to-create-unix-script-to-drop.html

How to run a SQL script in tsql

I'm using tsql (installed with FreeTDS) and I'm wondering if there's a way to run a SQL script in tsql from the command line and get the result in a text file.
For example, in psql I can do:
psql -U username -c "\copy (SELECT * FROM some_table) to 'out.csv' with csv header"
Or:
psql -U username -c "\i script.sql"
And in script.sql do:
\o out.csv
SELECT * FROM some_table;
Is there a way for doing this in tsql? I have read the linux man page and search everywhere but I just don't find a way.
I think, you can try "bsqldb", see http://linux.die.net/man/1/bsqldb
I really didn't find how to do this, and I'm starting to think it just can't be done in tsql. However, I solved my specific problem by redirecting stdin and stdout. I use a bash script like this:
tsql connection parameters < tsql_input.sql > tsql_output.csv
lines=$(wc -l < tsql_output.csv)
# The output includes tsql's header info and the "(n rows affected)" message,
# so I use a combination of head and tail to select the lines I need
headvar=$((lines-2))
tailvar=$((lines-8))
head -$headvar tsql_output.csv | tail -$tailvar > new_output_file.csv
And that saves just the query result on 'new_output_file.csv'
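If your coreutils support GNU-style negative counts, the same trimming can be sketched without computing line counts first (the 6-line header and 2-line footer are guesses; adjust them to your tsql banner):

```shell
# Keep everything from line 7 onward, then drop the last 2 lines
# (requires GNU `head -n -K`; not available on BSD/macOS).
tail -n +7 tsql_output.csv | head -n -2 > new_output_file.csv
```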
freebcp "select * from mytable where a=b" queryout sample.csv -U anoop -P mypassword -S hostname.com -D dbname -t , -c
This command works like a charm. Kudos to FreeTDS...
tsql -U username -P password -p 1433 > out.csv <<EOS
SELECT * FROM some_table;
go
EOS