How to work with nzsql in Netezza - Linux

I'm completely new to Netezza. I've connected to the Netezza server over SSH (PuTTY) and need to run an nzsql command in the Linux terminal, but when I type nzsql it says "command not found". Can someone tell me how to get started with nzsql and execute queries?
Thanks in advance

You need to install the Netezza client package to run nzsql from a staging machine. Please read the following link -
http://bajajvarun.blogspot.in/2014/02/install-netezza-client-on-ubuntu.html

Most likely the nzsql command is not on your path.
http://pic.dhe.ibm.com/infocenter/ntz/v7r0m3/index.jsp?topic=%2Fcom.ibm.nz.adm.doc%2Fr_sysadm_nzsql_command.html indicates the location of the commands, so if you are on the Netezza host the command is expected to be in /nz/kit/bin.
Does typing "/nz/kit/bin/nzsql" find the command? If so, add that directory to your path. If not, check with someone who can run the command to see what "which nzsql" shows, and add that directory to your path.
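As a sketch, that check can be scripted; /nz/kit/bin is the documented kit location, so adjust it to whatever "which nzsql" reports on a working account:

```shell
#!/bin/sh
# Add the Netezza kit bin directory to PATH if nzsql is there.
# /nz/kit/bin is the documented location on the Netezza host;
# replace it with the directory "which nzsql" reports elsewhere.
NZ_BIN=/nz/kit/bin
if [ -x "$NZ_BIN/nzsql" ]; then
    PATH="$PATH:$NZ_BIN"
    export PATH
fi
# Confirm the shell can now find it
command -v nzsql || echo "nzsql still not on PATH"
```

To make the change permanent, append the PATH line to your ~/.profile.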

If you want the nzsql commands then try something like the following (replace the <placeholders> with your own host, database, user, and password):
nzsql -host <host> -d <dbname> -u <user> -pw <password> -c "select * from tablename" -o /root/home/outputfilename.txt;
nzsql -host <host> -d <dbname> -u <user> -pw <password> -c "select * from tablename" -F "|" -A -q -t | gzip > /root/home/outputfilename.txt.gz;
nzsql -host <host> -d <dbname> -u <user> -pw <password> -c 'insert into tablename values (1, 2)' -o /root/home/outputfilename.txt;
http://dwbitechguru.blogspot.ca/2014/11/extract-load-migrate-filesdata-from.html
or use them from unix scripts:
#!/bin/sh
# Unix script to drop and truncate a Netezza table
# enter your database name and table name below
dbname=exampledb
tblname=exampletbl
echo "Dropping table $tblname"
# use the line below to drop a table
nzsql $dbname -c "drop table $tblname"
# use the line below to truncate a table
nzsql $dbname -c "truncate table $tblname"
http://dwbitechguru.blogspot.ca/2014/12/how-to-create-unix-script-to-drop.html
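As a sketch, the same script can take the database and table as arguments instead of being edited each time; the function below just wraps the nzsql calls shown above:

```shell
#!/bin/sh
# Parameterized variant of the drop/truncate script above:
# pass the database and table on the command line instead of
# editing the script.
drop_and_truncate() {
    dbname=$1
    tblname=$2
    echo "Dropping table $tblname"
    nzsql "$dbname" -c "drop table $tblname"
    echo "Truncating table $tblname"
    nzsql "$dbname" -c "truncate table $tblname"
}

# Example call:
# drop_and_truncate exampledb exampletbl
```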

Related

SSH remote execution - How to declare a variable inside EOF block (Bash script)

I have the following code in a bash script:
remote_home=/home/folder
dump_file=$remote_home/my_database_`date +%F_%X`.sql
aws_pem=$HOME/my_key.pem
aws_host=user@host
local_folder=$HOME/db_bk
pwd_stg=xxxxxxxxxxxxxxxx
pwd_prod=xxxxxxxxxxxxxxx
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as $i FROM $i" ; done'
EOF
My while loop is not working because the "i" variable ends up empty. Can anyone give me a hand, please? I would like to understand how to handle data in cases like this.
The local shell is "expanding" all of the $variable references in the here-document, but AIUI you want $i to be passed through to the remote shell and expanded there. To do this, escape (with a backslash) the $ characters you don't want the local shell to expand. I think it'll look like this:
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
You can test this by replacing the ssh -i $aws_pem $aws_host command with just cat, so it prints the here-document as it'll be passed to the ssh command (i.e. after the local shell has done its parsing and expansions, but before the remote shell has done its). You should see most of the variables replaced by their values (because those have to happen locally, where those variables are defined) but $i passed literally so the remote shell can expand it.
BTW, you should double-quote almost all of your variable references (e.g. ssh -i "$aws_pem" "$aws_host") to prevent weird parsing problems; shellcheck.net will point this out for the local commands (along with some other potential problems), but you should fix it for the remote commands as well (except $i, since that's already double-quoted as part of the SELECT command).
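That cat test is easy to reproduce standalone; a minimal sketch (variable names invented for the demo):

```shell
#!/bin/sh
# Which variables does the local shell expand in a here-document?
# $local_var is unescaped, so it is expanded here; \$remote_var is
# escaped, so it is passed through literally for the remote shell.
local_var=expanded-locally
cat << EOF
local:  $local_var
remote: \$remote_var
EOF
```

The first line of output shows the substituted value, while the second still contains the literal $remote_var for the remote end to expand.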

Sql in bash script (postgres)

I have a command in a bash script for renaming a database.
It works, for example:
psql -U $User -t -A -q -c 'ALTER DATABASE "Old_Name" RENAME TO "New_Name"'
But if I do this -
O_Name='Old_Name'
N_Name='New_Name'
psql -U $User -t -A -q -c 'ALTER DATABASE "$O_Name" RENAME TO "$N_Name"'
It doesn't work; I think SQL receives the literal $O_Name instead of Old_Name.
How do I pass the value of a bash variable to SQL?
Single quotes don't allow for environment variable expansion. Use double quotes instead (and escape the nested quotes). Like,
psql -U $User -t -A -q -c "ALTER DATABASE \"$O_Name\" RENAME TO \"$N_Name\""
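The difference is easy to see with echo, no database needed:

```shell
#!/bin/sh
# Single quotes pass $O_Name through literally;
# double quotes let the shell expand it.
O_Name=Old_Name
echo 'ALTER DATABASE "$O_Name" ...'     # prints: ALTER DATABASE "$O_Name" ...
echo "ALTER DATABASE \"$O_Name\" ..."   # prints: ALTER DATABASE "Old_Name" ...
```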

Open screen for a impala query

I have an Impala query in a file and I want to open a screen session for this query.
The query is like
select *
from db.test
where date >20201101
I will run the file with impala-shell -f test.sql
Also, is it possible to generate a file with the execution time afterwards?
Yes, this is possible using some more shell scripting.
impala-shell -k --ssl -i xx:100 -q "select * from table;" >logfile.txt 2>&1
grep "Fetched" logfile.txt | cut -d " " -f 5 > Run_info.txt #for Select,Create
grep "Modified" logfile.txt | cut -d " " -f 5 >>Run_info.txt #for Insert
The output will be something like 0.11s.
You can use this to run all your scripts in a loop. If you want, you can also see how many records were fetched or modified from the logfile; you just have to tweak the grep command.
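A sketch of that loop over several script files (the hostname and flags are the placeholders from the command above, and the "Fetched ... in 0.11s" log format is taken from the example output):

```shell
#!/bin/sh
# Run each .sql file with impala-shell, keep its log, and append the
# reported runtime to Run_info.txt. Host/flags are placeholders.
for f in *.sql; do
    impala-shell -k --ssl -i xx:100 -f "$f" > "$f.log" 2>&1
    grep "Fetched"  "$f.log" | cut -d " " -f 5 >> Run_info.txt  # select/create
    grep "Modified" "$f.log" | cut -d " " -f 5 >> Run_info.txt  # insert
done
```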

BCP Command on Shell (.sh file)

I have a .sh script that does this:
bcp "EXEC SPName" queryout "test.csv" -k -w -t"," -S "$server" -U "$user" -P "$pass"
The variables $server, $user and $pass are being read from a external config file.
The problem is that the variables don't work and I always get a connection timeout. For example, the same command works fine with the values hard-coded:
bcp "EXEC SPName" queryout "test.csv" -k -w -t"," -S "TEST" -U "admin" -P "admin"
How can I make the command dynamic?
I found the problem: I was reading the variables from an external JSON file created on Windows, and the lines ended with "\r", so the command could not execute.
How I solved it:
sed -i 's/\r//g' YourFile.json
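The stray \r is easy to reproduce and strip inline; a small standalone demo (file and variable names made up):

```shell
#!/bin/sh
# A value read from a Windows-edited file keeps its trailing \r,
# so commands built from it break. tr -d '\r' strips it per value
# (sed -i on the whole file, as above, works too).
printf 'server=TEST\r\n' > config.txt
raw=$(cut -d= -f2 config.txt)              # still ends in \r
clean=$(printf '%s' "$raw" | tr -d '\r')   # \r removed
[ "$raw" = "TEST" ]   || echo "raw value is polluted by \\r"
[ "$clean" = "TEST" ] && echo "clean value matches"
```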

How to run a SQL script in tsql

I'm using tsql (installed with FreeTDS) and I'm wondering if there's a way to run a SQL script in tsql from the command line and get the result in a text file.
For example, in psql I can do:
psql -U username -c "\copy (SELECT * FROM some_table) TO 'out.csv' WITH CSV HEADER"
Or:
psql -U username -c "\i script.sql"
And in script.sql do:
\o out.csv
SELECT * FROM some_table;
Is there a way for doing this in tsql? I have read the linux man page and search everywhere but I just don't find a way.
I think you can try "bsqldb"; see http://linux.die.net/man/1/bsqldb
I really didn't find how to do this, and I'm starting to think it just can't be done in tsql. However, I solved my specific problem by redirecting stdin and stdout. I use a bash script like this:
tsql connection parameters < tsql_input.sql > tsql_output.csv
lines=$(wc -l < tsql_output.csv)
# The output includes the header info of tsql and the "(n number of rows)" message
# so I use a combination of head and tail to select the lines I need
headvar=$((lines-2))
tailvar=$((lines-8))
head -$headvar tsql_output.csv | tail -$tailvar > new_output_file.csv
And that saves just the query result on 'new_output_file.csv'
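The arithmetic drops the first six lines (the tsql banner) and the last two (the row-count footer); here is a quick standalone check on a 10-line stand-in file (the 6/2 split comes from my output and may differ on other setups):

```shell
#!/bin/sh
# Demo of the head/tail trimming on a 10-line stand-in file:
# it should keep lines 7..8, dropping 6 header and 2 footer lines.
seq 1 10 > tsql_output.csv
lines=$(wc -l < tsql_output.csv)
headvar=$((lines-2))   # keep all but the last 2 lines
tailvar=$((lines-8))   # of those, keep all but the first 6
head -$headvar tsql_output.csv | tail -$tailvar > new_output_file.csv
cat new_output_file.csv
```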
freebcp "select * from mytable where a=b" queryout sample.csv -U anoop -P mypassword -S hostname.com -D dbname -t , -c
This command works like a charm. Kudos to FreeTDS...
tsql -U username -P password -p 1433 > out.csv << EOS
SELECT * FROM some_table;
go
EOS
