how to use /dev/null with a bash command - linux

I am using the command below to get the result of my SQL query.
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER"
This works fine on my server, but on a different machine it prints bash warnings along with the output of the SQL query.
For example:
/etc/profile: line 46: HISTSIZE: readonly variable
/etc/profile: line 50: HISTCONTROL: readonly variable
/etc/profile.d/20-tmout.sh: line 1: TMOUT: readonly variable
/etc/profile.d/history.sh: line 6: hcmnt_tty: readonly variable
name
abc
Please let me know a way to skip the above warning messages and get only the data.
If I should use /dev/null in this case, how do I modify the above command to get the data only?

If what you mean is "how do I discard only the error output?", the way to go is to redirect the standard error stream to oblivion (/dev/null), like so:
your-command 2>/dev/null
That way, if the command writes data to standard output, it passes through, but anything written to the standard error stream is discarded, so you won't see those warning messages.
By the way, the 2 here is the file descriptor of standard error (1 is standard output, 0 is standard input).
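Applied to the command from the question, that looks like this (a sketch that keeps the original command verbatim; the redirection covers the whole su invocation, so the profile warnings from the login shell are discarded as well):
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" 2>/dev/null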

Sorry, this is untested, but I hit this same error: your db session isn't read/write. You can echo the statements to psql to force a proper session as follows. I'm unsure how stdin may be affected.
echo 'SET TRANSACTION READ WRITE; SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE ; COPY ( my SQL query ) TO STDOUT WITH CSV HEADER' | su - postgres -c 'psql -d dbname' with stdin

Caution - a bash hack:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" | grep -v "readonly"

Related

How to call Oracle SQL query using bash shell script

How do I call a SQL query from a bash shell script? I tried the below, but there seems to be some syntax error:
#!/bin/sh
LogDir='/albt/dev/test1/test2/logs' # log file
USER='test' #Enter Oracle DB User name
PASSWORD='test' #Enter Oracle DB Password
SID='test' #Enter SID
sqlplus -s << EOF > ${LogDir}/sql.log
${DB_USER_NAME}/${DB_PASSWORD}@${DB_SID}
SELECT count(1) FROM dual; # SQL script here to get executed
EOF
var=$(SELECT count(1) FROM dual)
I'm getting an "unexpected token" error.
#!/bin/sh
user="test"
pass="test"
var="$1"
sqlplus -S $user/$pass <<EOF
SELECT * FROM tableName WHERE username=$var;
exit;
EOF
I'm getting "sqlplus: command not found" when I run the above script.
Can anyone guide me?
In your first script, one error is in the line that uses count(1). The whole line is
var=$(SELECT count(1) FROM dual)
This means that the shell is supposed to execute a program named SELECT with the parameters count(1), FROM and dual, and store its stdout in the variable var. I think you instead want to feed the SELECT command into sqlplus, i.e. something like
var=$(sqlplus .... )
In your second script, the error message simply means that sqlplus cannot be found in your path. Either add its directory to PATH or invoke the command by its absolute path.
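Putting that together, here is a minimal sketch of what the first script seems to be aiming for (untested; it reuses the placeholder credentials from the question, and sqlplus may need its absolute path as noted above):
#!/bin/sh
USER='test'
PASSWORD='test'
SID='test'
# -s suppresses the banner; the here document carries the query.
# SET HEADING OFF FEEDBACK OFF trims the output down to just the value.
var=$(sqlplus -s "${USER}/${PASSWORD}@${SID}" <<EOF
SET HEADING OFF FEEDBACK OFF
SELECT count(1) FROM dual;
EXIT;
EOF
)
echo "count: $var"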

how to pass a variable from bash to sqlplus

I have the below code in bash:
#!/bin/bash
USERS=tom1,tom2,tom3
and I want to drop all users from my variable "USERS" via sqlplus
#!/bin/bash
USERS=tom1,tom2,tom3
sqlplus -s system/*** <<EOF
*some code*
DROP USER tom1 CASCADE;
DROP USER tom2 CASCADE;
DROP USER tom3 CASCADE;
EOF
Please help.
Splitting the USERS variable inside the database using SQL/PLSQL is complex: it requires an understanding of REGEXP functions and would also need dynamic SQL to execute the drop queries, so I will not recommend it.
I would suggest splitting it within the shell script: generate a script file and execute it in sqlplus.
USERS="tom1,tom2,tom3"
echo ${USERS} | tr ',' '\n' | while read word; do
echo "drop user $word;"
done >drop_users.sql
sqlplus -s system/pwd <<EOF
-- some code
@drop_users.sql
EOF
Your code is OK except that the DDL commands for the Oracle DB have to be derived from the USERS parameter of the script file; call it userDrop.sh. To tie the USERS parameter to the DB, an extra file (drpUsr.sql) should be produced and invoked inside the EOF tags, prefixed with an @ sign.
Let's edit it with the vi command:
$ vi userDrop.sh
#!/bin/bash
USERS=tom1,tom2,tom3
IFS=',' read -ra usr <<< "$USERS" #--here IFS(internal field separator) is ','
for i in "${usr[#]}"; do
echo "drop user "$i" cascade;"
done > drpUsr.sql
#--to drop a user a privileged user sys or system needed.
sqlplus / as sysdba <<EOF
@drpUsr.sql
exit;
EOF
Call as
$ . userDrop.sh
and do not forget to return to the command prompt with the DB's exit command.

Parsing oracle SQLPLUS error message in shell script for emailing

I'm trying to extract a substring from an Oracle error message so I can email it off to an administrator. Using awk, this part of the code is trying to find where the important bit I want to extract starts. Here's what I have....
(The table name is incorrect to generate the error)
validate_iwpcount(){
DB_RETURN_VALUE=`sqlplus -s $DB_CRED <<END
SELECT count(COLUMN)
FROM INCORRECT_TABLE NAME;
exit
END`
a="$DB_RETURN_VALUE"
b="ERROR at line"
awk -v a="$a" -v b="$b" 'BEGIN{print index(a,b)}'
echo $DB_RETURN_VALUE
}
The strange thing is, no matter how big $DB_RETURN_VALUE is, the return value from awk is always 28. I'm assuming that somewhere in this error message there's something Linux thinks is an implicit delimiter of some sort and it's messing with the count, or something stranger. This works fine with regular strings as opposed to what Oracle gives me.
Could anybody shine a light on this?
Many thanks
28 seems to be the right answer for the query you have (slightly amended to avoid an ORA-00936, and with tabs in the script). The message you're echoing includes a file expansion; the raw message is:
FROM IW_PRODUCTzS
*
ERROR at line 2:
ORA-00942: table or view does not exist
The * is expanded when you echo $DB_RETURN_VALUE, so the directory you're executing this from seems to contain entries named logs, mail_files and scripts, which are being shown through expansion of the *. If you run it from different directories the echoed message length will vary, but the length of the actual message from Oracle stays the same - the length is changing (through the expansion) after the SQL*Plus call and after awk has done its thing. You can avoid that expansion with echo "$DB_RETURN_VALUE" instead, though I don't suppose you actually want to see that full message anyway in the end.
The substring from character 28 gives you what you want though:
validate_iwpcount(){
DB_RETURN_VALUE=`sqlplus -s $CENSYS_ORACLE_UID <<END
SELECT count(COLUMN_NAME)
FROM IW_PRODUCTzS;
exit
END`
# To see the original message; note the double-quotes
# echo "$DB_RETURN_VALUE"
a="$DB_RETURN_VALUE"
b="ERROR at line"
p=`awk -v a="$a" -v b="$b" 'BEGIN{print index(a,b)}'`
if [ ${p} -gt 0 ]; then
awk -v a="$a" -v p="$p" 'BEGIN{print substr(a,p)}'
fi
}
validate_iwpcount
... displays just:
ERROR at line 2:
ORA-00942: table or view does not exist
I'm sure that can be simplified, maybe into a single awk call, but I'm not that familiar with it.
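For what it's worth, the two awk calls can be folded into one (a sketch reusing the a and b values from above; it prints nothing when the marker is absent):
a="$DB_RETURN_VALUE"
b="ERROR at line"
# index() locates the marker; substr() prints from there to the end of the message.
awk -v a="$a" -v b="$b" 'BEGIN{ i = index(a, b); if (i > 0) print substr(a, i) }'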

stdout all at once instead of line by line

I wrote a script that gets load and mem information for a list of servers by ssh'ing to each server. However, since there are around 20 servers, it's not very efficient to wait for the script to end. That's why I thought it might be interesting to make a crontab that writes the output of the script to a file, so all I need to do is cat this file whenever I need to know load and mem information for the 20 servers. However, when I cat this file during the execution of the crontab it will give me incomplete information. That's because the output of my script is written line by line to the file instead of all at once at termination. I wonder what needs to be done to make this work...
My crontab:
* * * * * (date;~/bin/RUP_ssh) &> ~/bin/RUP.out
My bash script (RUP_ssh):
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca
done
Thanks,
niefpaarschoenen
You can buffer the output to a temporary file and then output all at once like this:
outputbuffer=`mktemp` # Create a new temporary file, usually in /tmp/
trap "rm '$outputbuffer'" EXIT # Remove the temporary file if we exit early.
for comp in `cat ~/bin/servers`; do
ssh $comp ~/bin/ca >> "$outputbuffer" # gather info to buffer file
done
cat "$outputbuffer" # print buffer to stdout
# rm "$outputbuffer" # delete temporary file, not necessary when using trap
Assuming there is a string identifying which host the mem/load data came from, you can update your txt file as each result comes in. Assuming the data block is one line long, you could use
for comp in `cat ~/bin/servers`; do
output=$( ssh $comp ~/bin/ca )
# remove old mem/load data for $comp from RUP.out
sed -i '/'"$comp"'/d' RUP.out # this assumes that the string "$comp" is
# integrated into the output from ca, and
# not elsewhere
echo "$output" >> RUP.out
done
This can be adapted depending on the output of ca. There is lots of help on sed across the net.

cat multiple files over one ssh connection and get return value for each

As said in the title, I'm trying to cat multiple files (their content needs to be appended to existing files on the host) over one ssh connection and get a return value for each, i.e. whether the cat for that particular file was successful or not.
Up to now, I did this for each file individually, by just repeating the following command for each and checking the return value.
cat specific_file | ssh user@host -i /root/.ssh/id_rsa "cat >> result/specific_file"
I then checked the return value of each transfer (automatically) and could thereby determine the status of each file. My question is: is it possible to do this over one single ssh connection, but obtain a return value for every single file?
Thanks in advance!
EDIT:
As you can see in the following, I generate a command for one specific file and then check the return code. The method containing this part of the program is then called for the different types of files.
If the file was successfully logged, it can be removed from the client, and further logging to that type of file is (still) allowed. If the file exceeds its limits, this bit is unset in another part of the program.
sprintf(command,"/bin/cat /root/%s%s | /usr/bin/ssh log-bot#192.168.5.1 -i"
" /root/.ssh/id_rsa \"/bin/cat >> result/%s%s\"", apmac,
ending_str[source], apmac, ending_str[dest]);
rc = system(command);
if(rc != 0)
{
sprintf(buffer, "[%i] - LOG ERROR from %s to %s CODE %i ", (int)time(0),
ending_str[source], ending_str[dest], rc);
sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
}
else
{
sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
remove(filename);
sprintf(buffer, "[%i] - LOG process from %s to %s ok", (int)time(0),
ending_str[source], ending_str[dest]);
switch(source)
{
case PROBE_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_PROB);
break;
case LIST_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_LIST);
break;
case SCAN_FILE:
LOG_MASK_SET(globals->log_mask, LOG_MASK_SCAN);
break;
default:
/* Other cases not YET considered */
break;
}
}
SECOND EDIT:
I'm running this code on a router as part of a program. Please note that I'm not allowed to add new libraries or non-basic functions to the system. In addition, the ssh client on the system itself doesn't support "-M" mode.
Edit in response to the added info (and code):
To the code: I'd strongly consider writing a script/program on the receiving end to talk to the sending process over the ssh pipe. That way you have full flexibility.
The simplest thing that could work, would still appear to be sending an archive over to the receiving host. On the receiving end, filter the archive with a script that
untars each file into a temporary location
tries the appending operation cat >> specific_file
prints a 'result record' to stdout as feedback to the sender
So you'd do:
tar cf - file1 file2 file3 |
ssh log-bot@remote /home/log-bot/handle_logappends.sh |
while read resultcode filename
do
echo "$filename" resulted in code "resultcode"
done
To handle the feedback in C/C++ you'd look at popen, which will let you read the streaming feedback as if from a file. Simple!
An example of such a handle_logappends.sh script on the receiving end:
#!/bin/bash
set -e # bail on error
TEMPDIR="/tmp/.receiving_$RANDOM"
mkdir "$TEMPDIR"
trap "rm -rf '$TEMPDIR/'" INT ERR EXIT
tar x -v -f - -C "$TEMPDIR/" | while read filename   # read the archive from stdin explicitly
do
echo "unpacked file $filename" > /dev/stderr
## implement your file append logic here :)
## e.g. (?):
cat "$TEMPDIR/$filename" >> "result/$filename"
## HERE COMES THE FEEDBACK PART: '<code> <filename>'
echo "$?" "$filename"
done
The really neat part of this is, that since everything is in streaming mode, the feedback for the first file(s) may be arriving while the sending tar is still sending the later files to the receiving host. No unnecessary delays!
I included a tiny bit of sane error handling/cleanup but I would suggest
perhaps receiving the whole archive first, then iterating through the files?
doing the appends in atomic fashion (i.e. on a copy, then move the copy into place only if the whole append operation succeeded; this prevents partially appended logs)
Hope that helps!
Older answer:
You'd usually employ devious little tricks (not) like:
tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xf - -C result/"
Add a verbose flag to see progress details
tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xvf - -C result/"
If you want, you can substitute cpio for tar. Add options to get more functionality (-p for preserve permissions, e.g.)
To do various separate steps over a single logical connection, you can use an ssh master connection:
ssh user@host -i /root/.ssh/id_rsa -M -S ~/.ssh/ctl-%r@%h:%p -Nf # log in, set up the master with a control socket, background without a command
for specific_file in file1 file2 file3
do
cat "$specific_file" |
ssh user@host -S ~/.ssh/ctl-%r@%h:%p "cat >> 'result/$specific_file'" # reuse the master via its control socket
# check/use error code
done
How about building on libssh2 instead of scripting ssh, and using the sftp subsystem instead of building your own file-transfer system in shell?
There's an example of performing one file append in libssh2/examples/sftp_append.c, just repeat it for the multiple files you want.
If you look at the problem from a different tactical view, you could cat all the files over via a master file. That master file is a shell script with here documents embedding each file's contents. Then execute the master shell script and ls the files - all in one ssh session. It's not pretty or elegant, but it will be successful.
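A rough sketch of that idea (names are illustrative; it assumes the files are text and that no file contains its delimiter line, since here documents are not binary-safe):
{
for f in file1 file2 file3; do
echo "cat >> 'result/$f' <<'END_OF_$f'"  # quoted delimiter: no expansion on the remote side
cat "$f"                                 # embed this file's content in the master script
echo "END_OF_$f"
echo "echo \"$f: \$?\""                  # report the append status for each file
done
} | ssh user@host -i /root/.ssh/id_rsa 'bash -s'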
