"Nothing in SQL buffer to run." message on executing a shell script - linux

I am running the following shell script, which calls a .SQL file containing a list of DELETE statements. On executing ./delete_crr_input_purge.sh, I get the message "Nothing in SQL buffer to run." The .SQL file gets executed anyway, but what is wrong with my code in the shell script?
#!/bin/ksh
#
# #(#)Copyright All Rights Reserved.
# #(#) $Id: run_drm_utility.sh $
# Setup common environment
. `dirname $0`/db_env.sh
cd `dirname $0`
echo "Enter the SHBA Atomic DB User Name:"
read USERNAME
echo "Enter the SHBA Atomic DB User Password:"
read PASS
cnt=`sqlplus -s /nolog <<-EOF
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect $USERNAME/$PASS@$ORACLE_SID
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF
@delete_crr_input_purge.sql
commit;
EOF`
echo [$cnt]
return $?

I was able to fix this error by removing comments and occurrences of "/" from my .SQL file. I also made sure that each of my SQL statements was on a single line, one below the other, and that there was no commit statement inside the .SQL file.
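For reference, a minimal sketch of what the cleaned-up .SQL file might look like (the table and column names here are made up for illustration):

DELETE FROM crr_input WHERE load_date < SYSDATE - 90;
DELETE FROM crr_input_hist WHERE load_date < SYSDATE - 90;

One statement per line, no comment lines, no "/" lines, and no commit; the commit is issued from the shell script after the @ call.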

Related

How to call Oracle SQL query using bash shell script

How can I call a SQL query from a bash shell script? I tried the below, but there seems to be a syntax error:
#!/bin/sh
LogDir='/albt/dev/test1/test2/logs' # log file
USER='test' #Enter Oracle DB User name
PASSWORD='test' #Enter Oracle DB Password
SID='test' #Enter SID
sqlplus -s << EOF > ${LogDir}/sql.log
${DB_USER_NAME}/${DB_PASSWORD}@${DB_SID}
SELECT count(1) FROM dual; # SQL script here to get executed
EOF
var=$(SELECT count(1) FROM dual)
I'm getting an "unexpected token" error.
#!/bin/sh
user="test"
pass="test"
var="$1"
sqlplus -S $user/$pass <<EOF
SELECT * FROM tableName WHERE username=$var;
exit;
EOF
I'm getting "sqlplus: command not found" when I run the above script.
Can anyone guide me?
In your first script, one error is in the use of count(1). The whole line is
var=$(SELECT count(1) FROM dual)
This means that the shell is supposed to execute a program named SELECT with the parameters count(1), FROM and dual, and store its stdout in the variable var. I think you instead want to feed a SELECT command into sqlplus, i.e. something like
var=$(sqlplus .... )
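Fleshed out, a minimal sketch of that pattern might look like this (reusing the credential variables from the question; the SET line suppresses everything except the query result, so only the count lands in var):

var=$(sqlplus -s "${USER}/${PASSWORD}@${SID}" <<EOF
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF
SELECT count(1) FROM dual;
EXIT;
EOF
)
echo "count is: ${var}"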
In your second script, the error message simply means that sqlplus cannot be found in your PATH. Either add its directory to PATH or invoke the command by its absolute path.
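For example, assuming a standard Oracle client install where sqlplus lives under $ORACLE_HOME/bin:

export PATH=$PATH:$ORACLE_HOME/bin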

how to pass a variable from bash to sqlplus

I have the below code in bash:
#!/bin/bash
USERS=tom1,tom2,tom3
and I want to drop all users from my variable "USERS" via sqlplus
#!/bin/bash
USERS=tom1,tom2,tom3
sqlplus -s system/*** <<EOF
-- some code
DROP USER tom1 CASCADE;
DROP USER tom2 CASCADE;
DROP USER tom3 CASCADE;
EOF
Please help.
Splitting the USERS variable inside the database using SQL/PLSQL is complex: it requires REGEXP functions and would also need dynamic SQL to execute the drop statements, so I would not recommend it. I would suggest you split it within the shell script instead: generate a script file and execute it in sqlplus.
USERS="tom1,tom2,tom3"
echo ${USERS} | tr ',' '\n' | while read word; do
echo "drop user $word;"
done >drop_users.sql
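which writes a drop_users.sql along these lines:

drop user tom1 cascade;
drop user tom2 cascade;
drop user tom3 cascade;

and then execute it: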
sqlplus -s system/pwd <<EOF
-- some code
@drop_users.sql
EOF
Your code is OK except that the DDL commands should be derived from the USERS parameter of the script file; call it userDrop.sh. To tie the USERS parameter to the DB, an extra file (drpUsr.sql) should be produced and invoked inside the EOF tags with an @ sign prefix.
Let's edit by vi command :
$ vi userDrop.sh
#!/bin/bash
USERS=tom1,tom2,tom3
IFS=',' read -ra usr <<< "$USERS" #--here IFS(internal field separator) is ','
for i in "${usr[#]}"; do
echo "drop user "$i" cascade;"
done > drpUsr.sql
#-- to drop a user, a privileged user (sys or system) is needed.
sqlplus / as sysdba <<EOF
@drpUsr.sql
exit;
EOF
Call it as
$ . userDrop.sh
and do not forget to return to the command prompt with an exit command if you are left inside the DB session.

how to use /dev/null for bash command

I am using the below command to get the result of my SQL query.
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER"
This works fine on my server, but on a different machine it prints bash warnings along with the output of the SQL query.
For example -
/etc/profile: line 46: HISTSIZE: readonly variable
/etc/profile: line 50: HISTCONTROL: readonly variable
/etc/profile.d/20-tmout.sh: line 1: TMOUT: readonly variable
/etc/profile.d/history.sh: line 6: hcmnt_tty: readonly variable
name
abc
Please let me know a way to skip the above warning messages and get only the data. If /dev/null is the right tool here, how should I modify the above command to get the data only?
If what you mean is "how do I discard only the error output?", the way to go is to redirect the standard error stream to oblivion (/dev/null), like so:
your-command 2>/dev/null
That way, if the command outputs data to standard out, it passes through, but any output to the standard error stream is discarded, so you won't see these error messages.
By the way, 2 here is the file descriptor for standard error.
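Applied to the command from the question, that would be:

su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" 2>/dev/null

The warnings in your example come from the /etc/profile scripts run by the login shell that su - starts; they are written to standard error, so this discards them while the CSV data still arrives on standard out.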
Sorry, this is untested, but I hit this same error: your DB session isn't read/write. You can echo the statements into psql to force a proper session, as follows. I'm unsure how stdin may be affected.
echo 'SET TRANSACTION READ WRITE; SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE ; COPY ( my SQL query ) TO STDOUT WITH CSV HEADER' | su - postgres -c 'psql -d dbname' with stdin
caution - bash hack
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" | grep -v "readonly"

Logging sql error from inside shell script

I am trying to implement an error-logging mechanism in a shell script that will also give me errors from inside a SQL statement.
I am planning to call my script from inside another script and redirect the errors to a logfile.
Is there any better option? Please help.
#!/bin/sh
./test.sh 2>&1 >> log_1.log
test.sh contains the following code
## testing error logging in sql
result=`sqlplus -s $username/$passwd@$db <<EOF
set serveroutout on
set pagesize 0
set heading off
set echo off
set feedback off
select first_name from employees;
exit;
EOF
if [ $? -ne 0 ]; then
echo "Error exists"
else
echo "$result"
fi
--- Edited after testing the code given by Alex Pool
I did the changes, but whenever I get an SQL error the log file is not generated; instead the error is shown at the command line.
Script name -test_error_log.sh
#!/bin/sh
output=`sqlplus -s -l hr/hr@xe << EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
set serveroutput on
set heading off
set pagesize 0
set echo off
select e.firs_name --- wrong field name
,d.department_name from
employees e , departments d
where e.department_id=d.department_id ;
exit;
EOF`
echo $output
I am calling it from caller_shell.sh in the following way
script name : caller_shell.sh
#!/bin/sh
./test_error_log.sh 2>&1 log_file
When I execute ./caller_shell.sh from the command line, I am getting the error, but on the screen rather than in the log_file:
ERROR at line 1: ORA-00904: "E"."FIRS_NAME": invalid identifier
Please let me know how to resolve this.
You can use whenever sqlerror to make SQL*Plus exit with an error code, which the shell script will see as the return code:
result=`sqlplus -l -s $username/$passwd@$db <<EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
set serveroutput on
set pagesize 0
set heading off
set echo off
set feedback off
select first_name from employees;
exit 0;
EOF`
It will exit when the error is seen, and nothing after the point of failure will be attempted. Using failure makes it a generic failure code, but you can specify a particular value if you prefer. Be aware of shell limits if you use your own value, though; in most shells anything above 255 will wrap back around to zero, so it's not a good idea to exit with the SQL error code, for example, as a real error might happen to end up as zero after it's been mod'ed. Using rollback means that if a failure occurs partway through a script, it will roll back any (uncommitted) changes already made.
This will catch SQL errors and PL/SQL errors (unless those are caught by an exception handler and not re-raised), but won't catch SQL*Plus-specific errors - i.e. those starting SP-, such as from an invalid set command.
I've added a -l flag to the sqlplus command so it only tries to connect once, which is helpful for a non-interactive script - sometimes they can hang waiting for subsequent credentials, depending on the context and what else is in the script. I've also fixed the spelling of serveroutput and added a missing backtick...
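One more thing worth checking: in caller_shell.sh, the line ./test_error_log.sh 2>&1 log_file never redirects anything into log_file; 2>&1 just points stderr at wherever stdout already goes (the terminal), and log_file is passed to the script as a positional argument. To capture both streams in the file, redirect stdout first and then duplicate stderr onto it:

#!/bin/sh
# Send stdout to the log, then point stderr at the same destination.
./test_error_log.sh > log_file 2>&1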

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$./logger/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well, I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way; it doesn't like my bashtrap or my incremental index.
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
read -p "command:" inputline
if [ $inputline="exit" ]
then
echo "Aborting with Exit"
break
else
echo "$index: $inputline" > output
$inputline 2>&1 | tee output
(( index++ ))
fi
done
This can be achieved in bash, perl, or other languages.
Some hints to get you started in bash:
question 1: command prompt /logger/home/it244/it244/hw8
1) Set the prompt format in the user's .bashrc setup file: see the PS1 variable for Debian-like distros.
2) cd into that directory within your bash script.
question 2: run the user command
1) get the user input
read -p "command : " input_cmd
2) run the user command to STDOUT
bash -c "$input_cmd"
3) Track the user input command exit code
echo $?
Should exit with "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK
echo $$ >> /tmp/pid_Ok
But take care: the assignment asks you to keep the user's command input, not the PID itself as shown here.
5) Trap on exit
See man trap, as you misunderstood its use: you may create a function that is called on the caught exit or CTRL+C signals.
6) Increment the index in your while loop (on the exit code condition)
index=0
while ...
do
...
((index++))
done
I guess you have enough to start your homework.
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, not set the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests a function should be created, which could then either be called by the trap or used to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands, and it might look like this:
_cleanup() {
rm -f $_LOG
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/usr/bin/sh
# Define a function to call on exit
_cleanup() {
# Remove the log file as per specification #5a
rm -f $_LOG
# Display success/fail counts as per specification #5b
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
# Count lines to support required logging format per specification #3
((_line++))
# Set prompt per specification #1 and read command
read -p "`pwd -P`\$ " _command
# Echo command to log file as per specification #3
echo "$_line: $_command" >>$_LOG
# Arrange to exit on user input with value 'exit' as per specification #5
if [[ "$_command" == "exit" ]]
then
_cleanup
fi
# Execute whatever command was entered as per specification #2
eval $_command
# Capture the success/fail counts to support specification #5b
_status=$?
if [ $_status -eq 0 ]
then
((_success++))
else
((_fail++))
fi
done
