Parsing oracle SQLPLUS error message in shell script for emailing - linux

I'm trying to extract a substring from an Oracle error message so I can email it off to an administrator, using awk. This part of the code is trying to find where the important bit I want to extract starts. Here's what I have....
(The table name is incorrect to generate the error)
validate_iwpcount(){
  DB_RETURN_VALUE=`sqlplus -s $DB_CRED <<END
SELECT count(COLUMN)
FROM INCORRECT_TABLE NAME;
exit
END`
  a="$DB_RETURN_VALUE"
  b="ERROR at line"
  awk -v a="$a" -v b="$b" 'BEGIN{print index(a,b)}'
  echo $DB_RETURN_VALUE
}
The strange thing is that no matter how big $DB_RETURN_VALUE is, the return value from awk is always 28. I'm assuming that somewhere in this error message there's something Linux thinks is an implicit delimiter of some sort, and it's messing with the count, or something stranger. This works fine with regular strings as opposed to what Oracle gives me.
Could anybody shine a light on this?
Many thanks

28 seems to be the right answer for the query you have (slightly amended to avoid an ORA-00936, and with tabs in the script). The message you're echoing includes a filename expansion; the raw message is:
FROM IW_PRODUCTzS
*
ERROR at line 2:
ORA-00942: table or view does not exist
The * is expanded when you echo $DB_RETURN_VALUE, so the directory you're executing this from seems to have logs, mail_files and scripts in it, and they are being shown through expansion of the *. If you run it from different directories the echoed message length will vary, but the length of the actual message from Oracle stays the same - the length is changing (through the expansion) after the SQL*Plus call and after awk has done its thing. You can avoid that expansion with echo "$DB_RETURN_VALUE" instead, though I don't suppose you actually want to see that full message anyway in the end.
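You can see the same effect with any string containing a * (a contrived demonstration; the file names here are hypothetical):
$ ls
logs  mail_files  scripts
$ msg='ERROR at line 2: *'
$ echo $msg
ERROR at line 2: logs mail_files scripts
$ echo "$msg"
ERROR at line 2: *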
The substring from character 28 gives you what you want though:
validate_iwpcount(){
  DB_RETURN_VALUE=`sqlplus -s $CENSYS_ORACLE_UID <<END
SELECT count(COLUMN_NAME)
FROM IW_PRODUCTzS;
exit
END`
  # To see the original message; note the double-quotes
  # echo "$DB_RETURN_VALUE"
  a="$DB_RETURN_VALUE"
  b="ERROR at line"
  p=`awk -v a="$a" -v b="$b" 'BEGIN{print index(a,b)}'`
  if [ ${p} -gt 0 ]; then
    awk -v a="$a" -v p="$p" 'BEGIN{print substr(a,p)}'
  fi
}
validate_iwpcount
... displays just:
ERROR at line 2:
ORA-00942: table or view does not exist
I'm sure that can be simplified, maybe into a single awk call, but I'm not that familiar with it.
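For what it's worth, the two awk calls could be folded into one along these lines (a sketch of the same logic, not tested against a live SQL*Plus session):
a="$DB_RETURN_VALUE"
b="ERROR at line"
awk -v a="$a" -v b="$b" 'BEGIN{
  p = index(a, b)                 # position of the error text, 0 if absent
  if (p > 0) print substr(a, p)   # print from there to the end
}'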

Writing to the same file with awk and truncate

My system is Arch Linux and my window manager is DWM. I use dash as my shell interpreter.
I have written this extension shell script for my timer.
xev -root |
awk -F'[ )]+' '/^KeyPress/ { a[NR+2] }
NR in a {
if ($8 == "Return") {
exit 0;
} else if ($8 == "BackSpace") {
system("truncate -s-1 timer.txt");
} else if (length($8) == 1) {
printf "%s", $8;
fflush(stdout);
}
system("pkill -RTMIN+3 dwmblocks");
}' | tee timer.txt
The timer itself sits in the dwmblocks status bar. I want to name my timers first and then let the timer start. But I don't think that's that important.
The purpose of this script - I want to input characters into the root window of DWM and have them appear in my status bar instantly. So, xev produces the key press information, then awk takes that information, finds the exact key (from all the information that xev outputs) and checks it. If the key is "Return", awk exits (job done). If the key is "BackSpace", awk calls truncate via system(). If it's a regular character key, then awk outputs it to timer.txt with tee (I could use "> timer.txt" too, I think, but I want to see the output in my terminal for debugging).
After every relevant keypress (single character) I fflush stdout. After all of that I finally call pkill so that dwmblocks knows that it should update (dwmblocks issues a cat operation on the file).
Okay, "Return" and character input work fine. But there's a problem with "BackSpace". I've read about it a bit (I'd say I'm still a Unix newbie even though I've been using Linux for two years now) and I found out that writing to the same file from different processes is bad news. Still, could it be done somehow? The fact is that truncate only writes to the file when awk doesn't, so, maybe, it wouldn't be that big of a deal?
This exact script worked earlier yesterday but now it doesn't. At first, I tried using sed instead of truncate and truncate seemed to let me delete characters from timer.txt but now truncate seems to not work anymore too. Well, it kinda works. I can input my characters and then I can delete them. BUT. After pressing Backspace I can not enter any more characters. If I try to enter a character Backspace stops working too.
So yeah. I'd have several questions. First - what the hell is the problem? As I've said, it used to work and now it doesn't. Am I wandering into undefined behavior in this script?
Second - could this be done - meaning - could I somehow write and delete from the same file. Maybe with some other tool, not awk?
Thanks in advance.
This probably isn't an answer but it's too much to go in a comment. I don't know the details of most of the tools you mention, nor do I really understand what it is you're trying to do but:
A shell is a tool to manipulate files and processes and schedule calls to other tools. Awk is a tool to manipulate text. You're trying to use awk like a shell - you have it sequencing calls to truncate and pkill and calling system to spawn a subshell each time you want to execute either of them. What you should be doing, for example, is just:
shell { truncate }
but what you're actually doing is:
shell { awk { system { shell { truncate } } } }
Can you take that role away from awk and give it back to your shell? It should make your overall script simpler, conceptually at least, and probably more robust.
Maybe try something like this (untested):
#!/usr/bin/env bash
while IFS= read -r str; do
case $str in
Return ) exit 0 ;;
BackSpace ) truncate -s-1 timer.txt ;;
? ) printf "%s" "$str" | tee -a timer.txt ;;
esac
pkill -RTMIN+3 dwmblocks
done < <(
xev -root |
awk -F'[ )]+' '/^KeyPress/{a[NR+2]} NR in a{print $8; fflush()}'
)
I moved the write to timer.txt inside the loop to make sure tee isn't trying to write to it while you're truncating it - that may not be necessary.

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one. I looked the issue up on Google, but can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
    echo $line > line.tmp
    arg=`cut -d ";" -f 1 line.tmp`
    requ=`cut -d ";" -f 2 line.tmp`
    if [ $requ = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
    fi
    read -p " $arg=" answer
    echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems not to be executing. It just skips it, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to be executing just a single time. All I see on the screen is "[This argument is required]" just one time, and the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for eventual mistakes, I'm french)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
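For example, a common way to anchor everything to the script's own location (a sketch; it assumes the modules directory lives next to the script itself):
# Compute an absolute path to the directory this script lives in, once,
# at the top of the script...
module_root="$(cd "$(dirname "$0")" && pwd)" || exit 1
# ...then reference everything from it, wherever the user runs us from:
args_conf="$module_root/modules/$name/args.conf"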
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system.XXXXXX)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
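Putting several of those pieces together, the whole loop might end up looking something like this (a sketch only; $name, $interpreter and $file are assumed to be set earlier, as in your script, and the info/succes helpers are elided):
module_root="$(cd "$(dirname "$0")" && pwd)" || exit 1

arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"

"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"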

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit statuses of the individual commands in a pipeline.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put this code in a bash script it shows only one exit status. My default shell on the command line is bash.
Could somebody please help me understand why this behaviour changes, and what I should do to get this working in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea).
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
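For illustration, option 1 applied to your script might look like this (a sketch; note that PIPESTATUS has to be captured immediately, since the very next command overwrites it):
#!/bin/bash
backfile=/var/tmp/backup$$

find_fun() {
    find /home
}

# Run the pipeline directly - no eval, no command strings
find_fun | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz" 2>/dev/null

# Capture PIPESTATUS right away, before any other command clobbers it
status=("${PIPESTATUS[@]}")
echo -e "find ${status[0]}\npax ${status[1]}\ncompress ${status[2]}"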
Instead of ${PIPESTATUS[0]} use ${PIPESTATUS[@]}
As with any array in bash, PIPESTATUS[0] contains the first command's exit status. If you want to get all of them you have to use PIPESTATUS[@], which returns all the contents of the array.
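A quick way to see the difference (PIPESTATUS is overwritten after every command, so it has to be read straight after the pipeline):
true | false | true
echo "${PIPESTATUS[0]}"   # prints: 0      (just the first command)

true | false | true
echo "${PIPESTATUS[@]}"   # prints: 0 1 0  (all three commands)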
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.

Using AWK and setting results to bash variables/arrays?

I have a file that replicates the output of the show processlist command from MySQL.
The file looks like this:
*************************** 1. row ***************************
Id: 1
User: system user
Host:
db: NULL
Command: Connect
Time: 1030455
State: Waiting for master to send event
Info: NULL
*************************** 2. row ***************************
Id: 2
User: system user
Host:
db: NULL
Command: Connect
Time: 1004
State: Has read all relay log; waiting for the slave
I/O thread to update it
Info: NULL
And it keeps going on for a few more times in the same structure.
I want to use AWK to get only these parameters: Time, Id, Command and State, and store each of these parameters in a different variable or array so that I can later use / print them in my bash shell.
The problem is, I am pretty bad with AWK; I don't know how to separate the parameters I want from the file and also set them as bash variables or arrays.
Many thanks in advance for the help!
EDIT: Here is my code so far
echo "Enter age"
read age
cat data | awk 'BEGIN{ RS="row"
FS="\n"
OFS="\n"}
{ print $2,$7}
' | awk 'BEGIN{ RS="Id"}
{if ($4 > $age){print $2}}'
The file 'data' contains blocks like I have pasted above. The code should, if the 'age' entered is smaller than the Time parameter in the data file (which is $4 in my awk code), return the ID parameter, but it returns nothing.
If I remove the if statement and print $4 instead of $2 this is my output
Enter age
1
1030455
1004
2144
2086
0
So I was thinking maybe that blank line is somehow messing up my AWK print? Is there a simple way to ignore that blank line while keeping my other data?
This is how you'd use awk to produce the values you want as a set of tab-separated fields on each line per "row" block from the input:
$ cat tst.awk
BEGIN {
RS="[*]+ [[:digit:]]+[.] row [*]+\n"
FS="\n"
OFS="\t"
}
NR>1 {
sub(/\n$/,"") # remove the trailing newline
gsub(/\n\s+/," ") # compress all multi-line fields into single lines
gsub(OFS," ") # ensure the only OFS in the output IS between fields
delete n2v
for (i=1; i<=NF; i++) {
name = gensub(/:.*/,"","",$i)
value = gensub(/^[^:]+:\s+/,"","",$i)
n2v[name] = value
}
if (n2v["Time"]+0 > age) { # force a numeric comparison
print n2v["Time"], n2v["Id"], n2v["Command"], n2v["State"]
}
}
$ awk -v age=2000 -f tst.awk file
1030455 1 Connect Waiting for master to send event
If the target age is already stored in a shell variable just init the awk variable from the shell variable of the same name:
$ age="2000"
$ awk -v age="$age" -f tst.awk file
The above uses GNU awk for multi-char RS (which you already had), gensub(), \s, and delete array.
When you say "and store every one of these parameters into a different variable or array" it could mean one of several things so I'll leave that part up to you but you might be looking for something like:
arr=( $(awk '...') )
or
awk '...' |
while IFS=$'\t' read -r Time Id Command State
do
<do something with those 4 vars>
done
but by far the most likely situation is that you don't want to use shell at all but instead just stay inside awk.
Remember - every time you write a loop in shell just to manipulate text you have the wrong approach. UNIX shell is an environment from which to call UNIX tools and the UNIX tool for general text manipulation is awk.
Until you edit your question to tell us more about your problem though, we can't guess what the right solution is from this point on.
At the first level you have your shell, which you use to run any other child process. It's impossible to modify the parent's environment from within a child process. When you run your bash script file (which has the +x right) it's spawned as a new process (child). It can set its own environment, but when it ends its life you'll get back to the original (parent).
You can set some variables in bash and export them to its environment. They'll be inherited by its children. However, it can't be done in the opposite direction (a parent can't inherit from its child).
If you wish to execute some commands from the script file in the current bash's context you can source the script file. source ./your_script.sh or . ./your_script.sh will do that for you.
If you need to run awk to filter some data for you and keep results in the bash you can do:
awk ... | read foo
Be careful with this form, though: read is a shell builtin rather than an external process (check type read, help read, or man bash to see for yourself), but in bash every part of a pipeline runs in a subshell, so foo set this way disappears when the pipeline ends. It works as written in shells like ksh or zsh, or in bash with shopt -s lastpipe in a script.
or:
foo=`awk ....`
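As a quick demonstration of why the pipe form is fragile in bash while command substitution is safe (hypothetical values):
echo "42" | read foo   # read runs in a subshell...
echo "$foo"            # ...so this prints an empty line in bash

foo=`echo "42" | awk '{print $1}'`
echo "$foo"            # prints: 42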
There are many other constructions you can use. Whatever bash script you write, please compare your code with the Bash Pitfalls webpage.

egrep command with piped variable in ssh throwing No Such File or Directory error

Ok, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens. I'm trying to pass multiple tokens to an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like the value below, but its length is dynamic, meaning it can contain many file names separated by pipes.
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
Something worth noting is that my log file data has spaces in it. So, in the piece of code I've given, the actual comments which I want to extract start after the jar file name, like: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comments are 'New Life Starts here', but the logs show that we are actually only getting as far as 'New'; it breaks at the space. I tried setting IFS but to no avail. Probably I need to set it on the remote side, but I don't know how I should do that.
Any help?
Your command is trying to run the egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine, and pass the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
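To see the difference between the two forms (hypothetical host and paths):
# Inner $(...): egrep runs LOCALLY, and its output becomes the remote command
ssh user@host "$(egrep 'pattern' /local/path/file.log)"

# Single quoted command string: egrep runs on the REMOTE machine, as intended
ssh user@host "egrep 'pattern' /remote/path/file.log"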
You should use fgrep to avoid regex special interpretation of your input:
commentsArray=($(ssh $sourceUser@$sourceHost "fgrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
