Run cron job in non-silent mode? - cron

I created a simple Linux script that essentially calls sqlplus and puts the results in variable X. I then analyze X and determine whether or not I need to send out a syslog message.
The script works perfectly when I run it from the command line as "oracle"; however, when I add it to oracle's crontab, X isn't getting filled.
I could be wrong, but I believe the issue is that since cron runs things in silent mode, X isn't actually getting filled, whereas when I run the script manually it is.
Here's my crontab -l result (as oracle):
0,30 * * * * /scripts/isOracleUp.sh syslog
Here's my full script:
#!/bin/bash
# Created by: hatguy
# Created date: May 8, 2012
# File Attributes: Must be executable by "oracle"
# Description: This script is used to determine if Oracle is up
#     and running. It does a simple select on dual to check this.
DATE=`date`
USER=$(whoami)
if [ "$USER" != "oracle" ]; then
#note: $0 is the full path of whatever script is being run.
echo "You must run this as oracle. Try \"su - oracle -c $0\" instead"
exit;
fi
X=`sqlplus -s '/ as sysdba'<<eof
set serveroutput on;
set feedback off;
set linesize 1000;
select count(*) as count_col from dual;
EXIT;
eof`
# This COULD be more elegant. The issue I'm having is that I can't figure out
# which hidden characters are getting fed into X, so instead what I did was
# check the string length (26) and check that COUNT_COL and 1 were where I
# expected them.
if [ ${#X} -eq 26 ] && [ "${X:1:10}" = "COUNT_COL" ] && [ "${X:24:3}" = "1" ]; then
    echo "Connected"
    # log to a text file that we checked and confirmed the connection
    if [ "$1" == "syslog" ]; then
        echo "$DATE: Connected" >> /scripts/log/isOracleUp.log
    fi
else
    echo "Not Connected"
    echo "Details: $X"
    if [ "$1" == "syslog" ]; then
        echo "Sending this to syslog"
        echo "==========================================================" >> /scripts/log/isOracleUp.log
        echo "$DATE: Disconnected" >> /scripts/log/isOracleUp.log
        echo "Message from sqlplus: $X" >> /scripts/log/isOracleUp.log
        /scripts/sendMessageToSyslog.sh "PROD Oracle is DOWN!!!"
        /scripts/sendMessageToSyslog.sh "PROD Details: $X"
    fi
fi
Here's the output when run as oracle from a terminal:
Wed May 9 10:03:07 MDT 2012: Disconnected
Message from sqlplus: select count(*) as count_col from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
Here's my log output when run through oracle's crontab job:
Wed May 9 11:00:04 MDT 2012: Disconnected
Message from sqlplus:
And to syslog:
PROD Details:
PROD Oracle is DOWN!!!
Any help would be appreciated as I'm a new Linux user and this is my first Linux script.
Thanks!

My Oracle DB skills are pretty limited, but don't you need to set ORACLE_SID and ORACLE_HOME?
Check these variables from the command line, then set them within cron (or at the top of the script) and retry.
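A minimal sketch of what that could look like at the top of the script (the path and SID below are placeholders, not values from the thread; copy the real ones from echo $ORACLE_HOME and echo $ORACLE_SID in a working interactive session as oracle):

#!/bin/bash
# cron starts jobs with a near-empty environment, so the Oracle variables
# that an interactive login shell normally provides must be set explicitly.
# The values below are placeholders -- substitute your own.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=PROD
export PATH=$PATH:$ORACLE_HOME/bin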

Related

Parse args passed from aws System Manager send_command

I have a long-running job that cannot finish within Lambda's 15-minute limit, so I decided to use an EC2 worker instance to run it. The job needs to be kicked off from a Lambda function. I am using the following Python code to send the command to the EC2 instance.
ssm.send_command(
    InstanceIds=['*****'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': [f'/home/ssm-user/get_cert_attributes.sh --doc_id={doc_id}']})
The shell script is getting called; however, I am unable to parse the --doc_id argument. I am using the code block below to parse it, but doc_id comes up blank. Any help in this regard would be highly appreciated.
#!/bin/bash
while [ "${1:-}" != "" ]; do
    case "$1" in
        "-d" | "--doc_id")
            shift
            doc_id=$1
            ;;
    esac
    shift
done
echo $doc_id
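A plausible cause (an inference; the thread doesn't confirm it): the f-string passes --doc_id={doc_id} as a single word, e.g. --doc_id=123, which the case pattern "--doc_id" never matches, so doc_id stays empty. A sketch of a loop that also accepts the = form:

#!/bin/bash
while [ "${1:-}" != "" ]; do
    case "$1" in
        "-d" | "--doc_id")
            shift
            doc_id=$1
            ;;
        --doc_id=*)
            # single-word form: strip everything up to and including the =
            doc_id=${1#*=}
            ;;
    esac
    shift
done
echo "$doc_id"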
I resolved the issue by creating my own ssm document:
schemaVersion: "2.2"
description: "Get document attributes"
parameters:
docid:
type: "String"
description: "Document id to be processed"
mainSteps:
- action: "aws:runShellScript"
name: "GetDocAttr"
inputs:
runCommand:
- "/home/ssm-user/get_cert_attributes.sh --doc_id {{docid}}"
On the shell script side, I had to export doc_id so the variable is visible in subsequent child sessions.
#!/bin/bash
while [ "${1:-}" != "" ]; do
    case "$1" in
        "-d" | "--doc_id")
            shift
            doc_id=$1
            ;;
    esac
    shift
done
echo $doc_id
export doc_id
I'm able to run a shell script with arguments like this. You don't need all the messaging and other information, but it's there just in case.
import boto3

commands = ['sudo -u ec2-user ./p1_consolidate.sh 0 1 0']
instanceid = 'i-0780a999exxxdxxx'
ssmc = boto3.client('ssm')
response_send = ssmc.send_command(
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': commands,
                'workingDirectory': ['/home/ec2-user'],
                'executionTimeout': ['14400']},
    OutputS3BucketName='xxxxx-data-files-for-functions',
    OutputS3KeyPrefix='ssm-outputfiles-automation/',
    InstanceIds=[instanceid],
    ServiceRoleArn='arn:aws:iam::xxxxx6657583:role/SNS-Publish-SSM-Statuses',
    NotificationConfig={
        'NotificationArn': 'arn:aws:sns:us-east-1:xxxxx6657583:your-sns',
        'NotificationEvents': ['All'],
        'NotificationType': 'Command'}
)
p1_consolidate.sh is stored in the /home/ec2-user/ directory. It just takes the three arguments sent via commands above; the Python file then runs with those arguments.
#!/bin/bash
s=$1
e=$2
q=$3
nohup python /home/ec2-user/code/mypythonfile.py -s $s -e $e -q $q &

Bash script with multiline heredoc doesn't output anything

I'm writing a script to send SQL output by mail, but it is not executing successfully and is not generating the output I want.
The query generates two columns with multiple rows. How can I generate the output in table format as below?
Below is my code:
#!/bin/bash
ORACLE_HOME= **PATH
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin
export PATH
TNS_ADMIN= ** PATH
export TNS_ADMIN
today=$(date +%d-%m-%Y)
output=$(sqlplus -S user/pass@service <<EOF
set heading off;
SELECT distinct list_name, max(captured_dttm) as Last_received FROM db.table1
group by list_name having max(captured_dttm) <= trunc(sysdate - interval '2' hour);
EOF)
if [ -z "$output" ];
then
echo"its fine"
exit
else
echo "
Dear All,
Kindly check we've not received the list for last 2 hour : $output
Regards,
Team" | mailx -S smtp=XX.XX.X.XX:XX -s "URGENT! Please check list FOR $today" user#abc.com
fi
When using a here document, the closing string can't be followed by anything but a newline. Move the closing parenthesis to the next line:
output=$(sqlplus -S user/pass@service <<EOF
...
EOF
)
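To see the difference in isolation, here is a minimal reproduction (a sketch using cat as a stand-in for sqlplus):

# Working form: the delimiter EOF stands alone on its line, and the
# closing parenthesis of $( ) follows on the next line.
output=$(cat <<EOF
hello
EOF
)
echo "$output"    # prints: hello

With the parenthesis on the same line as EOF, the shell keeps scanning for a literal "EOF)" delimiter that never appears, so the here-document swallows the rest of the script.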

Bash script Output to file fails from script but works from bash

I have this script that checks whether dcos auth login works, but the file I am redirecting the output to is always zero size. When I run the script from a bash shell, the file is greater than zero. What am I doing wrong? The two functions I use are below:
try_to_login()
{
    # first needs to be logged in as skyusr
    # try to login and log the result to a tmp file
    # Sometimes the file is empty, so we try again to login;
    # if the second time is OK it jumps to checking the output
    cd /home/skyusr/scripts/
    dcos auth login --username=admin --password=admin > /home/skyusr/scripts/tmp.sal
}

check_login_result()
{
    # Checks if the output of the login is "Login successful!"
    # If YES it writes to the log file; if not it sends mail and writes to the log.
    #export mail_to="salim.bisharat@amdocs.com,anis.faraj@amdocs.com"
    export mail_to="salim.bisharat@amdocs.com"
    now=$(date)
    text_to_check=$(cat /home/skyusr/scripts/tmp.sal)
    if [ -s /home/skyusr/scripts/tmp.sal ]
    then
        if [ "$text_to_check" = "Login successful!" ]
        then
            echo "$now - Check Successful" >> /home/skyusr/scripts/logs/login_log.log
        else
            cat /home/skyusr/scripts/logs/mail_temp.log | mailx -s "!!! CRITICAL -- Check DCOS login !!!" $mail_to
            echo "$now - !! ERROR ! Sent mail !! " >> /home/skyusr/scripts/logs/login_log.log
        fi
    fi
}
In this script you define the functions, but you never call them. Simply append the function calls:
# ...
            echo "$now - !! ERROR ! Sent mail !! " >> /home/skyusr/scripts/logs/login_log.log
        fi
    fi
}                   # ... the last line of your script here

try_to_login        # calls here ...
check_login_result
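One more thing worth checking (an assumption; the thread doesn't confirm it): some CLIs print their status messages to stderr rather than stdout, in which case a plain > redirect captures nothing when run non-interactively. Capturing both streams rules that out:

# Redirect stdout and stderr together so the status message lands in the file
dcos auth login --username=admin --password=admin > /home/skyusr/scripts/tmp.sal 2>&1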

How do I manage log verbosity inside a shell script?

I have a pretty long bash script that invokes quite a few external commands (git clone, wget, apt-get and others) that print a lot of stuff to the standard output.
I want the script to have a few verbosity options so it prints everything from the external commands, a summarized version (e.g. "Installing dependencies...", "Compiling...", etc.), or nothing at all. But how can I do that without cluttering up all my code?
I've thought about two possible solutions to this: one is to create a wrapper function that runs the external commands and prints what's needed to the standard output, depending on the options set at the start. This one seems easier to implement, but it means adding a lot of extra clutter to the code.
The other solution is to send all the output to a couple of external files and, when parsing the arguments at the start of the script, run tail -f on those files if verbosity is specified. This would be very easy to implement, but it seems pretty hacky to me, and I'm concerned about its performance impact.
Which one is better? I'm also open to other solutions.
Improving on @Fred's idea a little bit more, we could build a small logging library this way:
declare -A _log_levels=([FATAL]=0 [ERROR]=1 [WARNING]=2 [INFO]=3 [DEBUG]=4 [VERBOSE]=5)
declare -i _log_level=3

set_log_level() {
    level="${1:-INFO}"
    _log_level="${_log_levels[$level]}"
}

log_execute() {
    # run the command in "${@:2}"; show its output only if the current
    # log level is at or above the level given in $1
    level=${1:-INFO}
    if (( _log_level >= ${_log_levels[$level]} )); then
        "${@:2}"
    else
        "${@:2}" >/dev/null
    fi
}

log_fatal()   { (( _log_level >= ${_log_levels[FATAL]} ))   && echo "$(date) FATAL $*"; }
log_error()   { (( _log_level >= ${_log_levels[ERROR]} ))   && echo "$(date) ERROR $*"; }
log_warning() { (( _log_level >= ${_log_levels[WARNING]} )) && echo "$(date) WARNING $*"; }
log_info()    { (( _log_level >= ${_log_levels[INFO]} ))    && echo "$(date) INFO $*"; }
log_debug()   { (( _log_level >= ${_log_levels[DEBUG]} ))   && echo "$(date) DEBUG $*"; }
log_verbose() { (( _log_level >= ${_log_levels[VERBOSE]} )) && echo "$(date) VERBOSE $*"; }

# functions for logging command output
log_debug_file()   { (( _log_level >= ${_log_levels[DEBUG]} ))   && [[ -f $1 ]] && echo "=== command output start ===" && cat "$1" && echo "=== command output end ==="; }
log_verbose_file() { (( _log_level >= ${_log_levels[VERBOSE]} )) && [[ -f $1 ]] && echo "=== command output start ===" && cat "$1" && echo "=== command output end ==="; }
Let's say the above source is in a library file called logging_lib.sh; we could then use it in a regular shell script this way:
#!/bin/bash
source /path/to/lib/logging_lib.sh
set_log_level DEBUG
log_info "Starting the script..."
# method 1 of controlling a command's output based on log level
log_execute INFO date
# method 2 of controlling the output based on log level
date &> date.out
log_debug_file date.out
log_debug "This is a debug statement"
...
log_error "This is an error"
...
log_warning "This is a warning"
...
log_fatal "This is a fatal error"
...
log_verbose "This is a verbose log!"
Will result in this output:
Fri Feb 24 06:48:18 UTC 2017 INFO Starting the script...
Fri Feb 24 06:48:18 UTC 2017
=== command output start ===
Fri Feb 24 06:48:18 UTC 2017
=== command output end ===
Fri Feb 24 06:48:18 UTC 2017 DEBUG This is a debug statement
Fri Feb 24 06:48:18 UTC 2017 ERROR This is an error
Fri Feb 24 06:48:18 UTC 2017 WARNING This is a warning
Fri Feb 24 06:48:18 UTC 2017 FATAL This is a fatal error
As we can see, log_verbose didn't produce any output since the log level is at DEBUG, one level below VERBOSE. However, log_debug_file date.out did produce the output and so did log_execute INFO, since log level is set to DEBUG, which is >= INFO.
Using this as the base, we could also write command wrappers if we need even more fine tuning:
git_wrapper() {
    # run the git command and show its output based on the log level
    log_execute DEBUG git "$@"
}
With these in place, the script could be enhanced to take an argument --log-level <level> that determines the verbosity it should run with.
Here is a complete implementation of logging for Bash, rich with multiple loggers:
https://github.com/codeforester/base/blob/master/lib/stdlib.sh
If anyone is curious about why some variables are named with a leading underscore in the code above, see this post:
Correct Bash and shell script variable capitalization
You already have what seems to be the cleanest idea in your question (a wrapper function), but you seem to think it would be messy. I would suggest you reconsider. It could look like the following (not necessarily a full-fledged solution, just to give you the basic idea):
#!/bin/bash

# Argument 1     : Logging level for that command
# Arguments 2... : Command to execute
# Output suppressed if command level >= current logging level
log()
{
    if (( $1 >= logging_level )); then
        "${@:2}" >/dev/null 2>&1
    else
        "${@:2}"
    fi
}

logging_level=2
log 1 command1 and its args
log 2 command2 and its args
log 3 command4 and its args
You can arrange for any required redirection (with file descriptors if you want) to be handled in the wrapper function, so that the rest of the script remains readable and free from redirections and conditions depending on the selected logging level.
Solution 1.
Consider using additional file descriptors.
Redirect required file descriptors to STDOUT or /dev/null depending on selected verbosity.
Redirect output of every statement in your script to a file descriptor corresponding to its importance.
Have a look at https://unix.stackexchange.com/a/218355 .
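A sketch of Solution 1 (the choice of file descriptor 3 is arbitrary):

#!/bin/bash
# Point fd 3 at stdout when verbose, at /dev/null otherwise; every command
# whose output should depend on verbosity then writes to fd 3.
verbosity=2
if (( verbosity >= 2 )); then
    exec 3>&1
else
    exec 3>/dev/null
fi
echo "detailed progress message" >&3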
Solution 2.
Set $required_verbosity and pipe STDOUT of every statement in your script to a helper script with two parameters, something like this:
statement | logger actual_verbosity $required_verbosity
In a logger script echo STDIN to STDOUT (or log file, whatever) if $actual_verbosity >= $required_verbosity.
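A sketch of such a helper (the name logger and its parameter order follow the description above):

#!/bin/bash
# logger: pass stdin through when the message is important enough.
# Usage: statement | logger <actual_verbosity> <required_verbosity>
actual=$1
required=$2
if [ "$actual" -ge "$required" ]; then
    cat               # echo stdin to stdout
else
    cat > /dev/null   # swallow the output
fi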

Generate time series in iso-8601 format using date command, how to deal with server system date origin offset?

I have the following bash function that generates an epoch list in ISO 8601 format on a machine that runs Ubuntu, and it works fine (where isdate and isint are bash functions that test the input).
gen_epoch()
{
    ## USAGE: gen_epoch [start_date_iso] [end_date_iso] [increment_in_seconds]
    ##
    ## TASK : generate an epoch list (epoch list in isodate format).
    ## result on STDOUT: [epoch_list]
    ## error_code      : 2 0

    ## test arguments
    if [ "$#" -ne 3 ]; then echo "$FUNCNAME: input error [nb_of_input]"; return 2
    elif [ $( isdate $1 &> /dev/null; echo $? ) -eq 2 ]; then echo "$FUNCNAME: argument error [$1]"; return 2
    elif [ $( isdate $2 &> /dev/null; echo $? ) -eq 2 ]; then echo "$FUNCNAME: argument error [$2]"; return 2
    elif [ $( isint $3 &> /dev/null; echo $? ) -eq 2 ]; then echo "$FUNCNAME: argument error [$3]"; return 2
    else local beg=$( TZ=UTC date --date="$1" +%s ); local end=$( TZ=UTC date --date="$2" +%s ); local inc=$3; fi

    ## generate epoch
    while [ $beg -le $end ]
    do
        local date_out=$( TZ=UTC date --date="UTC 1970-01-01 $beg secs" --iso-8601=seconds ); beg=$(( $beg + $inc ))
        echo ${date_out%+*}
    done
}
It generates the expected values for this command line example:
gen_epoch 2014-04-01T00:00:00 2014-04-01T07:00:00 3600
expected values:
2014-04-01T00:00:00
2014-04-01T01:00:00
2014-04-01T02:00:00
2014-04-01T03:00:00
2014-04-01T04:00:00
2014-04-01T05:00:00
2014-04-01T06:00:00
2014-04-01T07:00:00
However, I have tried this function on a server where I have no root privileges, and I found the following results:
2014-03-31T17:00:00
2014-03-31T18:00:00
2014-03-31T19:00:00
2014-03-31T20:00:00
2014-03-31T21:00:00
2014-03-31T22:00:00
2014-03-31T23:00:00
2014-04-01T00:00:00
and I have seen that the server's time origin is not at 1970-01-01T00:00:00.
Typing TZ=UTC date --date="1970-01-01T00:00:00" +%s gives the value -25200, which corresponds to a 7-hour lag, while it should give 0.
My question is: how could this problem be corrected on the server?
Could you help me find an equivalent solution for this function, assuming that I don't know which machine I am running it on, and so have no a priori knowledge of whether the system time is correct?
Not a complete answer but too long for a comment.
I guess that this particular server was configured incorrectly upon setup. The problem is that the BIOS clock is set to local time while the system thinks it's in UTC (or vice versa); use hwclock to query the hardware clock settings.
If the system is configured incorrectly and you can't fix it for any reason (you don't have a superuser account, or whatever), I'd suggest providing a "fixing" timezone description file with your software and specifying it in the TZ variable like this: TZ=:/path/to/fixing/timezone date --date="1970-01-01T00:00:00" +%s. Obviously you have to pre-calculate which TZ description file fixes the problem and use the proper one. Available timezones are usually stored in /usr/share/zoneinfo.
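A quick way to pre-calculate that (a sketch; looping over /usr/share/zoneinfo/Etc is an assumption about where the zone files live on that server):

#!/bin/bash
# Under a correct configuration, the epoch origin maps to 0. Any zone file
# for which this test passes can be shipped and used via TZ=:/path/to/file.
for tz in /usr/share/zoneinfo/Etc/*; do
    if [ "$(TZ=:$tz date --date='1970-01-01T00:00:00' +%s)" -eq 0 ]; then
        echo "candidate fixing zone file: $tz"
    fi
done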
