Bash script on db2 - linux

I have a bash script that runs a db2 command (list active databases) and redirects the output to a file, which I then use in another script.
Script:
/DB2RM1/db2rm1/sqllib/bin/db2 list active databases > /home/occddma/scripts/data
Note: I have put the full path to the db2 binary in front of the command so that it can run from crontab.
Then I put the script in a crontab job to update the datadb file every minute, as shown below.
* * * * * /DB2RM1/db2rm1/mon_db2.sh
When I run the script from the command line it works fine, but when it runs from the crontab job it redirects the error below to the datadb file.
SQL10007N Message "-1390" could not be retrieved. Reason code: "3".

You have not loaded the db2profile. This file is in the sqllib directory of the instance's home directory.
Let's suppose your instance is db2inst1, then you need to call:
. ~db2inst1/sqllib/db2profile
Once the db2profile is loaded, the DB2INSTANCE environment variable will have a value.
One way to check whether the environment loaded correctly is to test that DB2INSTANCE is set:
if [[ -z ${DB2INSTANCE} ]] ; then
    echo "ERROR"
    exit 1
fi
BTW, the 1390 is this error:
SQL1390C The environment variable DB2INSTANCE is not defined or is invalid.
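Putting the two pieces together, a minimal sketch of mon_db2.sh could look like this (assuming the instance home is /DB2RM1/db2rm1, as the path in the question suggests):
#!/bin/bash
# Sketch only: load the DB2 environment so the db2 CLI works under cron.
. /DB2RM1/db2rm1/sqllib/db2profile
# Bail out if the instance environment did not load.
if [[ -z ${DB2INSTANCE} ]] ; then
    echo "ERROR: DB2INSTANCE is not set" >&2
    exit 1
fi
/DB2RM1/db2rm1/sqllib/bin/db2 list active databases > /home/occddma/scripts/data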

Most probably you need to load your bash profile in your script, like:
source /etc/profile
or
source ~/.bash_profile

Related

Error while trying to set environment variables in Linux using a shell script

So I am trying to set environment variables using my shell script. The script takes some inputs from the user, and then I have to set those inputs in environment variables. I am using two shell scripts for this, but I am getting permission denied errors.
Script 1
DEFAULT_NAME="sample"
read -p "Enter your Name: [$DEFAULT_NAME]: " USER_NAME
if [ -z "$USER_NAME" ]
then
USER_NAME=$DEFAULT_NAME
else
USER_NAME=$USER_NAME
fi
source setEnv.sh
Script 2
echo -e "export NAME=${USER_NAME}" >> /etc/profile.d/nameenv.sh
First, the if condition in Script 1 is wrong; you probably want to test $USER_NAME instead.
If you are using bash, you can replace the whole if statement with:
USER_NAME=${USER_NAME:-$DEFAULT_NAME}
In Script 2, are you sure you want to append a new line to /etc/profile.d/nameenv.sh every time you execute the script? The last declaration will override the preceding ones.
Finally, note that you need root privilege to write in /etc/profile.d. Are you running the script as a privileged user?
[Edit] Trying to guess what you are trying to do here: if you need USER_NAME to be redefined for the user's current session (and not system-wide), just replace the last line in Script 1 with:
export USER_NAME
and remove Script 2. If you want to make it permanent (again, for the current user only), modify Script 2 to write the variable declaration in ~/.bash_profile instead.
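For illustration, a sketch of a single per-user script that follows these suggestions (no root needed). Note that the script has to be sourced (. ./script.sh) rather than executed for the export to affect the current shell:
#!/bin/bash
DEFAULT_NAME="sample"
read -p "Enter your Name: [$DEFAULT_NAME]: " USER_NAME
# Parameter expansion replaces the whole if/else block.
USER_NAME=${USER_NAME:-$DEFAULT_NAME}
# Make it visible in the current session.
export USER_NAME
# To make it permanent for this user only (as suggested above), append the
# declaration to ~/.bash_profile instead of /etc/profile.d:
# echo "export NAME=${USER_NAME}" >> ~/.bash_profile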

cronjob does not execute a script that works fine standalone

I have my PHP script file at /var/www/html/dbsync/index.php. When I cd /var/www/html/dbsync/ and run php index.php, it works perfectly.
I want to call the PHP file through an sh file; the location of the sh file is as below:
/var/www/html/dbsync/dbsync.sh
This is the content of the dbsync.sh file:
/usr/bin/php /var/www/html/dbsync/index.php >> /var/www/html/dbsync/myscript.log 2>&1 -q -f
When I cd /var/www/html/dbsync/ and run ./dbsync.sh it works perfectly as well.
Now if I set up crontab as below:
1 * * * * /var/www/html/dbsync/dbsync.sh /var/www/html/dbsync
However, this crontab is not working as expected.
What can be wrong?
As seen in the comments, the problem is that you are not defining which program should be used to execute the script. Take into account that a cron job is executed with a tiny environment; there, not much can be assumed. This is why we define full paths, etc.
So you need to say something like:
1 * * * * /bin/sh /var/www/html/dbsync/dbsync.sh /var/www/html/dbsync
# ^^^^^^^
/bin/sh being the binary you want to use to execute the script.
Otherwise, you can set execute permissions on the script and add a shell-script header (shebang) telling the system which interpreter to use:
#!/bin/sh
If you do this, adding the path of the binary is not necessary.
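For example, with the paths from the question, that would be:
chmod +x /var/www/html/dbsync/dbsync.sh
and, as long as dbsync.sh starts with the #!/bin/sh header, the crontab entry can call it directly:
1 * * * * /var/www/html/dbsync/dbsync.sh /var/www/html/dbsync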
From Troubleshooting common issues with cron jobs:
Using relative paths. If your cron job is executing a script of some kind, you must be sure to use only absolute paths inside that script. For example, if your script is located at /path/to/script.php and you're trying to open a file called file.php in the same directory, you cannot use a relative path such as fopen(file.php). The file must be called from its absolute path, like this: fopen(/path/to/file.php). This is because cron jobs do not necessarily run from the directory in which the script is located, so all paths must be called specifically.
Also, I understand you want to run this every minute. If so, 1 * * * * won't do; instead, it will run at the 1st minute past every hour. So if you want to run it every minute, say * * * * *.
It is important to understand what "login shell" and "interactive shell" mean.
Login shell: briefly, when you sign in with an ssh session you get a terminal window where you can enter shell commands. For this login shell the system executes the profile files assigned to your account (/etc/profile, then ~/.bash_profile, ~/.bash_login or ~/.profile) and sets environment variables such as PATH for you.
Interactive shell: after logging in, you can manually start additional shell terminals. For these, bash executes ~/.bashrc, which also sets environment variables and initializes PATH for your manually opened shell session.
Shell scripts started by the OS and cron jobs do not fit either of the above ways of starting a shell, so neither the profile files nor .bashrc are executed. This means the PATH variable is not initialized, and shell commands may not be found because PATH does not point to the right places.
This explains why your script runs successfully if you start it manually but fails when you start it via crontab.
Solution-1:
Use the absolute path of every shell command instead of only the command name in your script file(s):
instead of "awk" use "/usr/bin/awk"
instead of "sed" use "/bin/sed"
Solution-2: Initialize the environment variables, especially the PATH variable, before executing shell scripts.
Method 1: add this header to your dbsync.sh:
#!/bin/bash -l
Method 2: add bash -l in your cron entry:
1 * * * * bash -l /var/www/html/dbsync/dbsync.sh /var/www/html/dbsync
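A third variant in the same spirit is to initialize PATH explicitly at the top of dbsync.sh; the directories below are an assumption and should be adjusted to wherever php and the other commands actually live:
#!/bin/bash
# Sketch: set PATH ourselves instead of relying on a login shell.
export PATH=/usr/local/bin:/usr/bin:/bin
/usr/bin/php /var/www/html/dbsync/index.php >> /var/www/html/dbsync/myscript.log 2>&1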

How to make my BASH script know the path that is saved

I'm going to make a script that, when you run it, creates an HTML page with the user's computer details... But I only have one problem...
If the user puts that script in a folder and sets it as a cron job, then when crontab executes it the script creates the folder in the home directory, and that is bad, because I want the script to make the folder with the HTML docs in the same directory the script is in... What can I do??
thnx ;-)
I would usually do the following:
#!/bin/bash
# sample script to change directory and save a file
WORKDIR=/home/file/n34_panda
cd "$WORKDIR" || exit 1   # quote the path and stop if the directory does not exist
echo "this" > newfile.txt
This is a simple answer without reviewing the rest of your code. To be honest, the simplest method would be to add the command:
cd /path/to/html/docs
After that you can add the rest of your code.
Try this script from several directories. It will always return its location.
#!/bin/bash
echo ${0%/*}
You can simply set up the cron entry as below:
# m h dom mon dow command
* * * * * cd /path-where-you-want-to-save/ && /path/of/yourscript.sh
As suggested in the comments, there are ways for a script to know what directory it is stored in. However, I would recommend a simpler solution. Use absolute file paths in the script. The destination directory could be an argument to the script. This makes the script more flexible; it means that you don't have to keep many copies in separate directories if you want to do the same task in more than one place.
#!/bin/bash
output_dir="$1" # first argument to the script
echo "blah blah" > "$output_dir"/filename
This uses bash parameter expansion (https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html) to extract the path from the special variable $0 (which stores the name used to invoke the script) when the script is executed from another folder, and falls back to the current directory otherwise ($0 will not contain the full path when the script is executed from the current directory).
#!/bin/bash
if [[ "$0" == */* ]]; then
script_path=${0%/*}
else
script_path=$(pwd)
fi
I am testing whether the special variable $0 contains a /. If so, I use parameter expansion to truncate everything following the last /. If it does not contain a /, we must be in the current directory, so I store the result of pwd.
From my testing this gave the same output whether the script was executed as /pathTo/script.sh or as pushd /pathTo; ./script.sh; popd.
I tested this in Cygwin with GNU bash, version 4.3.42
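Combining this with the original question, a sketch of a cron-friendly script that creates its HTML folder next to itself could look like this (the folder and file names are hypothetical):
#!/bin/bash
# Find the directory the script lives in, using the technique above.
if [[ "$0" == */* ]]; then
    script_path=${0%/*}
else
    script_path=$(pwd)
fi
# Create the output folder next to the script and write a report into it.
mkdir -p "$script_path/html"
hostname > "$script_path/html/report.html"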

Crontab for script

My script is in the /u01/software/aditya/script/ directory. The name of the script is myscript.sh. I am able to run this script and I get output, too. I am trying to set up a cron job for this script at 6:30 every morning. I am doing this as the root user. I have done the following steps but am not getting output.
crontab -e
30 06 * * * sh /u01/software/aditya/script/myscript.sh >> /u01/software/aditya/hello.log
:wq
but I am not getting any update in the hello.log file :(. Please help….
First check your cron log file, which is usually /var/log/syslog. There should be entries similar to
Sep 17 06:30:01 localhost CRON[17725]: (root) CMD (sh /u01/software/aditya/script/myscript.sh >> /u01/software/aditya/hello.log)
If not, your script has never been run. This could be due to a broken crontab file. Make sure that this file always ends with a newline; better still, insert more than one at the end so that accidentally deleting one won't break the file.
If this line exists in the log file then your script has been run, but didn't generate any output. This can happen due to a different environment when being run via cron.
Also note that >> only redirects stdout, not stderr. If you want to redirect stderr too, then add 2>&1 at the end of the line.
Normally this is caused by a PATH problem. There is a very good chance that myscript.sh calls a command that is not available in the PATH that cron runs with. Some options to fix this are:
Make sure that every command in myscript.sh is a full path-reference (tedious)
Add source ~/.bashrc to the top of myscript.sh
Add export PATH=$PATH:<colon delimited list of paths necessary for myscript.sh to run correctly>
Pick one of the above, or you could also choose one of the options here: Hourly cron job did not run
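Putting the stderr note and the PATH options above together, a sketch of the adjusted crontab line and script header (the PATH value is an assumption; extend it with whatever myscript.sh actually needs):
30 06 * * * sh /u01/software/aditya/script/myscript.sh >> /u01/software/aditya/hello.log 2>&1
and, at the top of myscript.sh:
#!/bin/sh
# Assumption: these directories cover the commands the script calls under cron.
export PATH=/usr/local/bin:/usr/bin:/bin:$PATH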

Linux/Unix Scripting - strangest behaviour ever in a few lines - variable set but empty

I can tell you this is the craziest thing I have seen in a long time.
I have this (part of) sh script running on CentOS 5.4:
# Check GOLD_DIR
echo $GOLD_DIR"<--"
#export GOLD_DIR=/share/apps/GOLD_Suite/GOLD <uncommenting this line works!!
if [ "X$GOLD_DIR" = "X" ] ; then
echo "ERROR: GOLD_DIR is (probably) not set on host ${HostName}" >> ${3}
exit 1
fi
And this gives the following output:
/share/apps/GOLD_Suite/GOLD<--
Waiting for 5 seconds ..... Testing output
The test script did spawn a job (i.e. PVM ran OK),
but errors were detected in the test script output
on the host machine: Below is the output
ERROR: GOLD_DIR is (probably) not set on host xxx.yyy.local
As you can see, the GOLD_DIR variable is set (the script finds it, as shown by the output line ending with "<--")! If I uncomment the export of the GOLD_DIR variable in the script code (first snippet), everything works.
EDIT: GOLD_DIR is exported in /etc/profile (using export GOLD_DIR=/share/apps/GOLD_Suite/GOLD)
Any ideas why?
Note 1: I don't know if this is important, but this is a spawn script on PVM.
Note 2: The script is written for sh (#!/bin/sh) but I am using bash...
Edit 3: I GOT IT TO WORK BUT I DON'T KNOW WHY! What I did was change the hostname (with sudo hostname abc) to the name of the machine I ssh into (e.g. abc). Before, PVM was listing the full name of the machine, abc.mycompany.local. Note that both abc.mycompany.local and abc are the same machine.
So the var is set. If you just do export GOLD_DIR instead of the commented line (without setting the value), will it work?
Also: is it an isolated case? Is it bash there on CentOS? Try using [[ ]] to check what is going wrong.
I believe that it might have something to do with the non-interactive nature of the job. Jobs run from shell scripts won't necessarily source /etc/profile, so they might not be picking up your ${GOLD_DIR} variable. (Unless you've explicitly changed its behavior, bash will only source /etc/profile for a login shell.)
Try adding:
. /etc/profile
at the beginning of your script just to see if that changes anything. If not, when you echo the error statement, add ${GOLD_DIR} somewhere in it to see whether the variable is still available at that point.
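A sketch of the snippet with both suggestions applied (sourcing /etc/profile, where the question says GOLD_DIR is exported, and echoing the variable inside the error message):
#!/bin/sh
# Load the system profile so GOLD_DIR is visible in this non-login shell.
. /etc/profile
# Check GOLD_DIR
echo "${GOLD_DIR}<--"
if [ "X$GOLD_DIR" = "X" ] ; then
    echo "ERROR: GOLD_DIR is (probably) not set on host ${HostName} (GOLD_DIR='${GOLD_DIR}')" >> ${3}
    exit 1
fi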
