IBM Db2: How to automatically activate databases after (re)boot? - cron

I have several Db2 databases that I want to automatically activate after a system reboot. Restarting the Db2 service after a reboot is not a problem, but activating the databases requires access to the instance profile.
Service start/stop is controlled by systemd / systemctl. Including user-controlled setup scripts in those unit files doesn't seem like a good idea. I briefly looked into enable-linger for the Db2 instance user, or using EnvironmentFile to set up the instance profile.
How do you activate all or a set of databases after reboot? Do you use user/group/EnvironmentFile with systemd? Do you enable linger, or do you have some other method?

Here is a simple script which must be run as the Db2 instance owner.
It assumes that the Db2 instance is auto-started. If that's not the case, just comment out db2gcf -s and uncomment db2gcf -u.
The script waits up to a configured number of seconds for the instance to start, then activates all local databases found in the Db2 instance's system database directory.
The script can be scheduled to run at OS startup via a crontab entry for the Db2 instance owner, as shown in the script header.
A log file (see the ${LOG} variable) with the command history is created in the Db2 instance owner's home directory.
#!/bin/sh
#
# Function: Activates all local Db2 databases
# Crontab entry:
# @reboot /home/scripts/db2activate.sh >/dev/null 2>&1
#
# NOTE: relies on the SECONDS shell variable, so run under bash or ksh.
TIMEOUT=300
VERBOSE=${1:-"noverbose"}
export LC_ALL=C

# Must be run by the instance owner, whose home contains sqllib/db2profile
if [ ! -x ~/sqllib/db2profile ]; then
    echo "Must be run by a Db2 instance owner" >&2
    exit 1
fi
# Source the instance profile unless the environment is already set up
[ -z "${DB2INSTANCE}" ] && . ~/sqllib/db2profile

if [ "${VERBOSE}" != "verbose" ]; then
    LOG=~/.$(basename "$0").log
    exec 1>>"${LOG}"
    exec 2>>"${LOG}"
fi
set -x
printf "\n*** %s ***\n" "$(date +"%F-%H.%M.%S")"

# Wait for the instance startup
# (or even start it with 'db2gcf -u' instead of checking status: 'db2gcf -s')
TIME1=${SECONDS}
while [ $((SECONDS-TIME1)) -le ${TIMEOUT} ]; do
    db2gcf -s
    # db2gcf -u
    rc=$?
    [ ${rc} -eq 0 ] && break
    sleep 5
done
if [ ${rc} -ne 0 ]; then
    echo "Instance startup timeout of ${TIMEOUT} sec reached" >&2
    exit 2
fi

# Activate every local (Indirect) database from the system database directory
for dbname in $(db2 list db directory | awk -v RS='' '/= Indirect/' | grep '^ Database alias' | sort -u | cut -d'=' -f2); do
    db2 activate db ${dbname}
done
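To install it, add the @reboot line from the script header to the instance owner's crontab, and after the next reboot check the log and the activation state (db2 list active databases is the standard way to verify; paths are the ones assumed in the header):
# As the Db2 instance owner:
crontab -e    # add: @reboot /home/scripts/db2activate.sh >/dev/null 2>&1
# After a reboot:
cat ~/.db2activate.sh.log
db2 list active databases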

Another way is to enable instance autostart with db2iauto and set up the Db2 fault monitor. First, as the Db2 instance owner:
su - <INSTANCE>
db2iauto -on <INSTANCE>
Exit back to your original shell:
exit
Then, as user root, re-register the fault monitor coordinator and turn the fault monitor on for the instance:
cd /<INSTANCE>/sqllib/bin/
./db2fmcu -d
./db2fmcu -u -p /opt/ibm/db2/<VERSION DB2>/bin/db2fmcd
./db2fm -i <INSTANCE> -U
./db2fm -i <INSTANCE> -u
./db2fm -i <INSTANCE> -f on
ps -ef | grep db2fm | grep <INSTANCE>
Done.
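If you want to double-check the registration: on classic SysV init systems, db2fmcu -u works by adding an fmc entry to /etc/inittab (systemd-only distributions register the fault monitor coordinator differently), so you can sanity-check it there:
# Verify the fault monitor coordinator entry created by db2fmcu -u:
grep fmc /etc/inittab
# Expect something like:
# fmc:2345:respawn:/opt/ibm/db2/<VERSION DB2>/bin/db2fmcd #DB2 Fault Monitor Coordinator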

Related

DOCKER_OPTS are reset after system reboot

I am specifying my TLS certs in /etc/default/docker, like this:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/mynewca.pem
--tlscert=/etc/docker/mynewcert.pem
--tlskey=/etc/docker/mynewkey.pem -H=0.0.0.0:2376"
However, every time my Docker host restarts, my settings are overridden with the defaults:
DOCKER_OPTS="-H=unix:// --tlsverify --tlscacert=/etc/docker/ca.pem
--tlscert=/etc/docker/cert.pem
--tlskey=/etc/docker/key.pem -H=0.0.0.0:2376"
This means that I cannot communicate with the Docker daemon remotely until I reconfigure DOCKER_OPTS and run
sudo service docker restart
upstart is starting the Docker daemon, and it looks like the script section of /etc/init/docker.conf is overriding DOCKER_OPTS, although I can't find where it's getting the defaults from.
script
    # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
    DOCKERD=/usr/bin/dockerd
    DOCKER_OPTS=
    if [ -f /etc/default/$UPSTART_JOB ]; then
        . /etc/default/$UPSTART_JOB
    fi
    exec "$DOCKERD" $DOCKER_OPTS --raw-logs
end script

# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
    DOCKER_OPTS=
    DOCKER_SOCKET=
    if [ -f /etc/default/$UPSTART_JOB ]; then
        . /etc/default/$UPSTART_JOB
    fi

    if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
        DOCKER_SOCKET=/var/run/docker.sock
    else
        DOCKER_SOCKET=$(printf "%s" "$DOCKER_OPTS" | grep -oP -e '(-H|--host)\W*unix://\K(\S+)' | sed 1q)
    fi

    if [ -n "$DOCKER_SOCKET" ]; then
        while ! [ -e "$DOCKER_SOCKET" ]; do
            initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1
            echo "Waiting for $DOCKER_SOCKET"
            sleep 0.1
        done
        echo "$DOCKER_SOCKET is up"
    fi
end script
You may want to use the Docker daemon configuration file, usually located at /etc/docker/daemon.json. See here for more information on the configuration:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
In your case, the "tlscacert" option might be of special interest.
Nevertheless, the location of the configuration file may really depend on the OS and distribution (I remember the famous Gentoo /etc/conf.d/ directory).
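A minimal sketch of that file, reusing the certificate paths from the question. Note that with hosts and the TLS options set here, you must remove the corresponding flags from DOCKER_OPTS, since the daemon refuses to start when the same option is given both in daemon.json and on the command line:
{
    "tlsverify": true,
    "tlscacert": "/etc/docker/mynewca.pem",
    "tlscert": "/etc/docker/mynewcert.pem",
    "tlskey": "/etc/docker/mynewkey.pem",
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}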

ssh to different nodes using shell scripting

I am using the code below to ssh to different nodes and find out whether a user exists or not. If the user doesn't exist, the script will create it.
The script works fine if I don't do the ssh, but it fails if I do.
How can I go through different nodes using this script?
for node in `nodes.txt`
usr=root
ssh $usr@$node
do
    if [ $(id -u) -eq 0 ]; then
        read -p "Enter username : " username
        read -s -p "Enter password : " password
        egrep "^$username" /etc/passwd >/dev/null
        if [ $? -eq 0 ]; then
            echo "$username exists!"
            exit 1
        else
            pass=$(perl -e 'print crypt($ARGV[0], "password")' $password)
            useradd -m -p $pass $username
            [ $? -eq 0 ] && echo "User has been added to system!" || echo "Failed to add a user!"
        fi
    else
        echo "Only root may add a user to the system"
        exit 2
    fi
done
Your script has grave syntax errors. I guess the for loop at the beginning is what you attempted to add but you totally broke the script in the process.
The syntax for looping over lines in a file is
while read -r line; do
.... # loop over "$line"
done <nodes.txt
(or, marginally, for line in $(cat nodes.txt); do ..., but this has multiple issues; see http://mywiki.wooledge.org/DontReadLinesWithFor for details).
If the intent is to actually run the remainder of the script over ssh, you need to pass it to the ssh command. Something like this:
while read -r node; do
    read -p "Enter user name: " username
    read -r -s -p "Enter password: " password
    ssh root@"$node" "
        # Note addition of -q option and trailing :
        egrep -q '^$username:' /etc/passwd ||
        useradd -m -p \"\$(perl -e 'print crypt(\$ARGV[0], \"password\")' \"$password\")\" '$username'" </dev/null
done <nodes.txt
Granted, the command you pass to ssh can be arbitrarily complex, but you will want to avoid doing interactive I/O inside a root-privileged remote script, and generally make sure the remote command is as quiet and robust as possible.
The anti-pattern command; if [ $? -eq 0 ]; then ... is clumsy but very common. The purpose of if is to run a command and examine its result code, so this is better and more idiomatically written if command; then ... (which can be even more succinctly written command && ... or ! command || ... if you only need the then or the else part, respectively, of the full long-hand if/then/else structure).
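For example, applied to the user-existence check from the script above:
# Clumsy:
egrep -q "^$username:" /etc/passwd
if [ $? -eq 0 ]; then
    echo "$username exists!"
fi
# Idiomatic:
if egrep -q "^$username:" /etc/passwd; then
    echo "$username exists!"
fi
# Succinct:
egrep -q "^$username:" /etc/passwd && echo "$username exists!"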
Maybe you should only run the remote tasks via ssh; everything else runs locally.
ssh $user@$node egrep "^$username" /etc/passwd >/dev/null
and
ssh $user@$node useradd -m -p $pass $username
It might also be better to ask for username and password outside of the loop if you want to create the same user on all nodes.
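Putting those pieces together, a rough sketch (prompting once, then looping over nodes.txt; note the </dev/null redirections so ssh does not swallow the node list being read on stdin):
read -p "Enter username : " username
read -r -s -p "Enter password : " password
pass=$(perl -e 'print crypt($ARGV[0], "password")' "$password")
while read -r node; do
    if ssh root@"$node" "egrep -q '^$username:' /etc/passwd" </dev/null; then
        echo "$username already exists on $node"
    else
        ssh root@"$node" "useradd -m -p '$pass' '$username'" </dev/null
    fi
done <nodes.txt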

Working around sudo in shell script child process

So the reason I am asking this is because I'm running two persistent programs simultaneously, and in the child process a program is running that requires sudo rights.
#!/bin/bash
echo "Name the file:"
read filename
while [[ 1 -lt 2 ]]
do
    if [ -f /home/max/dump/$filename.eth ]; then
        echo "File already exist."
        read filename
    else
        break
    fi
done
#Now calling a new terminal for dumping
gnome-terminal --title="tcpdump" -e "sh /home/max/dump/dump.sh $filename.eth"
ping -c 1 0 > /dev/null  # Waiting for tcpdump to create file
#Packet analysis program is being executed
Script dump.sh
#!/bin/bash
filename=$1
echo password | sudo tcpdump -i 2 -s 60000 -w /home/max/dump/$filename -U host 192.168.3.2
#sudo still asks me for my password even though the password is piped into stdin
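This happens because sudo reads the password from the controlling terminal, not from stdin, unless you pass -S. A sketch of the same line with -S (still poor practice to keep a password in a script; a NOPASSWD sudoers rule for tcpdump would avoid it entirely):
# -S makes sudo read the password from standard input
echo password | sudo -S tcpdump -i 2 -s 60000 -w /home/max/dump/$filename -U host 192.168.3.2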

Crontab executing script differently

I have a script that checks if MySQL service is running on my Linux server.
If I run the script manually it works fine, but when crontab runs the script it gets different results.
This is my script:
#! /bin/sh
TODAY=$(/bin/date)
UP=$(/sbin/service mysql status | /bin/grep 'SUCCESS' | /usr/bin/wc -l)
if [ "$UP" -ne 1 ];
then
    echo "mysql not working, Date: $TODAY" >> /scripts/sql_log.txt
    sudo /bin/mail -s "MySql is DOWN" mail@mail.com < /dev/null
    sudo /sbin/service mysql start
else
    echo "mysql is working, Date: $TODAY" >> /scripts/sql_log.txt
fi
I am using the full path for the commands. The only part that I do not understand 100% is:
if [ "$UP" -ne 1 ];
What is this -ne 1?
So in this case MySQL is running:
If I run the script manually it writes that MySQL is working in the log file.
But when crontab runs it, it just writes that MySQL is not running in the log file (even though it is running), and it does not send any mail.
If the mysql service is stopped and I run the script manually, it sends me an email and starts the service as it should...
Any idea?
Now it works. It looks like the problem was caused by not writing the full path of the commands. This is the script that I am using now, and it is working:
#! /bin/sh
UP=$(/sbin/service mysql status | /bin/grep 'SUCCESS' | /usr/bin/wc -l)
if [ "$UP" -ne 1 ];
then
    sudo /bin/mail -s "MySql is DOWN" mail@mail.com < /dev/null
    sudo /sbin/service mysql start
fi
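As for the -ne 1 part: -ne is the numeric "not equal" operator of test, so [ "$UP" -ne 1 ] is true whenever the count of SUCCESS lines from wc -l is anything other than 1, i.e. whenever the status output does not report success. A sketch of the matching crontab entry (the script path here is hypothetical):
# Check every 5 minutes; cron runs with a minimal environment and PATH,
# which is why the script spells out the full path of every command.
*/5 * * * * /scripts/check_mysql.sh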

shell script ssh command not working

I have a small list of servers, and I am trying to add a user on each of these servers. I can ssh individually to each server and run the command.
sudo /usr/sbin/useradd -c "Arun" -d /home/amurug -e 2014-12-12 -g users -u 1470 amurug
I wrote a script to loop through the list and run this command but I get some errors.
#!/bin/bash
read -p "Enter server list: " file
if [[ $file == *linux* ]]; then
    for i in `cat $file`
    do
        echo "creating amurug on" $i
        ssh $i sudo /usr/sbin/useradd -c "Arun" -d /home/amurug -e 2014-12-12 -g users -u 1470 amurug
        echo "==============================================="
        sleep 5
    done
fi
When I run the script it does not execute the command.
creating amurug on svr102
Usage: useradd [options] LOGIN
Options:
What is wrong with the ssh command in my script?
Try this script:
#!/bin/bash
read -p "Enter server list: " file
if [[ "$file" == *linux* ]]; then
    while read -r server
    do
        echo "creating amurug on" "$server"
        ssh -t -t "$server" "sudo /usr/sbin/useradd -c Arun -d /home/amurug \
            -e 2014-12-12 -g users -u 1470 amurug"
        echo "==============================================="
        sleep 5
    done < "$file"
fi
As per man ssh:
-t
Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
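The doubled -t is what makes sudo work here: when sudoers sets the requiretty option (the default on some distributions), a plain ssh host sudo ... fails because the remote session has no tty, while -t -t forces a pseudo-tty even though the script's ssh has no local tty. Using svr102 from the question's output as an example host:
ssh svr102 sudo whoami        # may fail when requiretty is set
ssh -t -t svr102 sudo whoami  # forces a pseudo-tty, so sudo can prompt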
