I am running a script to install Oracle DB 11g with silent option and response file.
I noticed that after executing the command
$ /directory_path/runInstaller -silent -responseFile responsefilename
the install session just closes, giving me only a log file location.
The installation process is active in the background, but I have no way to know about its progress or what is going on until the prompt to run the root scripts appears. What if I close the PuTTY window, etc.?
Any way to keep the installer session active till finished? And show some sort of progress on-screen?
Any way to keep the installer session active till finished?
Yes, you can wait for the Oracle silent install to finish on Linux, e.g. in a shell script, as follows.
(The below was done with Oracle 11g Release 2 on Red Hat Enterprise Linux.)
You can wait for it to finish by doing:
$ /directory_path/runInstaller -silent -responseFile responsefilename |
  while read l
  do
      echo "$l"
  done
(This relies on the fact that even though the Java Universal Installer runs in the background, it keeps using stdout, so "read l" keeps succeeding until the background Universal Installer process exits.)
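The effect can be seen with a toy stand-in for the installer: a backgrounded child inherits stdout, so the reading side of the pipe does not see end-of-file until the child exits:

```shell
#!/bin/sh
# Toy demonstration: the parenthesised "parent" returns immediately, but
# the pipe stays open because the backgrounded child still holds the
# write end of stdout. cat therefore only returns once the child exits.
( sleep 1 && echo "background work done" & ) | cat
```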
and show some sort of progress on-screen?
A bit more tricky but we can do it by figuring out the name of the logfile from the output of runInstaller before it exits. The output contains a line like:
Preparing to launch Oracle Universal Installer from /tmp/xxxxOraInstallTTT. ...
... where the TTT is a timestamp which leads us to the correct log file, /opt/oraInventory/logs/installActionsTTT.log.
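As a quick sketch of that extraction (the sample line and timestamp format here are assumptions, for illustration only):

```shell
#!/bin/sh
# Hypothetical output line from runInstaller; the real timestamp format
# on your system may differ.
l="Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-06-10_06-00-00PM. Please wait ..."

# Capture everything after "OraInstall" up to the first "."
t=$(expr "$l" : ".*OraInstall\([^.]*\)")
echo "$t"                                              # 2017-06-10_06-00-00PM
echo "/opt/oraInventory/logs/installActions${t}.log"
```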
Something like this (I have not tested it, because in my install I do not need progress output):
$ /directory_path/runInstaller -silent -responseFile responsefilename |
(
    while read l
    do
        echo "$l" &&
        if expr "$l" : "Preparing to launch Oracle Universal Installer from " >/dev/null
        then
            t=$(expr "$l" : ".*OraInstall\([^.]*\)") &&
            log="/opt/oraInventory/logs/installActions${t}.log" &&
            tail -f "$log" &
            tpid=$!
        fi
    done
    if [ -n "$tpid" ]
    then
        kill $tpid
    fi
    #[1]
)
... we can also tell if the install succeeded, because the universal installer always puts its exit status into the log via the two lines:
INFO: Exit Status is 0
INFO: Shutdown Oracle Database 11g Release 2 Installer
... so by adding to the above at #[1] ...
exitStatus=$(expr "$(grep -B1 "INFO: Shutdown Oracle Database" "$log" | head -1)" : "INFO: Exit Status is\(.*\)") &&
exit $exitStatus
... the above "script" will exit with status 0 only if the Oracle install completes successfully.
(Note that including the space in the status captured by expr just above is deliberate because, bizarrely, expr exits with status 1 if the matched substring is literally "0".)
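The quirk is easy to demonstrate; with the leading space the captured string is " 0", which expr does not treat as a zero result:

```shell
#!/bin/sh
# Capturing " 0" (with the leading space): expr prints " 0" and exits 0.
expr "INFO: Exit Status is 0" : "INFO: Exit Status is\(.*\)"
echo "exit status: $?"       # exit status: 0

# Capturing exactly "0": expr prints "0" but exits with status 1,
# because expr signals a null or zero result through its exit status.
status=0
expr "INFO: Exit Status is 0" : "INFO: Exit Status is \(.*\)" || status=$?
echo "exit status: $status"  # exit status: 1
```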
It is astounding that Oracle would go to so much trouble to "background" the Universal Installer on Linux/Unix, because:
it is trivial for the customer to generically run a script in the background:
runInstaller x y z &
... or...
setsid runInstaller x y z
it is very difficult (as we can see above) to wait for a "buried" background process to finish, and it cannot be done generically.
Oracle would have saved themselves and everyone else a lot of effort by just running the Universal Installer synchronously from runInstaller/.oui.
One useful option I found is -waitforcompletion (and -nowait on Windows): setup.exe then waits for completion instead of spawning the Java engine and exiting.
But I am still trying to figure out a way to tail that log file automatically when running runInstaller.
UPDATE: Finally this is the only solution that worked for me.
./runInstaller -silent -responseFile /path/db.rsp -ignorePrereq
sleep 5   # give the installer a moment to create its log file
LOGFILE=/path/oraInventory/logs/$(ls -t /path/oraInventory/logs | head -n 1)
until grep -q 'INFO: Shutdown Oracle Database' "$LOGFILE"
do
    tail -1 "$LOGFILE"
    sleep 1
done
Instead of tailing the latest log in oraInventory by hand, this keeps the process active until the last message is written to the log file. Old school, but it works.
It's been a while since I did an installation, but I don't think you can make it stay in the foreground; it just launches the JVM and that doesn't (AFAIK) produce any console output until the end, and you see that anyway, so it wouldn't help you much.
You should have plenty to look at though, in the logs directory under your oraInventory directory, which is identified in the oraInst.loc file you either created for the installation or already had, and might have specified on the command line. You mentioned it told you where the log file is when it started, so that would seem a good place to start. The installActions log in that directory will give you more detail than you probably want.
If you do close the terminal, the silentInstall and oraInstall logs tell you when the root scripts need to be run, and the latter even gives you the progress as a percentage - though I'm not sure how reliable that is.
Tailing the oraInstall or installActions files is probably the only way you can show some sort of progress on screen. 'Silent' works both ways, I suppose.
You probably want to tail the most up-to-date logs (ls -t would give you those) generated under the /tmp/OraInstallXXX directory.
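A minimal sketch of that idea (the directory and file naming pattern is a guess; check the actual names on your system):

```shell
#!/bin/sh
# Pick the most recently modified installer log under /tmp and follow it.
# The glob below is an assumption about the naming; verify it locally.
latest=$(ls -t /tmp/OraInstall*/installActions*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    tail -f "$latest"
fi
```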
Related
I am currently using the following command to reboot:
sudo shutdown -r now
However, I need to run it for 5 loops, before and after executing some other programs. I was wondering whether it is possible to do this in a Linux Mint environment?
First, a disclaimer: I haven't tried this because I don't want to reboot my machine right now...
Anyway, the idea is to make a script that can track its iteration progress in a file, as @david-c-rankin suggested. The bash script could look like this (I did test this):
#!/bin/sh

ITERATIONS="5"
TRACKING_FILE="/path/to/bootloop.txt"

touch "$TRACKING_FILE"
# one dot is appended per boot, so the byte count is the iteration number
N=$(wc -c < "$TRACKING_FILE")
if [ "$N" -lt "$ITERATIONS" ]; then
    printf "." >> "$TRACKING_FILE"
    echo "rebooting (iteration $N)"
    # TODO: this is where you put the reboot command
    # and anything you want to run before rebooting each time
else
    rm "$TRACKING_FILE"
    # TODO: other commands to resume anything required
fi
Then add a call to this script somewhere it will be run on boot, e.g. cron (@reboot) or systemd. Don't forget to remove it from the startup/boot configuration when you're finished, or the next time you reboot, it will reboot N times again.
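For the cron route, the crontab entry (added via crontab -e) could look like the following; @reboot is a widely supported cron extension (Vixie cron, cronie), and the script path is hypothetical:

```
# run once at every boot
@reboot /path/to/reboot_five_times.sh
```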
Not sure exactly how you are planning on using it, but the general workflow would look like:
save script to /path/to/reboot_five_times.sh
add script to run on boot (cron, etc.)
do stuff (manually or in a script)
call the script
computer reboots 5 times
anything in the second TODO section of the script is then run
go back to step 3, or if finished remove from cron/systemd so it won't reboot when you don't want it to.
First create a text document wherever you want (I created one on the Desktop).
Then use this file as a physical counter, and write a daemon file to run things at startup.
For example:
#!/bin/sh
# a.txt is the physical counter; initialise it once with: echo 0 > a.txt
var=$(cat a.txt)
echo "$var"
if [ "$var" -ne 5 ]
then
    var=$((var+1))
    echo "$var" > a.txt
    echo "restart here"
    sudo shutdown -r now
else
    echo "stop restart"
    echo 0 > a.txt
fi
Hope this helps
I found a way to create a file at startup for my reboot script. I incorporated it with the answers provided by swalladge and shubh. Thank you so much!
#!/bin/bash
#testing making a startup application
echo "
[Desktop Entry]
Type=Application
Exec=notify-send success
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name[en_CA]=This is a Test
Name=This is a Test
Comment[en_CA]=
Comment=" > ~/.config/autostart/test.desktop
I created a /etc/rc.local file to execute user3089519's script, and this works for my case. (And for bootloop.txt, I put it here: /usr/local/share/bootloop.txt.)
First: sudo nano /etc/rc.local
Then edit this:
#!/bin/bash
#TODO: things you want to execute when system boot.
/path/to/reboot_five_times.sh
exit 0
Then it works.
Don't forget to edit /etc/rc.local and remove the /path/to/reboot_five_times.sh line when you are done with the reboot cycling.
I am currently working on an NFC system using the ACR122U reader, without the manufacturer's drivers, which leads to occasional crashes of those drivers.
The problem is that when a driver crashes, the whole process doesn't crash: my program keeps running, but the driver doesn't. (Needless to say, this makes my code useless.)
I am aware of ways to restart a crashed program, but not crashed drivers. I thought of using a watchdog to hard-reset the Raspberry Pi, but a reboot isn't the best choice because of the time it takes. (I am using the very first Raspberry Pi.)
So, is there a way to restart only the driver and, more importantly, to detect when it fails?
I found a solution to my own problem after many hours of research and trials. It is actually very simple: run my program as a background script, and check its log with grep every two seconds:
#!/usr/bin/env bash

command="/your/path/to/your_script"
log="prog.log"
match="error libnfc"
matchnosp="$(echo -e "${match}" | tr -d '[:space:]')"

# start the program in the background, capturing its output
$command > "$log" 2>&1 &
pid=$!

while sleep 2
do
    if grep -F --quiet "$matchnosp" "$log"
    then
        echo "SOME MESSAGE"
        # kill the crashed instance and start a fresh one
        kill "$pid"
        $command > "$log" 2>&1 &
        pid=$!
        sleep 5
        truncate -s 0 "$log"
        echo "SOME OTHER MESSAGE..."
    fi
done
This restarts the program whenever a message matching "error libnfc" is found in the log file.
I need to run a jar file (datacollector.jar), located at /home/vagrant/datatool, on a Linux machine for a fixed interval of time at some point in the future. For example, from 18:00 10-06-2017 to 03:00 11-06-2017. After that, the process should be killed.
I want to write a shell script for this which takes the start and end times as arguments.
The script should also report whether it is already running, and I should be able to stop it manually before the end time.
I have been unable to figure this out after spending some time researching online. How can I achieve this?
I came up with a solution in which myshell.sh takes 4 arguments:
[start_date] [start_time] [end_date] [end_time]
and the script code:
#!/bin/bash
JAR_PATH=/home/vagrant/datatool/datacollector.jar
PID_PATH=/tmp/datacollector-id

# calculate how long to wait before running the jar
wait=$(($(date -d "$1 $2" "+%s")-$(date "+%s")))
wait_mins=$((wait/60))

# calculate how long the jar file should run
run_interval=$(($(date -d "$3 $4" "+%s")-$(date -d "$1 $2" "+%s")))
run_interval_mins=$((run_interval/60))

echo "tool will start after $wait_mins"

# wait before running the jar file
sleep "$wait_mins"m

# run the jar file
nohup java -jar "$JAR_PATH" /tmp > /dev/null 2>&1 &
echo $! > "$PID_PATH"

# wait for the jar execution time
sleep "$run_interval_mins"m

# kill the jar process
PID=$(cat "$PID_PATH")
kill "$PID"
echo "tool process killed"
echo "program terminated"
and I am running the script with the command:
$ nohup ./myshell.sh 2017-06-08 20:07:00 2017-06-08 20:10:00 >> scriptoutput.txt 2>> /dev/null &
and scriptoutput.txt contains:
tool will start after 3
tool process killed
program terminated
Where do I need to improve my code?
Is there a better way to do it?
There are no miracles here: judging from your post, you are probably not allowed to modify the program's source code and recompile it. Thus, you need an external process that
starts the jar with a JVM
stops it
informs the user about its start and stop.
The simplest tools for that are
the at command (you can start something in the future with it)
the cron command (you can execute commands periodically with it in the background)
and shell scripts.
If it is only a single-time execution, the at command would be best.
Learn about Linux shell scripting by searching for "linux shell scripting tutorial". You can get more specific answers about your problem on http://unix.stackexchange.com .
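As an untested sketch of the at approach (paths and times taken from the question; -t takes a [[CC]YY]MMDDhhmm timestamp, and the atd daemon must be running):

```shell
#!/bin/sh
# Build at(1) -t timestamps for the start and stop moments with GNU date
start_ts=$(date -d "2017-06-10 18:00" +%Y%m%d%H%M)   # 201706101800
stop_ts=$(date -d "2017-06-11 03:00" +%Y%m%d%H%M)    # 201706110300

# Schedule the jar to start (remembering its PID), then schedule the kill.
# Guarded so the sketch degrades gracefully where at/atd is unavailable.
if command -v at >/dev/null 2>&1; then
    echo 'nohup java -jar /home/vagrant/datatool/datacollector.jar >/dev/null 2>&1 &
echo $! > /tmp/datacollector.pid' | at -t "$start_ts" 2>/dev/null || true
    echo 'kill "$(cat /tmp/datacollector.pid)"' | at -t "$stop_ts" 2>/dev/null || true
fi
```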
fileexist=0
mv /data/Finished-HADOOP_EXPORT_&Date#.done /data/clv/daily/archieve-wip/
fileexist=1
--some other script below
Above is the shell script in which, in the for loop, I am moving some files. I want to notify myself via email if anything goes wrong during the move. As I am running this script on a Hadoop cluster, it is possible that the cluster goes down while it is running, etc. How can I add better error handling to this shell script? Any thoughts?
Well, at least you need to know what you are expecting to go wrong. Based on that, you can do this:
mv ..... 2> err.log
if [ $? -ne 0 ]
then
    mailx -s "Error report" admin@abc.com < ./err.log
    rm ./err.log
fi
Or, as William Pursell suggested, use:
trap 'rm -f err.log' 0; mv ... 2> err.log || < err.log mailx ...
mv returns a non-zero exit code on error, and $? holds that code. If the entire server goes down, then unfortunately this script doesn't run either, so that case is better left to more advanced monitoring tools, such as Foglight running on a separate monitoring server. For more basic checks, you can use the method above.
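A self-contained illustration of the exit-code check, using a deliberately failing mv on throwaway paths:

```shell
#!/bin/sh
# mv from a non-existent source fails with a non-zero exit code;
# the if-branch is where the mailx notification would go.
if ! mv /nonexistent/source/file /tmp 2> /tmp/err.log
then
    echo "mv failed:"
    cat /tmp/err.log
fi
rm -f /tmp/err.log
```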
I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script, in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group@mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log I see nothing, even though running top shows myscript.sh is currently being executed, and therefore, presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh with pseudo-tty allocation, e.g. ssh -t user@host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for ((i=0;i<10;i++)) ; do
    echo "$i of 9"
    sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on older Unix platforms, and may require finding and installing a package that provides it.
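If unbuffer (shipped with the expect package) is unavailable, GNU coreutils' stdbuf can similarly force line-buffered stdout, e.g.:

```shell
#!/bin/sh
# stdbuf -oL forces line buffering on stdout; shown here with a stand-in
# command, but the same would apply to topscript.sh:
#   stdbuf -oL ./topscript.sh > /var/log/topscript.log 2>&1
stdbuf -oL echo "line-buffered output"
```

Note that stdbuf only affects programs that use C stdio buffering; it cannot unbuffer programs that do their own buffering.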
I hope this helps.