Uninstall software packages one after another using a shell script - Linux

I want to uninstall two software packages, the second one only after the first uninstaller completes. That is, I don't want to start the second uninstaller until the first uninstaller has finished.
Can anyone please suggest how I can achieve this?
This is what I have now.
uninstall.sh:
if [ $exitval -eq 0 ]; then
    ./uninstall1.sh
else
    echo "uninstall1.sh else branch"
fi
result=$?
if [ $result -eq 0 ]; then
    ./uninstall2.sh
else
    echo "uninstall2.sh else branch"
fi
The issue is that uninstaller1 launches a UI, and the uninstaller2 UI launches before uninstaller1 completes. This is what I don't want: uninstall2 should launch only once uninstall1 has finished.
Update: after some googling I learned that this can be achieved with the wait command, but I'm still struggling with the same issue.
Thanks in advance.

Anyhow, I'll just post my pending suggestion:
SomeLauncher1.sh &  ## Run in the background so $! is set.
PID=$!  ## Not really the way to do it, but this is one way.
while kill -s 0 "$PID" 2>/dev/null; do  ## If true, the process is still running.
    sleep 1s  ## Keep waiting.
done
SomeLauncher2.sh
...  ## Perhaps do the same thing again.
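If the uninstaller scripts themselves stay in the foreground until their UI closes, the wait builtin mentioned in the question's update is simpler than polling. A minimal sketch, assuming uninstall1.sh and uninstall2.sh block until their UIs exit and return a meaningful exit status:
#!/bin/bash
./uninstall1.sh &   # launch the first uninstaller in the background
wait $!             # block until that exact process exits
if [ $? -eq 0 ]; then
    ./uninstall2.sh # start the second only after the first succeeded
else
    echo "uninstall1.sh failed; skipping uninstall2.sh" >&2
fi
If the uninstaller instead forks a separate UI process and returns immediately, wait returns too early, and the kill -0 polling shown above becomes necessary.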

Related

BASH loop usage

I'm currently learning BASH scripting and I have a question about the IF / WHILE / UNTIL statements. I'm trying to learn which of them is most efficient for checking the contents of a variable against a text string, and acting if it is not found. Can I use the until statement to check a variable? Example usage is below (I'm using it to check whether the system is updated and, if not, to update it):
#!/bin/bash
# Flush the YUM cache since we've added new directories and repos
flushcache=$(yum clean all);
# Does our system need to be updated?
checkupdate=$(yum update | grep -i "No Packages marked for Update");
# This will update the system
updatesystem=$(yum update -y);
# Flush the YUM cache, to make sure we get the newest package list
echo "$flushcache";
# Using a LOOP (until-logic), let's make sure we're all updated!
if [[ $checkupdate != "No packages marked for update" ]]
then
    echo "$updatesystem"
else
    echo "The system is already updated";
fi
exit 0;
The script exits normally, so that's good, but I want to know whether I'm implementing my new learnz in the most efficient way possible. Also, will this loop around until $checkupdate is a true statement? I'd love to hear some professional input!
Any help is still help! Thanks for indulging me!
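For reference, a minimal until-style sketch. Two things to note about the original script: the command substitutions run the moment they are assigned (so yum update -y executes whether or not the check succeeds), and an if tests only once, while until re-tests until its condition becomes true. This version assumes yum check-update's documented exit codes (0 = nothing pending, 100 = updates available):
#!/bin/bash
yum clean all   # flush the cache so we see the newest package list
# Loop until check-update reports a clean system (exit status 0).
until yum check-update > /dev/null; do
    yum update -y   # apply pending updates, then re-check
done
echo "The system is already updated."
exit 0
One caveat: yum check-update also exits nonzero (1) on errors such as an unreachable repo, so a production script should distinguish status 100 from 1 before looping.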

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want; it's not a major issue, but it is annoying.
1.) I run some third-party software that produces output on stderr. Some of it is useful, but some of it I regularly don't care about and don't want dumped to screen; I do want the useful parts of stderr shown. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I implemented dumps my errors at the right time but then returns a bash prompt. I want to summarise the status of the errors at the end of the function, but echoing there prints the text after the prompt, meaning I have to press enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is printed after it; I want it printed before, with a clean prompt appearing afterwards.
NOTE: This is a minimal working example, and it's contrived. While other solutions to my error stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
Stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, well before the function has finished processing what remains.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt returns (I suggest 1 second more than the time the function is expected to need to process the remaining lines).
I successfully managed to do it like this:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution >(ProcessErrors) is a subprocess of the script ./TestErrorStream.sh, so when the script ends, the subprocess is no longer tied to it, nor to the wrapper. That's why we need that final sleep 6.
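An alternative that avoids guessing a sleep duration: open the process substitution on a dedicated file descriptor, close the descriptor when the work is done, and poll until the reader exits.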
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
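A side note, hedged since it depends on your bash version: if I recall correctly, bash 4.4 and later allow waiting directly on the last process substitution via wait $!, which would replace the polling loop entirely.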

Shell script to check the Sybase IQ status

I am writing a script to check whether Sybase is running on my server. If it is not running, I want to start the service. If it is running, I want to stop Sybase IQ.
Please help me do this.
The logic I have written is:
if(sybaseiq = active)
then
stop_iq
else
start_iq ".cfg" ".db"
Below is code I found on the internet, but I am not able to understand what it is doing. Please answer with an explanation.
isql -U${USERNAME} -P${PASSWORD} -S${SQL_SERVER} -w1000 << ! > ${LOG_FILE}
exit
!
if [[ $? != 0 ]]
then
    msg="`date` ${SQL_SERVER} problem. ${SQL_SERVER} on ${HOST} is down or cannot be accessed"
    cat ${LOG_FILE} | /usr/bin/mailx -s "${msg}" ${SUPPORT}
    exit 1
fi
Thanks a lot in advance
The script is fairly straightforward.
First, the script logs into the server via isql, redirecting the output to a log file. If it's able to connect, it issues the commands between the exclamation points, which in this case is just an exit.
Next, the if statement checks the exit status of the last command run, $?. 0 indicates no error; anything else indicates an error. So if the status is not 0, it builds a message and mails it, along with the log file, to someone.
You will have to set the values for $USERNAME, $PASSWORD, $SQL_SERVER, $LOG_FILE, $HOST and $SUPPORT somewhere in your script.
If you are not familiar with shell scripts, I would recommend you read up a bit. It's quite easy to get into, but they are quite powerful for managing *nix systems.
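Putting the pieces together, a minimal sketch of the start/stop logic from the question, using the same isql connectivity test as the "is it running?" check. The credentials and the .cfg/.db names are placeholders, and stop_iq/start_iq are the IQ utilities the question names (stop_iq may prompt interactively; check the IQ documentation for your version):
#!/bin/bash
USERNAME="dba"       # placeholder credentials
PASSWORD="sql"
SQL_SERVER="MYIQ"    # placeholder server name
LOG_FILE=/tmp/iq_check.log
# Try to connect; issuing a bare `exit` is enough to prove the server is up.
isql -U${USERNAME} -P${PASSWORD} -S${SQL_SERVER} -w1000 << ! > ${LOG_FILE}
exit
!
if [[ $? -eq 0 ]]
then
    echo "Sybase IQ is running; stopping it."
    stop_iq
else
    echo "Sybase IQ is not running; starting it."
    start_iq myserver.cfg mydb.db   # placeholder .cfg and .db arguments
fi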

Wanted to know the PID for the outcome of a script

./abcd.sh # this script runs Java code that creates a zip file in /tmp/abcd/
Sometimes abcd.sh takes 30 seconds to create the zip file, and sometimes 60 seconds.
Since I don't have permission to edit abcd.sh, I wrote this code to get the PID of ./abcd.sh, but I don't know how to get the PID of its child process.
./abcd.sh &
pid=$!
wait $pid
This code waits until ./abcd.sh exits, but it does not wait until the zip file is complete.
Is there any way to wait until the zip file has been created? My idea is that if we can get the PID of the zip-file creation, we can use wait $zipfilepid, but I'm not sure how to get that PID.
./abcd.sh
sleep 60
I know sleep is an alternative, but I don't want to keep waiting after the zip file has already been created.
If you know the name of the "zip" program, you can find it in the process list and then determine its process ID. Assuming the name of the process you're waiting on is simply "zip", you can use something like this:
PROCNAME="zip"
PROCRUN=`ps h -ef|egrep -e "$PROCNAME"|grep -v " $$ "`
PIDS=`echo "$PROCRUN"|tr -s " "|cut -d" " -f2`
wait $PIDS
This works for multiple PIDs. You may need to adjust the PROCNAME value to better target a specific process, for example:
PROCNAME=" zip "
PROCNAME="zip name-of-file-to-zip"
PROCNAME="zipscriptname"
...etc. Hope this helps.
After doing some research, I concluded that this approach may not be possible, because I don't have control over the Java code that creates the zip file or over the shell script (abcd.sh) that runs it.
The approach I took instead:
find out whether the zip file is still open for writing using /usr/sbin/lsof /path/to/zip/abcd.zip; this command exits with status 0 if the file is open and 1 once it has been closed.
put that exit status in a while loop: while it is 0, sleep; otherwise exit the program.
/usr/sbin/lsof /path/to/zip/abcd.zip > /dev/null
rc=$?
while [ $rc -eq 0 ]
do
    echo "Zip file is still being created; sleeping for 3 seconds"
    sleep 3
    /usr/sbin/lsof /path/to/zip/abcd.zip > /dev/null  # re-check, or the loop never ends
    rc=$?
done
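Equivalently, the re-check can live in the loop condition itself, which avoids tracking rc by hand: while /usr/sbin/lsof /path/to/zip/abcd.zip > /dev/null; do sleep 3; done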
Please let me know if you have a better approach.

Re-run bash script if another instance was invoked

I have a bash script that may be invoked multiple times simultaneously. To protect the state information (saved in a /tmp file) that the script accesses, I am using file locking like this:
do_something()
{
    ...
}
# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi
# script does something...
do_something
Now any other instance invoked while this script is running exits. If there were n simultaneous invocations, I want the script to run only one extra time, not n more times, something like this:
do_something()
{
    ...
}
# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi
# script does something...
do_something
# check if another instance was invoked; if so, re-run do_something
if [ condition ]; then
    do_something
fi
How can I go about doing this? Touching a file inside the flock before quitting and having that file as the condition for the second if doesn't seem to work.
Have one flag (the request file) to signal that a thing needs doing, and always set it. Have a separate lock flag that is unset by the execution part.
REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something
# request running
touch $REQUEST_FILE
# lock and run
if ln -s /proc/$$ $LOCK_FILE 2>/dev/null ; then
    while [ -e $REQUEST_FILE ]; do
        do_something
        rm $REQUEST_FILE
    done
    rm $LOCK_FILE
fi
If you want to ensure that "do_something" is run exactly once for each time the whole script is run, then you need to create some kind of a queue. The overall structure is similar.
They're not everyone's favourite, but I've always been a fan of symbolic links for making lockfiles, since creating them is atomic. For example:
lockfile=/var/run/`basename $0`.lock
if ! ln -s "pid=$$ when=`date '+%s'` status=$something" "$lockfile"; then
echo "Can't set lock." >&2
exit 1
fi
By encoding useful information directly into the link target, you eliminate the race condition introduced by writing to files.
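Reading the information back is just a matter of inspecting the link target; for instance, readlink "$lockfile" prints the pid=... when=... status=... string, with no file I/O races involved.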
That said, the link that Dennis posted provides much more useful information that you should probably try to understand before writing much more of your script. My example above is sort of related to BashFAQ/045 which suggests doing a similar thing with mkdir.
If I understand your question correctly, what you want might be achieved (slightly unreliably) by using two lock files. If setting the first lock fails, we try the second lock. If setting the second lock fails, we exit. The race exists if the first lock is deleted after we check it but before we check the second, existing lock. If that level of risk is acceptable to you, great.
This is untested, but it looks reasonable to me.
#!/usr/local/bin/bash
lockbase="/tmp/test.lock"
setlock() {
    if ln -s "pid=$$" "$lockbase".$1 2>/dev/null; then
        trap "rm \"$lockbase\".$1" 0 1 2 5 15
    else
        return 1
    fi
}
if setlock 1 || setlock 2; then
    echo "I'm in!"
    do_something_amazing
else
    echo "No lock - aborting."
fi
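Walking through the n-invocation scenario: the first instance takes lock 1 and runs; a second concurrent instance takes lock 2 and runs too; every further instance finds both locks held and aborts, so at most one extra run happens. Each trap removes its own lock file on exit, freeing the slots for later invocations.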
Please see Process Management.
