Need to run a ksh script in the Windows Korn shell

I am new to the Korn shell. I am trying to run a ksh script that kills all processes older than 3 days on my server. It works fine when I type the pipeline directly, but when I put it inside a for loop in a script I get an error. Can someone please help?
FYI, the Korn shell is installed on a Windows server.
> cat test.ksh
#! /usr/bin/ksh
for i in {ps -eo etime,pid,args | awk -F- '$1>3{print}' | grep -i read_ini | awk '{print $2}'}
do
kill -9 $i
done
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ./test.ksh
./test.ksh[3]: syntax error: `|' unexpected
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ksh test.ksh
test.ksh[3]: syntax error: `|' unexpected
LCQU#SETOPLCORA01Q [/dev/fs/E/home/serora]
> ls -l test.ksh
-rwxrwx--- 1 jagadee Domain Users 133 Dec 24 13:16 test.ksh

Do not use {...} for command substitution; use $(...) to run a subprocess:
for i in $(ps -eo etime,pid,args | awk -F- '$1>3{print}' | grep -i read_ini | awk '{print $2}')
do
    kill -9 $i
done
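Note that parsing etime is fragile: its format is [[dd-]hh:]mm:ss, so splitting on - only yields a day count when the process is at least a day old, and the $1>3 test then falls back on awk's string comparison rules (for example, an etime of 45:12 compares greater than 3 as a string, so a 45-minute-old process would match). If your ps supports the etimes column (elapsed time in plain seconds, as in procps-ng; it may be missing from a Korn-shell-on-Windows toolkit), a sketch that avoids parsing etime at all:
# Sketch, assuming `ps -eo etimes` exists: kill read_ini processes
# older than 3 days (3 * 86400 = 259200 seconds).
ps -eo etimes,pid,args |
awk '$1 > 259200 && tolower($0) ~ /read_[i]ni/ {print $2}' |
while read -r pid
do
    kill -9 "$pid"
done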

Related

Problems with accessing job PID in Linux shell script

If I run the expression
ps -fu $USER| grep 'mount' | grep -v 'grep' | awk '{print $2}'
in the command line, I get - as expected - the PID of the processes containing "mount" in their description.
I want to achieve the following to kill certain background processes programmatically. The following code in the shell script:
#!/usr/bin/env bash
mountcmd="ps -fu $USER| grep 'mount' | grep -v 'grep' | awk '{print $2}' "
mountpid=$(eval "$mountcmd")
echo "Found existing background job PID: " "$mountpid"
does not provide the PID, but the output of echo is:
Found existing background job PID: wgeithne 6284 1 0 17:09 pts/3 00:00:00 minikube mount /u/wgeithne/bin/grafana/config:/grafana
How do I get only the PID as output of my script?
The stupid eval trick requires additional escaping of the dollar sign in the Awk script: inside the double quotes, the shell expands $2 to an empty string before eval ever runs, so Awk ends up printing the whole line. But really, a massively superior solution is to avoid stupid eval tricks.
Perhaps see also https://mywiki.wooledge.org/BashFAQ/050
If you really need to reinvent pidof, probably get rid of the antipatterns:
mountpids=$(ps -fu "$USER" | awk '/[m]ount/ { print $2 }')
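The [m]ount bracket trick is what removes the need for grep -v grep: the pattern still matches mount in the ps listing, but the awk process's own command line contains the literal text [m]ount, which the regex does not match. A hypothetical follow-up, if the goal is to kill those background jobs:
# $mountpids may hold several PIDs; the default SIGTERM is gentler than -9
for pid in $mountpids
do
    kill "$pid"
done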

Awk not working inside bash script

I'm trying to write a bash script that takes input from the user and executes a kill command to stop a specific Tomcat.
...
read user_input
if [ "$user_input" = "2" ]
then
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}| xargs kill -9
echo "Search Tomcat Shut Down"
fi
...
I have confirmed that the line
ps -ef | grep "search-tomcat"
works fine in script but:
ps -ef | grep "search-tomcat" |awk {'"'"'print $2'"'"'}
doesn't yield any results in the script, but gives the desired output in the terminal, so there has to be some problem with the awk command.
xargs can be tricky - Try:
kill -9 $(ps -ef | awk '/search-tomcat/ {print $2}')
If you prefer using xargs, then check the man page for options for your target OS (e.g. xargs -n).
Also note that kill -9 is a non-graceful process exit mechanism (i.e. possible file corruption, other strangeness), so I suggest using it only as a last resort...
:)
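A sketch of that gentler approach, reusing the names from the question (the 5-second grace period is an arbitrary choice):
pids=$(ps -ef | awk '/[s]earch-tomcat/ {print $2}')
for pid in $pids
do
    kill -15 "$pid"       # polite SIGTERM first
done
sleep 5                   # give the JVM time to shut down cleanly
for pid in $pids
do
    # escalate to SIGKILL only if the process is still alive
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"
done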

Why does part of the script not execute in crontab

I have a script stopping the application and zipping some files:
/home/myname/project/stopWithZip.sh
With the properties below:
-rwxrwxr-x. 1 myname myname 778 Jun 25 13:48 stopWithZip.sh
Here is the content of the script:
ps -ef | grep project | grep -v grep | awk '{print $2}' |xargs kill -15
month=`date +%m`
year=`date +%Y`
fixLogs=~/project/log/fix/$year$month/*.log.*
errorLogs=~/project/log/error/$year$month/log.*
for log in $fixLogs
do
    if [ ! -f "$log.gz" ]; then
        gzip $log
        echo "Archived:"$log
    else
        echo "skipping" $log
    fi
done
echo "Archived fix log files done"
for log in $errorLogs
do
    if [ ! -f "$log.gz" ]; then
        gzip $log
        echo "Archived:"$log
    else
        echo "skipping" $log
    fi
done
echo "Archived errorlog files done"
The problem is that, except for the ps -ef | grep project | grep -v grep | awk '{print $2}' | xargs kill -15 command, the gzip commands are not executed. I totally don't understand why.
I cannot see any compressed logs in the directory.
BTW, when I execute stopWithZip.sh explicitly on the command line, it works perfectly fine.
In crontab:
00 05 * * 2-6 /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1 (does NOT work)
On the command line:
/home/myname/project>./stopWithZip.sh (works)
Please help.
The script fails when run under cron because your script is invoked with project in its path, so the kill pipeline kills the script too.
You could prove (or disprove) this by adding some tracing. Log the output of ps and of awk to log files:
ps -ef |
tee /tmp/ps.log.$$ |
grep project |
grep -v grep |
awk '{print $2}' |
tee /tmp/awk.log.$$ |
xargs kill -15
Review the logs and see that your script is one of the processes being killed.
The crontab entry contains:
/home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
When ps lists that, it contains 'project' and does not contain 'grep' so the kill in the script kills the script itself.
When you run it from the command line (using a conventional '$' as the prompt), you run:
$ ./stopWithZip.sh
and when ps lists that, it does not contain 'project' so it is not killed.
If you ran:
$ /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
from the command line, like you do with cron (crontab), you would find it fails.
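One possible fix (a sketch, not from the original answer): exclude the script's own PID and its parent from the kill list, and use the [p]roject bracket trick so grep no longer matches itself:
# $$ is this script; $PPID is the shell cron wraps it in;
# both have "project" on their command lines, so skip them.
ps -ef |
grep '[p]roject' |
awk -v self="$$" -v parent="$PPID" '$2 != self && $2 != parent {print $2}' |
xargs kill -15    # GNU xargs users can add -r to tolerate an empty list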

Linux Command within Command

My use case is to run a command on multiple servers remotely. I have trust set up between hosts.
So I have one command like this:
COMMAND 1:
for i in 11 12 13 14 15 16; do echo host-name-dev-$i; ssh -q host-name-dev-$i "nohup bash -c 'Place Ur command Here' > foo.out 2> foo.err < /dev/null &"; done
And another as:
COMMAND 2:
rm -rf /floderPath
When I combine these two (COMMAND 1 + COMMAND 2), it works fine and the folder is deleted from all hosts.
for i in 11 12 13 14 15 16; do echo host-name-dev-$i; ssh -q host-name-dev-$i "nohup bash -c 'rm -rf /floderPath' > foo.out 2> foo.err < /dev/null &";done
Now I have another command. If I run this command on each host individually, it works fine and kills all the Java processes.
COMMAND 3:
for i in `ps -ef | grep -v grep | grep java | awk '{print $2}'`; do kill -9 $i; echo "Process id $i is killed"; done
But when I combine COMMAND 1 and COMMAND 3 it doesn't work at all. What I am trying to do here is kill all Java processes on all the hosts.
for i in 11 12 13 14 15 16; do echo host-name-dev-$i; ssh -q host-name-dev-$i "nohup bash -c 'for j in `ps -ef | grep -v grep | grep java | awk '{print $2}'`; do kill -9 $j; echo "Process id $j is killed"; done' > foo.out 2> foo.err < /dev/null &";done
I can guess that there might be improper use of quotes, but I have tried various combinations and none worked for me.
I don't have much experience in scripting, so pardon any obvious errors.
I think the following quoting should work...
for i in 11 12 13 14 15 16; do
echo host-name-dev-$i
ssh -q host-name-dev-$i "nohup bash -c \"for j in \\\`ps -ef | grep -v grep | grep java | awk '{print \\\$2}'\\\`; do kill -9 \\\$j; echo \\\"Process id \\\$j is killed\\\"; done\" > foo.out 2> foo.err < /dev/null &"
done
Update: Please do not kill yourself over the amount of escape characters.
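An alternative sketch that sidesteps the nested escaping entirely: feed the script to the remote bash on standard input through a quoted here-document, so the local shell expands nothing (note this version waits for each host instead of backgrounding with nohup):
for i in 11 12 13 14 15 16; do
  echo "host-name-dev-$i"
  # the quoted 'EOF' delimiter stops the local shell from touching the body
  ssh -q "host-name-dev-$i" 'bash -s > foo.out 2> foo.err' <<'EOF'
for j in $(ps -ef | grep -v grep | grep java | awk '{print $2}'); do
    kill -9 "$j"
    echo "Process id $j is killed"
done
EOF
done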
bash (or any other shell) does not handle this kind of remote interaction well.
Use the expect language to do what you want: http://expect.sourceforge.net/
We use expect on more than 1000 hosts and it works fine. Try it :)

How to expand variables in bash jobs list

Is it possible to have the variable names in the bash jobs list (jobs command) expanded? E.g. I get a job list such as
[1] Stopped vi $file
[2] Stopped vi $file
which refer to vi sessions with two different values of $file. Now I want to bring the vi window with a specific file to the front but don't know which job number it has.
You can use jobs -l to get the PID of each, and then look them up in the output of ps, which will show the expanded string:
$ jobs -l
[2]- 6445 Suspended: 18 vi $file
[3]+ 6473 Suspended: 18 vi $file
$ ps | grep vi
6445 ttys000 0:00.03 vi x
6473 ttys000 0:00.03 vi y
6485 ttys000 0:00.00 grep --color vi
The following command should work (tested with bash 4.2.10 on Ubuntu 11.10):
paste <(jobs) <(jobs -p | xargs ps -p | tail -n+2 | awk '{print substr($0, index($0, $5))}')
output:
[1] Stopped vi $A vi a
[2]- Stopped vi $A vi b
[3]+ Stopped vi $A vi c
The idea is the same as Carl's: extract the command by process ID. The substr($0, index($0, $5)) expression prints the ps line from where its fifth field begins, which here is the command column.
Similar to Alex's answer but without all the paste and subprocess tricks. It worked for me on FreeBSD 10.0 with bash v4.3.30
jobs -p | xargs ps -p | tail -n+2 | awk '{print substr($0, index($0, $5))}'
If you can, try to expand the variables before you execute the command.
Maybe there is a way of doing this without ever writing to a file.
If not, use mktemp to create the temp file.
cmd1=sleep
echo "$cmd1 10 &" > /tmp/veryscary.sh && source /tmp/veryscary.sh
jobs -p | while read pid; do ps auxwq $pid; done | grep -v USER
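A sketch of the mktemp variant mentioned above, so the temporary file gets a unique name and is removed afterwards:
cmd1=sleep
tmp=$(mktemp) || exit 1    # unique temp file instead of a fixed name
echo "$cmd1 10 &" > "$tmp"
source "$tmp"              # jobs now lists the expanded command
rm -f "$tmp"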
Further to arj's answer but without the need for a temp file, how about:
cmd1=sleep
eval `echo "$cmd1 10 &"`
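Because eval executes the already-expanded string, the job table records the expanded command, which is exactly what the question asked for. An illustrative session (output format may vary by bash version):
$ file=/etc/hosts
$ eval "vi $file &"
$ jobs
[1]+  Stopped                 vi /etc/hosts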
