Hide command execution detection in ps - linux

I have a bash script with these contents:
#!/bin/bash
while true; do
    netstat -antp | grep LISTEN | tr -s ' ' | cut -d ' ' -f 4 > /tmp/log
    sleep 100
done
Say I create a service which executes the script on boot. But when I use ps -eo command, I'm able to see the commands being executed. For example:
netstat -antp
grep LISTEN
tr -s ' '
cut -d ' ' -f 4
But I wish to suppress this output and hide the execution of these commands. Is there a way to do it?
Any other suggestions are welcome too. Thanks in advance!

You can't hide running processes from the system, at least not without some kernel hooks. Doing so is something typically only found in malware, so you're not likely to get much help with it.
There's really no reason to hide those processes from the system. If something in your script gets hung up, you'll want to see those processes to give you an idea of what's happening.
If there's a specific problem the presence of those processes is causing, you need to detail what that is.
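If the concern is simply that the grep, tr, and cut helpers keep showing up in ps, one modest improvement (a sketch of the same script, not a way to actually hide anything) is to collapse the pipeline into a single awk, so fewer short-lived commands appear:
#!/bin/bash
# Same data as before, gathered with one awk child instead of grep | tr | cut;
# netstat and awk will still be visible in ps while they run.
while true; do
    netstat -antp | awk '/LISTEN/ {print $4}' > /tmp/log
    sleep 100
done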

Related

How can I exit the 'loop' of 'top | grep user'?

I want to use top | grep user to know how many processes are running.
However, after I run top | grep user > temp_file, the command just keeps running.
How can I safely stop it with information being written to the temp_file?
You can use the -n 1 option to top to make it only do one iteration.
Probably the better tool to use would be ps, as in ps aux | grep user.
If a count is all you're interested in:
pgrep -cu username
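For example (using a hypothetical user name alice), either of these gives you the information without leaving top running; note that top generally needs batch mode (-b) before its output can be piped cleanly:
# One snapshot of top, filtered and saved (batch mode, single iteration)
top -b -n 1 | grep alice > temp_file
# Just the number of processes owned by the user
pgrep -cu alice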

How do I tail top -c?

I have a couple ruby scripts running on my machine and some other ruby processes. The only way I can differentiate them with top is by doing top -c (so I can see the command, otherwise everything is just 'ruby').
I want to be able to watch how many scripts are running so I can restart them if one fails.
I am thinking I can do this with top -c -n 1 | grep "script-name" but I can't figure out how to tail -f that or if that command is the best way to do it in the first place.
I think that top is not the best choice here, because it's an interactive command and you can't really pipe its whole output unless you run it in batch mode (top -b). A reasonable way to do it would be using ps:
ps -e -o pid,cmd | grep "script-name"
If you want to periodically investigate this, you can also use watch:
watch 'ps -e -o pid,cmd | grep "script-name"'
In general, it's bad practice to grep ps output, but I suppose in your case it will work. If you only want the number of running processes that match a pattern, or you just want their PIDs, you'd better go with pgrep.
pgrep "script-name"

Why is `read -t` not timing out in bash on RHEL?

Why doesn't read -t time out when reading from a pipe on RHEL5 or RHEL6?
Here is my example, which doesn't time out on my RHEL boxes while reading from the pipe:
tail -f logfile.log | grep 'something' | read -t 3 variable
If I'm correct, read -t 3 should time out after 3 seconds?
Many thanks in advance.
Chris
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
The solution given by chepner should work.
An explanation of why your version doesn't is simple: when you construct a pipeline like yours, the data flows through the pipe from left to right. When your read times out, however, the programs on the left side keep running until they notice that the pipe is broken, and that happens only when they try to write to it.
A simple example is this:
cat | sleep 5
After five seconds the pipe will be broken because sleep will exit, but cat will nevertheless keep running until you press return.
In your case that means that, until grep produces a result, your command will keep running despite the timeout.
While not a direct answer to your specific question, you will need to run something like
read -t 3 variable < <( tail -f logfile.log | grep "something" )
in order for the newly set value of variable to be visible after the pipeline completes. See if this times out as expected.
Since you are simply using read as a way of exiting the pipeline after a fixed amount of time, you don't have to worry about the scope of variable. However, grep may find a match without printing it within your timeout due to its own internal buffering. You can disable that (with GNU grep, at least), using the --line-buffered option:
tail -f logfile.log | grep --line-buffered "something" | read -t 3
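As a rough sketch of telling a timeout apart from a successful read with this form: in bash, read's exit status is greater than 128 when the timeout is exceeded.
read -t 3 variable < <( tail -f logfile.log | grep --line-buffered "something" )
if [ $? -gt 128 ]; then
    echo "read timed out"
else
    echo "matched line: $variable"
fi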
Another option, if available, is the timeout command as a replacement for the read:
timeout 3 tail -f logfile.log | grep -q --line-buffered "something"
Here, we kill tail after 3 seconds, and use the exit status of grep in the usual way.
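For instance, a sketch of using that exit status directly (0 if the pattern appeared within the 3-second window, non-zero otherwise):
if timeout 3 tail -f logfile.log | grep -q --line-buffered "something"; then
    echo "pattern appeared within 3 seconds"
else
    echo "no match before tail was killed"
fi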
I don't have a RHEL server to test your script on right now, but I would bet that read is exiting on timeout and working as it should. Try running:
grep 'something' | strace bash -c "read -t 3 variable"
and you can confirm that.

Get pid of last started instance of a certain process

I have several instances of a certain process running and I want to determine the process id of the one that has been started last.
So far I came to this code:
ps -aef | grep myProcess | grep -v grep | awk -F" " '{print $2}' |
while read line; do
    echo $line
done
This gets me all the process ids of myProcess. Now I somehow need to compare the running times of these pids and find the one with the smallest running time. But I don't know how to do that...
An easier way would be to use pgrep with its -n, --newest switch.
Select only the newest (most recently started) of the matching
processes.
Alternatively, if you don't want to use pgrep, you can use ps and sort by start time:
ps -ef kbsdstart
Use pgrep. It has a -n (newest) option for that. So just try
pgrep -n myProcess
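As a quick sketch (with myProcess standing in for the real process name), you can capture that PID in a variable, or, without pgrep, sort by start time with ps and take the last entry:
# Newest matching PID via pgrep
newest_pid=$(pgrep -n myProcess)
echo "$newest_pid"
# Without pgrep: sort by start time, newest last; the [m] trick keeps
# the grep process itself out of the output
ps -eo pid,lstart,args --sort=start_time | grep "[m]yProcess" | tail -n 1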

Why does my process counting script give false positives?

I have the following bash script that lists the current number of httpd processes, and if it is over 60, it should email me. This works 80% of the time, but for some reason it sometimes emails me anyway when the count is not over 60. Any ideas?
#!/bin/bash
lines=`ps -ef | grep httpd | wc -l`
if [ "$lines" -gt "60" ]
then
    mailx -s "Over 60 httpd processes" me#me.com < /dev/null
fi
There is a delay between checking and emailing. In that time, some httpd processes might finish, or start, or both. So, the number of processes can be different.
You are including the grep process itself in the count (most of the time; it can happen that ps finishes before grep starts, in which case it isn't counted). An easy way to avoid that is to change your command to ps -ef | grep [h]ttpd. This works because the pattern [h]ttpd does not match the string "grep [h]ttpd" itself.
On linux, you have pgrep, which might be better suited for your purposes.
grep ... | wc -l can usually be replaced with grep -c ....
If you want to limit the number of httpd requests, I am sure you can set it in apache configuration files.
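Putting the grep [h]ttpd and grep -c suggestions together, a minimal sketch of the script (keeping the address from the question) might look like this:
#!/bin/bash
# [h]ttpd keeps the grep process itself out of the count,
# and grep -c replaces the separate wc -l
lines=$(ps -ef | grep -c '[h]ttpd')
if [ "$lines" -gt 60 ]; then
    # Put the count in the mail body so it can be checked later
    echo "httpd process count: $lines" | mailx -s "Over 60 httpd processes" me#me.com
fi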
You've probably thought of this, but ...
At time t0, there are 61.
At time t1, when you read the email, there are 58.
Try including the value of $lines in the email and you'll see.
Or try using /proc/*/cmdline, it might be more reliable.
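A rough sketch of counting via /proc/*/cmdline (assuming the httpd binary name appears in the command line, and that arguments are NUL-separated as usual on Linux):
count=0
for f in /proc/[0-9]*/cmdline; do
    # Replace NUL separators with spaces so grep can search the line;
    # ignore errors in case a process exits while we read it
    if tr '\0' ' ' < "$f" 2>/dev/null | grep -q 'httpd'; then
        count=$((count + 1))
    fi
done
echo "$count httpd-like processes"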
grep httpd finds all processes that include httpd in their name, including possibly grep httpd itself, and perhaps other ones.
"ps -ef|grep httpd" doesn't find just httpd processes, does it? It finds processes whose full (-f) listing in ps includes the string "httpd".
This probably doesn't solve your issue but you could simplify things by using pgrep instead.
You can do it this way too, reducing the grep and wc pipeline to a single awk.
ps -eo args | awk '!/awk/ && /httpd/ {++c}
END {
    if (c > 60) {
        cmd = "mailx -s \047Over 60\047 root"
        cmd | getline
    }
}'
