I have a CentOS server on which I have installed the Perl package to run some Perl scripts. Today I ran some Perl scripts, and when I run ps -ef | grep perl it shows nothing, although the scripts are working properly.
When I use pkill -f (name_of_script), the Perl processes stop, so they are clearly running; they are just not shown at all.
Note that yesterday I deleted a user (X) that was assigned to the folder /home/scripts. Where do you think the problem comes from?
The reason the process is not showing up in ps -ef might be that it is running as a background process. In that case, the process won't be associated with the terminal you started it from and thus won't show up in the output of ps -ef. To see all processes running on the system, including background processes, you can use ps aux instead.
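For example, assuming the scripts really do have perl on their command line:
# List every process on the system and keep the perl-related lines.
# The [p] bracket keeps grep itself out of the results, because the
# literal string "[p]erl" does not match the regex [p]erl:
ps aux | grep '[p]erl'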
The problem is that I cannot find the process ID of a script that has been executed using source; I am able to do so when it is launched with bash, using ps -ef.
If I run a script using bash, I can find the process ID using ps -ef | grep "test1.sh" | grep -v "grep". However, if I run the script using source, grepping for it finds nothing, and hence I cannot find the process ID.
I have read the difference between the bash and source commands from this link.
This is my testing procedure:
I have two terminals. In one of them, I search for process IDs using ps -ef. In the other one, I run a script that prints 'Hello' every second (an infinite while loop with a sleep of 1 second). With bash, the PID is searchable, but with source, grep gets no results.
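For reference, the test script is essentially this (saved as test1.sh, matching the grep above):
#!/bin/bash
# Print 'Hello' once per second, forever:
while true
do
    echo 'Hello'
    sleep 1
done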
I am working on an Ubuntu 18.04.2 LTS machine
A script run with source executes inside the sourcing bash itself, so there is no separate process, and hence no separate PID, to search for; the only PID involved is the shell's own. If you do not want to terminate the sourcing bash and are satisfied with the script being stopped only after the current command (such as sleep) finishes, you can kill -INT that bash process.
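A rough sketch of that, run from the second terminal (the PID shown is a placeholder; use whatever ps reports for your shell):
# Find the PID of the interactive bash that sourced the script:
ps -ef | grep '[b]ash'
# SIGINT stops the sourced loop once the current sleep returns,
# without terminating the shell itself:
kill -INT 12345    # 12345 is a placeholder PID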
I'm working on a MINI2440 and building a custom OS for it using buildroot, but for testing purposes I'm using the OS downloaded from the official website.
The problem is this: I'm using usbpush to push OS images to the MINI2440 over USB, but it pops up the following message when I enter the command below:
sudo ./usbpush supervivi-128M 0x30008000
Unable to claim usb interface 1 of device: could not claim interface 0: Device or resource busy
There is one thing I don't understand: whenever I assign executable permission to usbpush, it runs automatically in the background. It's clearly seen below:
ps -ef | grep usb*
silicod+ 2431 2207 0 10:25 pts/10 00:00:00 grep --color=auto usbpush
I tried to kill it using
sudo kill -9 2431
But it comes back with a new PID and runs in the background again. I tried googling, but nothing has worked for me.
=============================================================
Well, I found my solution. I don't know what the problem with my usbpush tool was, but I downloaded another tool and it works very well. Here is the link to that tool; may it help someone:
Friendly_ARM_Mini2440_USBPUSH
Cheers....!
lovely ;-)
Well, I guess it is actually not running.
ps -ef gives you details about all running processes.
grep usb* (lose the *) will find any line containing usb.
The way unix/linux runs a pipeline, both commands are started together and the "|" connects the output of ps -ef to grep's input, so the grep process is already in the process table when ps takes its snapshot.
So what you are finding is the grep command itself.
What you want is ps -ef | grep -v grep | grep usb. This will work unless your "usb" command is actually named something like grepusb or usbgrep, or its command line otherwise contains grep.
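For comparison, here are the two usual ways of keeping grep from matching itself (the second, bracket-expression trick, also appears in the threads below):
# Filter the grep process out explicitly:
ps -ef | grep -v grep | grep usb
# Or use a bracket expression: the regex [u]sb matches "usb", but not the
# literal string "[u]sb" appearing in grep's own command line:
ps -ef | grep '[u]sb'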
I have seafile (http://www.seafile.com/en/home/) running on my NAS, and I set up a crontab that runs a script every few minutes to check whether the seafile server is up and to start it if it is not.
The script looks like this:
#!/bin/bash
# exit if process is running
if ps aux | grep "[s]eafile" > /dev/null
then
    exit
else
    # restart process
    /home/simon/seafile/seafile-server-latest/seafile.sh start
    /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi
fi
Running /home/simon/seafile/seafile-server-latest/seafile.sh start and /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi individually/manually works without a problem, but when I try to run this script file manually, neither of those lines executes and seafile/seahub do not start.
Is there an error in my script that is preventing execution of those two lines? I've made sure to chmod the script file to 755.
The problem is likely that when you pipe commands into one another, there is no guarantee that the second command starts after the first (it can start right away, but do nothing while it waits for input). For example:
oj#ironhide:~$ ps -ef | grep foo
oj 8227 8207 0 13:54 pts/1 00:00:00 grep foo
There is no process containing the word "foo" running on my machine, but the grep that I'm piping ps to appears in the process list that ps produces.
You could try using pgrep instead, which is pretty much designed for this sort of thing:
if pgrep "[s]eafile"
Or you could add another pipe to filter out results that include grep:
ps aux | grep "[s]eafile" | grep -v grep
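Folded back into the script from the question, the pgrep variant might look like this sketch:
#!/bin/bash
# exit if process is running; pgrep never matches its own process
if pgrep "[s]eafile" > /dev/null
then
    exit
else
    # restart process
    /home/simon/seafile/seafile-server-latest/seafile.sh start
    /home/simon/seafile/seafile-server-latest/seahub.sh start-fastcgi
fi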
If the name of this script itself matches the regex [s]eafile, it will trivially always take the exit branch.
You should probably be using pidof in preference to reinventing the yak shed anyway.
Turns out the script itself was working OK, although the change to using pgrep is much nicer. The problem was actually in the crontab (I didn't include the sh in the command).
I am very new to shell scripting; can anyone help me solve a simple problem? I have written a simple shell script that does the following:
1. Stops a few servers.
2. Kills all the processes owned by user1.
3. Starts a few servers.
This script runs on a remote host, so I need to ssh to the machine, copy my script over, and then run it. The command I have used for killing all the processes is:
ps -efww | grep "user1"| grep -v "sshd"| awk '{print $2}' | xargs kill
Problem 1: Since user1 is the user both sshing in and running the script, the command kills the process that is running the script itself and never gets to starting the servers. Can anyone help me modify the above command?
Problem 2: How can I automate the process of sshing into the machine and running the script?
I have tried an expect script, but do I need a separate script for sshing in and performing these tasks, or can I do it all in one script?
Any help is welcome.
Basically, the answer is already in your script.
Just exclude your script from the found processes, like this:
grep -v <your script name>
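Plugged into the command from the question, that might look like this (myscript.sh is a placeholder for whatever the script is actually called):
# Exclude both sshd and the script itself before collecting PIDs to kill:
ps -efww | grep "user1" | grep -v "sshd" | grep -v "myscript.sh" | awk '{print $2}' | xargs kill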
Regarding running the script automatically after you ssh, have a look here; it can be done with a special ssh configuration.
Just create a simple script like:
#!/bin/bash
ssh user1@remotehost '
    someservers stop
    # kill processes here
    someservers start
'
In order to avoid the script killing itself while stopping all of the user's processes, try adding | grep -v bash after grep -v "sshd".
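With that added, the kill command from the question becomes, roughly:
# Exclude sshd and any bash, including the shell running this script:
ps -efww | grep "user1" | grep -v "sshd" | grep -v bash | awk '{print $2}' | xargs kill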
This is a problem with some nuance, and not straightforward to solve in shell.
The best approach
My suggestion, for easier system administration, would be to redesign. Run the killing logic as root, for example, so you may safely TERMinate any luser process without worrying about sawing off the branch you are sitting on. If your concern is runaway processes, run them under a timeout. Etc.
A good enough approach
Your ssh login shell session will have its own pseudo-tty, and all of its descendants will likely share that. So, figure out that tty name and skip anything with that tty:
TTY=$(tty | sed 's!^/dev/!!') # TTY := pts/3 e.g.
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e ^$TTY -e sshd | awk ...
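An end-to-end sketch of this approach, with user1 substituted for luser as in the question and the elided awk step filled in one plausible way (GNU xargs -r is assumed, so kill is skipped when nothing matches):
#!/bin/bash
# Name of our own pseudo-tty, e.g. pts/3:
TTY=$(tty | sed 's!^/dev/!!')
# Columns are tty, user, pid, cmd; keep user1's processes, drop anything
# on our own tty or related to sshd, then kill the remaining PIDs
# (the awk column index assumes the field order above):
ps -eo tty=,user=,pid=,cmd= \
  | grep user1 \
  | grep -v -e "^$TTY" -e sshd \
  | awk '{print $3}' \
  | xargs -r kill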
Almost good enough approaches
The problem with "almost good enough" solutions (like simply excluding the current script and sshd via ps -eo user=,pid=,cmd= | grep -v -e sshd -e fancy_script | awk ...) is that they rely heavily on the accident of invocation. ps auxf probably reveals that you have a login shell in between your script and your sshd (probably -bash); you could put in special logic to skip that, too, but that's hardly robust if your script's invocation changes in the future.
What about question no. 2? (How can I automate sshing...?)
Good question. Off-topic. Try superuser.com.
I have this running:
# Open a pipe that reads the output of ps -eLf:
if (open(PS_ELF, "/bin/ps -eLf|")) {
    # Read one line of ps output per iteration:
    while (<PS_ELF>) {
        if ($_ =~ m/some regex/) {
            # do some stuff
        }
    }
}
If called locally, the loop runs just fine, once for every output line of ps -eLf
Now, if the same script is called from Nagios via NRPE, PS_ELF contains only one line (the first line output by ps).
This puzzles me; what could be the reason?
Maybe this is not limited to, or caused by, Nagios at all; I just included it for the sake of completeness.
I'm on SUSE Enterprise Linux 10 SP2 and perl v5.8.8.
Although this problem is very old, I experienced the exact same problem today, so I thought I'd share what I found.
The problem is that processes created by the NRPE daemon can have a different environment than processes you execute directly in the shell as the NRPE daemon user.
I created the following bash script:
#!/bin/bash
echo `env | grep COLUMNS`
This prints the environment variable COLUMNS of the current process, which has the same environment as its parent process (the process forked by the NRPE daemon).
When I execute this script as the NRPE daemon user
$ /tmp/check_env.sh
COLUMNS=174
it gives me the width of my current shell window.
But when I execute this script via NRPE, I get:
nagios-server $ check_nrpe -H client -c check_env
COLUMNS=80
This is why the ps output is limited to 80 characters unless you use the ww flag for unlimited width, which makes ps ignore the COLUMNS environment variable.
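To reproduce the effect locally, one can export a narrow COLUMNS into ps's environment (a sketch; this relies on procps ps honoring COLUMNS, as described above):
# Simulate the narrow NRPE environment; long command lines are cut at 80 columns:
COLUMNS=80 ps -eLf | head -n 5
# With ww, ps ignores COLUMNS and prints the full command lines:
COLUMNS=80 ps -eLfww | head -n 5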
I changed 'ps -eLf' to 'ps -eLfww' (ww for unlimited output width) and this fixed the problem, even though I don't understand why there is a difference when the script is called remotely.
It probably has more to do with how NRPE plugins work than with Perl itself.
Is your plugin working as explained here (return code + output)?
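For reference, that contract is simply one line of output plus a conventional return code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal plugin shaped that way might look like this sketch, where some_daemon is a placeholder process name:
#!/bin/bash
# Minimal Nagios/NRPE-style plugin: print one status line and exit
# with 0 (OK) or 2 (CRITICAL). some_daemon is a placeholder.
if ps -eLfww | grep -q '[s]ome_daemon'
then
    echo "OK: some_daemon is running"
    exit 0
else
    echo "CRITICAL: some_daemon is not running"
    exit 2
fi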